Best Reasoner Available?
We have some applications that are screaming out for a semantic web solution. I'm having a hard time finding the right tools to put together a demo to convince management this is the way to go.
Oracle 10g seems to have the best RDF storage engine around. We license and use Oracle heavily, so this isn't a problem.
The real selling point is the inferencer, or reasoner. I haven't found a good, solid OWL and rules reasoner out there — certainly nothing that can scale to the millions of triples that any application we build would require (at a minimum).
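To make concrete what I mean by a reasoner: at its core it's a forward-chaining loop that applies entailment rules until no new triples appear. Here's a deliberately minimal, toy sketch (stdlib Python, made-up `ex:` names) of two RDFS-style rules — subClassOf transitivity and type propagation — which also shows why materializing the closure over millions of triples gets expensive:

```python
def materialize(triples):
    """Apply two RDFS-style rules to a fixpoint:
       1. (A subClassOf B), (B subClassOf C) => (A subClassOf C)
       2. (x type A),      (A subClassOf B) => (x type B)"""
    inferred = set(triples)
    changed = True
    while changed:
        changed = False
        snapshot = list(inferred)
        for (s, p, o) in snapshot:
            for (s2, p2, o2) in snapshot:
                if p2 != "rdfs:subClassOf" or s2 != o:
                    continue
                if p == "rdfs:subClassOf":
                    t = (s, "rdfs:subClassOf", o2)   # rule 1
                elif p == "rdf:type":
                    t = (s, "rdf:type", o2)          # rule 2
                else:
                    continue
                if t not in inferred:
                    inferred.add(t)
                    changed = True
    return inferred

asserted = {
    ("ex:Camry", "rdf:type", "ex:Sedan"),
    ("ex:Sedan", "rdfs:subClassOf", "ex:Car"),
    ("ex:Car", "rdfs:subClassOf", "ex:Vehicle"),
}
closure = materialize(asserted)
assert ("ex:Camry", "rdf:type", "ex:Vehicle") in closure
```

Production reasoners implement far more rules (and OWL semantics well beyond RDFS), but the naive pairwise loop above is quadratic per pass — which is exactly the scaling problem I keep running into.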
This leads me to wonder whether a good semantic web application should even expect to use an inferencer. The larger question here is: what is the tool stack and information-processing flow for a semantic web application?
For traditional relational applications, this is well known. For instance, you need a database, some sort of object/relational mapping layer, and a set of classes that implement your business logic. The key here is that your business logic is encapsulated in your object model.
With a semantic web application, much of the business logic is encapsulated in the ontology and the rules. Where does that logic get executed? Is it executed by a stand-alone rule/inference engine, or does the ontology simply provide hints to the domain model?
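To illustrate the "stand-alone engine" arrangement I'm asking about: the business rule lives as data next to the ontology, and a generic engine — not hand-written domain classes — executes it against the triple store. A toy sketch (stdlib Python; the `ex:` predicates and the uncle rule are made up for illustration):

```python
def apply_rule(triples, rule):
    """Generic engine for one join rule:
       (x p1 y), (y p2 z) => (x head z)."""
    (p1, p2), head = rule
    new = set()
    for (x, pa, y) in triples:
        if pa != p1:
            continue
        for (y2, pb, z) in triples:
            if pb == p2 and y2 == y:
                new.add((x, head, z))
    return new - triples

store = {
    ("ex:Alice", "ex:hasParent", "ex:Bob"),
    ("ex:Bob", "ex:hasBrother", "ex:Carl"),
}
# The business logic is this tuple, not compiled code:
uncle_rule = (("ex:hasParent", "ex:hasBrother"), "ex:hasUncle")
store |= apply_rule(store, uncle_rule)
assert ("ex:Alice", "ex:hasUncle", "ex:Carl") in store
```

The design question is whether inferred triples like `ex:hasUncle` get materialized back into the store (batch, ahead of queries) or derived at query time — which is really the same question as where the engine sits in the architecture.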
I'd be very curious to see where the inference engine sits in the overall application. Is it a stand-alone process, or is it integrated into the domain model?
Then, what inference engine out there is capable of working with millions and millions of triples in an OLTP-style environment (high read/write volumes)?
I'd love to deploy OWL, RDF, and some inference engine for some of these data rich applications. Any tips or pointers would be most welcome.