(A speculative ramble… read at your own peril!)

One discussion in our 892 class on Semantic Web Programming got into the similarities between Resource Description Framework/Web Ontology Language (RDF/OWL*) and object-oriented programming. Yes, yes, RDF and OWL just capture meaning, while OOP makes stuff happen. There are a number of key similarities and differences.

But here’s what strikes my fancy this afternoon: object-oriented programming is cool in that it makes it possible to build programs faster with all the existing modules that have been designed to do specific things and plug into all the potential programs that might need them. It’s like building a computer at home: you don’t manufacture your own chips and chassis and keyboard; you order all those components and focus your creativity on wiring them up in a new, creative way to meet your unique requirements.

Ditto programs: you don’t write your own sort routines. Well, you can, if you love to code, but OOP lets you use existing chunks of code that have already been looked over by lots of brains and tested in lots of situations. Brain power that you would exert coding and checking those basic functions can be focused on creating cool new stuff.

Ditto ontologies. For our second assignment in 892, I composed my own ontology for a library (books, magazines, authors, editors, call numbers…). The effort was worth it, because I needed to learn how OWL works. But if I were doing a real project, it would be silly for me to rewrite a library ontology when I could just include an ontology that’s already been built to model that concept.
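For a taste of what I mean, a tiny fragment of such a library ontology might look like this in Turtle syntax. (The lib: namespace and all the class and property names here are my own invented stand-ins, not any published vocabulary.)

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix lib:  <http://example.org/library#> .

# Classes and properties: the "schema" half of the ontology
lib:Book       a owl:Class .
lib:Author     a owl:Class .
lib:wrote      a owl:ObjectProperty ;
               rdfs:domain lib:Author ;
               rdfs:range  lib:Book .
lib:callNumber a owl:DatatypeProperty ;
               rdfs:domain lib:Book .

# Instance data: the library's actual holdings
lib:Asimov          a lib:Author ;
                    lib:wrote lib:RobotsAndEmpire .
lib:RobotsAndEmpire a lib:Book ;
                    lib:callNumber "PN" .
```

Somebody has already sweated over modeling decisions like this — whether authorship is a property of the author or the book, whether a call number is a string or a structured thing — so why should I redo it badly?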

But consider this: who checks those ontologies? Sure, you can run your RDF through a validator to make sure you’ve got all your tags. But who validates the meaning?
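To make that concrete: here’s a Turtle triple that any syntax validator will wave right through, because every tag is in place. (Again, the lib: names are hypothetical.)

```turtle
@prefix lib: <http://example.org/library#> .

# Syntactically flawless -- and complete nonsense.
# No validator will object to a floor writing a novel.
lib:SecondFloor lib:wrote lib:RobotsAndEmpire .
```

The machine checks the form; nobody checks whether the form says something true, or even sensible.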

Follow me for a bit: Ontologies are more complicated than a program module. A program is simply capturing some explicit process: for instance, that sort routine either alphabetizes properly or it doesn’t. An ontology is capturing meaning, and the Polanyi that I’m reading says meaning always has a tacit component. Maybe that’s not a big deal in capturing simple things like, “The library has a print copy of Isaac Asimov’s Robots and Empire on the second floor, in the PN section,” but it will complicate the creation of ontologies for complicated concepts fraught with subtleties and disagreements. It will take serious expertise in both Semantic Web techniques and the specific knowledge domain to create appropriate ontologies for some topics.

It seems to me the folks who are going to build these ontologies are likely going to be RDF/OWL experts first and subject experts second. They’ll be Ph.D.s and D.Sc.s in information systems and computer science who also happen to be enthusiasts in biochemistry or photography or bhakti-yoga (perhaps an Eastern approach to enlightened artificial intelligence). Will ontologies composed by non-specialists be good enough to capture the deep knowledge of the specialization?

I wonder if we have here an inherent limit on the capability of the Semantic Web to support intelligence. Practically speaking, it will be very hard to bring together the skills needed to compose really good ontologies to capture complex meaning. It will be hard for the rest of us to check those captured meanings.

Maybe it won’t matter. Maybe we’ll be so impressed with the things our Semantic Web apps can tell us (and there will be plenty) that we won’t mind if they can’t answer our toughest questions about art, philosophy, economics, or other complex topics. Maybe they’ll save us so much time picking out groceries and arranging doctor’s appointments for us that we’ll have more time to hash out the problems that can only be solved by human application of the tacit knowledge RDF/OWL can’t get.

But even there, who will watch the Semantic Web programmers? Who will check the published ontologies to make sure they haven’t captured some bias, some value judgment unique to the developer but not applicable to all users? Or forget bias: as ontology repositories develop, who will monitor them to ensure that the meanings they have locked into code still accurately reflect the fluid, evolving meanings of the human world? Will ontologies keep up with our intellect and culture… or hold them back?

Now watch: I’ll turn to the next chapter of our textbook and find they’ve already answered that question.

*Web Ontology Language –> OWL: Why not WOL? There is a story….