Tuesday, May 18, 2010

SOAR workshop so far

Daniel and I have just come back from the pre-workshop dinner. Our ride (another attendee) had a delayed flight, so we had to call a taxi and arrived about 45 minutes late. Daniel and I got seats at a separate table, so we didn't talk to anyone right away; eventually, though, we got into some good discussions.

My primary thesis has always been the importance of expressive power in AI. This has been a reaction to the overwhelming majority of AI research being done with propositional methods. However, this argument tends to fall on deaf ears, for several reasons.

  1. If I don't phrase myself carefully, it sounds like I'm arguing the anti-AI position that so many people abuse Gödel's theorem to support. This is thanks, for example, to my use of words like "uncomputability" (and if I invoke Gödel's theorem, of course, that just makes it worse). If I say "system X can't represent uncomputable mathematical concepts, but humans can," people hear "computers can't handle uncomputable mathematical concepts, but humans can." I don't mean that. Rather, I want to soundly answer the anti-AI position via a formal theory of which systems are capable of climbing the Tarski hierarchy on their own, and of how they do it (a sketch of the hierarchy follows this list).
  2. John Laird and the fellow in the front seat when he was giving me a ride back to the hotel (sorry, I don't remember the name! EDIT: Paul Rosenbloom) pointed out that worrying about expressive completeness is far less common than looking for less-expressive subsets of existing systems that can be manipulated more efficiently. Daniel put it another way: one shouldn't just worry about what a system can do in principle, but about what it can do in a reasonable amount of time. Systems which are "more expressive" might be able to do far less in practice. My response is that I prefer systems which use heuristics to stay in the space of efficient reasoning, rather than reduced expressiveness... this sounds like a weak point, though I do think it is the right response...
  3. If I'm already talking to someone who believes that expressive completeness is an important issue, then the remainder of my pet theory consists of ways of generalizing from a propositional system to more expressive models. This typically becomes a case of preaching to the choir, because the person will already be past propositional models and into at least first-order logic. All I have left to argue is the seemingly reversed position that most reasoning should actually take place at the sub-first-order level (say, 1/2 n-tuple Markov models, 1/4 context-free, 1/8 context-sensitive, 1/16 first-order, 1/32 second-order...; the toy code after this list makes these weights concrete). This is related to my response to the 2nd point.
  4. My project is really quite an abstract one, because at present my best guess is that so long as a system is doing probabilistic relational learning (i.e., at least first-order), and so long as it approximates Solomonoff induction in its ability to discover hidden relations in data (the formula appears after this list), then it satisfies the requirements to climb the Tarski hierarchy. Because of this, I can't argue that there is a pitfall of expressive incompleteness waiting to trap anyone who isn't careful. So it seems I should only be arguing for those basic requirements, rather than arguing that a general theory telling us how to climb the Tarski hierarchy must be found. If I can argue for those points on separate grounds, a general theory saying that they are sufficient is more a nice factoid for those worried about the foundations of mathematics than a guiding principle for AI. (Though there's still the possibility that the required structure for a system is significantly more specific than Solomonoff induction, in which case the theory may have definite value.)
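For concreteness, here is a minimal sketch of the Tarski hierarchy that the 1st point refers to; the choice of base language is just illustrative:

    L_0     = some base language, e.g. first-order arithmetic
    L_{n+1} = L_n plus a truth predicate Tr_n for L_n, satisfying the schema
              Tr_n(\ulcorner \varphi \urcorner) \leftrightarrow \varphi
              for every sentence \varphi of L_n

Tarski's undefinability theorem guarantees that Tr_n is not definable in L_n itself, so the hierarchy never collapses; "climbing" it means being able to formulate Tr_n and move from L_n to L_{n+1} without outside help.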
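The fractions in the 3rd point are only meant as a gesture, but a toy sketch makes the idea concrete. The level names and geometric weights below are just the illustrative ones from the list, not a worked-out proposal:

    import random

    # Hypothetical expressiveness ladder with geometrically decreasing prior
    # weight: each step up is assumed half as likely as the one before it,
    # so most reasoning lands at the cheap, sub-first-order levels.
    LEVELS = [
        ("n-tuple Markov model", 1 / 2),
        ("context-free",         1 / 4),
        ("context-sensitive",    1 / 8),
        ("first-order",          1 / 16),
        ("second-order",         1 / 32),
    ]

    def sample_level(rng=random):
        """Sample an expressiveness level; leftover mass goes to 'higher-order'."""
        r = rng.random()
        cumulative = 0.0
        for name, weight in LEVELS:
            cumulative += weight
            if r < cumulative:
                return name
        return "higher-order"  # the remaining 1/32 of the probability mass

    counts = {}
    for _ in range(10000):
        name = sample_level()
        counts[name] = counts.get(name, 0) + 1
    print(counts)  # roughly 5000 / 2500 / 1250 / 625 / ~310 / ~310

Under this prior, a reasoner reaches for first-order or stronger machinery only about one time in eight, which is exactly the "seemingly reversed" position of the 3rd point.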
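And, for the 4th point, the standard Solomonoff prior that a system would be approximating, stated here just as a reference point:

    M(x) = \sum_{p \,:\, U(p) \text{ begins with } x} 2^{-|p|}

where U is a universal prefix Turing machine and the sum ranges over minimal programs p whose output begins with the data string x; prediction then uses the conditional M(xy) / M(x). My best guess, again, is that any learner tracking this conditional well enough satisfies the requirement in the 4th point.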
So, should I give up what has been my primary thesis for 4 years? The banner "Expressive completeness is the central problem!" may not be the best way of organizing and presenting my ideas at present... in any case, it's clear that I need to think about my arguments more carefully.
