Monday, March 22, 2010

Guidance of Inference

When I first switched to this new blog, I thought that the point was going to be to take it more seriously, writing in a more organised fashion and being more careful to only say things I know to be true.

The truth is, though, that just makes me not post. A more informal blog would do much better. One advantage of the change is that the blog's title reflects a much broader range of possible topics, such as the previous post that was just some playing with trigonometry.

So, here are some disorganised thoughts about inference guidance for AI.

What I want is a normative theory-- something that says "This is how it should be done, in principle." I.e., a normative answer. Practical inference systems are of interest, but a guiding principle that tells us what sorts of practical implementations will tend to be good is very important.

This is somewhat related to my wondering about (normative) systems for handling mathematical uncertainty; the best way to guide inference may also involve, or be involved in, the best way to handle uncertainty about the potential result of an inference.

The main example of inference I'm thinking of is optimisation or constraint satisfaction.

Levin search is asymptotically optimal, but (like genetic programming) does not use the problem description in any way. Jürgen Schmidhuber's "Optimal Ordered Problem Solver" ("Oops," an unfortunate acronym) is optimal in a stronger sense, but still does not use problem descriptions. Hutter's asymptotically optimal algorithm for well-defined problems does use problem descriptions. So, several areas for improvement present themselves. Something that is optimal in Jürgen's sense ("bias-optimal") but which uses problem descriptions rather than searching blindly would be nice. Also, bias-optimality is a strong sort of optimality, but it does not capture everything Oops is meant to do: Oops is meant to learn from its experience (hence "ordered problem solver"). Jürgen asserts that it is optimal in a certain sense, but I think there is room for improvement.
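
To make the kind of search being compared concrete, here is a minimal sketch of Levin-style search in Python. The schedule is simplified, and `programs`, `run`, and `verify` are placeholders for a real program encoding; the point is just that time is allocated by simplicity, and that candidate answers still have to be checked.

```python
import itertools

def levin_search(programs, run, verify, max_phase=20):
    """Toy Levin search.  In phase k, the first 2**k candidate programs each
    get a time slice of 2**(k - len(p)) steps, so shorter (simpler) programs
    get exponentially more time.  `programs()` yields candidates in order of
    increasing length, `run(p, steps)` executes p under a step budget and
    returns an output or None, and `verify(out)` checks a candidate answer.
    All three are stand-ins for a concrete program encoding."""
    for phase in range(1, max_phase + 1):
        for p in itertools.islice(programs(), 2 ** phase):
            budget = 2 ** (phase - len(p))       # simplicity-weighted time slice
            if budget < 1:
                continue                         # program too long for this phase
            out = run(p, budget)
            if out is not None and verify(out):  # answers still need checking
                return p, out
    return None
```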

One way of using problem descriptions would be to just hand them to Oops as if it could understand them, and judge its output on that assumption. It would be forced to learn to use the problem descriptions. However, this would be quite inefficient.

A more interesting way would be to use Levin search at the inference-guidance level: execute all possible inference-guidance programs in parallel, with the execution time each is given weighted by its simplicity. Solutions would no longer have to be checked, since any result would automatically come with a proof of its correctness.
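
Here is a sketch of the same schedule lifted to the inference-guidance level, under the assumption that each candidate program steers a trusted inference engine rather than printing an answer; `run_guidance` is a placeholder, and the point is that whatever comes back already carries its derivation, so no separate checking step is needed.

```python
import itertools

def guided_levin_search(problem, programs, run_guidance, max_phase=20):
    """Same schedule as the sketch above, but each candidate program steers a
    trusted inference engine on `problem` instead of emitting an answer
    directly.  `run_guidance(p, problem, steps)` is a placeholder that only
    returns a (solution, derivation) pair when the engine has completed a
    valid derivation, so the answer never needs separate checking."""
    for phase in range(1, max_phase + 1):
        for p in itertools.islice(programs(), 2 ** phase):
            budget = 2 ** (phase - len(p))        # simplicity-weighted time slice
            if budget < 1:
                continue
            result = run_guidance(p, problem, budget)
            if result is not None:
                solution, derivation = result     # the proof arrives with the answer
                return p, solution, derivation
    return None
```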

Oops could be modified in the same way. (Oops can be thought of as just an efficient implementation of Levin search with added features for learning from previous success.)

Now, Oops does several things to learn from experience. (I'll use the language of inference guidance, making the assumption that Oops has been modified to work via inference guidance rather than direct solution generation.)

  1. It attempts to apply the so-far-successful inference-guidance program to the new problem, searching extensions of the program if it's not yet a complete program (i.e., if the new situation causes execution to reach the end of the code where before it didn't). Half the attention is directed to this, while the other half searches for a fresh solution (to all problems, not just the new one); see the sketch after this list.
  2. Inference-guidance programs are also allowed to provide search orderings for their continuations, so that a partial inference-guidance program can serve as a search heuristic over inference-guidance programs; this is particularly useful in the situation just mentioned, when a program turns out to be incomplete on a new example.
  3. New programs are allowed to copy and modify old ones.
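
As a rough illustration of the attention split in point 1 (the half-and-half division and the helper names are my own, not Schmidhuber's code):

```python
def oops_step(new_problem, all_problems, prior_program,
              search_extensions, fresh_search, budget):
    """Toy version of the attention split in point 1.  Half the budget goes to
    extending the previously successful program on the new problem; the other
    half goes to a fresh simplicity-weighted search for a program that solves
    *all* problems seen so far.  In Oops the two searches are interleaved; here
    they run one after the other only to keep the sketch short.
    `search_extensions` and `fresh_search` are hypothetical helpers standing in
    for Levin-style routines like the ones sketched earlier."""
    found = search_extensions(prior_program, new_problem, budget // 2)
    if found is not None:
        return found                                # the extended old solver suffices
    return fresh_search(all_problems, budget // 2)  # otherwise search from scratch
```
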
To me this appears haphazard and hacky, though it has its good points. What can we do for a more normatively forceful strategy of learning?

Oops takes execution time into account only indirectly, through the fact that faster strategies tend to be found more quickly than slower ones. If it optimised for execution time directly, it would quickly overfit: once it found a solution at all, the fastest-executing program would simply be the one that spat out that answer when given that input. The situation may improve as the system gains experience (after a large number of instances, a lookup table may no longer be the fastest way of computing the answers). Still, this seems wrong; we want the system to use extra cycles to look for inference strategies that are optimised to be quick on probable questions, not just on previously-experienced questions.
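
To make the lookup-table worry concrete, here is a purely hypothetical illustration: once the answers to the instances seen so far are known, nothing beats memorisation on raw execution time, even though it generalises to nothing new.

```python
# Hypothetical illustration of the overfitting worry: on previously-seen
# questions, the memo table wins on raw speed, yet it is useless on new ones.
seen_answers = {"2+2": 4, "3*7": 21}        # answers to previously-solved instances

def lookup_solver(question):
    return seen_answers.get(question)       # instant on old questions, useless otherwise

def general_solver(question):
    return eval(question)                   # slower, but also handles unseen questions
```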

It seems reasonable, then, to search for a probability distribution which would generate the questions observed so far. Using the current-best estimate, the system should look for the quickest solution not just to the current known problems, but to the expected future problems as well.

This could be done in several ways. Perhaps potential future instances are generated at random from the distribution, and inference methods are tested against them. Perhaps a strategy more like Hutter's asymptotically optimal one is used: the system might try to prove which strategies will be better. In any case, the objective will be clear: optimize speed for expected problems.
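
A minimal sketch of the first option, assuming we already have some generative model fitted to the questions seen so far; `problem_model.sample` and `strategy.solve` are invented interfaces for illustration, not anything from the papers mentioned above.

```python
import random
import time

def expected_cost(strategy, problem_model, n_samples=100, seed=0):
    """Monte Carlo scoring: sample plausible future problems from the current
    best estimate of the question distribution and score an inference strategy
    by its average solving time on them."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        problem = problem_model.sample(rng)        # hypothetical generative model
        start = time.perf_counter()
        strategy.solve(problem)
        total += time.perf_counter() - start
    return total / n_samples                       # lower expected time is better
```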
