So, what did we do?
We looked at inverse resolution, the last topic in knowledge in learning.
The idea of inverse resolution is really very, very simple.
What you want to do is basically deduce the classifications, remember, positive or negative, from the background knowledge (that's what we want to inject into learning, that's the whole point of the thing), plus the hypothesis we currently have, plus of course the descriptions.
And if those entail the classifications, then of course there must be a proof.
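Written out as the entailment constraint we keep coming back to, that is:

    Background ∧ Hypothesis ∧ Descriptions ⊨ Classifications

and inverse resolution searches for a Hypothesis that makes this entailment go through.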
The idea is to look at that proof, just like in explanation-based learning, but not by making the proof in full and then deriving information, or rules in that case, from it.
What we do now is run the proof search essentially backwards, and during that search we learn.
That has the advantage that we can actually invent predicates; that's the main claim to fame of inverse resolution.
We've looked at resolution, and the nice thing here is that you can run a resolution proof backwards: you basically take the end, which is the empty clause (which, since we're learning rules, I've put into this "True implies False" form), together with the examples, and then you back-chain with these using backwards resolution, and eventually generalise, generalise, generalise up to the general rules.
So normally in resolution you start with the general rules, instantiate them with the information you have about the situation, arrive at the empty clause, and then you've proven something, usually something observed or predicted about the situation.
Here we do exactly the same thing backwards, because we don't have the general rules yet; but they appear in this backwards-built proof, in the same spots where we would have had them before.
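To make that concrete, here is what a single inverse resolution step looks like schematically; the predicates and constants (Parent, Ancestor, Pam, Bob) are made-up names for illustration:

    Known fact:          Parent(Pam, Bob)
    Observed resolvent:  Ancestor(Pam, Bob)
    One inverse step:    ¬Parent(Pam, Bob) ∨ Ancestor(Pam, Bob)
    Generalised:         ¬Parent(x, y) ∨ Ancestor(x, y),  i.e.  Parent(x, y) ⇒ Ancestor(x, y)

Running the step forwards again checks it: resolving Parent(Pam, Bob) against the generalised clause under {x/Pam, y/Bob} gives back Ancestor(Pam, Bob). The branching comes from the many different ways of reconstructing and generalising that missing clause.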
The problem here is that if the combinatorics of resolution are bad, the combinatorics of backwards resolution are much, much, much worse.
Why is that?
Well, for instance, where in resolution we can use unification, which is a deterministic thing, going backwards we have to use something called anti-unification, which has a huge search space.
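For the core of anti-unification, here is a minimal sketch in Python of Plotkin's least general generalisation (lgg) of two terms; the term encoding and all names are choices made for this sketch, not any particular library's API:

    # Terms: constants and variables are strings, applications are
    # tuples ('f', arg1, ..., argn).  A minimal sketch, not a library.

    def lgg(s, t, pairs=None, counter=None):
        """Least general generalisation of terms s and t (Plotkin)."""
        if pairs is None:
            pairs = {}        # pair of mismatched subterms -> fresh variable
        if counter is None:
            counter = [0]
        if s == t:
            # Identical subterms generalise to themselves.
            return s
        if (isinstance(s, tuple) and isinstance(t, tuple)
                and s[0] == t[0] and len(s) == len(t)):
            # Same function symbol and arity: recurse argument-wise.
            return (s[0],) + tuple(lgg(a, b, pairs, counter)
                                   for a, b in zip(s[1:], t[1:]))
        # Mismatch: introduce a variable, reusing it for repeats of
        # the same pair so shared structure is preserved.
        if (s, t) not in pairs:
            pairs[(s, t)] = "X" + str(counter[0])
            counter[0] += 1
        return pairs[(s, t)]

    # lgg of f(a, g(a)) and f(b, g(b)) is f(X0, g(X0)):
    print(lgg(('f', 'a', ('g', 'a')), ('f', 'b', ('g', 'b'))))

Note that the lgg of one pair of terms is itself deterministic; the huge search space comes from the many ways of pairing up clauses and literals before anti-unifying them.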
The trick here is to somehow control the combinatorics, and basically what you do is let go of everything that's expensive.
Function symbols: bad for unification, and they make all kinds of trouble in anti-unification. Go away.
Full resolution: go away.
Horn clauses are nice; they give you some kind of control, exactly the control we used in logic programming.
People can predict what happens there, which is why we use Prolog as a programming language and not as a theorem prover.
You do a couple of things like that, and the upshot is that you're getting inductive logic programming, but you're not getting inductive first-order logic.
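As a reminder of what that restriction buys (the predicates here are made up for illustration): a definite Horn clause has exactly one positive literal, so it reads as a single rule, and that is what makes Prolog-style backward chaining predictable:

    Ancestor(x, y) ⇐ Parent(x, y)                     Horn: one positive literal
    Ancestor(x, y) ∨ Sibling(x, y) ⇐ Related(x, y)    not Horn: two positive literals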
The paradigmatic thing here is what happens to literals: forward resolution gets rid of certain literals, which means that if you run it backwards, you have to invent a literal, the cut literal.
When you have to invent it, you can take one of the old ones, one of the known ones, or you can just invent a new one and then, in this inverse-proof style, find out rules about that new predicate from your observations.
You can invent a predicate and learn things about it.
That is usually something very useful.
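Schematically, with made-up family predicates, such an invention step (intra-construction, the W-operator from the inverse resolution literature) can look like this:

    Given:     Grandparent(x, y) ⇐ Father(x, z) ∧ Mother(z, y)
               Grandparent(x, y) ⇐ Father(x, z) ∧ Father(z, y)
    Invented:  Grandparent(x, y) ⇐ Father(x, z) ∧ P(z, y)
               P(z, y) ⇐ Mother(z, y)
               P(z, y) ⇐ Father(z, y)

Here P is a brand-new predicate symbol; on family data it ends up behaving like "parent", even though that concept was never given to the system.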
Of course, the search spaces are humongous and you have to control them somehow.
Recap: Inductive Logic Programming: Inverse Resolution
The main video on this topic is chapter 10, clip 7.