What we basically went into is that from a successful run of DPLL,
you can actually generate a resolution proof,
which allows you to check up on your DPLL run.
That's something that's also very important.
Sometimes you have a procedure that does reasoning,
and it'll tell you go left or go right.
Imagine as an agent,
you're standing on top of a cliff and the reasoning procedure tells you jump.
Then you have two options.
You can either trust the procedure and jump,
or you can ask why.
The good thing about human agents is they can actually tell you why.
DPLL on its own cannot.
By the way, many of these currently hyped machine-learning-based things can't either.
Sometimes that's inherent,
sometimes we just have to work a little bit harder.
Explainable AI is one of the big topics,
where you can actually demand explanation.
Here, DPLL, which in itself just gives you a yes-or-no answer,
as a decision procedure will,
can be tweaked into an explainable thing:
one that will actually give you a resolution proof.
The trace of the run is the key:
we have marked all the literals we split on
and the clauses they simplify into,
and out of that you can make a resolution proof.
That proof is something whose validity you can then check
with independent software.
You can see, that's why I should jump.
I have a backpack with a parachute or whatever.
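To make the "check with independent software" idea concrete, here is a minimal sketch of such an independent checker, assuming clauses are encoded as frozensets of signed integers (a negative number standing for a negated variable, DIMACS-style) and a proof is a list of resolution steps over earlier clause indices. The encoding and the function names are illustrative assumptions, not from the lecture.

```python
# Minimal resolution-proof checker (illustrative sketch).
# Clause encoding (assumed): frozenset of nonzero ints, negative = negated literal.

def resolve(c1, c2, pivot):
    """Resolve clause c1 (containing pivot) with c2 (containing -pivot)."""
    assert pivot in c1 and -pivot in c2, "pivot must occur positively/negatively"
    return (c1 - {pivot}) | (c2 - {-pivot})

def check_refutation(clauses, proof):
    """Check a resolution refutation.

    clauses: initial clause list.
    proof:   list of (i, j, pivot) steps, each resolving two earlier
             clauses (by index into the growing list) on the pivot literal.
    Returns True iff the final derived clause is the empty clause,
    i.e. the proof certifies unsatisfiability.
    """
    derived = list(clauses)
    for i, j, pivot in proof:
        derived.append(resolve(derived[i], derived[j], pivot))
    return len(derived[-1]) == 0

# Example: (p or q), (not p or q), (not q) is unsatisfiable.
clauses = [frozenset({1, 2}), frozenset({-1, 2}), frozenset({-2})]
proof = [(0, 1, 1),   # resolve on p, yielding {q} at index 3
         (3, 2, 2)]   # resolve on q, yielding the empty clause
print(check_refutation(clauses, proof))  # -> True
```

The checker knows nothing about DPLL; it only verifies each resolution step, which is exactly what makes it an independent "why should I jump?" audit of the solver's answer.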
Duration: 00:02:44 min
Recorded: 2020-12-17
Uploaded: 2020-12-17 16:09:40
Language: en-US
Recap: DPLL (A Restricted Form of) Resolution
Main video on the topic in chapter 12 clip 3.