20 - Artificial Intelligence II [ID:47311]

Okay, hello everyone. Bad news number one: Professor Kohlhase is still not here. Good news: I'm here. Bad news: I was informed so late that I would have to take over today again that I'm woefully underprepared. I was prepared for yesterday, and I did not get through everything I had prepared, so to some extent this will be fine. If I'm too fast at some point, we will enter PowerPoint karaoke territory, which is going to be fun. Don't worry, we're going to get through this together somehow, and obviously, if anything should be unclear in that part: it's not like I don't know the subject matter, but I don't know the specific slides that might come up. If that's the case, if anything should remain unclear, I will just ask Professor Kohlhase to repeat those parts next week. So that's not going to be to your detriment, I would hope. Okay, let's do a recap of

what we talked about yesterday, which is three general techniques, first one being explanation-based

learning, second one being relevance-based learning, and the third one being inductive logic programming.

The first two really are general, generic techniques. As I just said a minute ago to someone, it's not like there's a specific algorithm that we could teach you, i.e., implement this algorithm and that's how you do explanation-based learning. It's more like a general technique that

would have to be adapted to whatever situation you would want to use it in. So it's very much

use case specific, which is also why basically the only thing I can really do is like pick an

example, show you how you would do things in that particular example, and then hope that the

message comes across sufficiently well that if you find yourself in the situation where you have

to implement something like that in an analogous situation, by sheer mental ability to abstract from

the particulars, you would be able to actually implement that for whatever situation you will have

to do it in. So the example that we have here is symbolic computation, in particular symbolic

differentiation, i.e. we want to have an algorithm that we feed a function as a purely syntactic expression, and we want that algorithm to compute the derivative of the function with respect to some arithmetic variable. In this case: x squared with respect to x, which is relatively easy to do,

depending on what the set of inference rules you use in that context looks like. If you implement your inference rules as generically as possible, as you would usually want to do, just to keep the number of hard-coded inference rules as small as possible, then it turns out that even in that relatively simple example it would apparently take something like 136 proof steps, with 99 dead-end branches. So it's a good idea to do something so that whenever you have to do that kind of computation, you ideally have to do it at most once, and memorize something about that process that will speed it up in the future.

By the way, this is actually something that most computer algebra systems do. The most naive way to do that is just memoization, i.e. we simply store the input-output pairs in some kind of hash map once we have computed them; then we just need to look them up. Yeah, and the idea of

explanation-based learning is basically to go one step beyond. And in the process of computing a

particular input-output pair, keep track of what you're doing in the algorithm, try to abstract away from the particulars as much as possible, and then, instead of memorizing the input-output pair itself, memorize the particular properties of the input that are relevant to arrive at the generic output. Okay, so if we do that for the

relatively simple case of the expression one times zero plus x, and we want to know what that simplifies to, we would just construct the proof tree that we have to build anyway to come up with

the ultimate simplified variant, which is obviously just x. And while you're constructing that proof

tree, you might as well just keep track of the tree itself, abstract away all of the particulars

that don't matter. In this case, unfortunately, the only particular that doesn't matter is the

name of the arithmetic variable that you have. And then from that, take the leaves of the proof tree,

concatenate them into a conjunction, throw out the ones that are always true anyway. So those

you can ignore, and then store the result of the simplification in terms of the variables occurring

in that expression. Right? So that way, once I have actually fully computed the simplification of the expression one times zero plus x, I don't just learn the precise input-output pair for that particular expression, but I can also derive the general rule that whenever I have
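The generalization process just walked through can be sketched in code: simplify the expression while recording each rewrite step, then abstract the concrete variable name into a pattern variable and throw out the steps that hold regardless of the input. This is a toy illustration under assumed representations, not the lecture's actual implementation:

```python
# Toy explanation-based learning for expression simplification,
# in the spirit of the 1*0 + x example. Representation and rule
# format are illustrative assumptions.

def simplify(expr, trace):
    """Simplify expr (nested tuples), recording each rewrite step in trace."""
    if isinstance(expr, tuple):
        op, a, b = expr[0], simplify(expr[1], trace), simplify(expr[2], trace)
        if op == "*" and (a == 0 or b == 0):   # anything times zero is zero
            trace.append((("*", a, b), 0)); return 0
        if op == "+" and a == 0:               # zero plus anything is that thing
            trace.append((("+", a, b), b)); return b
        return (op, a, b)
    return expr

def generalize(trace, var):
    """EBL step: abstract the concrete variable name into a pattern variable
    and drop the steps that are always true anyway (those not mentioning var)."""
    def abstract(e):
        if e == var:
            return "?v"          # pattern variable replacing the particular name
        if isinstance(e, tuple):
            return tuple(abstract(p) for p in e)
        return e
    def mentions(e):
        return e == var or (isinstance(e, tuple) and any(mentions(p) for p in e))
    return [(abstract(lhs), abstract(rhs))
            for lhs, rhs in trace if mentions(lhs)]

trace = []
result = simplify(("+", ("*", 1, 0), "x"), trace)   # the 1*0 + x example
rules = generalize(trace, "x")
# result == "x"; rules == [(("+", 0, "?v"), "?v")]
# The step 1*0 -> 0 was dropped, since it is true regardless of the input.
```

Instead of only caching the pair (1*0 + x, x), we have extracted a rule that fires for any variable in place of x, which is exactly the "one step beyond memoization" being described.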

Part of a video series
Accessible via: open access
Duration: 01:24:11 min
Recording date: 2023-06-21
Uploaded: 2023-06-23 18:09:08
Language: en-US
