27.1. Logical Formulations of Learning (Part 2)

So I would like to show you a couple of things here about learning.

There are three variants that have all kinds of nice three- or four-letter acronyms, which

you don't have to memorize.

And they really only differ in the way we solve or even express this learning equation

here.

Learning in logic means: if you have a hypothesis, then it had better explain, together with the data, what the classifications are.

It's just another way, a logic-based way, of writing down what a classifier should do.
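As a formula, that entailment constraint reads (writing ⊨ for logical entailment):

```latex
% Entailment constraint for inductive learning:
\[ \mathit{Hypothesis} \land \mathit{Descriptions} \models \mathit{Classifications} \]
```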

This is exactly what we had with these point sets.

If we want to say, oh, some of them are bad and some of them are good, then we have the hypothesis that everything to the left of here is good and everything to the right of here is bad.

That's what this is.

And it predicts or explains, which is the same thing, the classifications.
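As a sketch of that picture in logic, a threshold hypothesis for a one-dimensional point set could look like this (the predicate names and the threshold θ are illustrative, not from the lecture):

```latex
% A threshold hypothesis for a one-dimensional point set:
\[ \mathit{Hypothesis}:\quad
   \forall x.\; \mathit{Good}(x) \leftrightarrow \mathit{Value}(x) \le \theta \]
% Descriptions such as Value(x1) = 0.2 and Value(x2) = 0.9 then
% entail the classifications Good(x1) and \lnot Good(x2).
```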

In the variant that we call explanation-based learning, we kind of split this learning entailment

equation into two.

One is the one we've already seen, that the hypothesis plus the description of the situation

should predict the classification.

But we also want the hypothesis itself to be true.
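Written out, explanation-based learning thus imposes two entailment constraints (this is the standard textbook formulation, with Background for the background knowledge):

```latex
% Explanation-based learning: two entailment constraints.
\begin{align*}
  \mathit{Hypothesis} \land \mathit{Descriptions} &\models \mathit{Classifications}\\
  \mathit{Background} &\models \mathit{Hypothesis}
\end{align*}
```

The second constraint says the hypothesis must follow from what we already know, which rules out the kinds of cheating discussed next.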

You can always explain everything by just saying A and not A. Remember?

From A and not A, we can deduce anything.

So I have told you, or we discussed yesterday, that solving this equation is very easy by

just saying, oh, the hypothesis is just the classification, which sounds like cheating,

right?

And why does it rain?

Because it rains.

There's another way of cheating in logic, as you know.

You can always kind of utter a contradiction, and then everything follows.

And kind of the problem goes away in a puff of logic.

You're not supposed to do that, right?

Because it doesn't really help in learning.

So of course, you can learn everything by just making true and false coincide.

That's not what we want.

And explanation-based learning really says one way of getting around that is by saying,

well, the hypothesis must be consistent with the background knowledge.

You can explain that it rains today by saying 2 is equal to 3.

In the background knowledge, you always have 2 is not equal to 3.

So you have created a contradiction that way.

So there are various ways of cheating.
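To make the two cheats concrete (my notation, not from the lecture):

```latex
% Cheat 1: take the classifications themselves as the hypothesis;
% the entailment constraint holds trivially, but nothing is learned.
\[ \mathit{Hypothesis} := \mathit{Classifications} \]
% Cheat 2: take a contradictory hypothesis; by ex falso quodlibet
% it entails every sentence, including the classifications.
\[ A \land \lnot A \models \varphi \quad\text{for any sentence } \varphi \]
```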

And really, what we want to do here is to derive consequences of our background knowledge

that explain the classifications we're seeing.

And what we're really achieving here is a kind of compilation effect.

So we're not inventing anything completely new.

That would be dangerous.

But we have all these background knowledge theories.

And they have consequences we might not be aware of.

In math, you have a couple of axioms about natural numbers, the Peano axioms.

And one of the consequences of those is, hopefully, something about prime triples.

It's not obvious.

But that, of course, explains other things.
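For reference, the Peano axioms the lecture alludes to, loosely stated (with s the successor function; the induction axiom as given is second-order):

```latex
% The Peano axioms, loosely stated:
\begin{align*}
  & \forall n.\; s(n) \neq 0\\
  & \forall m\, n.\; s(m) = s(n) \Rightarrow m = n\\
  & \forall P.\; \bigl(P(0) \land \forall n.\,(P(n) \Rightarrow P(s(n)))\bigr)
      \Rightarrow \forall n.\; P(n)
\end{align*}
```

A fact such as "the only prime triple of the form (p, p+2, p+4) is (3, 5, 7)" is presumably the kind of non-obvious consequence meant here.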

Part of chapter: Chapter 27. Knowledge in Learning


Explanation of different approaches to adding background knowledge to learning.
