That was explanation-based learning.
And in a way, that's kind of how you learn from Zog grilling your food on a stick.
You have an example, you generalize a rule from it, and then you can apply it back down to your concrete situation.
And applying it is just what we usually do anyway: instantiation of variables.
Okay.
There is, however, a problem.
If you do that, if you do explanation-based learning naively and you meet Fernando at the airport,
you'll just as happily come up with the rule that every Brazilian is called Fernando.
And that's only very partially true in Brazil, I've been told.
So what we really want to do is to somehow distinguish between predicates here.
We know Fernando is a Brazilian, and we know his name, and we know the language he speaks.
We want to somehow capture the fact that you can generalize about the language,
but you can't generalize about the name.
So what really is at work here is that we have some knowledge, some background knowledge
about nationalities and languages, essentially.
So you know, say, in countries that are not Switzerland,
you know something like: everybody speaks their mother tongue.
And it's usually determined by your nationality.
Brazilians speak Portuguese, Argentinians speak Spanish, and Germans speak German.
And you could express that in something like this: if X has some nationality,
and Y has the same one, then the languages of X and Y are the same.
So if you know that Fernando is a Brazilian, and you know that Fernando speaks Portuguese,
then you can use this rule to conclude that every other Brazilian also speaks Portuguese.
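Written out as a first-order background axiom (one plausible formalization of the rule just described, not necessarily the lecturer's exact notation), this would read

\[ \forall x, y, n, l.\;\; Nationality(x, n) \land Nationality(y, n) \land Language(x, l) \Rightarrow Language(y, l) \]

Instantiating x with Fernando, n with Brazil, and l with Portuguese then yields Language(y, Portuguese) for every y with Nationality(y, Brazil).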
There are, of course, exceptions: people who speak multiple languages,
or Brazilians from the Amazon who speak all kinds of indigenous languages.
But by and large, that's the usual thing to do.
And rules like this are important for relevance.
We use these kinds of rules all the time in learning, to make determinations about what is worth learning.
So we're going to introduce a kind of a special syntax that basically says something like,
nationality determines language.
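The usual notation for such a determination, which is presumably the special syntax meant here, is

\[ Nationality(x, n) \succ Language(x, l) \]

read as "nationality determines language". In general, a determination P \succ Q roughly abbreviates the statement that any two objects that agree on P also agree on Q,

\[ \forall x, y, n, l.\;\; P(x, n) \land P(y, n) \land Q(x, l) \Rightarrow Q(y, l) \]

which is exactly the shape of the Brazilian-Portuguese axiom above.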
And the nice thing about these determinations is that they limit what the hypotheses can be.
In our Brazilian example, we're not even going to try to learn things that are not determined by nationality.
We have a determination of language by nationality, but we don't have a determination of name by nationality.
One can say things like "all Germans are called Müller", but that's not actually true.
It just might feel that way.
And of course, in real life, these determinations are sometimes a bit weak, right?
This one, or at least the underlying background knowledge axiom, is a bit of an overstatement.
But still, we're not going to use it to derive things logically.
We're going to use it to limit the hypothesis space: we're only going to be looking at the important features.
We usually have a relatively good feeling for this when we're learning things ourselves.
If we want to learn something about the language that somebody speaks, we're not even going to try any hypothesis that asks about the influence of the weekday,
because we know that you usually speak the same language every day of the week.
Nor the moon phase, or things like that, or the hairstyle of Donald Trump, which typically doesn't determine what language I speak.
It might drive me towards German in this case.
As humans, we kind of use these determinations to limit the kinds of hypotheses we build in the first place.
We're only going to specialize on things that we think might be relevant.
And that's exactly what we're going to do.
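As a rough sketch of how this could look in practice (the attribute names and the determinations list below are illustrative assumptions, not taken from the lecture), one can use the declared determinations to prune the attribute set before running an ordinary decision-tree learner:

```python
# Minimal sketch: use declared determinations to prune irrelevant attributes
# before learning. All names here are hypothetical, chosen for illustration.

def relevant_attributes(target, determinations):
    """Collect attributes that some determination declares relevant for the target."""
    return {attr for attrs, goal in determinations if goal == target for attr in attrs}

def restrict_examples(examples, attributes, target):
    """Keep only the relevant attributes (plus the target) in each example."""
    return [{k: v for k, v in ex.items() if k in attributes or k == target}
            for ex in examples]

# Hypothetical background knowledge: nationality determines language.
# Weekday, name, moon phase, hairstyles etc. are simply never mentioned,
# so the learner will not even consider hypotheses over them.
determinations = [({"nationality"}, "language")]

examples = [
    {"nationality": "Brazilian", "name": "Fernando", "weekday": "Tue", "language": "Portuguese"},
    {"nationality": "German", "name": "Mueller", "weekday": "Fri", "language": "German"},
]

attrs = relevant_attributes("language", determinations)        # {"nationality"}
reduced = restrict_examples(examples, attrs, "language")
# The reduced examples only mention nationality and language, so any standard
# decision-tree learner run on them can only split on the relevant attribute.
print(reduced)
```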