OK, so we're first going to look into a part of the fundamentals
of supervised learning.
And that's called inductive learning.
Induction is a form of reasoning that is well suited for actually
learning things.
And I would like to give an example.
Essentially, inductive learning is what goes on in science.
Since we're in Erlangen, I would like to give an example
with Ohm's law.
Remember Ohm, the guy from Erlangen: you have three quantities.
You can measure the current,
you can measure the resistance,
and you can measure the voltage.
And what did Mr. Ohm do?
Well, he ran experiments with different resistances
and different voltages.
And then he said, well, I have examples x1 = (U1, R1, I1), x2 = (U2, R2, I2), and so on.
And I'm sure he made a lot of those.
And at some point, he says: aha!
U = R · I.
OK?
That's learning.
That's inductive learning: we look at examples, say,
given by experiments, or by being told, or whatever.
And then we come up with a function
that hypothesizes, in this case,
how these quantities relate to each other.
And the wonderful thing is the following:
once you arrive at this, you can make predictions.
Now, you don't have to measure all three values.
It's completely sufficient to say: well, if the resistance is 3
and the current is 6, then the voltage must be 18.
We don't have to measure it.
That's learning.
Knowing stuff you haven't been told.
OK.
And that's really the fundamental idea here.
All right?
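To make this concrete, here is a minimal sketch (my own illustration, not from the lecture) of inducing Ohm's law from made-up measurements: we assume the hypothesis U = c · R · I with an unknown constant c, estimate c by least squares, and then predict a voltage we never measured.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements (R in ohms, I in amperes, U in volts),
# with a little noise as in a real experiment.
R = np.array([1.0, 2.0, 4.0, 5.0, 10.0])
I = np.array([2.0, 3.0, 1.5, 2.0, 0.5])
U = R * I + rng.normal(0.0, 0.05, size=R.shape)

# Assumed hypothesis: U = c * R * I.
# Least-squares estimate of c; for ideal data, c = 1.
x = R * I
c = float(x @ U) / float(x @ x)

# Prediction for inputs we never measured: R = 3, I = 6 -> U should be 18.
print(f"estimated c = {c:.3f}")
print(f"predicted U for R=3, I=6: {c * 3 * 6:.2f}")
```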
So an inductive learning problem is
to find a hypothesis h for a function f,
the hidden underlying relationship.
Right?
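As a sketch of that definition (again my own illustration, with assumed data and a polynomial hypothesis space), one can search the hypothesis space from simple to complex and, following Ockham's razor, keep the simplest hypothesis that is consistent with all the examples:

```python
import numpy as np

# Hypothetical examples (x, f(x)) from an unknown target function f;
# here the hidden relationship is secretly f(x) = 2x + 1.
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = np.array([1.0, 3.0, 5.0, 7.0, 9.0])

# Hypothesis space H: polynomials of degree 0, 1, 2, ...
# Ockham's razor: prefer the simplest h that agrees with all examples.
for degree in range(4):
    coeffs = np.polyfit(xs, ys, degree)
    if np.allclose(np.polyval(coeffs, xs), ys):
        print(f"simplest consistent h: degree {degree}, coefficients {coeffs}")
        break
```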