7 - 25.5. Evaluating and Choosing the Best Hypothesis (Part 1) [ID:30375]


Um, so how do we evaluate how good our hypothesis is...

in practice!

Where we don't know what the mechanism is.

So, it's again the problem of predicting the

future.

Predicting the past is relatively easy, predicting

the future is really, really hard.

And um, if we even want to attempt this we have to make

an assumption, namely that the mechanism we want to predict

isn't really going to change on us at some point.

Right, if we have data about some function and directly after our data ends, the mechanism

underlying this function completely changes, we have no basis to make predictions whatsoever.

So we have to make some kind of a stationarity assumption for the probability distribution underlying

this.

So what does that mean?

What we do is we model that:

every future example, which is just an (x, y) pair, we treat as a random variable.

So we have random variables for the future and we have evidence for the past.

And now we know what that is.

One of the things we want to have is independent events, that there's no

dependence between the examples, because dependent examples would give us redundant data, which would give us

one of these slow growth curves.

And we want to have that the priors don't change, because we want to get at the probability

distribution.

So we want to have independent and identically distributed data.

If these are not met, there's nothing we can do.

We can't predict when we have sudden changes.

And that's what we call independent and identically distributed, or IID.

So we're going to assume that everything we do for the moment is IID, because otherwise

we have no chance.
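The IID/stationarity assumption can be sketched in code: past (evidence) and future examples are all drawn independently from one fixed distribution. A minimal sketch, where the linear mechanism and the noise level are made-up illustrations, not anything from the lecture.

```python
import random

# IID sampling sketch: every example (x, y) is drawn independently from
# the same fixed distribution P(X, Y). The mechanism never changes
# between the past examples and the future ones.
random.seed(42)

def draw_example():
    x = random.uniform(0, 1)           # same prior for every example
    y = 2 * x + random.gauss(0, 0.1)   # same (made-up) mechanism every time
    return (x, y)

# Evidence about the past and samples from the future come from the
# identical process -- that is exactly the stationarity assumption.
past = [draw_example() for _ in range(100)]
future = [draw_example() for _ in range(10)]
```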

A typical example is you have a die throw, and whether it's loaded or not, you can actually

make predictions about the die unless the die changes.

If you have a variably loaded die

and you can construct such a thing,

then there might be somebody with a remote control,

whom we know nothing about,

and they just basically change

the mechanism as we go along and then we

cannot make any predictions.
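The die example can be simulated: as long as the load is fixed, observed frequencies converge to the true probabilities, so the past predicts the future. A minimal sketch; the particular load values are invented for illustration.

```python
import random

# A stationary loaded die: the face probabilities are fixed, so the
# relative frequencies we observe converge to them and prediction works.
random.seed(0)
faces = [1, 2, 3, 4, 5, 6]
probs = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]  # face 6 is (hypothetically) loaded

rolls = random.choices(faces, weights=probs, k=10_000)
freq6 = rolls.count(6) / len(rolls)
print(freq6)  # close to 0.5 -- the past is informative about the future

# If somebody with a remote control changed `probs` between our observed
# rolls and the future rolls, this estimate would be worthless.
```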

So, if we have an IID problem,

then we can actually define what the error rate is.

Essentially, we have a hypothesis h,

and we define the error rate of h to be the fraction of errors.

Essentially, wherever h differs from the true function f,

we count those cases

and divide by the sample size.
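That definition translates directly into code: count the examples where the hypothesis disagrees with the observed label and divide by the sample size. A minimal sketch with a toy hypothesis and toy data of my own invention.

```python
def error_rate(h, examples):
    """Fraction of (x, y) pairs on which the prediction h(x) differs from y."""
    errors = sum(1 for x, y in examples if h(x) != y)
    return errors / len(examples)

# Toy usage: hypothesis "x is even" against some hand-made labels.
examples = [(0, True), (1, False), (2, True), (3, True)]
h = lambda x: x % 2 == 0
print(error_rate(h, examples))  # h is wrong only on (3, True) -> 0.25
```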

Now, what you really want to understand is that

a low error rate on our training set

doesn't mean that our hypothesis generalizes well.
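This point is easy to demonstrate with a held-out test set: a hypothesis that simply memorizes the training data has zero training error but fails on unseen examples. A minimal sketch; the noisy dataset and the "memorizer" hypothesis are invented for illustration.

```python
import random

# Low training error does not imply good generalization.
random.seed(1)

def true_f(x):
    return x > 0.5

# Made-up dataset with 20% label noise, split into train and test.
xs = [random.random() for _ in range(200)]
data = [(x, true_f(x) if random.random() > 0.2 else not true_f(x)) for x in xs]
train, test = data[:150], data[150:]

# "Memorizer": perfect on the training set, guesses False elsewhere.
memory = {x: y for x, y in train}
def memorizer(x):
    return memory.get(x, False)

def err(h, examples):
    return sum(h(x) != y for x, y in examples) / len(examples)

print(err(memorizer, train))  # 0.0 -- looks perfect on the training set
print(err(memorizer, test))   # much worse on unseen examples
```

This is the motivation for the holdout and cross-validation methods mentioned in this lecture: always measure error on data the hypothesis has not seen.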

Part of a chapter:
Chapter 25. Learning from Observations

Accessible via

Open access

Duration

00:22:00 min

Recording date

2021-03-30

Uploaded on

2021-03-30 16:36:33

Language

en-US

Explanation of Error Rates and Cross-Validation and how they work. Also, an algorithm for Model Selection is given.
