48 - Recap Clip 8.10: Computational Learning Theory (Part 2) [ID:30451]

And to illustrate that, I've essentially shown you this example of decision lists, decision lists being, in a way, a notational variant of decision trees that allows me to make nice size bounds. So you can turn the decision tree learning problem, which we know needs to see all examples, into one over restricted subspaces of all decision trees, namely the ones we can bound in size: by the length of the list and by the number of conjuncts in each test. And if we do that, we actually get better results.
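
(A hedged aside, not spelled out in the clip itself: the usual counting argument for k-decision lists over n Boolean attributes bounds the number of distinct tests, and hence the hypothesis space, roughly as

  |Conj(n,k)| = \sum_{i=0}^{k} \binom{2n}{i} = O(n^k), \qquad |k\text{-DL}(n)| \le 3^{|Conj(n,k)|}\,|Conj(n,k)|!

so \ln|k\text{-DL}(n)| = O(n^k \log(n^k)), which is polynomial in n for fixed k.)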

So here we have what we call the sample complexity: essentially, this term here cancels out and we get a roughly linear number of examples that we have to show the algorithm before we know that the probability is very small that one of the seriously bad hypotheses has actually survived until now. Okay? We're doing computational learning theory here, so don't be shocked to see big O's; it's all about the complexity of sample sizes. And we want the sample complexity to be small, meaning we want to be able to learn without having to show lots of examples. In a way, this is the essence of PAC: how many examples do I have to show before I can be reasonably sure that I'm not left with any bad hypotheses?
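
(For reference, a hedged reconstruction of the formula the recap points at: the generic PAC bound on the number m of examples needed so that, with probability at least 1 - \delta, no hypothesis with error greater than \varepsilon survives, is

  m \ge \frac{1}{\varepsilon}\left(\ln\frac{1}{\delta} + \ln|\mathcal{H}|\right),

and plugging in the size bound for k-decision lists as |\mathcal{H}| gives a sample complexity that is polynomial in the number of attributes n.)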

Okay, and then we looked at an algorithm for learning decision lists and saw that it is almost as good as decision tree learning, but much simpler.
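
A minimal sketch of the greedy decision-list learner the recap alludes to, under the assumption that examples are dicts of attribute values; the function and variable names here are illustrative, not taken from the lecture:

  from itertools import combinations

  def matches(test, example):
      # A test is a tuple of (attribute, value) literals; an example is a dict.
      return all(example[a] == v for a, v in test)

  def learn_decision_list(examples, labels, attributes, k=2):
      # Greedy sketch: repeatedly find a conjunction of at most k literals that
      # matches a non-empty, uniformly labelled subset of the remaining examples,
      # emit it as a rule, and drop the matched examples.
      remaining = list(zip(examples, labels))
      rules = []
      while remaining:
          found = False
          for size in range(1, k + 1):
              for attrs in combinations(attributes, size):
                  for ex, _ in remaining:
                      test = tuple((a, ex[a]) for a in attrs)
                      matched = [lab for e, lab in remaining if matches(test, e)]
                      if matched and len(set(matched)) == 1:
                          rules.append((test, matched[0]))
                          remaining = [(e, lab) for e, lab in remaining
                                       if not matches(test, e)]
                          found = True
                          break
                  if found:
                      break
              if found:
                  break
          if not found:
              return None  # no consistent k-decision list exists for these examples
      return rules

To classify a new example, one would scan the returned rules in order and output the label of the first matching test, with some default label if none matches.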

Part of a chapter: Recaps
Access: open access
Duration: 00:03:04 min
Recording date: 2021-03-30
Uploaded on: 2021-03-31 11:08:16
Language: en-US

Recap: Computational Learning Theory (Part 2)

The main video on this topic is chapter 8, clip 10.
