55 - Recap Clip 8.17: Artificial Neural Networks (Part 4) [ID:30458]

And we get an algorithm: compute forward for every example, then propagate the error backwards layer

by layer, which is why we actually have this layered design.

And then we've adjusted the weights for that one new example.

And then of course, you do that until you run out of examples.
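The loop just described (a forward pass, the error propagated back layer by layer, then a weight update for that one example) can be sketched in a few lines. This is a minimal illustration on the XOR function, a toy target chosen here because a single-layer perceptron cannot represent it; the layer sizes, learning rate, and number of passes are assumptions, not the lecture's actual setup:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
# XOR: a target a single-layer perceptron cannot express.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros(4)  # hidden layer
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros(1)  # output layer
alpha = 0.5                                          # learning rate

for epoch in range(5000):
    for x_i, y_i in zip(X, y):            # one example at a time
        # forward: compute activations layer by layer
        h = sigmoid(x_i @ W1 + b1)
        o = sigmoid(h @ W2 + b2)
        # backward: propagate the error back layer by layer
        delta_o = (o - y_i) * o * (1.0 - o)
        delta_h = (delta_o @ W2.T) * h * (1.0 - h)
        # adjust the weights for this one example
        W2 -= alpha * np.outer(h, delta_o); b2 -= alpha * delta_o
        W1 -= alpha * np.outer(x_i, delta_h); b1 -= alpha * delta_h

# after training, the network should fit the four XOR examples closely
out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
mse = float(np.mean((out - y) ** 2))
```

The inner loop is exactly the per-example (stochastic) update mentioned above: one forward pass, one backward pass, one weight adjustment, repeated until the examples run out.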


So we've basically briefly looked at the math in there just for reference,

just vector math.

And then of course, we're interested in evaluation.

So what we get are good learning curves.

With many examples, we of course get much better behavior on the restaurant data.

Remember, a single layer perceptron never got beyond 0.45 or something like this.

Because we couldn't express the right hypothesis.

But we're getting relatively good behavior here.

Not quite up to speed with decision trees.

Because there we had all these information theoretic pruning methods,

which is something we're not actually putting into the neural network.

All this information theory is kind of guidance and heuristics

that come from the outside, knowledge that we essentially

hardwired into the learning procedure.

And that is actually paying off here, giving us better performance.

Whereas the neural network, to be fair, is something that essentially

knows nothing about the domain.

Not even information theory stuff.

So this is not too surprising.

And just to give you an overview of what the state of the art does.

So a standard problem is handwriting recognition, for instance for the postal service.

To read the numbers in the addresses or in the Postleitzahl, the postal code, they need to know about numbers.

And these are typical data sets.

They do have a couple more for learning.

And we have two problems to solve here, or two sources of noise.

One is that we have digitization problems.

If your ballpoint pen runs out, then you get very faint data in some places.

And of course, the natural variance of how people write numbers.

Which is pretty terrible.

If you only saw this, would you know that it is a seven?

This, by the way, is US data, where you don't even have the little bar we add in German.

That one is easier to recognize than this one.

So what do people do?

OK.

And you need something like, if you have a three-layer network with, I don't know,
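For reference, a multi-layer network of this kind can be tried on a small digit-recognition task in a few lines. This sketch uses scikit-learn's bundled 8x8 digits data and a single hidden layer of sigmoid units; the data set and all parameter choices are assumptions for illustration, not the postal-service data or architecture from the lecture:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)   # 1797 images, 8x8 pixels each
X = X / 16.0                          # scale pixel values to [0, 1]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# One hidden layer of sigmoid ("logistic") units, trained by backpropagation.
clf = MLPClassifier(hidden_layer_sizes=(64,), activation="logistic",
                    max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)  # fraction of held-out digits classified correctly
```

Even such a small network handles both sources of noise mentioned above (digitization artifacts and natural variance in handwriting) reasonably well on this toy data.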

Part of a chapter:
Recaps

Accessible via

Open access

Duration

00:08:24 min

Recording date

2021-03-30

Uploaded on

2021-03-31 11:47:27

Language

en-US

Recap: Artificial Neural Networks (Part 4)

Main video on the topic in chapter 8, clip 17.
