17 - 25.8. Artificial Neural Networks (Part 4) [ID:30385]

So here's what you actually do.

First we compute the activations by moving forward through the network with the current set of weights. At the output layer we compute the deltas, these here, by comparing against the ideal outputs.

Then we go layer by layer down: we propagate the delta values back to the previous hidden layer, and then we update the weights between the layers.

Okay?

Here's the algorithm.

It does exactly what we've said now with the equations we derived above.

So we have three phases, right?

We first randomly initialize the weights, then we go over all the examples.

What do we do?

Well, we, from the input layer, compute the outputs, layer by layer by layer by layer,

and then when we're up at the output, we know, for the example, we know what the y's are,

the ideal output, and then we back propagate the deltas back down, and we do that until

we're happy.

Right?

Whatever happy means.
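Those three phases can be sketched in code. This is a minimal one-hidden-layer network in NumPy with sigmoid units; a fixed epoch budget stands in for "until we're happy", and all the names here are my own illustration, not from the slides:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_backprop(X, Y, n_hidden=4, lr=0.5, epochs=2000, seed=0):
    """Train a one-hidden-layer network by back-propagation."""
    rng = np.random.default_rng(seed)
    # Phase 1: randomly initialize the weights.
    W1 = rng.normal(scale=0.5, size=(X.shape[1], n_hidden))
    W2 = rng.normal(scale=0.5, size=(n_hidden, Y.shape[1]))
    for _ in range(epochs):              # "until we're happy": fixed budget here
        for x, y in zip(X, Y):
            x, y = x[None, :], y[None, :]
            # Phase 2: forward pass, layer by layer, with current weights.
            h = sigmoid(x @ W1)
            out = sigmoid(h @ W2)
            # Phase 3: output deltas, propagated back, then weight updates.
            delta_out = (y - out) * out * (1 - out)
            delta_h = (delta_out @ W2.T) * h * (1 - h)
            W2 += lr * h.T @ delta_out
            W1 += lr * x.T @ delta_h
    return W1, W2

def predict(X, W1, W2):
    return sigmoid(sigmoid(X @ W1) @ W2)
```

The sigmoid derivative shows up as the `out * (1 - out)` and `h * (1 - h)` factors in the delta terms, exactly as in the equations derived above.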

So what might happen in this case?

Any ideas?

When would you stop?

Yes?

That's one way.

Yes?

Are those?

Your battery dies.

You run out of time.

That's a good reason for stopping.

Or you do things like k-fold cross-validation, right?

Where you kind of do k-fold holdout cross-validation, and at some point you see, oops, my error's

going up again.

Right?

You've reached a minimum.

All those kind of things you could do.
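That last criterion, stopping once the held-out error starts going up again, is the early-stopping idea, and it can be sketched generically. The `train_step` and `val_error` callables here are placeholders for whatever your training loop and validation-set evaluation look like:

```python
def train_with_early_stopping(train_step, val_error, max_epochs=1000, patience=3):
    """Train until the held-out validation error starts going up again.

    train_step() performs one epoch of weight updates; val_error() returns
    the current error on a held-out validation set. Both are placeholders.
    """
    best = float("inf")
    bad_epochs = 0
    for epoch in range(max_epochs):
        train_step()
        err = val_error()
        if err < best:
            best, bad_epochs = err, 0
        else:
            bad_epochs += 1          # "oops, my error's going up again"
            if bad_epochs >= patience:
                break                # we passed the minimum; stop here
    return best, epoch
```

The `patience` parameter just avoids stopping on a single noisy uptick; with patience 1 you stop the first time the validation error fails to improve.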

Which is why in this algorithm I have some kind of a stopping condition.

When you stop really depends on your application.

Say you have to learn very, very fast because the exam begins.

Right?

You forgot the exam.

It's 8 o'clock in the morning.

The exam is at 2.

Then you'd better be done with learning by 2.

Because if you don't show up for the exam, you're going to lose anyway.

So something like this.

So this is the central algorithm.

Where's my network?

No.

I had a network at some point.

There it is.

Right?

From the inputs, the thing to remember is here.

Part of a chapter:
Chapter 25. Learning from Observations

Access: Open access

Duration: 00:21:09 min

Recording date: 2021-03-30

Uploaded: 2021-03-30 17:46:34

Language: en-US
Back-Propagation is defined and an algorithm is given. Derivation and properties of Back-Propagation are discussed. Also, there is a short summary for artificial neural networks. 
