25.8. Artificial Neural Networks (Part 2)

Let's do the math of the networks.

So we have a neural network.

Remember, that was just any directed graph of computational units, that is, of McCulloch-Pitts units.

And we're particularly interested in the easy case where these graphs are acyclic.

Makes the math very easy.

Just like in circuits, if the circuits are indeed DAGs, then they're deterministic and

have all these good properties.

They remember nothing; they essentially just compute Boolean functions.
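
To make this concrete, here is a minimal Python sketch (an illustration added here, not something from the lecture) of a single McCulloch-Pitts unit: a threshold unit that computes a Boolean function of its weighted inputs. The particular weights and threshold, which make it compute Boolean AND, are just an assumed example.

    def mcculloch_pitts_unit(inputs, weights, threshold):
        """A McCulloch-Pitts unit: it fires (outputs 1) exactly when the
        weighted sum of its Boolean inputs reaches the threshold."""
        activation = sum(w * x for w, x in zip(weights, inputs))
        return 1 if activation >= threshold else 0

    # Illustrative choice of weights and threshold: this particular unit
    # computes the Boolean AND of its two inputs.
    for x1 in (0, 1):
        for x2 in (0, 1):
            print(x1, x2, mcculloch_pitts_unit((x1, x2), weights=(1, 1), threshold=2))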

So we're interested in what is called feed-forward networks.

And we're going to, in this class, only look at feed-forward networks.

Even though it is relatively clear that our brains are not feed-forward networks, which

you can see by the fact that you can remember things.

Feed-forward networks can never remember anything.

You somehow need loops for that.

But let's do only the feed-forward ones at the moment.

Controlling recurrent neural networks, doing interesting things with them, is something

that has only really started in the last 10 years.

They can do wonderful things, but to get them to do what you want them to do is really hard.

And recurrent networks are something that's still very much a research topic.

So we're going to look at feed-forward ones.

And typically, we organize the networks in layers.

Not every directed acyclic graph is actually organized in layers, but we organize ours that way.

The idea here is that you have a graph that has an input layer and an output layer.

And the only thing you allow is connections between consecutive layers.

You don't allow connections going backwards; that wouldn't be acyclic.

But you also don't allow shortcuts that skip layers, and so on.

You go from layer to layer to layer.

The only reason for that is that it makes talking about the network much simpler.

We only do it because it makes understanding the network simpler.

Good.

In particular, we'll always assume we have at least an input layer and an output layer.

And there might be any number, including zero, of hidden layers.

The input layer, you can control by setting the inputs.

That's what you do with the examples.

And the output layer, you can actually observe by reading off the outputs.

And anything in between, you can't directly observe.

You can only say anything about the input-output behavior.
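
As a rough sketch of how such a layered feed-forward pass works (my own illustration, not code from the lecture): the input layer is set directly from the example, each following layer is computed only from the previous one, and only the output layer is read off. The layer sizes, random weights, and step activation below are arbitrary placeholders.

    import random

    def step(a):
        # Threshold activation, as in a McCulloch-Pitts unit.
        return 1.0 if a >= 0.0 else 0.0

    def forward(layers, x):
        """Propagate an input vector x through a layered feed-forward network.

        `layers` is a list of weight matrices; layers[k][j] holds the weights
        from all units of layer k to unit j of layer k+1.  Connections exist
        only between consecutive layers: no cycles, no shortcuts.
        """
        activation = x                      # input layer: set directly from the example
        for weight_matrix in layers:        # hidden layers: computed, never observed directly
            activation = [step(sum(w * a for w, a in zip(row, activation)))
                          for row in weight_matrix]
        return activation                   # output layer: the only thing we read off

    # Made-up example: 3 inputs, one hidden layer of 2 units, 1 output unit.
    random.seed(0)
    net = [[[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)],
           [[random.uniform(-1, 1) for _ in range(2)] for _ in range(1)]]
    print(forward(net, [1.0, 0.0, 1.0]))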

Well, actually, you can observe the inner layers nowadays.

You just open up your cranium, and then you stick little electrodes into certain neurons,

and then you can actually read off things.

You can imagine that the people who get that done to them don't like it very much.

People don't even like it when we do it to cats or mice or something like this.

Good.

Flies they're OK with, but anything that looks cute, you don't want to stick wires into its brain.

OK, so these are, for all intents and purposes, hidden to us.

But of course, interesting stuff happens.

OK, well, that's what we're going to look at: layered acyclic feed-forward networks.

If you want to do anything more interesting, go and Google for Hopfield networks or Boltzmann

machines, which are basically more interesting recurrent networks.

But you have to make quite strong assumptions to be able to understand at all what they're doing.
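
As a very rough illustration of why loops give you memory (a sketch added here, not the lecturer's; the pattern, the Hebbian weights, and the update schedule are all made up for the example), a tiny Hopfield-style network can be written like this: each unit repeatedly looks at all the other units through feedback connections, and the network settles back into a stored pattern.

    def hopfield_sweep(weights, state):
        """One asynchronous sweep of a Hopfield network: each unit in turn is
        set to the sign of its weighted input from the other units.  The
        feedback loops are what let the network act as a memory."""
        for i in range(len(state)):
            field = sum(weights[i][j] * state[j] for j in range(len(state)) if j != i)
            state[i] = 1 if field >= 0 else -1
        return state

    # Made-up example: weights store the pattern [1, -1, 1] via the Hebbian
    # rule w_ij = p_i * p_j (and no self-connections).
    pattern = [1, -1, 1]
    W = [[pattern[i] * pattern[j] if i != j else 0 for j in range(3)]
         for i in range(3)]
    state = [1, 1, 1]            # a corrupted version of the stored pattern
    for _ in range(5):
        state = hopfield_sweep(W, state)
    print(state)                 # settles back to the stored pattern [1, -1, 1]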

Part of a chapter:
Chapter 25. Learning from Observations

Access: Open access

Duration: 00:20:59 min

Recording date: 2021-03-30

Uploaded on: 2021-03-30 17:26:32

Language: en-US

Explanation of different structures for neural networks. Also, an example of feed-forward neural networks and the expressiveness of perceptrons are discussed.
