We looked only at feedforward networks, which are acyclic, and we restricted ourselves to a layered layout because that keeps the network topology simple enough that we can understand the math.
There are alternatives to that, extremely attractive alternatives, if you know lots
of math.
The math of recurrent networks is much more interesting. You get everything you know from sequential circuits, only with a vengeance: you have state, you have memory, and things like timing become critical issues.
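A minimal way to see where this state comes from (the notation here is my own, not from the clip): a recurrent unit keeps a hidden state $h_t$ that is fed back at the next time step,

$$ h_t = g\left(W x_t + U h_{t-1}\right), \qquad y_t = V h_t, $$

so the output at time $t$ can depend on the entire input history, which is exactly the memory and timing behavior mentioned above.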
We've looked at one-layer perceptrons, and they basically have a computational behavior that we already know from linear regression.
That is because we can decouple a one-layer network into many single-output networks, and each of them is just doing linear regression.
Every output computes something like this; you usually have more than two inputs, which is the only thing I can really show here.
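A plausible reconstruction of the per-output computation the clip points at (the symbols are my choice, not taken from the slide): each output $y_j$ of a one-layer network is a weighted sum of the inputs passed through the activation $g$,

$$ y_j = g\left(\sum_i w_{ji}\, x_i + b_j\right), $$

and since $y_j$ depends only on its own weights $w_{j\cdot}$, the outputs decouple; with a linear $g$ this is exactly linear regression.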
Whatever you can do with single layers you can also do with multiple layers; nothing big happens there, except that the learning procedure becomes a little more difficult than linear regression.
One-layer networks you can train with simple linear regression or classification, whereas for these multi-layer networks you have to do a bit more, which turned out to be backpropagation, the main learning algorithm.
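To make that concrete, here is a minimal sketch of backpropagation for a two-layer network (sigmoid hidden layer, linear output, squared-error loss). All names, sizes, and the toy data are illustrative assumptions, not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy regression data: 100 examples, 4 inputs, 1 target (purely illustrative).
X = rng.normal(size=(100, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]))[:, None]

# Weights of the two layers.
W1 = rng.normal(scale=0.1, size=(4, 8))   # input  -> hidden
W2 = rng.normal(scale=0.1, size=(8, 1))   # hidden -> output
lr = 0.05

for step in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1)        # hidden activations
    y_hat = h @ W2             # linear output
    err = y_hat - y            # residual

    # Backward pass: push the error back through both layers (chain rule).
    grad_W2 = h.T @ err / len(X)
    delta_h = (err @ W2.T) * h * (1.0 - h)   # derivative of the sigmoid
    grad_W1 = X.T @ delta_h / len(X)

    # Gradient-descent update.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print("mean squared error after training:",
      float(np.mean((sigmoid(X @ W1) @ W2 - y) ** 2)))
```

For a one-layer network the backward pass collapses to the single gradient step of linear regression, which is why the single-layer case needs no extra machinery.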
We talked about single-layer perceptrons, and the interesting bit here really is that
single-layer perceptrons aren't Turing complete.
They can't even do an XOR.
In particular, if you can't do an XOR, you can't do a half adder, and if you can't do
a half adder, you can't add.
That's already a relatively big failure.
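A small illustration of why that chain matters: a half adder needs XOR for the sum bit and AND for the carry bit, and while a single-layer perceptron can represent AND, it cannot represent XOR, because the inputs where XOR is 1 are not linearly separable from those where it is 0. The snippet below is only a sketch of the logic, not anything from the lecture.

```python
# Half adder built from XOR and AND; the function name is just for illustration.
def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum_bit, carry_bit) for two input bits."""
    return a ^ b, a & b   # sum = XOR, carry = AND

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```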
Recap: Artificial Neural Networks (Part 2)
Main video on the topic in chapter 8 clip 15.