So, welcome back to AI2.
Very importantly, something I've completely forgotten the last couple of times.
Now I actually have the evaluation forms with me.
So, could you just pass them around?
And actually, please do the evaluation soon, so that it's still on time.
I don't really remember what the deadline is, so I apologize for rushing you.
Yeah.
So, we were talking about artificial neural networks as an AI technique that has been quite
popular in the last decades, and as a kind of technique that promises a lot of things
that are sexy in computer science.
It promises to be a radically parallel computation technique.
It promises to be bio-inspired.
It promises to just basically give you intelligence like this.
So this is what we did.
So it's an interesting topic even though the techniques are relatively simple.
The idea is that we have simple computational devices, neurons, that do simple stuff, but
we have lots of them, and together something emerges: intelligence arises.
Going from biological neural networks, we go to artificial neural networks by essentially
modeling the units.
We model the units with a couple of inputs, a couple of outputs.
Computation happens via a weighted sum of the inputs, which we pass through a threshold
function: if the threshold is passed, we give an output signal, otherwise not, and that
signal in turn excites the other neurons or not.
Here we essentially have a perceptron, which is nice, because perceptrons are things we
can adjust: it has all these weights in the weighted sum, and so we know how to deal with it.
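To make that unit model concrete, here is a minimal Python sketch of a single perceptron unit, a weighted sum of the inputs passed through a threshold function; the weight and threshold values are illustrative assumptions, not anything from the slides.

```python
# Minimal sketch of a single perceptron unit: a weighted sum of the
# inputs passed through a step threshold function (illustrative values).

def step(x, threshold=0.0):
    """Threshold function: fire (1) if the weighted sum passes the threshold."""
    return 1 if x >= threshold else 0

def perceptron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs; the threshold then decides the output."""
    weighted_sum = sum(w * i for w, i in zip(weights, inputs))
    return step(weighted_sum + bias)

# Example: a unit with two inputs and hand-picked weights.
print(perceptron([1, 0], weights=[0.6, 0.6], bias=-0.5))  # -> 1 (fires)
print(perceptron([0, 0], weights=[0.6, 0.6], bias=-0.5))  # -> 0 (stays silent)
```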
The only new contribution over linear classification here is really that we're taking lots of them.
So the network aspect is the interesting bit.
The neurons alone can do things like AND and OR and NOT, but, as we've seen, not XOR.
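To illustrate that point, the following sketch hand-picks weights and biases that realize AND, OR, and NOT with a single threshold unit; the particular numbers are my own choice, and no such choice exists for XOR, since XOR is not linearly separable.

```python
# A single threshold unit can realize AND, OR, and NOT with suitable
# weights (the numbers below are just one possible choice); no choice
# of weights realizes XOR, since XOR is not linearly separable.

def unit(inputs, weights, bias):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

AND = lambda a, b: unit([a, b], [1, 1], -1.5)   # fires only if both inputs fire
OR  = lambda a, b: unit([a, b], [1, 1], -0.5)   # fires if at least one input fires
NOT = lambda a:    unit([a],    [-1],    0.5)   # inverts its single input

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b))
print("NOT 0:", NOT(0), "NOT 1:", NOT(1))
```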
So we looked only at feed-forward networks, which are acyclic, and we've restricted ourselves
to a layered layout, because that makes the network topology sufficiently easy that
we understand the math.
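As a sketch of what such a layered feed-forward computation looks like, the following numpy snippet pushes an input through two layers of threshold units; the weight values are hand-picked for illustration (this particular wiring happens to compute XOR, which a single unit cannot).

```python
import numpy as np

# Sketch of a layered feed-forward network: signals flow strictly from
# one layer to the next, so the computation is a simple loop over layers.
# The weight matrices and biases below are hand-picked illustrative values.

def threshold(v):
    return (v >= 0).astype(float)

layers = [
    np.array([[ 1.0,  1.0],      # hidden layer: 2 inputs -> 2 units
              [-1.0, -1.0]]),
    np.array([[ 1.0,  1.0]]),    # output layer: 2 hidden units -> 1 output
]
biases = [np.array([-0.5, 1.5]), np.array([-1.5])]

def feed_forward(x):
    activation = np.asarray(x, dtype=float)
    for W, b in zip(layers, biases):
        activation = threshold(W @ activation + b)  # weighted sum, then threshold
    return activation

# This particular two-layer wiring computes XOR of the two inputs.
for x in ([0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]):
    print(x, "->", feed_forward(x))
```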
There are alternatives to that, extremely attractive alternatives if you know lots of
math.
The math of recurrent networks is much more interesting, and you can do all the things
you know from circuits, only with a vengeance: you have state, you have memory, and
things like timing become critical issues.
We've looked at one-layer perceptrons, and they basically have a computational behavior
that we already know from linear regression, and that is because we can decouple a one-layer
network into lots of single-output networks, and those are just doing linear regression.
So every output computes something like this, only you usually have more than two inputs,
which is all I can really show here.
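The decoupling can be seen directly in a small sketch: computing the whole one-layer network at once gives exactly the same outputs as running each single-output unit on its own (the weight matrix and input below are arbitrary illustrative values).

```python
import numpy as np

# A one-layer network with several outputs decouples into independent
# single-output units: each output depends only on its own weight row.

W = np.array([[0.2, -0.5, 1.0],    # weights of output unit 0
              [0.7,  0.3, -0.4]])  # weights of output unit 1
x = np.array([1.0, 2.0, 3.0])

whole_network = W @ x                                   # all outputs at once
separate_units = np.array([w_row @ x for w_row in W])   # one unit at a time

print(whole_network)     # the two outputs computed jointly
print(separate_units)    # identical: nothing couples the output units
```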
What you can do with single layers, you can do with multiple layers; nothing big happens
there, except that the learning procedure becomes a little bit more difficult,
a little bit more difficult than linear regression.
One-layer networks you can learn with simple linear regression or classification, whereas
multi-layer networks need a more involved learning procedure.
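As a sketch of what that simple learning looks like for one output unit, here is the classical perceptron learning rule on a tiny linearly separable dataset; the dataset, learning rate, and epoch count are made up for illustration.

```python
# Sketch of the perceptron learning rule for a single threshold unit
# (one output of a one-layer network). Dataset and learning rate are
# illustrative; any linearly separable problem works the same way.

data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # learn AND
weights, bias, rate = [0.0, 0.0], 0.0, 0.1

def predict(x):
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - predict(x)          # 0 if correct, +/-1 otherwise
        weights = [w + rate * error * xi for w, xi in zip(weights, x)]
        bias += rate * error                 # nudge weights toward the target

print([(x, predict(x)) for x, _ in data])    # all four examples classified correctly
```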
The course builds on the lecture Künstliche Intelligenz I from the winter semester and continues it.
Learning objectives and competencies
Subject, learning, and methodological competence
- Knowledge: Students become familiar with fundamental representation formalisms and algorithms of artificial intelligence.
- Application: The concepts are applied to examples from the real world (exercises).
- Analysis: Through modelling in the machine, students learn to better assess human intelligence performance.