22 - Artificial Neural Networks in a Nutshell [ID:17385]

Welcome, everybody, to today's lecture, in which I will introduce artificial neural networks in a nutshell. Before I start with my presentation slides, I would like to give special thanks to Jonas Adler and Ozan Öktem from KTH Stockholm for allowing me to recycle and adapt some of their presentation slides for our purposes.

So the idea of artificial neurons dates back

to the year 1943, when McCulloch and Pitts, inspired by biology and by real neurons, made the following observations. It was known back then that information is passed along nerves. A nerve cell consists of a cell body, here on the left-hand side, which has incoming connections called dendrites. These dendrites receive nerve impulses, which are electrical signals, from other nerves and transfer them into the cell body, where they are processed and then passed on to other cells. So this single neuron receives input via its dendrites, coming from outside or from other neurons in our tissue, and in the cell body this input is processed. Given that there is enough activation, enough electrical charge being transferred into the cell body, an action potential builds up, and this action potential may be transmitted along the long nerve fiber called an axon. In this way, information can be passed on to other neurons, which in turn receive this electrical charge via their respective dendrites.

So this is the biological foundation, and based on it, people were motivated to try to replicate this mechanism: human brains work like this, so why can't we build a machine that thinks the same way these neurons work? This was a biologically inspired approach, and of course today's neural networks are quite different, but this is how the first artificial neurons came about historically, so it is important to understand that they had some biological inspiration.

So the key observation is that biological neurons transmit a signal only if there is enough activation energy to build up an action potential; otherwise the neuron will not fire, that is, it will not send a signal. If there is enough activation, it transports the signal along its axon to the other connected neurons.

It then took 15 more years, until 1958, before

the first real concept of a basic neuron unit, called a perceptron, was born. This goes back to Rosenblatt, who came up with the idea of the perceptron. The word is a blend of perception, which is receiving and understanding things, and neuron: perception and neuron make a perceptron. This is the basic computational unit of a neural network, so we have to understand how it works. The circle in the center here is the perceptron, and we will try to disassemble it into its functional units.

First of all, it has some input, just like the dendrites of the biological neuron. We denote the inputs by x1 to xn; these might mimic n dendrites, each carrying an incoming signal. Of course, in biological tissue all these dendrites have different properties, and this is something you would like to model in your perceptron as well: they might have different thicknesses, different biochemical properties, they might be differently elongated. So it makes sense to give them different properties, and the easiest way to do so is to introduce weights, w1 to wn, which determine how important the input from a certain dendrite is in our artificial neuron.

So we have this weighted input, and down here we will track the mathematical function that is being computed by this artificial neuron. The weighted inputs are transferred to the cell body of the perceptron and summed up, which is the first term here. This is similar to what a real neuron does: it receives all the electrical signals, and if the accumulated electrical charge is high enough, it fires. To the sum of the weighted inputs another term is added, the so-called bias, the small term plus b. This shifts the threshold at which our neuron fires: by adding this term, we can determine how likely it is that the perceptron will transmit a signal or not. So you can see the bias as shifting the neuron's threshold.
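The computation just described can be sketched in a few lines of code. This is only an illustration of the idea, not code from the lecture; the function name, the example numbers, and the use of a simple step activation are my own choices.

```python
def perceptron(x, w, b):
    """Perceptron output: weighted sum of the inputs plus a bias,
    passed through a step activation (fire / do not fire)."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b  # weighted sum plus bias b
    return 1 if s > 0 else 0  # fire only if the activation exceeds the threshold

# The bias shifts the firing threshold: with a strongly negative bias,
# the very same inputs no longer make the neuron fire.
print(perceptron([1.0, 2.0], [0.5, 0.5], b=0.0))   # fires (weighted sum 1.5 > 0)
print(perceptron([1.0, 2.0], [0.5, 0.5], b=-2.0))  # does not fire (1.5 - 2.0 < 0)
```

Here the weights play the role of the dendrites' individual properties, and the bias b plays the role of the activation threshold.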

Part of a video series
Access: open access
Duration: 00:40:58 min
Recording date: 2020-06-08
Uploaded: 2020-06-08 17:06:48
Language: en-US
Tags: Perceptron, deep learning, Neural Network