4 - Mathematical Basics of Artificial Intelligence, Neural Networks and Data Analytics II [ID:41374]

The last slide we had before the break was this one, on the extension of the basic recurrent

network to long memory behavior.

At the end of the 90s, shortly before 2000, we also tried to use this feature.

At that time it was not helpful.

The reason for this is not anything against the mathematics I have shown you on the slide here.

The reason is simply that we had applications with only 10 or 15 steps in the backward direction and a handful of steps in the forward direction.

So long memory was not a dramatic issue we had to face there.

Therefore I do not want to say anything against the idea; I only want to say that at that time we did not have applications where this mattered.

This is different now, but I will come back to it later, when we have a broader picture of what it means to speak about memory.

So let's go back to the end of the 90s.

I told you that long memory was not much of a subject then, but there was another issue that was a real burden.

The topic here is: what if, in your standard picture, you do not have all the external drivers that you know are important?

You might not have them because they were not recorded, or they are not available to you, or whatever; in any case, you are missing external drivers that are obviously important to explain the dynamics on the output side.

What can you do about it?

Around the beginning of 2000 I found a solution to this, and this time the explanation is easier to give in equation form.

So the black part here is exactly the backward approach, the backward description, that we have seen all along.

This is the state transition equation, this is the output equation, and the learning of it.
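
The slide itself is not reproduced in the transcript; a minimal reconstruction of this standard backward description, assuming the usual notation with state s_t, external drivers u_t, outputs y_t and observed targets y_t^d, would read:

    s_t = \tanh( A s_{t-1} + B u_t )                % state transition equation
    y_t = C s_t                                     % output equation
    \sum_t \| y_t - y_t^d \|^2 \to \min_{A,B,C}     % learning: fit outputs to targets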

Now I say: if I do not have all the inputs that are important, then unfortunately I will make a mistake on the output side.

Making a mistake on the output side means that I see an error there, and I can take this error as additional information that something must be going on in the dynamical system for which I do not have input factors.

So I take my own error as an additional input variable.

Unfortunately you cannot take the current error; you have to take the error from one time step before. But nevertheless you can take this error as accumulated information about whatever you have missed before.

So I would like to extend the equation I had originally by the green part here.

This extended system is then an error correction network.
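
The green part is likewise not reproduced in the transcript; a common way to write the error correction extension, assuming an additional matrix D that feeds the previous step's output error back into the state, is:

    s_t = \tanh( A s_{t-1} + B u_t + D ( C s_{t-1} - y_{t-1}^d ) )   % error-corrected state transition
    y_t = C s_t                                                       % output equation unchanged

Here C s_{t-1} - y_{t-1}^d is exactly the output error of the previous time step: if the model were perfect, this term would vanish and the black equations would be recovered.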

By the way, Siemens has a patent on this.

If you speak about pure mathematics, you cannot get a patent on it; you can get patents only on the application of the mathematics, not on the mathematics itself.

So you are free to use it.

So again, here we have the equations; how should we organize them as an architecture?

The way I did it at that time was the following.

The black part of the architecture here is exactly what you have seen before, corresponding to the black part of the equations.
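
To make the unfolding concrete, here is a minimal sketch in Python of a forward pass through such an error correction network, assuming the equations sketched above; the function name, shapes, and initialization are illustrative assumptions, not the original Siemens implementation:

    import numpy as np

    def ecnn_forward(A, B, C, D, u, y_d, s0):
        """Unfold the error correction network over a sequence.

        A: (n, n) state transition matrix
        B: (n, m) matrix for the external drivers u_t
        C: (k, n) output matrix
        D: (n, k) matrix feeding the previous output error back into the state
        u: (T, m) external drivers; y_d: (T, k) observed targets; s0: (n,) start state
        """
        s, outputs = s0, []
        for t in range(len(u)):
            # Output error of the previous step; at t = 0 there is none, so use zero.
            err = C @ s - y_d[t - 1] if t > 0 else np.zeros(C.shape[0])
            s = np.tanh(A @ s + B @ u[t] + D @ err)  # error-corrected state transition
            outputs.append(C @ s)                    # output equation
        return np.stack(outputs)

Note that the same observed targets y_d that form the error term also serve as the targets of the squared-error loss during learning, so the error correction branch requires no additional data.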
