7 - Mathematical Basics of Artificial Intelligence, Neural Networks and Data Analytics II

So the next chapter we want to discuss is deep recurrent neural networks, especially deep HCNNs.

And it's clear: if you have made all these efforts in your project to build up forecast models, one next idea might be to say, let's try deep, because for feedforward neural networks, deep was a natural extension of the shallow networks with only three layers.

What does it mean to go deep for recurrent neural networks?

That's not so easy.

First of all, whatever you read about "deep" in Wikipedia or elsewhere, what we finally want to solve is still the modeling of a dynamical system.

So the starting point is the same.

But what does deep mean in this connection?

In the past we had the input variables, the externals; so I am speaking about small, open dynamical systems.

Then we had the hidden layer, where we have the internal eigendynamics of the neural network.

And then we had the output.
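As a minimal sketch of such a small, open dynamical system, here is one state transition in Python. The matrix names A, B, C, the sizes, and the tanh nonlinearity are illustrative assumptions, not the lecture's exact parameterization:

```python
import numpy as np

# One step of a small, open dynamical system modeled as an RNN:
#   state:  s_t = tanh(A @ s_{t-1} + B @ u_t)   (internal eigendynamics)
#   output: y_t = C @ s_t

rng = np.random.default_rng(0)
dim_s, dim_u, dim_y = 8, 3, 1

A = rng.normal(scale=0.3, size=(dim_s, dim_s))  # recurrence: internal eigendynamics
B = rng.normal(scale=0.3, size=(dim_s, dim_u))  # coupling of the external inputs
C = rng.normal(scale=0.3, size=(dim_y, dim_s))  # readout from state to output

def step(s_prev, u_t):
    s_t = np.tanh(A @ s_prev + B @ u_t)  # hidden state transition
    y_t = C @ s_t                        # observable output
    return s_t, y_t

s, u = np.zeros(dim_s), rng.normal(size=dim_u)
s, y = step(s, u)  # one step of the dynamics
```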

But instead of going from there to the final output, you will now find the advice, or the idea, to say: no, no, let's go to another hidden layer, and then to another hidden layer.

And then, sooner or later, you go to the final output, as in the sketch below.
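A minimal sketch of this stacking idea, under one common reading: each recurrent layer gets the state of the layer below as its input, and only the top layer feeds the final output. All names, layer sizes, and the tanh nonlinearity are illustrative assumptions, not the lecture's construction:

```python
import numpy as np

rng = np.random.default_rng(1)
dim_u, dims, dim_y = 3, [8, 8, 8], 1  # three stacked hidden layers (assumed sizes)

# One (A_l, B_l) pair per recurrent layer; layer 0 reads the external
# input u_t, every later layer reads the state of the layer below it.
in_dims = [dim_u] + dims[:-1]
layers = [(rng.normal(scale=0.3, size=(d, d)),
           rng.normal(scale=0.3, size=(d, d_in)))
          for d, d_in in zip(dims, in_dims)]
C = rng.normal(scale=0.3, size=(dim_y, dims[-1]))  # readout after the top layer

def deep_step(states, u_t):
    x, new_states = u_t, []
    for (A_l, B_l), s_prev in zip(layers, states):
        s_l = np.tanh(A_l @ s_prev + B_l @ x)  # recurrent update of layer l
        new_states.append(s_l)
        x = s_l                                # pass the state up to the next layer
    return new_states, C @ x                   # final output only at the top

states = [np.zeros(d) for d in dims]
states, y = deep_step(states, rng.normal(size=dim_u))
```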

Is this a good idea?

Not in my eyes.

First of all, it is not a good idea because at the end of the day you have the same outputs and the same inputs.

Only the intermediate part is more complicated, so why should this be useful?

So let's think about it in a new way, starting from the HCNN story.

Now, if I build a deep structure of HCNNs, I will have a problem with the space on the slide.

Therefore, I have to repaint the usual picture of the HCNN so that it is a little more condensed.

Instead of having the observations on top, connected with a minus identity, I turn the picture around so that the observations are on the side, with the minus identity next to them.

And this is always the case.

Therefore, I can show the same information in a smaller part of the PowerPoint slide.

Now we have a better chance to show several layers of such HCNNs.
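To make the minus-identity mechanism concrete, here is a minimal sketch of one HCNN step, assuming the commonly published formulation s_{t+1} = A·tanh(s_t − [Id; 0]·(y_t − y_t_obs)), where the expectation y_t is read off the first components of the state and the observation residual is fed back through the fixed minus identity. The dimensions and the random initialization are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
dim_s, dim_y = 250, 1  # 250-dimensional state, one observable (the copper price)

A = rng.normal(scale=0.05, size=(dim_s, dim_s))  # state transition matrix

def hcnn_step(s_t, y_obs=None):
    y_t = s_t[:dim_y].copy()          # expectation = first component(s) of the state
    if y_obs is not None:             # inside the data horizon:
        corr = np.zeros(dim_s)
        corr[:dim_y] = y_t - y_obs    # residual, fed back through the fixed -Id
        s_t = s_t - corr              # correct the state with the observation
    return A @ np.tanh(s_t), y_t      # transition to the next state

s = rng.normal(scale=0.1, size=dim_s)
s, y = hcnn_step(s, y_obs=np.array([1.23]))  # step inside the data horizon
s, y = hcnn_step(s)                          # free-running forecast step
```

Beyond the data horizon there are no observations, so the correction is simply skipped and the network iterates freely to produce the forecast.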

OK, this is only a change of notation; there is no new idea in it.

And to have a benchmark for comparison with what we do next, let's take the image that we have seen before.

This is an HCNN with only one level, which means one branch of state vectors with 250 dimensions.

It is the copper price forecast.

And now let's remember how the deep neural network story worked for feedforward networks.
