The data is now being written back. Depending on which kind of instruction it is, the value is steered accordingly; here it is directed along this path. It is not the value loaded from memory that is taken, but the one that has been calculated.
Now comes the same game as before: the result is returned to the register file. The data is applied here and distributed. R3 is selected as the destination, which means that in this case R3 gets written. On the right-hand side the value 15 is applied, the result of 10 plus 5 that was calculated. And with the write signal at the end of the cycle it is taken over.
And immediately after that, the next instruction comes in. The decoder has read out its operands and can access the registers. So we also need a register file that can be read and written at the same time. At that point the 15 is written in.
Okay, let's look at the other kind of instruction a little. What we just saw is the normal flow. Let's go through it. We have, hold on, a load/store architecture. What does that mean? I might ask you that later.
All operands of arithmetic and logical instructions are registers. You cannot use operands that are not in registers; they first have to be loaded explicitly, explicitly by a load instruction. Results that can no longer be kept in registers have to be stored back to memory. That is why it is called a load/store architecture.
Okay, let's do a load instruction: load R3 from memory at address x.
Oh, I need a microphone. One moment, please. It's already switched on, right? So the recording will work. Everything is wired up here. Yes, exactly.
So now it's about the load instruction. A load instruction comes in. It needs the destination register, but I don't even have to access the register file first. Load means: please load something from the memory. And then the address goes down here, is passed down here to the memory.
Accessible via: Open access
Duration: 01:03:50 min
Recording date: 2017-10-30
Uploaded on: 2019-04-30 14:59:03
Language: de-DE
- Organizational aspects of CISC and RISC processors
- Handling of hazards in pipelines
- Advanced techniques of dynamic branch prediction
- Advanced cache techniques, cache coherence
- Exploiting cache effects
- Architectures of digital signal processors
- Architectures of homogeneous and heterogeneous multi-core processors (Intel Core i7, Nvidia GPUs, Cell BE)
- Architectures of parallel computers (cluster computers, supercomputers)
- Efficient low-level programming of multi-core processors (OpenMP, SSE, CUDA, OpenCL)
- Performance modeling and analysis of multi-core processors (roofline model)
- Patterson/Hennessy: Computer Organization and Design
- Hennessy/Patterson: Computer Architecture - A Quantitative Approach
- Stallings: Computer Organization and Architecture
- Märtin: Rechnerarchitekturen