10 - Architectures of Supercomputers

So, let's continue with a few more supercomputers, which came out in the last nine or ten years.

So the next in the list is the Jaguar supercomputer, which is also a very interesting one, since it consists of more or less regular AMD Opteron cores. In total we have 224,000 Opteron cores in that supercomputer, and they achieve a Linpack performance of 1.75 petaflops. This one was located at Oak Ridge National Laboratory, and it was a Cray XT5 system.
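
Just as a back-of-the-envelope check (these assumptions are not from the lecture itself): if we take roughly 2.6 GHz per core and four double-precision flops per cycle, which would be typical for six-core Opterons of that generation, the peak performance and the Linpack efficiency come out as follows.

```python
# Back-of-the-envelope peak vs. Linpack check for Jaguar (Cray XT5).
# Core count and Linpack figure are the ones quoted in the lecture; the
# 2.6 GHz clock and 4 double-precision flops/cycle per core are assumptions
# typical for six-core Opterons of that generation.
cores = 224_000
clock_hz = 2.6e9            # assumed core clock
flops_per_cycle = 4         # assumed DP flops per core per cycle
peak = cores * clock_hz * flops_per_cycle
linpack = 1.75e15           # ~1.75 petaflops Linpack from the lecture

print(f"peak      : {peak / 1e15:.2f} PFLOP/s")     # ~2.33 PFLOP/s
print(f"linpack   : {linpack / 1e15:.2f} PFLOP/s")  # 1.75 PFLOP/s
print(f"efficiency: {linpack / peak:.0%}")          # ~75%
```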

So let's take a deeper look at it.

So what we have here with that supercomputer is that, per node, we have two AMD Opteron chips. Each of those chips has six cores, and they are connected via HyperTransport. The network interconnect is also attached directly via HyperTransport: it connects to a custom Cray-specific network interface controller, and the whole thing is called the SeaStar network. And what you can see is that we have six outgoing links here with almost 10 gigabytes per second each. How those outgoing links are being used is something we'll see later on.
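
As a tiny sketch of the per-node numbers we just saw (the exact per-link bandwidth is an assumption based on the "almost 10 gigabytes per second" figure above):

```python
# Per-node composition as described above. The per-link bandwidth is an
# assumption ("almost 10 gigabytes per second" in the lecture).
sockets_per_node = 2         # two AMD Opteron chips per node
cores_per_socket = 6         # six cores per chip
router_links     = 6         # outgoing SeaStar links per node
link_bw_gb_s     = 9.6       # assumed bandwidth per link in GB/s

print(sockets_per_node * cores_per_socket)             # 12 cores per node
print(f"{router_links * link_bw_gb_s:.1f} GB/s")       # ~57.6 GB/s aggregate off-node bandwidth
```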

So the overall design is that we have four nodes per blade. As with the other large-scale systems, we have a blade-based design: four nodes per blade, eight blades per chassis, and three chassis per rack. That's what you see here. And we have a liquid cooling mechanism where the coolant goes in at the top, takes a round trip through the rack, and is cooled again at the top. So that's the overall design.

And the network interconnect is built up such that, since we have four nodes per blade, those four nodes are more or less directly connected; they use those links. When we connect them up inside of a rack, we get this two-dimensional mesh-like network, and this mesh network is then extended across the entire system.

So we have one node, then four of those nodes go into a blade, then 24 blades into one rack, and then we have 200 of those racks, and that makes up the total performance.
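
If we multiply out that hierarchy, we get the following totals. This is just a sketch of the arithmetic; the roughly 224,000 compute cores quoted earlier are a bit fewer than the raw product, because some node slots are used for service and I/O purposes, as we will see in a moment.

```python
# Multiplying out the blade/chassis/rack hierarchy described above.
nodes_per_blade    = 4
blades_per_chassis = 8
chassis_per_rack   = 3
racks              = 200
cores_per_node     = 2 * 6    # two six-core Opterons per node

node_slots = nodes_per_blade * blades_per_chassis * chassis_per_rack * racks
print(node_slots)                     # 19,200 node slots in the whole machine
print(node_slots * cores_per_node)    # 230,400 cores across all slots
# The ~224,000 compute cores quoted in the lecture are slightly fewer,
# since some of these slots hold service and I/O nodes rather than compute nodes.
```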

So what we see is that the interconnect looks like a mesh at the rack level, but we still have a few additional outgoing links, and those links can then be used to form a complete three-dimensional torus network.
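
To make the torus idea concrete, here is a minimal sketch, not Cray's actual routing code and with purely illustrative torus dimensions, of how every node gets exactly six wrap-around neighbours, one per SeaStar link.

```python
# Minimal sketch of a 3D torus neighbourhood. The dimensions are purely
# illustrative (chosen so that DX * DY * DZ matches the ~19,200 node slots);
# the lecture does not give the exact torus shape.
DX, DY, DZ = 25, 32, 24

def torus_neighbors(x, y, z):
    """Return the six wrap-around neighbours of node (x, y, z), one per link."""
    return [
        ((x + 1) % DX, y, z), ((x - 1) % DX, y, z),
        (x, (y + 1) % DY, z), (x, (y - 1) % DY, z),
        (x, y, (z + 1) % DZ), (x, y, (z - 1) % DZ),
    ]

# Even a "corner" node has six neighbours, because the links wrap around;
# that is what turns the mesh into a torus.
print(torus_neighbors(0, 0, 0))
```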

And this is a visualization of the topology of the entire system, and what you see is

that you have some nodes colored differently.

So here are orange nodes, blue nodes, and red nodes.

And those orange nodes are the I/O nodes. The I/O nodes are dedicated nodes that talk to the storage system, and the storage is connected via an InfiniBand network. In addition to that, we have some empty nodes in the entire mesh infrastructure.

So that's quite interesting.

And the entire system is completely integrated into the overall infrastructure at Oak Ridge National Laboratory. So you not only have the big Jaguar system, but you also have the XT4 system, the storage system, and special login nodes, which then dispatch jobs to the specific clusters or supercomputers.

So you also have an additional visualization cluster and an application development cluster.
