Yay.
Let's start.
So now we have covered basically all of the basic node architecture.
What's next, the content of today, is to go through the networking side of things.
So since most of the supercomputers and clusters we have nowadays are distributed memory machines,
this lecture will give you an introduction to how those computers are usually connected.
We will have a more in-depth discussion once we look at the actual supercomputers and how
they are actually interconnected.
But today we are going over a very brief overview.
So a network is usually some kind of hardware and a protocol with it.
And that means that we have wires and the electrical signals that go over the wire need
to be interpreted somehow and that's the protocol.
So what we had so far with the CPUs, with the chips, is that we had usually some kind
of on-chip network, which interconnected the different cores and the different peripherals
and the main memory, which is what we usually call a bus or something like that.
We saw some more complicated networks with the Zen architecture and the Skylake architecture,
and now we are focusing on the off-node communication.
So the network that you are probably most familiar with is Ethernet.
Ethernet is more or less the wired connection where you plug a cable into your computer and into
a switch, and that's it, right?
Other networks have not survived, for example the Myrinet network, and current
networks are gigabit Ethernet, 10 gigabit Ethernet, 20 gigabit, 40 gigabit, or InfiniBand.
And InfiniBand is what's currently used the most.
In most systems we have some kind of InfiniBand hardware.
And the design goals are that we get a general purpose network whose components are compatible
with each other and that is fault tolerant.
That means that whenever we unplug a cable, or a switch dies, or a node dies, the
network doesn't break down, and it needs to be flexible.
Flexible in the sense that we can just easily plug in another computer or so.
So Ethernet is quite flexible.
What's built around Ethernet and on top of Ethernet and other networks is, for
example, TCP, and the Internet is built on top of TCP among other protocols,
and that's pretty flexible.
And it can be extended with Wi-Fi or whatever.
What we require in order to get a high-performance network is low latency, low overhead,
and of course high bandwidth.
Those are our main goals.
How much speed are we talking about in InfiniBand?
With InfiniBand, I think the fastest we currently get is 56 gigabits per second.
Well, inside of a cluster, inside of a supercomputer, for example.
Most data centers, for example the cloud data centers, the machines they
have for cloud computing, are usually not connected via InfiniBand.
They usually use some kind of Ethernet, because it's mass-produced and therefore cheaper.
But some do have InfiniBand as well.
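To make the latency and bandwidth goals a bit more concrete, here is a small sketch, not from the lecture itself, of the simple latency-plus-bandwidth model for the time one message takes on a link. The 56 gigabits per second is the link rate mentioned above; the 1 microsecond latency and the message sizes are assumed values chosen purely for illustration.

    # Minimal sketch (assumptions, not from the lecture): latency-plus-bandwidth
    # model for the time to send one message over a network link.
    # The 56 Gbit/s link rate is the figure mentioned above; the 1 microsecond
    # latency is an assumed ballpark value for illustration only.

    def transfer_time(message_bytes, latency_s, bandwidth_bytes_per_s):
        """Estimated transfer time: fixed startup latency plus payload time."""
        return latency_s + message_bytes / bandwidth_bytes_per_s

    latency = 1e-6              # seconds (assumed ~1 us startup cost)
    bandwidth = 56e9 / 8        # bytes per second (56 Gbit/s link)

    for size in (8, 1024, 1024**2):                 # 8 B, 1 KiB, 1 MiB messages
        t = transfer_time(size, latency, bandwidth)
        effective = size / t                        # achieved bandwidth incl. latency
        print(f"{size:>8} B: {t * 1e6:8.2f} us, {effective / 1e9:6.2f} GB/s effective")

The point of the sketch is that for small messages the startup latency dominates and the achieved bandwidth is far below the link rate, which is why low latency is a design goal in its own right, not just high bandwidth.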
Fiber optics?
Depends.