Welcome everyone to today's HPC Cafe.
My name is Georg Hager.
Today's topic is power and energy consumption of HPC systems, because we think that this
is something that comes up in discussions a lot.
Also, when talking to the general public about HPC, people always ask: how much power does that need, where does the power go, and, oh my God, does it all go up the chimney?
So it's really important to know the basics of this.
This is not a scientific talk.
I could give a scientific talk about power consumption, but that would require much more
time and much more preparation.
So I'll stick to the basics.
So first of all, we're taking a specific point of view here.
If you take the point of view of an end user of computing devices like mobile devices, your cell phone or your tablet, then of course what's important to you is different from what matters to an HPC user.
Here's a marketing image from Qualcomm, from more than 10 years ago.
And it shows quite clearly what's important to an end user.
You want a long battery life of your cell phone.
You are ready to make compromises regarding performance.
It doesn't matter, for me at least, whether the browser opens in 0.2 or 0.3 seconds.
I don't care.
But I do care if my phone lasts two days or four days.
So I'm very happy for a long lasting phone, even if the performance is not that brilliant.
But that's just me.
I think this is valid for most end users of mobile devices.
So this is the specific point of view of a mobile device user.
That's not the point of view we take in high performance computing.
So what's the point of view in high performance computing?
So first of all, the scientists, and I call them nerds, want to run some simulations, so they get a CPU time or GPU time allocation.
I've chosen here Margaret Hamilton as the prototype nerd.
Do you know who that is?
She led the team which developed the software for the Apollo flight computer, which brought
people to the moon.
Very significant achievement.
She's also a software engineer, self-taught.
I'm putting her here because she's like a prototype nerd.
She's not the type of nerd who has published a lot of papers.
But I found that picture to be really nice with that stack of code, actually the code
of the Apollo flight computer firmware.
And what you want to do as a scientist is you want to use your CPU time allocation to
produce science and new insights.
So that means in our world, you want to write papers.
So to boost your recognition, to push the boundaries of science, to put it a bit more philosophically.
OK, so that's your goal.
You get the CPU time, you run your jobs, you get some insights, you run models, whatever,
and then you publish it.
That's a very simplistic view of the whole story.
As computing center people, and I call us naggers, that's ...
Duration: 00:55:41
Recording date: 2024-09-17
Language: en-US
Speaker: Dr. Georg Hager, NHR@FAU
Slides: https://hpc.fau.de/files/2024/09/2024-09-17_HPC_Cafe_Energy.pdf
Abstract:
In this talk we will show how much power and energy is consumed by HPC clusters and how the runtime and performance of applications influence these numbers. We will also give some general insights about the tuning knobs that are at a user’s disposal for reducing the energy cost of computing, and how computing centers can impose restrictions on hardware and software usage to enforce energy-saving measures.
Material from past events is available at: https://hpc.fau.de/teaching/hpc-cafe/
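The abstract's point that runtime and power draw together determine the energy cost of a job comes down to energy = average power × time. Below is a minimal back-of-the-envelope sketch of that relation in Python; all numbers (node count, power per node, runtime, electricity price) are assumed for illustration only and are not figures from the talk.

# Back-of-the-envelope estimate: energy = average power x runtime.
# All input values are assumptions for illustration, not measurements from the talk.

def job_energy_kwh(nodes: int, watts_per_node: float, runtime_hours: float) -> float:
    """Energy to solution in kWh for a job running on `nodes` nodes."""
    return nodes * watts_per_node * runtime_hours / 1000.0  # W*h -> kWh

if __name__ == "__main__":
    nodes = 64                # assumed job size
    watts_per_node = 500.0    # assumed average node power draw (W)
    runtime_hours = 12.0      # assumed time to solution
    price_per_kwh = 0.30      # assumed electricity price (EUR/kWh)

    energy = job_energy_kwh(nodes, watts_per_node, runtime_hours)
    print(f"Energy to solution: {energy:.0f} kWh")
    print(f"Electricity cost:   {energy * price_per_kwh:.2f} EUR")

    # Same power draw but half the runtime (e.g., after code tuning) -> half the energy.
    tuned = job_energy_kwh(nodes, watts_per_node, runtime_hours / 2)
    print(f"With 2x faster code: {tuned:.0f} kWh")

This only illustrates why faster code at comparable power draw also costs less energy; the talk itself discusses the tuning knobs and measured numbers in detail.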