Yeah, it's actually very nice to finally speak here and present some of the work, some
of the results that have accumulated over the past 26 years that I have been at Erlangen.
You see here that my affiliation Erlangen is still my affiliation.
I will go into retirement at the end of March.
I used to be the team leader at CERFACS for the ALGO team, the parallel algorithms team.
I stepped down from that at the end of last year, but it is still listed as an affiliation
because some of the work was done in collaboration with colleagues there.
And then, more recently, I'm a little bit affiliated with the Technical University of Ostrava.
I was actually there last week for another conference.
And so this is also a kind of outlook on what I'll be doing after retirement.
A little bit here, a little bit there, and so on.
So my work has centered around efficient solvers for partial differential equations, in particular
on very large high-performance computers, and a good deal of the audience present here,
and some attending remotely, happen to be my PhD students.
And I should make reference to that in two different ways.
First, the work that I'm presenting is of course to a very large degree, essentially all
of it, based on what the PhD students have been doing over the last years.
And that's very good that they are here.
So if there are any detailed questions, the experts are partly sitting here and can then
help me to answer any questions.
And on the other hand, an apology to the PhD students because of course a good deal of
what I'll be showing is known to you.
So I hope that you are not too bored.
What's going to happen in the next 45 to 60 minutes?
I would like to start with a preamble, and this is exactly the part my PhD students
already know is coming.
My question to the audience here is what is the fastest Poisson solver?
So the Poisson equation is ubiquitous in many scientific applications.
So what is the algorithm that you would say is the fastest one for this problem?
The context of course is that scientific computing is about efficient methods.
And there's always a tradeoff between accuracy and cost.
So there's nothing for free.
If your accuracy is irrelevant, cheap algorithms are trivial to find.
Just take ChatGPT and ask it what the solution of the Poisson equation is.
It will say five.
It's very fast and wrong.
But if the cost is irrelevant, the accuracy is also trivial to achieve, because you just take
an infinitely fine mesh and don't worry what it costs.
It's just Gaussian elimination as it comes, and no problem at all.
So the real problem is to find the right compromise.
And that means understanding that tradeoff.
To understand that tradeoff, we need metrics for the cost.
And there are these notions of algorithmic complexity for the cost, and of accuracy, which is
the magnitude of the error.
That's easily said, but once you start thinking about it, both are surprisingly unclear.
So cost could be that you count just the number of unknowns.
This is also a cost metric, probably not a very good one.
You could count the number of floating point operations that it takes you to deliver a
solution.
But then, of course, there's also memory consumption and issues of memory traffic that could be
considered.
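This tradeoff is easy to see on a model problem. The following is a minimal sketch, not from the talk itself, that solves the 1D Poisson problem -u'' = f on (0,1) with homogeneous Dirichlet boundary conditions by second-order finite differences; the manufactured solution u(x) = sin(pi x) lets us measure the discretization error directly. The function name and parameters are illustrative choices, not anything defined in the talk.

```python
# Accuracy-vs-cost tradeoff on a model 1D Poisson problem:
# refining the mesh reduces the error, but the dense solve gets more expensive.
import numpy as np

def solve_poisson_1d(n):
    """Solve -u'' = pi^2 sin(pi x) on n interior points; return max-norm error."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)            # interior grid points
    f = np.pi**2 * np.sin(np.pi * x)          # right-hand side
    # Standard tridiagonal stencil (1/h^2) * [-1, 2, -1], assembled densely.
    A = (np.diag(2.0 * np.ones(n))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h**2
    u = np.linalg.solve(A, f)                 # "Gaussian elimination as it comes"
    return np.max(np.abs(u - np.sin(np.pi * x)))

for n in (15, 31, 63, 127):
    print(f"n = {n:4d}  error = {solve_poisson_1d(n):.2e}")
```

Each doubling of the unknowns roughly quarters the error (O(h^2) accuracy), while the cost of the naive dense elimination grows like O(n^3): accuracy is bought with work, which is exactly why the choice of solver matters.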
Recording: open access, duration 01:04:00, recorded 2025-02-18, language en-US.
Slides: https://hpc.fau.de/files/2025/02/2025-02-04-Perflab-Ruede.pdf