The following content has been provided by the University of Erlangen-Nürnberg.
At the beginning of this talk, as an introduction, I will say a little about adaptive finite elements, something that Rannacher and his group have worked on a lot over the last 15 to 20 years and that has attracted a lot of attention.
But I will use it only as an introduction, and then say what is still open about this topic and how to apply it to what I call complex coupled problems,
and in particular here to fluid-structure interaction.
So, very briefly: I think everyone knows that the simulation of complex problems usually leads to huge systems of equations
that have to be solved, nowadays perhaps with billions of unknowns, and these systems are simply too large.
If you take naive approaches, it is very often the case that you cannot get what you want,
because usually some accuracy is given to you by the application, by some engineers,
by some measurement, and usually you cannot reach this accuracy just by computation:
your memory, your computer, is exhausted at some point.
So adaptivity is just one of many techniques, but one very general technique to reduce problem sizes.
And the main topic of what I am talking about is what we call goal-oriented adaptivity,
and this is exactly where Rolf Rannacher and Roland Becker had some great successes.
And the idea is the following: in technical applications it is usually not like in mathematics, where we have some problem,
a solution, and an approximation to the solution, and we measure errors in norms.
Here the idea is that we want some more general measure of the error, and this could be what we also call an output functional.
An output functional could be something like the force acting on an airplane.
The goal is now to reduce the error in this output functional.
We do not want to compute the complete solution; we just want to evaluate this functional accurately.
And the basic idea for this kind of error estimation is the same as for all kinds of error estimation.
We do some computations, we estimate the error, we try to localize the error,
which means we try to find out where the large contributions to this error come from, and then we try to adapt our discretization,
which in finite elements means we refine the mesh where it is necessary.
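The solve–estimate–localize–refine cycle just described can be sketched as a loop. The following is a minimal toy sketch in Python, not the speaker's code: it adaptively refines a 1D mesh for an interpolation problem with a sharp layer, using a cheap local jump indicator in place of the full weighted residual.

```python
# Hypothetical sketch of the solve -> estimate -> localize -> refine loop.
# Toy problem: resolve tanh(50(x - 0.3)) on [0, 1]; all names are assumptions.
import math

def solve(mesh):
    # "Solve": sample the exact function at the nodes (stands in for a PDE solve).
    return [math.tanh(50 * (x - 0.3)) for x in mesh]

def estimate(mesh, u):
    # Localized error indicators: solution jump times cell size per cell,
    # a cheap surrogate for the weighted residual contributions.
    return [abs(u[i + 1] - u[i]) * (mesh[i + 1] - mesh[i])
            for i in range(len(mesh) - 1)]

def refine(mesh, eta, fraction=0.3):
    # Mark the cells carrying the largest contributions and bisect them.
    threshold = sorted(eta, reverse=True)[max(0, int(fraction * len(eta)) - 1)]
    new_mesh = [mesh[0]]
    for i in range(len(mesh) - 1):
        if eta[i] >= threshold:
            new_mesh.append(0.5 * (mesh[i] + mesh[i + 1]))
        new_mesh.append(mesh[i + 1])
    return new_mesh

mesh = [i / 10 for i in range(11)]      # uniform start mesh
for _ in range(8):
    u = solve(mesh)
    eta = estimate(mesh, u)
    if sum(eta) < 1e-3:                  # stop when the estimate is small
        break
    mesh = refine(mesh, eta)
print(len(mesh))
```

Running the loop concentrates the nodes near the layer at x = 0.3, while the flat regions keep the coarse spacing; this is exactly the "refine where the error contribution is large" idea.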
And in goal-oriented adaptivity it is possible that, for one and the same problem
with two different goal functionals, the optimal discretization,
the optimal mesh, looks completely different,
depending on the functional that you are looking at.
And so the main result, which is now nearly 20 years old, is the so-called dual weighted residual method.
And it's something very general, it's not limited to partial differential equation,
the idea is just whenever you have a problem that is given in some kind of variational formulation,
so you're looking for a solution in a function space and the solution is described by a variational formulation.
And whenever you look at discretizations of this problem in terms of a Galerkin discretization,
so you just take a subspace and your discrete solution u_h is given by the same variational formulation,
then you can express the error between the true and the discrete solution in this functional in terms of residuals.
F minus A is the residual of our problem, and J' minus A' is the residual of an adjoint problem.
It is called the dual weighted residual method because the residual is weighted with z,
the solution of an adjoint problem,
which is given as the linearized adjoint of the variational formulation.
And so this error estimate holds; in fact it is not an estimate, it is more like an error identity,
but of course the identity is not exact, we have a remainder term,
and this remainder term is, somewhat simplified, a third derivative of our variational formulation.
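Written out, the identity just described takes roughly the following form; this is a sketch in standard DWR notation, with symbols assumed from the description rather than taken from the slides.

```latex
% Sketch of the dual weighted residual identity (Becker & Rannacher);
% notation assumed from the spoken description, not from the slides.
% Primal residual:   \rho(u_h)(\cdot)        = F(\cdot) - A(u_h)(\cdot)
% Adjoint residual:  \rho^*(u_h,z_h)(\cdot)  = J'(u_h)(\cdot) - A'(u_h)(\cdot, z_h)
\begin{equation}
  J(u) - J(u_h)
  = \tfrac{1}{2}\,\rho(u_h)(z - \varphi_h)
  + \tfrac{1}{2}\,\rho^*(u_h, z_h)(u - \psi_h)
  + \mathcal{R}_h^{(3)}
\end{equation}
% for arbitrary discrete functions \varphi_h, \psi_h; the remainder
% \mathcal{R}_h^{(3)} is cubic in the errors and involves third derivatives
% of the variational form, matching the "third derivative" remark above.
```

The primal residual weighted with the adjoint solution z is what gives the method its name.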
Okay, and this has been applied to a vast number of different problems,
not only partial differential equations but also ordinary differential equations,
and it can also be used to estimate errors in iterative solvers like multigrid or conjugate gradients and so on.
And there is really a huge number of contributors to this method.
I just mention some names of people I know have worked on it, and the list could be extended forever.
Now the question is: is this all done? Because this technique is very simple.
The error representation you have seen takes only very basic calculus to prove.
You just write the error as an integral over this linearized adjoint,
Presenters: Prof. Dr. Thomas Richter
Accessible via: Open access
Duration: 00:45:46 min
Recording date: 2014-07-12
Uploaded on: 2014-10-20 23:44:27
Language: de-DE