The following content has been provided by the University of Erlangen-Nürnberg.
So then, good morning, everybody. Once again I am standing in for Professor Knabner, since, at least for today, he wants to preserve his voice.
So, on Tuesday, Professor Knabner introduced the so-called gradient method, which is an iterative method to compute approximations to the solution of a linear system whose matrix A is symmetric and positive definite.
And the idea (unfortunately I have no pointer for today, but I think it will work anyway) is to minimize this functional f, which is equivalent to solving the linear system A x = b.
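Written out (the slide itself is not reproduced in this transcript, so this is the standard form of the functional in this setting):

    \[ f(x) = \tfrac{1}{2}\, x^\top A x - b^\top x, \qquad \nabla f(x) = A x - b, \]

so for symmetric positive definite A the unique minimizer of f is exactly the solution x* of A x = b.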
And the idea of the gradient method is to take as search direction the negative gradient, so in each step we minimize along the direction of steepest descent.
And the iteration in the case of the gradient method is
x_{k+1} = x_k + alpha_k d_k, that is, the iterate x_k plus a scalar alpha_k times the search direction d_k.
And in the case of the gradient method, this scalar alpha_k was defined as d_k^T d_k divided by the energy scalar product of the search direction d_k with itself, that is, alpha_k = (d_k^T d_k) / (d_k^T A d_k).
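As a small sketch of the gradient method just described (not code from the lecture; the function and variable names are my own, in Python with NumPy):

    import numpy as np

    def gradient_method(A, b, x0, n_steps):
        # Steepest descent for A x = b with A symmetric positive definite:
        #   x_{k+1} = x_k + alpha_k d_k,  d_k = -(A x_k - b),
        #   alpha_k = (d_k^T d_k) / (d_k^T A d_k).
        x = x0.astype(float).copy()
        for _ in range(n_steps):
            d = b - A @ x                  # search direction = negative gradient
            denom = d @ (A @ d)            # energy scalar product <d, d>_A
            if denom == 0.0:               # gradient vanished: x already solves A x = b
                break
            x = x + (d @ d) / denom * d
        return x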
So, well, we have also seen that the convergence rate here is not that good; it is quite similar to that of the Jacobi method.
And the reason, or one reason, is that the successive search directions are in general not orthogonal to each other.
So, for example, as you can see in this figure, a zigzag phenomenon can appear, and to avoid that we come to so-called conjugate direction methods.
What does that mean? Well, we define it as follows: vectors d_0, ..., d_l are called conjugate if they are pairwise orthogonal with respect to the energy scalar product defined here below.
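In symbols (standard notation, since the slide with the definition is not reproduced here), the energy scalar product and the conjugacy condition read:

    \[ \langle x, y \rangle_A = x^\top A y, \qquad d_i^\top A\, d_j = 0 \quad \text{for } i \neq j,\; 0 \le i, j \le l. \]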
And that means: if we define the search directions in such a way that they are conjugate, our iterative method is called a method of conjugate directions.
And one very nice property of these conjugate direction methods is that they lead to the exact analytical solution after at most m steps, where m is the dimension of the space.
Well, I want to prove that today. It is quite an unusual lecture in the sense that most of it will take place at the blackboard.
And the reason is that we only have four slides for today's topic. So please let me know if you cannot read what I am writing here.
So I want to call it Lemma 5.2.1, because we are in Chapter 5.2. So, for a given initial vector x_0 in R^m,
we consider the iterates, and the iterates are defined similarly to those of the gradient method here.
That is nothing but the k-th iterate plus a scalar alpha_k times the search direction, x_{k+1} = x_k + alpha_k d_k, and alpha_k is defined with a minus sign.
It is more or less like the alpha_k from before, but a little bit different: it involves g_k, where g_k is the gradient of our functional f, which is nothing but the residual of our linear system.
So, with the matrix A, I write it down here: g_k = A x_k - b. This is the gradient of our functional f at the point x_k.
The scalar is alpha_k = -(g_k^T d_k) / (d_k^T A d_k), so g_k^T d_k divided by d_k^T A d_k, with the minus sign, and g_k defined as just said.
And we take conjugate directions d_0 up to d_{m-1}.
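Collected in one place, the iteration of Lemma 5.2.1 as dictated above reads:

    \[ x_{k+1} = x_k + \alpha_k d_k, \qquad \alpha_k = -\frac{g_k^\top d_k}{d_k^\top A\, d_k}, \qquad g_k = \nabla f(x_k) = A x_k - b, \]

with A-conjugate directions d_0, ..., d_{m-1}.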
So, at the moment, this is a very general iterative method, with the only real assumption being that the search directions are conjugate in the sense I already mentioned.
In that case a method of conjugate directions is exact after at most m steps.
And m is nothing but the dimension of our space here.
That means that the m-th iterate x_m is nothing but the exact solution, which I denote here by x*. So x* is the exact analytical solution.
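To make the statement concrete, here is a small Python sketch of a method of conjugate directions; this is my own illustration rather than anything shown in the lecture, and the 2x2 example matrix and the A-orthogonalization of the second direction are assumptions for the demo:

    import numpy as np

    def conjugate_directions(A, b, x0, directions):
        # Method of conjugate directions with prescribed A-conjugate directions
        # d_0, ..., d_{m-1}; uses alpha_k = -(g_k^T d_k) / (d_k^T A d_k), g_k = A x_k - b.
        x = x0.astype(float).copy()
        for d in directions:
            g = A @ x - b                        # gradient (residual) at the current iterate
            x = x - (g @ d) / (d @ (A @ d)) * d  # note the minus sign in alpha_k
        return x

    # Tiny 2x2 illustration: the two unit vectors are generally not conjugate,
    # so the second one is A-orthogonalized against the first.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    d0 = np.array([1.0, 0.0])
    e1 = np.array([0.0, 1.0])
    d1 = e1 - (e1 @ (A @ d0)) / (d0 @ (A @ d0)) * d0
    x_m = conjugate_directions(A, b, np.zeros(2), [d0, d1])
    print(np.allclose(x_m, np.linalg.solve(A, b)))   # True: exact after m = 2 steps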
Well, that is a very nice property, but it is really more of a theoretical one.
In practice, for example due to the finite element method, you have very big linear systems with several hundred thousand or even millions of variables.
So you terminate your iterative method earlier and take the current iterate as an approximation of this exact solution, as usual.
But at least for the theory it's a very nice result.
So how can we prove it? Of course, since our conjugate directions are, well, conjugate, that in particular means they are linearly independent.
And if we have m linearly independent vectors, that means that our conjugate directions span the complete space R^m.
So, for example, we can represent x* minus the initial vector x_0 as a linear combination of the conjugate directions, with certain coefficients alpha_j.
That means: we now apply the matrix A to this vector and then take the scalar product with the i-th conjugate direction d_i, which gives d_i^T A (x* - x_0).
This is nothing but alpha_i d_i^T A d_i, due to the conjugacy, since d_i^T A d_j = 0 whenever j differs from i.
This holds, and that means that alpha_i is nothing but d_i^T A (x* - x_0) divided by d_i^T A d_i.
And since A is positive definite, this denominator d_i^T A d_i is greater than 0, so we are allowed to divide by it.
Okay, and A x* is nothing but b, so we can rewrite this as follows: alpha_i is nothing but -d_i^T (A x_0 - b), that is, -d_i^T g_0, divided by this same denominator.
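Written out, the blackboard computation up to this point is (the tilde only marks the expansion coefficients until we have shown that they equal the step sizes alpha_i of the method):

    \[ x^* - x_0 = \sum_{j=0}^{m-1} \tilde{\alpha}_j d_j \;\Rightarrow\; d_i^\top A (x^* - x_0) = \tilde{\alpha}_i\, d_i^\top A\, d_i \;\Rightarrow\; \tilde{\alpha}_i = \frac{d_i^\top (b - A x_0)}{d_i^\top A\, d_i} = -\frac{d_i^\top g_0}{d_i^\top A\, d_i}, \]

using A x^* = b and d_i^\top A d_i > 0.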
Okay, so well, the only thing that still remains to be shown is that this expansion coefficient alpha_i coincides with the scalar alpha_i of the method.
And then you see that, due to the fact that our iterate can also be rewritten as the initial vector plus the sum of the alpha_j d_j from j = 0 up to k,
after m steps our iterate x_m = x_0 + (x* - x_0) indeed equals x*.
Okay, so what remains to be shown is this equality for g_i^T d_i.
But g_i^T d_i is nothing but (A x_i - b)^T d_i, due to the definition of g_i.
Okay, this is only the definition of g_i, but in it there appears the term d_i^T A x_i, and that is the term we have to compare with this one here.
So we now have to show that d_i^T A x_i, that means d_i in the energy scalar product with x_i, is the same as d_i and x_0 in this energy scalar product.
Okay, but now I rewrite the i-th iterate x_i as follows: this is nothing but the initial vector x_0 plus the sum of alpha_j d_j from j = 0 to i - 1.
Okay, but due to the fact that the directions are conjugate, this is indeed nothing but the first term, so d_i^T A x_i = d_i^T A x_0.
Since the sum here only runs up to i - 1, every summand is 0 with respect to the energy scalar product with this d_i.
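And written out, this final step of the argument is:

    \[ d_i^\top A\, x_i = d_i^\top A \Big( x_0 + \sum_{j=0}^{i-1} \alpha_j d_j \Big) = d_i^\top A\, x_0 \;\Rightarrow\; \alpha_i = -\frac{d_i^\top g_i}{d_i^\top A\, d_i} = -\frac{d_i^\top g_0}{d_i^\top A\, d_i} = \tilde{\alpha}_i, \]

and therefore x_m = x_0 + sum_{j=0}^{m-1} alpha_j d_j = x_0 + (x* - x_0) = x*, which is exactly the claim of the lemma.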