Let's start. So we have now come to the maybe more advanced theoretical part of this lecture. We would like to prove order-of-convergence estimates, more precisely, a priori order-of-convergence estimates. That is, we would like to estimate the deviation between the unknown exact solution of the continuous boundary value problem we are looking at, that is u, on the one hand, and the approximation we are computing, the finite element solution u_h, on the other. So, we would like to estimate u minus u_h.
So the first thing is that we need an appropriate norm, a norm such that u and u_h both belong to the corresponding space. And of course, the stronger the norm, the better the estimate: an L-infinity estimate would be better than an L2 estimate, and an H1 estimate would be better than an L2 estimate. There is also a relation between H1 and L-infinity, as we know, in one dimension.
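As a reminder (a standard fact in one space dimension): the Sobolev embedding gives

\[
  \|v\|_{L^\infty(a,b)} \;\le\; C \, \|v\|_{H^1(a,b)} \qquad \text{for all } v \in H^1(a,b),
\]

so in one dimension an H1 estimate also controls the maximum error.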
Okay, so what can we do? It is an a priori estimate we are after; that is, we would like to prove something of the following form. There is a constant depending on the problem, on all the parameters of the problem, the geometry and so on, and depending on the solution only through a semi-norm which we will specify later, such that we can estimate the error by this constant times the semi-norm times a certain power of h.
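Written out, this is an estimate of the form (a sketch: the norm on the left, the semi-norm |u|, and the power p are placeholders still to be specified for the concrete setting):

\[
  \| u - u_h \| \;\le\; C \, |u| \, h^{p} .
\]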
Here h is the discretization parameter: typically, in our case of two-dimensional triangles, it would be the maximal length of an edge, or in general the maximal diameter over all the elements that appear. So h indicates how many elements we have, how many degrees of freedom we have, how large the problem is; there is an inverse relation between the smallness of h and the size of the problem. The smaller h is, the more effort we have to invest, and of course we would like to see this effort reflected in the smallness of the error.
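As a rough rule of thumb (a sketch, assuming a quasi-uniform mesh in d space dimensions), the number of elements N scales like

\[
  N \sim h^{-d},
\]

so halving h in our two-dimensional setting roughly quadruples the number of elements.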
The first thing, of course, is that we would like to know that the procedure converges at all; that tells us that at least asymptotically what we are doing is correct, and it would be a proof of convergence in a certain norm. An a priori error estimate tells us more: it tells us how the error behaves, in the worst case, if we halve h, for example. And then the order, the power of h, plays a role: order one would be a typical case, but two is better than one, three is better than two, and so on.
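To make the role of the order concrete: if the error is bounded by C|u|h^p, then halving h reduces the bound by the factor 2^{-p},

\[
  C\,|u|\,\Bigl(\tfrac{h}{2}\Bigr)^{p} \;=\; 2^{-p}\, C\,|u|\,h^{p},
\]

so order one halves the error bound, order two quarters it, and so on.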
In contrast to these a priori error estimates, there are also a posteriori error estimates.
But before coming to saying something about these, maybe we should think about this constant C. In some simple cases we would be able to compute it, but here we will not try to do so, so we have no idea how big or how small this constant is. Typically it is big. What we have seen already is that one important ingredient of this constant is the quotient M over alpha, M being the boundedness constant from the continuity of the bilinear form and alpha being the constant from the V-ellipticity of the bilinear form. This is where Céa's lemma comes in: at least the way of doing these error estimates via Céa's lemma brings in this quotient, and of course this means that if this quotient is large, then in any case we have a large constant there.
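For reference, Céa's lemma states (in the standard setting of a continuous, V-elliptic bilinear form, with the finite element space V_h contained in V):

\[
  \| u - u_h \|_{V} \;\le\; \frac{M}{\alpha} \, \inf_{v_h \in V_h} \| u - v_h \|_{V} .
\]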
Later on we will look at situations where this quotient is large; typically this happens when problems become convection dominated. If the convective part, the first-order part, dominates the second-order, diffusive part, then alpha is small and therefore M over alpha is large. That does not mean that the methods we are discussing here are no longer convergent, or that they lose the order of convergence which we are going to prove.
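A sketch of the typical model case (an illustration under assumptions: a one-dimensional convection-diffusion problem on (0,1) with diffusion coefficient \varepsilon > 0, constant convection b, and homogeneous Dirichlet conditions):

\[
  a(u,v) \;=\; \varepsilon \int_0^1 u'v'\,dx \;+\; b \int_0^1 u'v\,dx ,
  \qquad \alpha \sim \varepsilon, \qquad M \sim \varepsilon + |b| ,
\]

so M over alpha grows like |b|/\varepsilon as \varepsilon tends to zero.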