Okay, thank you, Philipp.
Thanks very much for the nice introduction and for having me here to give a talk.
Can you hear me?
Yes.
Okay, good, good.
Okay, just to check.
So, it's a pleasure to talk about some research together with several collaborators here.
One is Daniel Rudolf from Göttingen.
The other collaborator is Claudia Schillings from Mannheim.
And the last one is also Philipp.
And we were investigating noise-level-robust Monte Carlo methods for Bayesian inference, or Bayesian inverse problems, with highly informative data, where the posterior measure is then highly concentrated.
Usually, informative data is, of course, a good thing for inference or identification problems.
But this high concentration of the posterior can also pose a serious challenge for naive numerical methods.
And I will talk a bit about how you can derive robust methods which still work well, even for very accurate or very large amounts of data.
So, let me introduce the basic setting which we are considering: Bayesian inverse problems.
We would like to infer an unknown x belonging, for this talk, to a finite-dimensional space R^d.
And we would like to infer it based on noisy observations of a measurable forward map G from R^d to R^k.
For the measurement noise, we assume here an additive model, and suppose that the noise is normally distributed with mean zero and covariance matrix (1/n) Sigma, where Sigma is a fixed covariance matrix and the scaling parameter n, a natural number, steers the level of the measurement noise in the data.
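In symbols, the observation model just described should read as follows (a reconstruction from the spoken description, since the slide itself is not part of the transcript):

```latex
% Observation model: additive Gaussian noise whose level shrinks as n grows.
\[
  y \;=\; G(x) \;+\; \eta_n, \qquad
  \eta_n \sim \mathcal{N}\!\Bigl(0, \tfrac{1}{n}\Sigma\Bigr), \qquad
  n \in \mathbb{N}.
\]
```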
If we then have some prior information in terms of a prior measure mu_0, which for most of the talk will be a mean-zero Gaussian measure with covariance operator, or covariance matrix, C_0, we obtain by the Bayesian approach a resulting posterior measure, which we denote by mu_n.
So, it depends in particular on this noise scaling parameter n.
And it's of this particular form.
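Written out, the posterior on the slide should have the following form (again reconstructed from the spoken description and the standard conventions):

```latex
% Posterior measure: change of measure w.r.t. the Gaussian prior mu_0.
\[
  \mu_n(\mathrm{d}x) \;=\; \frac{1}{Z_n}\, e^{-n\,\Phi(x)}\, \mu_0(\mathrm{d}x),
  \qquad
  \Phi(x) \;=\; \tfrac{1}{2}\,\bigl\|\Sigma^{-1/2}\bigl(y - G(x)\bigr)\bigr\|^2,
\]
\[
  Z_n \;=\; \int_{\mathbb{R}^d} e^{-n\,\Phi(x)}\, \mu_0(\mathrm{d}x),
  \qquad
  \mu_0 = \mathcal{N}(0, C_0).
\]
```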
So, you have here a normalization constant Z_n, and then e to the minus n Phi.
That's basically the likelihood term; for this Gaussian measurement noise, Phi corresponds to the quadratic data-misfit function involving the forward map G.
And then we have here, again, the prior measure.
What we would like to do is to sample approximately from the resulting posterior and to compute posterior expectations of quantities of interest F.
And we would like to analyze this objective in the case of increasing precision in the data, that is, n tending to infinity.
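To make the objective concrete, here is a minimal Python sketch (my own illustration, not code from the talk) of the naive, prior-based approach: self-normalized importance sampling with the prior mu_0 as proposal. All function names and the toy example are assumptions; the point is that the weights e^{-n Phi} degenerate as n grows, which is exactly the non-robustness the talk addresses.

```python
import numpy as np

def prior_based_estimate(F, Phi, n, C0, num_samples=10_000, seed=0):
    """Estimate E_{mu_n}[F] by self-normalized importance sampling
    with the Gaussian prior mu_0 = N(0, C0) as proposal (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    d = C0.shape[0]
    # Draw samples x_i ~ mu_0 via a Cholesky factor of C0.
    L = np.linalg.cholesky(C0)
    X = rng.standard_normal((num_samples, d)) @ L.T
    # Unnormalized posterior weights w_i = exp(-n * Phi(x_i)).
    log_w = -n * np.array([Phi(x) for x in X])
    log_w -= log_w.max()              # stabilize before exponentiating
    w = np.exp(log_w)
    w /= w.sum()                      # self-normalization replaces 1/Z_n
    return float(np.sum(w * np.array([F(x) for x in X])))

# Toy example: G(x) = x in one dimension, observation y, Sigma = 1.
y = 1.0
Phi = lambda x: 0.5 * (y - x[0]) ** 2   # quadratic data misfit
F = lambda x: x[0]                      # quantity of interest: the mean
for n in (1, 100, 10_000):              # weights degenerate as n grows
    print(n, prior_based_estimate(F, Phi, n, C0=np.eye(1)))
```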
Now, I probably don't have to motivate the Bayesian approach to inverse problems that much, but let me introduce on the next slide a motivational example, which also introduces the basic model problem we will consider in most parts of the talk.
So, this is now an example of uncertainty quantification in groundwater flow modeling.
We are here considering a deep geological repository for radioactive waste located in the USA.
It's called the WIPP site.
Abstract: The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters and models. However, sampling from or integrating w.r.t. the resulting posterior measure can become computationally challenging. In recent years, a lot of effort has been spent on deriving dimension-independent methods and on combining efficient sampling strategies with multilevel or surrogate methods in order to reduce the computational burden of Bayesian inverse problems.
In this talk, we are interested in designing numerical methods which are robust w.r.t. the size of the observational noise, i.e., methods which behave well in case of concentrated posterior measures. The concentration of the posterior is a highly desirable situation in practice, since it relates to informative or large data. However, it can also pose a significant computational challenge for numerical methods based on the prior or reference measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a-posteriori estimate (MAPE) and with covariance matrix depending on the Hessian of the log posterior density at the MAPE. We discuss convergence results of the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based importance sampling and quasi-Monte Carlo as well as Laplace-based Metropolis-Hastings algorithms are robust w.r.t. the concentration of the posterior for large classes of posterior distributions and integrands, whereas prior-based Monte Carlo sampling methods are not.
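As a rough illustration of the Laplace-based importance sampling mentioned in the abstract, here is a sketch under my own assumptions (a finite-difference Hessian and a SciPy optimizer); it is not the authors' implementation. The proposal is the Laplace approximation N(x_MAP, H^{-1}), where x_MAP minimizes the negative log posterior density and H is its Hessian at x_MAP.

```python
import numpy as np
from scipy.optimize import minimize

def hessian_fd(f, x, h=1e-5):
    """Central finite-difference Hessian of a scalar function f at x."""
    d = x.size
    I = np.eye(d)
    H = np.zeros((d, d))
    for i in range(d):
        for j in range(d):
            H[i, j] = (f(x + h*I[i] + h*I[j]) - f(x + h*I[i] - h*I[j])
                       - f(x - h*I[i] + h*I[j]) + f(x - h*I[i] - h*I[j])) / (4*h*h)
    return H

def laplace_is_estimate(F, Phi, n, C0_inv, d, num_samples=10_000, seed=0):
    """Estimate E_{mu_n}[F] by self-normalized importance sampling with the
    Laplace approximation of the posterior as proposal (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    # Negative log posterior density (up to an additive constant).
    neg_log_post = lambda x: n * Phi(x) + 0.5 * x @ C0_inv @ x
    x_map = minimize(neg_log_post, np.zeros(d)).x      # MAP estimate
    H = hessian_fd(neg_log_post, x_map)                # Hessian at the MAP
    X = rng.multivariate_normal(x_map, np.linalg.inv(H), size=num_samples)
    # Importance weights: target density over Laplace proposal density,
    # both up to constants that cancel under self-normalization.
    diffs = X - x_map
    log_q = -0.5 * np.einsum('ij,jk,ik->i', diffs, H, diffs)
    log_w = np.array([-neg_log_post(x) for x in X]) - log_q
    log_w -= log_w.max()
    w = np.exp(log_w)
    w /= w.sum()
    return float(np.sum(w * np.array([F(x) for x in X])))

# Same toy example as before: here the weights stay stable as n grows.
y = 1.0
Phi = lambda x: 0.5 * (y - x[0]) ** 2
F = lambda x: x[0]
for n in (1, 100, 10_000):
    print(n, laplace_is_estimate(F, Phi, n, C0_inv=np.eye(1), d=1))
```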