Thank you, Antonio, and thank you, Leon, and all of the organizers of this seminar.
This is great.
Also, thank you to Andrea.
It's, of course, an honor to be sharing this platform with him.
And OK, so as Antonio said, I'm going to talk about graph Laplacians, regularity
theory, and related things.
And then, the way that the talk is structured: the title is this one,
and it's about regularity theory.
But in fact, I'm only going to show the regularity theory in the last 10 or 15 minutes.
But the first part of the talk will be kind of a path to get there.
So I'm going to show you a bunch of results that are connected
to one another.
And they will culminate, they will finish, with this regularity theory.
So please stay with me and let's see where this takes us.
OK, so then, just in a bit more detail, here is the description of what I'm going to talk about.
I have divided this talk into four acts.
So in the first one, I'm going to give you a very brief motivation and show you
some of the very basic, let's say the first, results on studying these graph Laplacians.
Then I'll tell you a little bit about asymptotic spectral consistency of these graph Laplacians;
I'll make sure that the setup is clear by that point.
Then we're going to talk about quantitative convergence rates, first in the L2 sense.
And then, in the last part, I will talk about a stronger notion of convergence and also
finer properties of the eigenvectors of these graph Laplacians.
And so this is more or less the outline of the talk.
And so OK, so then let's start at the beginning.
So OK, so the context where these graph Laplacians arise, at least for me, the motivation
I have for my work, is data analysis.
And in data analysis, we have data.
And then we want to learn from data.
And so there are at least two big frameworks, or big paradigms, of learning that are of interest.
There are more than two, but two of them are very popular:
unsupervised learning and supervised learning.
So in unsupervised learning, the idea is that, given the data set, you want to find coarse
structure, to find clusters.
And in the other setting, the supervised one, well, you have the data, but you also
have labels.
Typically, these labels are given to you by an expert, say a doctor, an engineer,
or an artist, for example.
And the idea is to find a regression function so that whenever you get a new data point,
you can label it correctly.
So there are these two frameworks out there in learning.
Now the idea, of course, is that you want to use structure from the data to try
to fulfill one of these tasks.
And there is a framework for learning known as graph-based learning, where the
idea is that you are going to inform your clusters or your regression function by, basically,
a graph that you have on your data set.
So the data set comes, and we're going to imagine that it comes with a notion of similarity
between data points.
And then, of course, the idea is to use this similarity to fulfill those learning tasks.
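To make this concrete, here is a minimal sketch (my own illustration, not code from the talk) of how one might build such a proximity graph and its unnormalized graph Laplacian from a data cloud; the Gaussian kernel, the truncation at scale eps, and the synthetic data are all illustrative choices.

```python
import numpy as np

def graph_laplacian(X, eps):
    """Unnormalized graph Laplacian L = D - W of a proximity graph on the rows of X.

    X: (n, d) array of data points; eps: connectivity length scale.
    """
    # Pairwise squared distances between data points.
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    # Similarity weights: Gaussian kernel, truncated so that only points within
    # distance eps of each other are connected.
    W = np.exp(-sq_dists / eps**2) * (sq_dists <= eps**2)
    np.fill_diagonal(W, 0.0)          # no self-loops
    D = np.diag(W.sum(axis=1))        # degree matrix
    return D - W

# Example usage on a small synthetic data cloud.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
L = graph_laplacian(X, eps=0.5)
```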
So then, for example, if we have this data set of handwritten numbers, the way we have
Graph Laplacians are omnipresent objects in machine learning that have been used in supervised, unsupervised and semi-supervised settings due to their versatility in extracting local and global geometric information from data clouds. In this talk I will present an overview of how the mathematical theory built around them has gotten deeper and deeper, layer by layer, since the appearance of the first results on pointwise consistency in the 2000s, until the most recent developments; this line of research has found strong connections between PDEs built on proximity graphs on data clouds and PDEs on manifolds, and has given a more precise mathematical meaning to the task of "manifold learning". In the first part of the talk I will highlight how ideas from optimal transport made possible some of the initial steps, which provided L2-type error estimates between the spectra of graph Laplacians and Laplace-Beltrami operators. In the second part of the talk, which is based on recent work with Jeff Calder and Marta Lewicka, I will present a newly developed regularity theory for graph Laplacians which, among other things, allows us to bootstrap the L2 error estimates developed through optimal transport and upgrade them to uniform convergence and almost C^{0,1} convergence rates. The talk can be seen as a tale of how a flow of ideas from optimal transport, PDEs, and, in general, analysis has made possible a finer understanding of concrete objects popular in data analysis and machine learning.
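As a small, self-contained illustration of the spectral consistency described in the abstract (my own sketch, not code from the talk or the cited work), one can sample points from the unit circle, build an ε-proximity graph with an indicator kernel, and check that the low-lying eigenvalues of the graph Laplacian, after an empirical rescaling, line up with the Laplace-Beltrami eigenvalues 0, 1, 1, 4, 4, 9, 9 on S^1. The sample size, bandwidth, kernel, and rescaling below are all illustrative choices.

```python
import numpy as np

# Sample n points uniformly from the unit circle (a 1-d manifold embedded in R^2).
n, eps = 1000, 0.2
theta = np.random.default_rng(1).uniform(0.0, 2.0 * np.pi, n)
X = np.column_stack([np.cos(theta), np.sin(theta)])

# Epsilon-proximity graph with an indicator kernel, and its graph Laplacian.
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
W = (sq_dists <= eps**2).astype(float)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(axis=1)) - W

# Low-lying spectrum, rescaled so the first nonzero eigenvalue equals 1.
graph_evals = np.sort(np.linalg.eigvalsh(L))[:7]
graph_evals = graph_evals / graph_evals[1]

# Laplace-Beltrami eigenvalues on S^1: k^2, each with multiplicity 2 (plus 0).
continuum_evals = np.array([0, 1, 1, 4, 4, 9, 9])
print(np.round(graph_evals, 2))   # approximately matches the continuum eigenvalues
print(continuum_evals)
```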