Essentially, you see that we are trying to use a DNN to learn or to identify shapes, and here we use image segmentation as one example.
Then how can we guarantee that the output shape is a star shape or a convex shape?
How can we preserve volumes?
How can we add spatial regularization?
These are the things we are going to talk about today.
This is joint work with Professor Jun Liu from Beijing Normal University and Professor Shousheng Luo from Mohamad University.
Xiangyue Wang is a student of Professor Liu, and Jia Fan is a student of mine at Hong Kong Baptist University.
So we will take image segmentation as an example.
Essentially, I have an input image, and I build a neural network in between; the output is the segmentation.
I am given many images, each with its segmentation already provided, and I use those image-segmentation pairs to train the network.
Once the network is trained, when you have a new image, you just pass it through the network and you get the corresponding segmentation.
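The train-then-infer pipeline just described can be sketched with a deliberately tiny stand-in for the network. Everything here is an illustrative assumption, not the actual setup from the talk: instead of a U-Net, a per-pixel logistic classifier is trained on synthetic disk images with given ground-truth masks, and then applied to a new image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: each "image" is a 16x16 array showing a bright disk,
# and the given segmentation is the binary disk mask.
def make_pair():
    yy, xx = np.mgrid[:16, :16]
    mask = ((yy - 8) ** 2 + (xx - 8) ** 2 < 25).astype(float)
    image = mask + 0.1 * rng.standard_normal((16, 16))
    return image, mask

# "Training": gradient descent on the per-pixel cross-entropy between the
# prediction and the given segmentation (a stand-in for training a U-Net).
w, b = 0.0, 0.0
for _ in range(200):
    image, mask = make_pair()
    p = 1.0 / (1.0 + np.exp(-(w * image + b)))   # sigmoid prediction
    grad = p - mask                              # d(cross-entropy)/d(logit)
    w -= 0.1 * np.mean(grad * image)
    b -= 0.1 * np.mean(grad)

# "Inference": a new image goes through the trained map; threshold at 0.5.
test_image, test_mask = make_pair()
pred = (1.0 / (1.0 + np.exp(-(w * test_image + b))) > 0.5).astype(float)
accuracy = np.mean(pred == test_mask)
```

The point is only the workflow: fit the image-to-segmentation map on labeled pairs, then apply it to unseen images.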
So we use this example to illustrate the idea.
How do you guarantee a star shape?
Essentially, how do you guarantee that your output is a star shape or a convex shape, or satisfies something even stronger?
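To make the star-shape requirement concrete, here is a minimal sketch (my own illustration, not a method from the talk) that checks whether a binary mask is star-shaped with respect to a given center: every foreground pixel must see the center through the mask.

```python
import numpy as np

def is_star_shaped(mask, center, samples=50):
    """Approximate check that a binary mask is star-shaped about `center`:
    for every foreground pixel, the straight segment back to the center
    must stay inside the mask."""
    cy, cx = center
    ys, xs = np.nonzero(mask)
    t = np.linspace(0.0, 1.0, samples)
    for y, x in zip(ys, xs):
        # Sample the segment from the center to this pixel.
        py = np.rint(cy + t * (y - cy)).astype(int)
        px = np.rint(cx + t * (x - cx)).astype(int)
        if not mask[py, px].all():
            return False
    return True

# A filled square is star-shaped about its center; a hollow ring is not.
square = np.zeros((9, 9), int)
square[2:7, 2:7] = 1
ring = square.copy()
ring[3:6, 3:6] = 0
```

A convex shape is star-shaped with respect to every interior point, so convexity is the stronger constraint.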
Here is the network: we use a U-Net, but later in the experiments we also use a SegNet and DeepLab, and we compare with other methods as well.
These networks have been demonstrated to be very, very successful for image segmentation, shape reconstruction, and many other applications.
But you see there are some problems, problems that I think people have not paid much attention to yet, or maybe we are just not there yet.
The problem is about regularity, or stability.
Essentially, if your image is completely clean and you pass it through the network, you will mostly get something very close to the ground truth.
Now what we did is add a little noise to the image, again pass it through the network, and this is what you get: it still looks close to the ground truth, but it is very, very noisy.
There is no regularity.
So essentially this mapping is unstable.
That is also the reason people study GANs to design adversarial attacks, and then use those attacks to make the network more stable.
So this is one of the problems: neural networks are normally not so stable when the input is noisy.
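The clean-versus-noisy experiment can be mimicked with a toy pixelwise "network" (simple thresholding, an assumption for illustration): the clean disk segments with a smooth boundary, while a mildly noisy copy of the same image produces a mask whose boundary is far longer and more irregular, exactly the lack of regularity described above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Clean "image": a bright disk on a dark background.
yy, xx = np.mgrid[:64, :64]
clean = ((yy - 32) ** 2 + (xx - 32) ** 2 < 400).astype(float)

# Pixelwise thresholding plays the role of an unregularized segmentation map.
def segment(img):
    return (img > 0.5).astype(int)

# Total length of 4-neighbour transitions: a simple boundary-roughness measure.
def boundary_length(mask):
    return np.abs(np.diff(mask, axis=0)).sum() + np.abs(np.diff(mask, axis=1)).sum()

clean_len = boundary_length(segment(clean))
noisy = clean + 0.4 * rng.standard_normal(clean.shape)
noisy_len = boundary_length(segment(noisy))
# Both inputs depict the same disk, yet the noisy segmentation has a much
# longer, speckled boundary: the pixelwise map has no built-in regularity.
```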
There are different approaches that have been used to tackle this kind of problem.
One of them is to add regularization to the network.
But mostly, the regularization has been added as a post-processing step, because it is not part of the training process.
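A minimal sketch of what such post-processing regularization looks like, with a 3x3 majority vote assumed as the smoother (my own stand-in, not the specific regularizers from the talk): the noisy prediction is cleaned only after the network has produced it, with no effect on how the network was trained.

```python
import numpy as np

rng = np.random.default_rng(2)

# A noisy segmentation of a disk, as an unregularized network might output:
# 10% of the pixels of the true mask are flipped at random.
yy, xx = np.mgrid[:64, :64]
truth = ((yy - 32) ** 2 + (xx - 32) ** 2 < 400).astype(int)
noisy_pred = np.where(rng.random(truth.shape) < 0.1, 1 - truth, truth)

# Post-processing regularization: a 3x3 majority vote over the finished
# prediction, applied outside the training loop.
def majority_filter(mask):
    h, w = mask.shape
    padded = np.pad(mask, 1)
    votes = sum(padded[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3))
    return (votes >= 5).astype(int)

smoothed = majority_filter(noisy_pred)
err_before = np.mean(noisy_pred != truth)
err_after = np.mean(smoothed != truth)
# Isolated flipped pixels are voted away, so err_after drops well below
# err_before, but the network itself has learned nothing about regularity.
```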
Accessible via
Open access
Duration
00:59:05 min
Recording date
2020-05-20
Uploaded on
2020-06-30 23:36:27
Language
en-US