22 - Beyond the Patterns - Udaranga Wickramasinghe - Voxel2Mesh: 3D Mesh Model Generation from Volumetric Data [ID:30491]

Welcome back to Beyond the Patterns. Today I have the great pleasure to announce another

invited talk, and we have invited Udaranga Wickramasinghe from EPFL Lausanne, Switzerland,

and he will present his latest work, which he also published at the MICCAI Conference 2020.

So he is a PhD student at CVLAB, EPFL, and is advised by Professor Pascal Fua.

His research focuses on 3D surface extraction from volumetric images and ways to introduce

prior knowledge into deep neural networks.

Prior to joining CVLAB, he completed his master's degree in Computer Science at EPFL

and his bachelor's degree in 2014 in Electronics and Telecommunication Engineering at the

University of Moratuwa, Sri Lanka.

So it's a great pleasure to announce him here as a speaker, and the presentation is

entitled Voxel2Mesh: 3D Mesh Model Generation from Volumetric Data.

Udaranga, it's a great pleasure to have you here, and the stage is yours.

Hello everyone.

Thank you very much, Andreas, for the invitation.

It's really a pleasure to talk to you all about my work on Voxel2Mesh.

So the work I presented during the last MICCAI conference was Voxel2Mesh.

It's about 3D mesh model generation from volumetric data.

So before moving directly on to our architecture, let's talk a little bit about shape

representation in general.

So there are two main categories that we use to represent shape in computer vision or

computer graphics.

One is volumetric representation and the other is surface representation.

At the moment, the most popular way to represent shape is actually the volumetric representation.

So a few of the techniques that fall under volumetric representation are occupancy

grids, signed distance fields, or octrees.

So here I have shown an image of a liver extracted from a CT scan, shown in volumetric

representation using an occupancy grid.

So an occupancy grid is basically a 3D tensor that is one where the object of interest

is present and zero where it is not.
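As an illustration of the idea (this example is not from the talk itself), an occupancy grid for a hypothetical object, here a sphere of radius 10 in a 32³ volume, can be built in a few lines of NumPy:

```python
import numpy as np

# Sketch of an occupancy grid: a 3D binary tensor that is 1 inside the
# object of interest and 0 elsewhere. The "object" here is a hypothetical
# sphere of radius 10 centred in a 32^3 volume.
grid_size = 32
radius = 10.0

# Coordinates of every voxel centre
zz, yy, xx = np.meshgrid(
    np.arange(grid_size), np.arange(grid_size), np.arange(grid_size),
    indexing="ij",
)
centre = (grid_size - 1) / 2.0
dist = np.sqrt((xx - centre) ** 2 + (yy - centre) ** 2 + (zz - centre) ** 2)

# Occupancy: 1 where the voxel lies inside the sphere, 0 outside
occupancy = (dist <= radius).astype(np.uint8)

print(occupancy.shape)  # (32, 32, 32)
```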

Similarly, then we have the signed distance field.

It is a slight extension of the occupancy grid.

Instead of an occupancy value, each element or voxel in the grid holds the distance

to the surface.
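Continuing the same illustrative example (a hypothetical sphere, not a figure from the talk), the signed distance field stores for each voxel its distance to the surface, with the sign distinguishing inside from outside under one common convention:

```python
import numpy as np

# Sketch of a signed distance field (SDF) for a hypothetical sphere of
# radius 10 in a 32^3 volume: each voxel stores its distance to the
# surface, negative inside the object, positive outside.
grid_size = 32
radius = 10.0

zz, yy, xx = np.meshgrid(
    np.arange(grid_size), np.arange(grid_size), np.arange(grid_size),
    indexing="ij",
)
centre = (grid_size - 1) / 2.0
dist = np.sqrt((xx - centre) ** 2 + (yy - centre) ** 2 + (zz - centre) ** 2)

# For a sphere the SDF is simply distance-to-centre minus the radius
sdf = dist - radius

# The occupancy grid is recovered by thresholding the SDF at zero
occupancy = (sdf <= 0).astype(np.uint8)
```

Thresholding the SDF at zero recovers the occupancy grid, which is why the SDF can be seen as a slight extension of it.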

Octrees are still not that common in the computer vision literature.

There are a couple of papers trying to develop deep learning techniques with octrees,

but they are a more memory-efficient way of representing 3D objects.

So all these things fall under volumetric representation.

Then we have the other part, which is this surface representation, which was my interest.

So in surface representation, instead of representing the volume, we are representing the surface

of the object.

In this surface representation, the most common way of representing a surface is using meshes,

which are a collection of vertices and faces.

And you can say that point clouds also fall under surface representation, but I put them

in brackets because a point cloud does not necessarily have the surface or face information

of a mesh; still, it has points on the surface.
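A mesh in this sense is just two arrays, and dropping the face array leaves a point cloud; a minimal sketch (using a tetrahedron as a stand-in object, not an example from the talk):

```python
import numpy as np

# Sketch of a triangle mesh: vertices (N x 3 coordinates) and faces
# (M x 3 integer indices into the vertex array). Here: a tetrahedron.
vertices = np.array([
    [0.0, 0.0, 0.0],
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])
faces = np.array([
    [0, 1, 2],
    [0, 1, 3],
    [0, 2, 3],
    [1, 2, 3],
])

# Discarding the faces leaves a point cloud: surface samples without
# any connectivity (face) information
point_cloud = vertices
```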

So these are the most common ways of representing objects or shapes in 3D.

Now if we look at the current literature, there is no argument that the volumetric representation

is dominating the field.

Part of a video series:

Accessible via

Open Access

Duration

01:16:36 min

Recording date

2021-03-31

Uploaded on

2021-03-31 19:07:29

Language

en-US

It’s a great pleasure to welcome Udaranga Wickramasinghe from EPFL, Lausanne, Switzerland at our lab for an invited talk!

Abstract: CNN-based volumetric methods that label individual voxels dominate the field of biomedical image segmentation. However, 3D surface representations of the segmented structures are often required for tasks like shape analysis. They can be obtained by post-processing the labeled volumes which typically introduces artifacts and prevents end-to-end training. In this talk, I introduce Voxel2Mesh, a novel architecture that goes from 3D image volumes to 3D surfaces directly without any post-processing and with better accuracy than current methods when using smaller training datasets. I will discuss in detail about the motivation, design choices, strengths and limitations of the architecture. I will also discuss how this can help to accelerate the adoption of deep learning techniques for shape analysis in medical imaging.

Short Bio: Udaranga Wickramasinghe is a PhD student at CVLAB – EPFL advised by Prof. Pascal Fua. His research focuses on 3D surface extraction from volumetric images and ways to introduce prior knowledge into deep neural networks. Prior to joining CVLAB, he completed his master’s degree in Computer Science at EPFL, Switzerland in 2017 and his bachelor’s degree in Electronics and Telecommunication Engineering at University of Moratuwa, Sri Lanka in 2014.

Links & References
Voxel2Mesh on arxiv: https://arxiv.org/abs/1912.03681
Voxel2Mesh code: https://github.com/cvlab-epfl/voxel2mesh
Heart segmentation: https://arxiv.org/abs/2102.07899
Deep Active Surface models: https://arxiv.org/abs/2011.08826

This video is released under CC BY 4.0. Please feel free to share and reuse.

For reminders to watch the new video follow on Twitter or LinkedIn. Also, join our network for information about talks, videos, and job offers in our Facebook and LinkedIn Groups.

Music Reference: 
Damiano Baldoni - Thinking of You (Intro)
Damiano Baldoni - Poenia (Outro)

Tags

beyond the patterns