13 - Interventional Medical Image Processing (IMIP) 2011 [ID:1615]

Okay, so good morning everybody. We still have five weeks left, and it is going to get much more interesting in the upcoming lectures. Today we still have to consider something that is very much related to linear algebra and optimization, and that is the so-called hand-eye calibration. Hand-eye calibration is a very important topic that was considered solved for many years, but there are still people out there coming up with new ideas for solving problems related to hand-eye calibration, for instance with better numerical robustness. Hand-eye calibration is required everywhere robots, cameras, and autonomous systems are deployed: your camera is mounted on a device like a robot arm in the automobile industry, or you have a vehicle equipped with a camera, and you need the transformation between the vehicle or robot-hand coordinate system and the image coordinate system. This is done using hand-eye calibration. We have motivated this problem domain by looking at 3D ultrasound, where we have the ultrasound probe, we acquire images, and we have a reference coordinate system given by our markers. Last week I also showed you the motivation of an endoscope, where you have your markers and your image coordinate system and you want to compute the transform between them. So that's hand-eye calibration, and that's one part of the story here in interventional medical image processing. Let me briefly summarize the story that we have covered so

far. We started out with the motivation: why is interventional image processing so important, and what is the difference to diagnostic procedures? Then we looked into preprocessing operations, or operators in the sense of algorithms, and we looked into methods that allow us to compute edges in the image using a discrete approximation of the gradient. We also heard about methods to decide whether we are in a homogeneous region, in an edge region, or close to a corner. The basic idea in this type of method was: we look into a local neighborhood, we compute the gradients, and we look at how the gradients behave when we consider them as feature vectors. If they all have basically zero magnitude, we are in a flat region; if they all point in the same direction, we have an edge; and if they point in essentially orthogonal directions, the principal axes of the gradient distribution have basically the same length and we are at a corner. That was the core idea. We also called this the structure tensor, and that is basically something where we combine the gradient with the gradient transpose, which gives a rank-one matrix, and we sum this over a local neighborhood and consider basically these projection matrices. You remember how to read this: these are projection matrices where you project vectors onto the gradient.
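The flat/edge/corner test via the structure tensor can be sketched as follows. This is a minimal illustration, not the lecture's implementation; the thresholds (`eps` and the eigenvalue-ratio cutoff) are assumed values for demonstration:

```python
import numpy as np

def classify_region(patch, eps=1e-3, edge_ratio=0.1):
    """Classify a grayscale patch as 'flat', 'edge', or 'corner' by
    analyzing the structure tensor J, the sum over the neighborhood
    of the rank-one outer products g g^T of the gradient vectors g."""
    gy, gx = np.gradient(patch.astype(float))  # discrete gradient approximation
    # structure tensor: summed outer products of the gradients
    J = np.array([[np.sum(gx * gx), np.sum(gx * gy)],
                  [np.sum(gx * gy), np.sum(gy * gy)]])
    lam1, lam2 = np.sort(np.linalg.eigvalsh(J))[::-1]  # lam1 >= lam2 >= 0
    if lam1 < eps:
        return "flat"        # all gradients essentially zero
    if lam2 / lam1 < edge_ratio:
        return "edge"        # one dominant gradient direction
    return "corner"          # two principal axes of comparable length
```

For example, a constant patch is classified as flat, a linear intensity ramp as an edge, and two crossing intensity steps as a corner.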

What else did we consider in this context, James? The Hough transform. And the Hough transform is a very powerful technique. It was, by the way, one of the first algorithms in the field of image processing to be covered by a patent. So Hough was smart enough to file a patent, and he is extremely rich now. No, but it is nice to see that you can file these things as patents. The Hough transform makes use of the following core idea: you have a parametric form of the structure you are looking for, and then, for each pixel in the image, you compute the parameters that this parametric curve would have if the pixel belonged to it. At the end of the day, you look at which parameter combinations received the most votes, and you say that these structures are present in the currently considered image. That is the idea formulated in a very abstract manner. Concretely, we considered the detection of straight lines, and we know that a straight line is defined in normal form by x cos θ + y sin θ = d: one parameter is the angle θ of the normal vector with the x-axis, and the other parameter is the offset d, the distance of the line from the origin. For hand-eye calibration we will also need, or can also make use of, the Hough transform to compute the circles of the calibration pattern in the image. So how is a circle defined? Let's just look at the core idea; I mean, that's straightforward circle detection using the Hough transform. What is the parametric form of a circle?
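The parametric form of a circle with center (a, b) and radius r is (x - a)² + (y - b)² = r². A minimal sketch of the resulting center-voting idea, assuming for simplicity that the radius is known (a full implementation would also sweep over candidate radii):

```python
import numpy as np

def hough_circle_centers(edge_points, shape, radius, n_angles=360):
    """Vote for circle centers (a, b), assuming a known radius.
    Every edge pixel (x, y) could lie on a circle around any center
    (a, b) = (x - r cos(phi), y - r sin(phi)); the accumulator cell
    collecting the most votes is the detected center."""
    h, w = shape
    acc = np.zeros((h, w), dtype=int)  # accumulator over center candidates
    phis = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    for x, y in edge_points:
        a = np.round(x - radius * np.cos(phis)).astype(int)
        b = np.round(y - radius * np.sin(phis)).astype(int)
        inside = (a >= 0) & (a < w) & (b >= 0) & (b < h)
        # deduplicate so each pixel votes at most once per center cell
        for cell in set(zip(b[inside].tolist(), a[inside].tolist())):
            acc[cell] += 1
    return acc
```

As a usage example, four pixels sampled from a circle of radius 5 around (10, 12), such as (15, 12), (5, 12), (10, 17), and (10, 7), make the accumulator peak exactly at that center.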

Accessible via: Open access
Duration: 01:24:20 min
Recording date: 2011-06-27
Uploaded on: 2011-07-06 13:19:53
Language: en-US
Tags: Mustererkennung, Informatik, Bildverarbeitung, Medizinische