So welcome to today's lecture.
Professor Hornegger is not able to give the lecture today, so instead I will show you something
from the practical work of our computer vision group, in particular an example from our
topic of image forensics.
I think this is a nice example, although we did not develop it ourselves; it is called
"Exposing Digital Forgeries in Complex Lighting Environments".
First of all, something about the computer vision group.
We have several people in the group, and many of them work on reflectance analysis of real
scenes.
Angelo Pullo does that with multispectral imaging and the analysis of specularities.
Eva Eibenberger works on skin detection and illumination color estimation within a scene.
Johannes Jordan works on image forensics, and I myself work on image forensics and illumination
color estimation.
Most of the pictures shown in this talk build upon a seminar contribution by Dominic
Schulthaus and Thomas Richter from last semester's forensics seminar.
What are we talking about?
In general, we are talking about revealing tampered images, images that have been doctored
or manipulated, and there are several ways to examine such an image.
You can examine properties within the world, like the lighting environment, which we are
talking about today.
You can also exploit properties of the imaging system, the lens, the sensor, or the camera
internal processing in order to detect irregularities.
Or you can work directly on the image and check whether, for instance, the bit statistics
look strange.
That is closely related to steganography detection.
These are the possibilities.
Okay, but since we are doing reflectance analysis, we mainly look at properties in the world.
So what are we talking about if we are talking about lighting environments?
We have a situation where objects from two different images are spliced together.
So the original image, let's say, is this one here: Tom and Katie.
And the first author of this paper, Kimo Johnson, put himself into the picture next to
Katie instead.
So: Kimo and Katie.
And our approach, or the approach we are looking at, is that if two objects are spliced
together, then they are probably illuminated from different directions.
That should not be the case if they were captured in the same scene under the same
conditions.
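To make this consistency check concrete, here is a minimal sketch in Python. The two direction vectors and the 30-degree threshold are purely illustrative assumptions of mine, not values from the paper; the real method has to estimate these directions from the image first.

```python
import numpy as np

def angle_between(d1, d2):
    """Angle in degrees between two estimated light directions."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    return np.degrees(np.arccos(np.clip(np.dot(d1, d2), -1.0, 1.0)))

# Hypothetical 2-D light directions estimated from two objects in one image.
light_person_a = np.array([0.9, 0.4])   # e.g. light from the upper right
light_person_b = np.array([-0.8, 0.5])  # e.g. light from the upper left

# A large discrepancy suggests the objects were not lit by the same source,
# i.e. one of them may have been spliced in. The threshold is arbitrary here.
if angle_between(light_person_a, light_person_b) > 30.0:
    print("Lighting directions are inconsistent -- possible splicing.")
```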
So first of all, we need to talk about Lambertian reflectance.
If you attended the computer graphics course, for instance, then this should be well known
to you.
Basically, this states that the amount of light reflected from a point in the
scene depends only on the angle between the surface normal and the direction towards the
light source.
This is a very simplifying assumption.
But for our course, this will suffice.
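As a small illustration, this is what the Lambertian model looks like in code; a minimal sketch, where the function name and the example vectors are my own choices.

```python
import numpy as np

def lambertian_intensity(normal, light_dir, albedo=1.0):
    """Reflected intensity under the Lambertian model.

    Depends only on the angle between the surface normal and the
    direction towards the light source; the max(..., 0) clips points
    that face away from the light.
    """
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    return albedo * max(np.dot(n, l), 0.0)

# A surface facing straight up, lit from 45 degrees above the horizon:
print(lambertian_intensity(np.array([0.0, 0.0, 1.0]),
                           np.array([0.0, 1.0, 1.0])))  # ~0.707 = cos(45 deg)
```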
Then, if we are talking about real-world properties, we have to consider our illumination
model.
So what we have here is that we model the scene irradiance as the integral over the
illumination radiance and the reflectance of a surface point.
And we integrate over the half sphere above that point, because light can come from all
directions.
And now, if we take this Lambertian surface reflectance, we can simplify our reflectance
function: it depends only on the angle between the surface normal and the light direction.
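Written out, the model just described might look as follows; this is my own sketch of the equations in LaTeX notation, so the symbols on the actual slides may differ.

```latex
% Scene irradiance at a surface point x with normal n(x):
% integral over the hemisphere Omega(x) of incoming radiance times reflectance.
E(\mathbf{x}) = \int_{\Omega(\mathbf{x})} L(\boldsymbol{\omega})\,
                R(\mathbf{x}, \boldsymbol{\omega})\, d\boldsymbol{\omega}

% Under the Lambertian assumption, the reflectance reduces to the clamped
% cosine between light direction and surface normal, scaled by the albedo rho:
E(\mathbf{x}) = \rho(\mathbf{x}) \int_{\Omega(\mathbf{x})} L(\boldsymbol{\omega})\,
                \max\bigl(\boldsymbol{\omega} \cdot \mathbf{n}(\mathbf{x}),\, 0\bigr)\,
                d\boldsymbol{\omega}
```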