Our system consists of a target object, a color camera, a depth camera, and a projector.
The target object is tracked in real time by the depth camera.
The setup is geometrically calibrated and recreated in this virtual environment.
The goal of our system is to match the live camera image to a given target appearance,
which in this demo we can set over here.
At regular intervals, our system takes samples from the live camera image and feeds them
to a solver which then estimates the current projector gamut among other parameters.
The projector gamut as currently estimated is shown over here.
Right now it's just set to sRGB, which is our default value.
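As a rough illustration of this solver step, here is a minimal Python/NumPy sketch that fits a 3x3 color mixing matrix to sample pairs by least squares. The function name and the plain least-squares formulation are assumptions for illustration; the paper's actual solver estimates further parameters as well.

```python
import numpy as np

def estimate_projector_gamut(projected, observed):
    """projected, observed: (N, 3) arrays of linear RGB sample pairs."""
    # Solve observed ~ projected @ M.T for the 3x3 mixing matrix M
    # in the least-squares sense.
    MT, *_ = np.linalg.lstsq(projected, observed, rcond=None)
    return MT.T

# Before any samples arrive, the demo falls back to sRGB, i.e. an
# identity mixing matrix in the projector's own linear RGB space.
default_gamut = np.eye(3)
```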
We can add new samples to the system right here.
For now, let's start with a random, colorful target appearance and hit add sample.
The estimated projector gamut updates immediately.
Let's add a few more samples.
Of course, in a real setup the system takes samples automatically every few milliseconds,
so let's enable this.
Here on the left we show the current error between the camera image and the target appearance.
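The exact error metric is not stated in the demo; as an assumption, this sketch computes a mean per-pixel Euclidean distance in RGB:

```python
import numpy as np

def appearance_error(camera_img, target_img):
    """Both images: (H, W, 3) arrays in linear RGB."""
    # Mean Euclidean distance per pixel between live view and target.
    return float(np.mean(np.linalg.norm(camera_img - target_img, axis=-1)))
```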
Let me now toggle between the uncorrected and the corrected version.
Again: this is the uncorrected version, and this is the corrected version.
We can also see this with the texture: again, this is the corrected version and this is
the uncorrected version.
And you can clearly see the difference.
In addition to the projector gamut we also estimate the current environment light.
In this setup, there is a window on the left side of the room that casts strong light
from this direction, and our estimate captures it.
This works by placing virtual light probes in the scene, which we show here.
The probes are positioned at these red dots and are modeled by spherical harmonics.
The influence of these probes is then interpolated over the target object.
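A minimal sketch of this probe scheme, assuming band-0/1 spherical harmonics and inverse-distance interpolation (the paper may use a different SH order and weighting):

```python
import numpy as np

def eval_sh_l1(coeffs, n):
    """coeffs: (4, 3) band-0/1 SH coefficients per RGB channel; n: unit normal."""
    basis = np.array([0.282095,         # Y_0^0
                      0.488603 * n[1],  # Y_1^-1
                      0.488603 * n[2],  # Y_1^0
                      0.488603 * n[0]]) # Y_1^1
    return basis @ coeffs               # (3,) RGB light arriving at the point

def env_light_at(point, normal, probe_pos, probe_coeffs, eps=1e-6):
    """probe_pos: (P, 3) probe positions; probe_coeffs: (P, 4, 3) SH coefficients."""
    d = np.linalg.norm(probe_pos - point, axis=1)    # distance to each probe
    w = 1.0 / (d + eps)
    w /= w.sum()                                     # normalized blend weights
    blended = np.tensordot(w, probe_coeffs, axes=1)  # (4, 3) interpolated SH
    return eval_sh_l1(blended, normal)
```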
In addition to the light probes, we also employ a local refinement step to further improve
the estimate of the environment lighting.
In this case, for example, we can see actual highlights in the live camera image right here,
and if I now toggle on the local refinement, we can see them reproduced right here.
Again, this is without the local refinement, and this is with it.
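One plausible way to sketch such a refinement (an assumption about the mechanism, not the paper's actual method): compute the residual the probes failed to explain and fold a smoothed version of it back into the local light estimate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def refine_light_estimate(light_map, predicted, observed, sigma=2.0):
    """All arguments: (H, W, 3) images in the camera's view."""
    residual = observed - predicted  # detail the probes missed, e.g. highlights
    smoothed = gaussian_filter(residual, sigma=(sigma, sigma, 0))
    return light_map + smoothed
```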
The last step is gamut mapping.
Here we have to consider that every surface point might have a different range of displayable
colors due to external factors and internal projector processing.
We therefore again place probes on the object surface, similar to our light probes, but this
time we estimate the projector gamut for each of these probes.
We can show this here.
The difference between these probes is hard to see in this setup.
Please refer to our paper for a cross section of these gamuts for comparison.
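In a sketch, the per-point gamut can reuse the same interpolation idea as the light probes, now blending 3x3 gamut matrices (inverse-distance weighting is again an assumption):

```python
import numpy as np

def gamut_at(point, probe_pos, probe_gamuts, eps=1e-6):
    """probe_gamuts: (P, 3, 3) per-probe gamut matrices."""
    d = np.linalg.norm(probe_pos - point, axis=1)
    w = 1.0 / (d + eps)
    w /= w.sum()
    return np.tensordot(w, probe_gamuts, axes=1)  # blended (3, 3) gamut
```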
Now we can actually enable the gamut mapping.
Gamut mapping is performed before applying the inverse projector response curve.
After I enable it, the projector image, and therefore the camera image here, gets
a bit darker.
This is because the input image is scaled into the projector's dynamic range.
This also means that the error between the camera image and the target appearance
gets slightly higher when gamut mapping is applied.
However, it prevents clipping artifacts and therefore leads to a more pleasing appearance
in the real world.
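To tie this last step together, here is a minimal sketch of scale-based gamut mapping followed by the inverse response curve. Using one gamut matrix for the whole image and a plain gamma curve for the projector response are both simplifications for illustration.

```python
import numpy as np

def gamut_map_image(target, gamut):
    """target: (H, W, 3) desired appearance; gamut: (3, 3) mixing matrix."""
    # Projector input needed to reproduce each target color.
    p = target @ np.linalg.inv(gamut).T
    # One global scale into the displayable range instead of per-pixel
    # clipping: the image gets slightly darker but keeps relative colors.
    s = min(1.0, 1.0 / max(float(p.max()), 1e-6))
    return np.clip(p * s, 0.0, 1.0)

def projector_image(target, gamut, gamma=2.2):
    # Gamut mapping first, then the inverse response (assumed gamma curve).
    return gamut_map_image(target, gamut) ** (1.0 / gamma)
```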
Accessible via: Open access
Duration: 00:07:12 min
Recording date: 2021-11-12
Uploaded on: 2021-11-12 21:06:02
Language: en-US
Projection mapping augments a real-world object’s appearance by projecting digital content on its surface. However, a remaining obstacle to immersive projection mapping is the limitation to white Lambertian surfaces and uniform neutral environment light, if any. Violating one of these assumptions results in a discernible difference between the source material and the appearance of the projected content. For example, some colors may not be visible due to intense environment lighting or pronounced surface colors. We present a system that actively subdues many of those real-world influences, especially environment lighting. Our system supports dynamic (i.e., movable) target objects as well as changing lighting conditions while requiring no prior color calibration of the projector nor any precomputed environment probing. We automatically and continuously estimate these influences during runtime in a real-time feedback loop and adjust the projected colors accordingly.
From the paper
Kurth P., Klein V., Stamminger M., Bauer F.:
Real-Time Adaptive Color Correction in Dynamic Projection Mapping
IEEE International Symposium on Mixed and Augmented Reality (ISMAR) (Recife, Brazil, November 9-13, 2020)
In: IEEE (ed.): Proceedings of the IEEE International Symposium on Mixed and Augmented Reality (ISMAR) 2020
DOI: 10.1109/ISMAR50242.2020.00039