Welcome to computer graphics.
So last week we started with texture mapping,
so you learned about the idea of texture mapping,
which at first glance is pretty simple: when we rasterize an object, we make it visually
more interesting by applying a texture. A texture, for now, is a simple image – typically
2D – and it can be a photograph, or it can be painted by an artist, or other things – which we
will also learn about today.
And when we rasterize an object,
what we have to do is assign
texture coordinates to the vertices of the object, and these texture coordinates will
then be interpolated.
The interpolated texture coordinate tells us where to fetch the color of a single pixel
during rasterization.
And these texture coordinates, usually are provided as part of our 3D model.
So it's an additional attribute that comes with all the vertices.
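As a small sketch of that idea: during rasterization, each pixel inside a triangle gets barycentric weights for the three vertices, and the per-vertex texture coordinates are blended with those weights. This is the plain linear interpolation that the lecture revisits below (and shows to be perspectively incorrect); the function name and the weight convention are my own, not from the lecture.

```python
def interpolate_uv(bary, uv_a, uv_b, uv_c):
    """Plain linear interpolation of per-vertex texture coordinates.

    bary: barycentric weights (alpha, beta, gamma) with alpha+beta+gamma = 1
    uv_a, uv_b, uv_c: (s, t) texture coordinates at the three vertices
    """
    alpha, beta, gamma = bary
    s = alpha * uv_a[0] + beta * uv_b[0] + gamma * uv_c[0]
    t = alpha * uv_a[1] + beta * uv_b[1] + gamma * uv_c[1]
    return (s, t)

# At a vertex (weight 1 for that vertex) we get that vertex's own
# coordinate; at the centroid we get the average of all three.
```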
And we can use textures for different purposes.
So in that example, we use the texture for the color of the object.
That's the typical case, that we take the color of an object from such a
texture.
Normally, we also apply lighting.
That means it's not the final color; for instance, it's the diffuse color for
the lighting computation that we take from this texture.
So we have the texture and the lighting on top.
But there can also be very different stuff stored in the texture – anything that should vary
over the object.
And we will see other examples also today.
And what we spoke about on Thursday is the interpolation of the texture coordinates.
As I said, texture coordinates are applied to the vertices, and that means if we rasterize
a triangle, we will have three texture coordinates for the vertices.
And yeah, you learned that doing a normal linear interpolation, as we learned it for Gouraud
shading, in that case will fail – or it will not fail, but it will deliver wrong results,
like here, where we see that a linear interpolation, like we have here on the right,
obviously gives a weird result.
This is what we would expect in that case.
And yeah, the reason for this is that linear interpolation on an object is not the same
as a linear interpolation on its image.
And that's a property of projective mappings.
And yeah, we can explain where that comes from.
And at the end, we were also looking at a technique to solve that, an approach that
delivers a perspectively correct interpolation, as it is called.
And the idea is that instead of interpolating attributes like texture coordinates, in this
case s and t – so here at this vertex A we have s_A, t_A,
here, at vertex B, we have texture coordinate s_B, t_B,
and here we have s_C, t_C, so these are the texture coordinates at these vertices – instead of interpolating
these directly, we divide them by the z value of that vertex, yeah, so the distance from the
camera, the z component of that vertex. We divide by that one, and additionally we take
the value 1 over z of each vertex, and you see that already looks pretty much like homogeneous coordinates.
And we interpolate these linearly, like we learned, and then we get an interpolated
s over z, t over z, and 1 over z, and if we interpret this as homogeneous coordinates,
then we divide by the interpolated 1 over z, and we get a perspectively correct interpolated s, t value.
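The recipe just described can be sketched in a few lines: interpolate s/z, t/z, and 1/z linearly with the screen-space weights, then do the homogeneous divide by the interpolated 1/z. The function name, argument layout, and weight convention are my assumptions for illustration, not notation from the lecture.

```python
def perspective_correct_uv(bary, uv, z):
    """Perspectively correct interpolation of texture coordinates.

    bary: screen-space barycentric weights (alpha, beta, gamma)
    uv:   [(s_A, t_A), (s_B, t_B), (s_C, t_C)] per-vertex coordinates
    z:    (z_A, z_B, z_C) per-vertex depths (distance from the camera)
    """
    # Interpolate s/z, t/z, and 1/z linearly in screen space...
    s_over_z = sum(w * s / zi for w, (s, _t), zi in zip(bary, uv, z))
    t_over_z = sum(w * t / zi for w, (_s, t), zi in zip(bary, uv, z))
    one_over_z = sum(w / zi for w, zi in zip(bary, z))
    # ...then divide by the interpolated 1/z (the homogeneous divide).
    return (s_over_z / one_over_z, t_over_z / one_over_z)
```

At a vertex (weight 1 for that vertex) this returns the vertex's own coordinates; in between, the result differs from plain linear interpolation whenever the three depths differ, which is exactly the distortion seen in the slide example.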
Presenters
Accessible via: Open access
Duration: 01:14:54 min
Recording date: 2019-11-18
Uploaded on: 2019-11-19 12:09:03
Language: de-DE