18 - Computergraphik

Good morning, welcome to Computer Graphics.

Today we will speak about secondary effects in ray tracing.

So on Monday I explained the idea of ray tracing, in particular ray casting, which is mainly a different way to generate the same images that we could generate with rasterization.

But I also said that having this ray tracing operator available allows us to achieve many more nice effects, which increase realism, make images look better, and improve quality.

And while last time we mainly looked at eye rays, today we will look at secondary effects and secondary rays.

So today we will not only shoot rays from the eye into our scene, but also within the

scene to do lighting computations.

So while last time we were only casting rays through the single pixels and then shading the single hit points, today we will... okay, this is just a repetition of what we saw before.

So we traverse the image pixel by pixel and cast the rays.

Last time we only had eye rays, but today we also have the situation that we shoot eye rays and then cast secondary rays.

For instance, to obtain shadows, but also to get reflections or refractions.

We already saw this, and you also saw these images.

This allows us to get all these nice reflection effects, reflecting spheres and so forth.
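
As a small illustration (my own sketch, not something from the lecture slides), this is how the directions of such secondary rays are typically computed: a mirror reflection and a refraction via Snell's law. A shadow ray needs no new direction formula; it simply points from the hit point towards the light source and only has to check whether any object blocks it.

```python
# Sketch of secondary-ray directions (reflection and refraction).
# `eta` is the ratio n1/n2 of the refractive indices at the interface.
import numpy as np

def reflect(d, n):
    """Mirror the incoming direction d about the unit surface normal n."""
    return d - 2.0 * np.dot(d, n) * n

def refract(d, n, eta):
    """Refract d at a surface with unit normal n (Snell's law);
    returns None on total internal reflection."""
    cos_i = -np.dot(d, n)
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                      # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

d = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)   # incoming ray direction
n = np.array([0.0, 1.0, 0.0])                    # surface normal
print(reflect(d, n))                              # mirrored direction
print(refract(d, n, 1.0 / 1.5))                   # e.g. air into glass
```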

Okay.

So this you've all seen.

And now a basic ray tracer, and this reminds me that last time I skipped two slides due to time constraints, but we can do that now very well.

Okay.

How do we... sorry.

Okay.

We also saw this, yeah.

And so this is pure ray casting, yeah.

This is the state, kind of the algorithmic state that we had at the end of the last lecture.

Yeah, so we have a loop that goes over every pixel in an image and generates an eye ray.

Then here in the loop, it iterates over all the scene objects, intersects the eye ray

with the scene objects.

And then, if we found a hit point, this O_min is the object that has been hit first, that is, the object with the closest hit point, the hit point with the minimal ray parameter.

And then we compute the lighting at this hit point.
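
To make that pseudo-code a bit more concrete, here is a minimal, self-contained sketch of the loop (my own illustration, not the lecture's code): a hard-coded two-sphere scene, the closest-hit search over all objects, and plain Lambert shading standing in for the lighting computation.

```python
# Minimal ray-casting sketch: one eye ray per pixel, closest hit, simple shading.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

class Sphere:
    def __init__(self, center, radius, color):
        self.center = np.array(center, float)
        self.radius = radius
        self.color = np.array(color, float)

    def intersect(self, origin, direction):
        """Nearest intersection parameter t along the ray, or None
        (the camera is assumed to be outside the sphere)."""
        oc = origin - self.center
        b = 2.0 * np.dot(direction, oc)
        c = np.dot(oc, oc) - self.radius ** 2
        disc = b * b - 4.0 * c
        if disc < 0.0:
            return None
        t = (-b - np.sqrt(disc)) / 2.0
        return t if t > 1e-6 else None

scene = [Sphere((0.0, 0.0, -3.0), 1.0, (1.0, 0.2, 0.2)),
         Sphere((1.5, 0.0, -4.0), 1.0, (0.2, 0.2, 1.0))]
light_dir = normalize(np.array([1.0, 1.0, 1.0]))   # directional light
width, height = 64, 48
image = np.zeros((height, width, 3))
eye = np.array([0.0, 0.0, 0.0])

for y in range(height):
    for x in range(width):
        # Generate the eye ray through the pixel (simple pinhole camera).
        px = (2.0 * (x + 0.5) / width - 1.0) * (width / height)
        py = 1.0 - 2.0 * (y + 0.5) / height
        direction = normalize(np.array([px, py, -1.0]))

        # Intersect the eye ray with every scene object and keep the
        # closest hit, i.e. the minimal ray parameter t.
        t_min, obj_min = np.inf, None
        for obj in scene:
            t = obj.intersect(eye, direction)
            if t is not None and t < t_min:
                t_min, obj_min = t, obj

        # Compute the lighting at the hit point (diffuse only, no shadows yet).
        if obj_min is not None:
            hit = eye + t_min * direction
            normal = normalize(hit - obj_min.center)
            image[y, x] = obj_min.color * max(np.dot(normal, light_dir), 0.0)
```

The resulting image array could then be written to disk, for instance with matplotlib's imsave; shadows, reflections and refractions would be added exactly where the shading happens, by casting the secondary rays discussed above.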

So you see this code is very pseudo, yeah, it's very abstract.

And in practice, what we typically need at this point to compute the lighting at the hit point is information about the hit point.

If you remember, in rasterization we had the situation that we have a triangle and we have normals at the vertices.

And then during rasterization, for instance, the normals are interpolated.

And then we have a normal at this point, and then we can do the lighting here, yeah.

This would be Phong lighting.

That's the normal way to do per-pixel lighting.
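
As a reminder, the Phong model combines an ambient, a diffuse and a specular term; a small sketch (with made-up example coefficients, not values from the lecture) could look like this:

```python
# Sketch of the Phong lighting model: ambient + diffuse + specular.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(point, normal, eye, light_pos, ka=0.1, kd=0.7, ks=0.4, shininess=32):
    n = normalize(normal)
    l = normalize(light_pos - point)            # direction to the light
    v = normalize(eye - point)                  # direction to the viewer
    r = 2.0 * np.dot(n, l) * n - l              # mirrored light direction
    diffuse = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return ka + diffuse + specular              # scalar intensity; multiply by color

print(phong(np.array([0.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0]),
            np.array([0.0, 0.0, 5.0]), np.array([2.0, 2.0, 3.0])))
```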

Now the situation is a little bit different.

We have our triangle, and now we cast the ray, and we get exactly this hit point.

And now we have to do the lighting at that point.

And first of all, that means that we have to compute the interpolated normal at that point.
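
For a triangle, this interpolation is usually done with the barycentric coordinates of the hit point; here is a small self-contained sketch (my own example values, not from the lecture):

```python
# Sketch: interpolate vertex normals at a ray-triangle hit point
# using barycentric coordinates.
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def barycentric(p, a, b, c):
    """Barycentric coordinates (u, v, w) of point p w.r.t. triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = np.dot(v0, v0), np.dot(v0, v1), np.dot(v1, v1)
    d20, d21 = np.dot(v2, v0), np.dot(v2, v1)
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

# Triangle vertices and (made-up) vertex normals.
a, b, c = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
na = normalize(np.array([0.0, 0.0, 1.0]))
nb = normalize(np.array([0.3, 0.0, 1.0]))
nc = normalize(np.array([0.0, 0.3, 1.0]))

hit = np.array([0.25, 0.25, 0.0])               # hit point on the triangle
u, v, w = barycentric(hit, a, b, c)
normal = normalize(u * na + v * nb + w * nc)    # interpolated shading normal
print(normal)
```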
