20 - Computergraphik [ID:12608]

So, welcome, please excuse the delay.

I hope the recording will work now.

Today we have the last lecture. First of all I would like to finalize the lecture from

Monday, which I didn't manage to bring to an end, and then we will look into a smaller

final chapter: tone mapping.

But first of all we will finish with KD trees.

KD trees are a very interesting, very efficient spatial data structure to accelerate ray tracing

and the idea of KD trees is the following.

We sort all our triangles into a single root node: we compute the bounding box of our entire

scene, of all the triangles in our scene, and then we build a hierarchy top down. We

build the hierarchy by always splitting our nodes into two halves, and we always split

along the longest dimension of the bounding box; the position of the split plane can

be varied.

We heard on Monday that we have different heuristics for determining the

optimal splitting plane position. Once we have found that split position we subdivide a node

into a left and a right child and continue the subdivision recursively on the

children until our nodes have fewer than a threshold number of triangles, something like 10 for

instance. As soon as a node has fewer than 10 triangles we make that node a leaf, and this

is our spatial data structure, a KD tree.

And one nice thing about KD trees is that for every internal node we

only have to store very little information.

In fact we only have to store, per node, the dimension in which we split and

the split position.

And this exactly defines what is the left and what is the right half, and so this is all

we have to store.
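As a sketch of this top-down build, the following Python snippet is illustrative only: the class and function names, the leaf threshold, and the simple midpoint split are my own choices, not prescribed by the lecture. A triangle is represented here just by its axis-aligned bounding box, a (min_corner, max_corner) pair.

```python
# Minimal top-down KD-tree build sketch (illustrative names and
# midpoint heuristic; not the lecture's exact formulation).

LEAF_THRESHOLD = 10  # make a leaf below this many triangles

class KDNode:
    def __init__(self, axis=None, split=None, left=None, right=None,
                 triangles=None):
        self.axis = axis            # split dimension: 0, 1, or 2
        self.split = split          # split plane position along that axis
        self.left = left            # child on the lower side of the plane
        self.right = right          # child on the upper side of the plane
        self.triangles = triangles  # triangle list, set only for leaves

def bounding_box(tris):
    """Axis-aligned bounding box of a list of triangle boxes."""
    mins = [min(t[0][d] for t in tris) for d in range(3)]
    maxs = [max(t[1][d] for t in tris) for d in range(3)]
    return mins, maxs

def build(tris):
    if len(tris) < LEAF_THRESHOLD:
        return KDNode(triangles=tris)          # few triangles: make a leaf
    mins, maxs = bounding_box(tris)
    # Split along the longest dimension of the bounding box.
    axis = max(range(3), key=lambda d: maxs[d] - mins[d])
    # Midpoint heuristic; the heuristics from Monday's lecture (e.g. the
    # surface area heuristic) would choose this position more cleverly.
    split = 0.5 * (mins[axis] + maxs[axis])
    # Triangles straddling the plane go into both children.
    left = [t for t in tris if t[0][axis] < split]
    right = [t for t in tris if t[1][axis] >= split]
    if len(left) == len(tris) or len(right) == len(tris):
        return KDNode(triangles=tris)          # split made no progress
    return KDNode(axis=axis, split=split,
                  left=build(left), right=build(right))
```

Note that an inner node ends up storing exactly the two values mentioned above, the split dimension and the split position, plus its two children.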

So here we see such an example: this is our scene, and you see for every node

what we store internally. In that case we split in the X direction at position 5.0, and

the children of this node have been generated by splitting in the Y direction at position 3.0,

and so forth.

So it's only very little information, and in fact this also allows us to do the intersection

computation very fast, or very efficiently.

It becomes a very simple traversal of our data structure, and this is how it works.

So first of all we need the notion of a near and a far child.

What is meant by that we can see here in this example.

We have a node, that's the larger box here; we split that node with this plane, and

this is our ray. Now obviously this ray first hits the left and then the right box,

first the left and then the right child.

If our ray went in the opposite direction, like this, then this would be the first child

that the ray sees and this would be the second.

So it depends on the direction of the ray, and depending on that direction, for every

inner node in our hierarchy we can determine the near and the far child.

The near child is the child that is hit first.
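This near/far decision needs nothing but the ray and the node's split data. A small sketch, assuming a node stores fields `axis`, `split`, `left`, and `right` (my naming, where `left` is the lower side of the plane):

```python
# Determining the near and the far child of an inner KD-tree node
# (illustrative sketch).

def near_far(node, ray_origin, ray_dir):
    """Return (near_child, far_child) for this ray at this node.

    The side of the split plane containing the ray origin is hit first;
    if the origin lies exactly on the plane, the sign of the ray
    direction along the split axis decides.
    """
    o = ray_origin[node.axis]
    if o < node.split:
        return node.left, node.right
    if o > node.split:
        return node.right, node.left
    # Origin exactly on the plane: follow the direction sign.
    if ray_dir[node.axis] < 0:
        return node.left, node.right
    return node.right, node.left
```

Note that the test against the ray's position and direction along one axis is all that is needed; no box intersection has to be computed to order the children.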

And it's preferable to always traverse the near child first, because if we

find an intersection here in the near box we can skip the far box, as we are usually

interested in the first intersection.

So we first try to find out whether there is an intersection here, because if that is the

case we can skip the far child.

So we don't traverse the hierarchy with a fixed depth-first search that always goes to

the first child and then to the second child; instead we always traverse the children

in the order in which the ray hits them.
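The ordering and the far-child skip described above can be sketched as a short recursive traversal. This is illustrative only: `intersect_leaf` is a hypothetical callback that tests the ray against a leaf's triangles and returns a hit or `None`, and the node fields (`axis`, `split`, `left`, `right`, `triangles`) are my assumed layout, with `triangles` set only on leaves.

```python
# Recursive KD-tree traversal sketch: visit the near child first and
# skip the far child once a hit is found (illustrative).

def traverse(node, origin, direction, intersect_leaf):
    if node.triangles is not None:
        # Leaf: test the ray against this leaf's triangles.
        return intersect_leaf(node, origin, direction)
    # Near child: the side of the split plane the ray origin lies on;
    # on a tie, the sign of the direction along the split axis decides.
    o, d = origin[node.axis], direction[node.axis]
    if o < node.split or (o == node.split and d < 0):
        near, far = node.left, node.right
    else:
        near, far = node.right, node.left
    hit = traverse(near, origin, direction, intersect_leaf)
    if hit is not None:
        return hit  # intersection in the near child: skip the far child
    return traverse(far, origin, direction, intersect_leaf)
```

A production traversal additionally clips the ray's parameter interval against the split plane, so the far child is visited only when the ray actually crosses the plane within that interval; the sketch keeps just the ordering and the skip described in the lecture.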

Part of a video series

Accessible via: open access

Duration: 01:10:53 min

Recording date: 2019-12-19

Uploaded: 2020-01-07 15:19:04

Language: de-DE
