Lecture 10: Ray Tracing - Acceleration (6)
JefferyYC

I'd like to confirm my understanding. We take a random point on the plane and calculate the direction from this point to the origin, then compare this with the ray direction. If both form angles greater than 90 degrees, or both less, then the ray intersects the infinitely extended plane. If the ray direction forms a different angle (one greater than 90 degrees, the other less), then the ray does not intersect the infinitely extended plane. In the case of the graph on the slide, the ray would be pointing upwards.
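One standard way to make this check concrete (a sketch with my own names, assuming the plane is given by dot(p, n) = c and the ray by r(t) = o + t*d) is to solve for t and look at its sign: the ray hits the infinite plane exactly when t >= 0 and the ray is not parallel to the plane.

```python
def ray_plane_t(o, d, n, c):
    """Return t where o + t*d hits the plane dot(p, n) = c, or None on a miss."""
    denom = sum(di * ni for di, ni in zip(d, n))
    if abs(denom) < 1e-12:
        return None  # ray parallel to the plane: no intersection
    num = c - sum(oi * ni for oi, ni in zip(o, n))
    t = num / denom
    # A negative t means the plane is behind the ray origin.
    return t if t >= 0.0 else None
```

For example, a ray from the origin pointing along +z hits the plane z = 5 at t = 5, while the same ray pointing along -z misses it.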

phoebeli23

How significant are the extra calculations required for the general bounding box?

kevintli

I'm curious about the significance of the number of elementary operations required for each case listed here. In some of the earlier lectures on antialiasing (e.g. when we talked about bilinear interpolation, and mipmaps) we also counted the number of lerps, additions, multiplications, etc. when analyzing the efficiency of each method. I guess my questions are:

(1) In practice, does the number of operations matter when working on large-scale graphics applications? Typically I'm used to seeing algorithms talked about in terms of asymptotic runtime, in which case these differences would mostly be ignored.

(2) Are there certain operations that are more expensive than others? Here I would assume that multiplies/subtractions/divisions all take roughly the same amount of time, but in earlier lectures and in Project 1, we counted the number of lerps, mipmap level/texture lookups, etc. I wonder if, for example, looking up texture values becomes an expensive operation when the textures are very large and cache performance starts to matter.

joshua16266261

I think when working on large-scale problems, the number of elementary operations like multiplies and adds matters a lot because for example, in the general case on this slide, there are 10 operations but for the axis-aligned case, there are only 2 operations required, which would give us something like a 5x speedup. This means that a program which would normally take 1 hr to run now might only need 12 mins (I'm simplifying a lot lol). I think as the problem gets larger, the constant factor speedup is probably even more important because it translates to more time saved. This is like how in 61C, we talked a lot about parallelization and vectorization, which gives us a lot of speedup without really changing the asymptotic complexity. In fact, there are many algorithms (https://en.wikipedia.org/wiki/Galactic_algorithm) that have good asymptotic complexity but are just not applicable to real-world problems.

greeknerd1

If we are first checking whether the ray hits the bounded volume, and then checking if it hits the object depending on the first answer, by how much are we increasing the time to evaluate?
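The extra cost per ray is just the bounding-box test itself, which is cheap: for an axis-aligned box it is the slab test, one subtraction and one division per plane. A minimal sketch (my own names, not the slide's code):

```python
def hit_aabb(o, d, box_min, box_max):
    """Slab test: intersect the ray o + t*d with three pairs of axis-aligned planes."""
    t_min, t_max = float("-inf"), float("inf")
    for axis in range(3):
        if d[axis] == 0.0:
            # Ray parallel to this slab: it can only hit if the origin is inside it.
            if not (box_min[axis] <= o[axis] <= box_max[axis]):
                return False
            continue
        # One subtraction and one division per plane, per the slide's count.
        t0 = (box_min[axis] - o[axis]) / d[axis]
        t1 = (box_max[axis] - o[axis]) / d[axis]
        if t0 > t1:
            t0, t1 = t1, t0
        t_min = max(t_min, t0)  # latest entry across all slabs
        t_max = min(t_max, t1)  # earliest exit across all slabs
    return t_min <= t_max and t_max >= 0.0
```

So when the box test misses, we pay only these few operations and skip every expensive per-primitive test inside; when it hits, the box test is a small constant overhead on top of the work we had to do anyway.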
