Lecture 9: Ray Tracing & Acceleration Structures (12)
jayc809

This slide shows that a relatively simple world can require a lot of computation when ray tracing is added. It got me thinking about how mist works in video games. Obviously light scatters in all directions in mist, which would mean the reflection/refraction is technically infinitely more complex than simply bouncing off a surface. It would be even harder if the mist is not uniform. I wonder if there is some sort of estimation or approximation to model this kind of light-scattering physics.
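
There is indeed a standard approximation: renderers treat mist as a "participating medium," and the simplest homogeneous version is Beer-Lambert attenuation, where light traveling a distance d through fog keeps only exp(-sigma*d) of its energy and the rest is replaced by in-scattered fog color. A minimal sketch (the function name, coefficient, and colors are illustrative, not from the slide):

```python
import math

def fog_blend(surface_color, fog_color, distance, sigma=0.1):
    """Homogeneous-fog approximation (Beer-Lambert): light traveling
    `distance` through the mist is attenuated by exp(-sigma * distance),
    and the lost energy is replaced by in-scattered fog color."""
    transmittance = math.exp(-sigma * distance)
    return tuple(transmittance * s + (1 - transmittance) * f
                 for s, f in zip(surface_color, fog_color))
```

Non-uniform mist is handled by ray marching through the volume and accumulating attenuation step by step instead of using one closed-form exponential.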

euhan123

I think this slide goes to show how there are so many aspects to consider when generating images and objects that we can see. It made me realize how powerful computers are since they would need to be running so much computation in a split second in order to smoothly generate these views for us.

lycorisradiatu

I found this video about ray-triangle intersection useful; it helped me understand this slide: https://www.youtube.com/watch?v=EZXz-uPyCyA&t=475s

keeratsingh2002

In practical rendering scenarios, how do we determine the optimal number of recursive bounces for ray tracing to create a realistic image without overly taxing the computational resources?

emily-xiao

I'm curious to know if there are alternative methods of ray tracing; in this example, the assumption is that the primary ray is always normal to the image plane. However, is this always the case? I'm assuming light can get scattered at hit pixels in the image plane at non-perpendicular angles.
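
In a standard pinhole camera the primary rays are in fact not all normal to the image plane: they share one origin (the eye) and fan out through the pixel centers, so only the central ray is perpendicular. The all-perpendicular case is a separate projection model, the orthographic camera. A minimal sketch of pinhole primary-ray generation (the function name and the 60-degree field of view are my own choices):

```python
import math

def primary_ray(px, py, width, height, fov_deg=60.0):
    """Primary ray for pixel (px, py) from a pinhole camera at the
    origin looking down -z. Only the center ray is perpendicular to
    the image plane; off-center rays diverge."""
    aspect = width / height
    scale = math.tan(math.radians(fov_deg) / 2)
    # map the pixel center to normalized device coordinates in [-1, 1]
    x = (2 * (px + 0.5) / width - 1) * aspect * scale
    y = (1 - 2 * (py + 0.5) / height) * scale
    # direction from the eye through the pixel on the z = -1 plane
    dx, dy, dz = x, y, -1.0
    norm = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (0.0, 0.0, 0.0), (dx / norm, dy / norm, dz / norm)
```

For a 101x101 image, only the ray through pixel (50, 50) points straight down the axis; a corner pixel's ray leaves at an angle, which is exactly the scattering over non-perpendicular directions you describe.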

AnikethPrasad

@emily-xiao I was looking into alternative ray tracing methods and came across photon mapping. In this process, photons are emitted from light sources and traced through the scene; those that hit diffuse surfaces are stored in a spatial data structure called the photon map. During rendering, indirect illumination at a point is estimated by gathering the stored photons near that point and using their density and power to approximate the radiance there. From what I've read, this method takes longer than vanilla ray tracing but is much less resource intensive than traditional Monte Carlo ray tracing.
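
A toy version of the two-pass idea, with photon "tracing" collapsed to a single bounce onto a floor plane and brute-force gathering standing in for the kd-tree (all names and numbers here are illustrative):

```python
import math, random

def emit_photons(n, light=(0.0, 5.0, 0.0), total_power=100.0):
    """Pass 1: shoot photons from a point light into the downward
    hemisphere and record where they land on the diffuse floor y = 0."""
    photons = []                      # (x, z, power); a kd-tree in practice
    per_photon = total_power / n
    while len(photons) < n:
        dx = random.uniform(-1.0, 1.0)
        dy = random.uniform(-1.0, 0.0)
        dz = random.uniform(-1.0, 1.0)
        if dy > -1e-6:                # keep only clearly downward directions
            continue
        t = -light[1] / dy            # ray/plane intersection with y = 0
        photons.append((light[0] + t * dx, light[2] + t * dz, per_photon))
    return photons

def radiance_estimate(photons, x, z, radius=1.0):
    """Pass 2: density estimate, i.e. total photon power inside a disc
    around the query point divided by the disc area (BRDF omitted)."""
    flux = sum(p for (px, pz, p) in photons
               if (px - x) ** 2 + (pz - z) ** 2 <= radius ** 2)
    return flux / (math.pi * radius ** 2)
```

The estimate comes out brightest directly under the light and falls off with distance, which is the illumination gradient the photon map supplies at render time.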

aidangarde

What kind of granularity does this go to? I wonder how many refractions are computed, at what point an object is not translucent enough to be worth refracting through, and how that trade-off affects the final product versus compute and space efficiency. Is it better to have more complicated rays with a smaller threshold for refracting, or vice versa?
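
One classical answer (used in adaptive Whitted-style ray tracers) is a minimum-contribution threshold: each secondary ray carries a weight that shrinks by the surface's reflectivity or transmittance at every bounce, and recursion stops once that weight can no longer visibly change the pixel. A small sketch of how such a threshold caps the effective recursion depth (the function name and constants are my own):

```python
def effective_bounces(kr, min_weight=0.01, max_depth=16):
    """How many reflection bounces actually matter: a ray's weight
    after n bounces off surfaces with reflectivity kr is kr**n, so we
    can stop spawning secondary rays once that falls below min_weight."""
    weight, n = 1.0, 0
    while n < max_depth and weight * kr >= min_weight:
        weight *= kr
        n += 1
    return n
```

So a shinier scene (kr near 1) genuinely needs more bounces, while for dull surfaces one or two suffice; the threshold adapts the ray tree per path instead of relying on one global depth limit.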

RishSharma7

This slide really exemplifies how ray tracing can become extremely complicated in a hurry. Something that we might not think twice about in real life (like mist, as Jay mentioned, or vortex mirrors) can be a terrible pain to try to model. From the simple side of things to exploring some of the more advanced applications of ray tracing, I found this quick ray tracing intro to be pretty helpful: https://www.youtube.com/watch?v=0FMlPUEAZfs&t=7s.

razvanturcu

@keeratsingh2002 I was also wondering the same thing. When we think of rays and reflections in the real world, they are continuous (i.e., there is no finite, discrete number of rays from a light source, I think). Physics sidesteps this in optics by analyzing representative rays individually. But in our case, there should either be a formula giving a minimum number of rays that depicts a realistic image, or a way to compute ray tracing continuously. In practice, I think adjusting the number of rays by trial and error will probably produce realistic images, but it would be nice to have some sort of assurance (e.g., like we had with the Nyquist frequency).
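
A form of that assurance does exist in path tracing: Russian roulette. Instead of a fixed cutoff, each path is terminated with some probability and the survivors are boosted by dividing by the survival probability, which keeps the expected value exactly equal to the infinite-bounce sum; the cost of finiteness is variance (noise), not bias. A toy demonstration on a geometric series standing in for repeated bounces (names and constants are illustrative):

```python
import random

def path_value(reflectivity, p_continue):
    """One random path: accumulate one unit of light per bounce, with
    throughput scaled by the surface reflectivity. Russian roulette
    kills the path with probability (1 - p_continue) and compensates
    survivors by dividing their throughput by p_continue."""
    total, throughput = 0.0, 1.0
    while True:
        total += throughput
        if random.random() >= p_continue:
            return total              # terminated, yet unbiased on average
        throughput *= reflectivity / p_continue

def estimate(reflectivity=0.5, p_continue=0.5, n_paths=20000):
    """Average many finite paths; the expectation equals the infinite
    sum 1 / (1 - reflectivity) even though every path terminates."""
    return sum(path_value(reflectivity, p_continue)
               for _ in range(n_paths)) / n_paths
```

With reflectivity 0.5 the true infinite-bounce answer is 2.0, and the estimator converges to it even though no individual path bounces forever, which is why in practice the knob people turn is samples per pixel rather than recursion depth.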

antony-zhao

Seeing how complex it is definitely explains why it was so difficult to use until recently, but I was curious about just how much slower it was. While I wasn't able to find any concrete benchmarks, I did discover that apparently "Until the late 2010s, ray tracing in real time was usually considered impossible on consumer hardware for nontrivial tasks" (https://en.wikipedia.org/wiki/Ray_tracing_(graphics)), so I'm curious what exactly changed.

GH-JamesD

I believe the real leap came not from computational or algorithmic methodology, but from the advent of processor architectures specifically designed to accelerate ray tracing. NVIDIA's RTX 2000 series, built on the Turing architecture, was the first to bring dedicated ray-tracing hardware to consumers. It was arguably a relatively unused feature at first due to the performance hit, but now, with the maturation of the hardware, real-time ray tracing is getting used a lot more.

sueyoungshim

I wonder how this translates into actual computation. There isn't a finite number of rays emanating from a light source, and the surfaces those rays could hit are virtually countless. How do graphics algorithms reconcile this seeming infinity to render a scene realistically? The diagram is clearly a simplification, but what is happening under the hood in a real rendering situation? How do the computations manage the infinite possibilities of light paths and surface interactions to create a convincing image in a finite amount of time?
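
The standard answer is Monte Carlo integration: the infinite set of light directions is written as an integral, and averaging a finite number of random samples gives an estimate whose error shrinks like 1/sqrt(N). A minimal sketch for one shading point under a uniform sky, chosen because the true answer is known in closed form (pi times the sky radiance), so the finite-sample estimate can be checked against it (all names here are my own):

```python
import math, random

def sample_hemisphere():
    """Uniform random direction on the upper hemisphere (pdf = 1/2pi)."""
    z = random.random()                  # cos(theta), uniform in [0, 1)
    phi = 2 * math.pi * random.random()
    r = math.sqrt(max(0.0, 1.0 - z * z))
    return (r * math.cos(phi), r * math.sin(phi), z)

def irradiance_mc(sky_radiance=1.0, n_samples=50000):
    """Monte Carlo estimate of E = integral of L * cos(theta) over the
    hemisphere: the continuum of light directions is replaced by a
    finite average, with each sample weighted by 1/pdf = 2pi."""
    total = 0.0
    for _ in range(n_samples):
        _, _, cos_theta = sample_hemisphere()
        total += sky_radiance * cos_theta * 2.0 * math.pi
    return total / n_samples
```

Real renderers never enumerate all light paths; they sample a handful per pixel and accept noise, which is why ray-traced images visibly converge from grainy to clean as samples accumulate.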

el-refai

When we say the final pixel color is a weighted sum of contributions along rays, I assume that means something like the luminance of the object, right? Because the underlying color is still there (i.e., red or green); it's just how much we're scaling it by, correct?
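
For what it's worth, in Whitted-style ray tracing the contributions being summed are full RGB colors rather than scalar luminances: the reflected ray returns its own color, and the coefficient kr scales that whole vector, so a green mirror reflection really does tint the result green instead of only brightening it. A tiny sketch (the coefficient values are arbitrary):

```python
def combine(local, reflected, refracted, kr=0.3, kt=0.2):
    """Whitted-style combination: each contribution is a full RGB
    color scaled by its reflection/refraction coefficient and summed
    channel by channel, clamped to the displayable range."""
    return tuple(min(1.0, l + kr * r + kt * t)
                 for l, r, t in zip(local, reflected, refracted))
```

So the weight does scale the contribution, as you say, but per channel on a color, not on a single brightness value.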

508312

I am wondering how parallelizable this is. To me it seems that this would be quite slow on gpu, or are they optimized for such cases?

rcorona

I recently learned about ray marching, which is a related technique used to find the intersection between a ray and a scene by iteratively marching until an object is hit (or some maximum marching distance/iteration is reached). It uses signed distance functions, which give a signed distance from a current point to an object in the scene, in order to march a ray along a maximal distance in its direction where we can assert that no object will be passed.

Something I found really cool about ray marching is that one can use set arithmetic in order to create complex geometries as the combination of basic geometric primitives (such as spheres or rectangular prisms).

This video goes into some of the details on it:

https://www.youtube.com/watch?v=BNZtUB7yhX4
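
A compact version of sphere tracing with the set-arithmetic trick described above (the scene contents and constants are my own example, not from the video):

```python
import math

def sdf_scene(p):
    """Signed distance to the scene: the union (min) of two spheres.
    min = union and max = intersection is the CSG-by-set-arithmetic
    trick for combining geometric primitives."""
    def sphere(q, center, radius):
        return math.dist(q, center) - radius
    return min(sphere(p, (0.0, 0.0, 5.0), 1.0),
               sphere(p, (2.0, 0.0, 5.0), 1.0))

def ray_march(origin, direction, max_steps=128, max_dist=100.0, eps=1e-4):
    """Sphere tracing: advance along the ray by the SDF value, which is
    the largest step guaranteed not to tunnel through any surface."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        d = sdf_scene(p)
        if d < eps:
            return t                  # hit at parameter t along the ray
        t += d
        if t > max_dist:
            break
    return None                       # ray escaped without intersecting
```

A ray fired from the origin toward the first sphere (center z = 5, radius 1) lands at t = 4, while a ray fired sideways escapes, matching the two outcomes described above.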

jananisriram

Recursive ray tracing seems to help us with recursive bounces of light, which better render an image by adding the reflections of light between different objects. It's interesting that we can use this technique to even model the reflection of different wall colors, for example, on an object, like we implemented in the homework.

ShonenMind

I encourage everyone to also check out ray marching! It has the same function as ray tracing but instead of directly calculating an intersection, it will instead "march" forward iteratively more and more and more UNTIL it hits an object, OR until it shoots off into space without intersecting anything (in which case, there is no intersection).
