Lecture 9: Raytracing (9)
hershg

From lecture: "simple ray casting w/ local shading (such as Blinn-Phong) is basically equivalent to rasterization". We can get richer, more complex renders by using a more comprehensive ray-based algorithm (like the recursive ray tracing covered in the following slides) — see the sketch below.
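
To make the comparison concrete, here is a minimal sketch (in Python, with hypothetical `intersect`, `blinn_phong`, and `reflect` helpers standing in for a real scene library) of how a plain ray caster with local shading differs from a Whitted-style recursive tracer. The only structural change is the recursive call for the reflected ray; a fuller version would also spawn refraction and shadow rays.

```python
# Minimal sketch -- intersect(), blinn_phong(), and reflect() are
# hypothetical helpers standing in for a real scene/shading library.

MAX_DEPTH = 4

def ray_cast(scene, ray):
    # "Ray casting + local shading": one hit, Blinn-Phong only.
    hit = intersect(scene, ray)          # closest surface hit, or None
    if hit is None:
        return scene.background
    return blinn_phong(scene, hit)       # local illumination only

def ray_trace(scene, ray, depth=0):
    # Whitted-style recursion: local shading plus a reflected ray.
    hit = intersect(scene, ray)
    if hit is None:
        return scene.background
    color = blinn_phong(scene, hit)      # same local term as above
    if depth < MAX_DEPTH and hit.material.reflectivity > 0:
        bounced = reflect(ray, hit)      # mirror the ray about the surface normal
        color += hit.material.reflectivity * ray_trace(scene, bounced, depth + 1)
    return color
```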

andrewdcampbell

I thought this paper about the technical details of Disney's renderer (called Hyperion) was interesting. Relevant to ray tracing is the section describing how they developed a way to do ray tracing without keeping the scene geometry resident in memory, by shading one surface at a time.
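
A rough sketch of that idea (not Hyperion's actual code; `trace_to_hit`, `load_geometry`, and `shade` are hypothetical): instead of shading each hit as it occurs, queue up hit records, group them by the surface they landed on, and shade one surface's batch at a time, so each piece of geometry only needs to be in memory while its batch is being processed.

```python
from collections import defaultdict

def shade_in_batches(scene, rays):
    # Hypothetical sketch of deferred, batched shading:
    # 1) trace all rays to hit records, 2) group hits by surface,
    # 3) load each surface once and shade its whole batch.
    hits_by_surface = defaultdict(list)
    for ray in rays:
        hit = trace_to_hit(scene, ray)        # hypothetical traversal step
        if hit is not None:
            hits_by_surface[hit.surface_id].append(hit)

    results = {}
    for surface_id, hits in hits_by_surface.items():
        geometry = load_geometry(surface_id)  # resident only for this batch
        for hit in hits:
            results[hit.ray_id] = shade(geometry, hit)
    return results
```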

tyleryath

If anyone is interested in the history of how ray tracing was developed, this blog post by Nvidia is definitely worth the read: https://blogs.nvidia.com/blog/2018/08/01/ray-tracing-global-illumination-turner-whitted/

irisli (Staff)

The professor mentioned that the picture is grainy because it is a scan of the printed research paper (published in 1980). It's quite amusing that the original paper images got lost. I searched for this paper and couldn't find an original PDF. Perhaps it was never released in PDF form, since the internet wasn't even around back then! Actually, PDF wasn't even invented until 1993, and PostScript wasn't around until 1982.

Grayscale scan of the original paper: http://artis.imag.fr/Members/David.Roger/whitted.pdf
Re-typeset version, but with messed-up typography and still-grainy renderings: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.107.3997&rep=rep1&type=pdf

arilato

High-quality ray tracing is still very computationally expensive, especially for games, where rendering must happen in real time. Recursive ray tracing adds to this problem: each bounce can spawn additional reflection and refraction rays, so the work per camera ray grows with recursion depth — exponentially in the worst case where every hit spawns both. A quick back-of-the-envelope calculation is sketched below.
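
As an illustration (assuming the worst case where every hit spawns both a reflection and a refraction ray), the number of rays per camera ray at recursion depth d is 1 + 2 + 4 + ... + 2^d = 2^(d+1) - 1:

```python
# Worst case: each hit spawns 2 secondary rays (reflection + refraction),
# so depth d costs 2^(d+1) - 1 rays per camera ray.
for depth in range(6):
    rays = 2 ** (depth + 1) - 1
    print(f"depth {depth}: up to {rays} rays per camera ray")
```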

NVIDIA recently came out with an ML model for speeding up this process: they take a grainy, partially rendered (noisy) scene and apply the model to "denoise" it into a much higher-quality image. Previously, denoising models were typically too slow for real-time rendering, but NVIDIA used an autoencoder (a neural network that tries to reproduce its input by passing it through a lower-dimensional representation). You can read more about it here: https://blogs.nvidia.com/blog/2017/05/10/ai-for-ray-tracing/
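
For intuition about the autoencoder part, here is a generic denoising-autoencoder sketch in PyTorch (this is just an illustration of the idea, not NVIDIA's actual denoiser architecture): the encoder compresses the noisy render into a compact representation, the decoder reconstructs a clean image, and the network would be trained on pairs of noisy low-sample renders and fully converged reference renders.

```python
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    """Generic sketch: compress a noisy render, then reconstruct a clean one."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(              # downsample to a compact code
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(              # upsample back to an RGB image
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, noisy):
        return self.decoder(self.encoder(noisy))

# Placeholder data: a 64x64 "noisy frame"; training would use
# (noisy low-sample render, converged reference render) pairs.
model = DenoisingAutoencoder()
noisy = torch.rand(1, 3, 64, 64)
denoised = model(noisy)                            # same shape as the input
```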
