Lecture 9: Raytracing (6)
kingdish

As we can see from the picture, this method is not good enough if we want a more realistic image. A lot of information is lost when we limit each ray to purely local shading (no bouncing around the scene, no light arriving from other surfaces, etc.).

KhadijahFlowers

Just to make sure: the vector we use to perform shading is the normal vector at the point we hit, not the ray that we sent into the scene, correct?

jgforsberg

Khadijah, I believe you are correct that we use the normal vector for shading. Recall that in Project 2 we used unit normal vectors to illuminate our meshes.

killawhale2

I don't think it's the normal by itself. I recall hearing in lecture that we take the dot product of the normal with the direction toward the light source.
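
For concreteness, here is a minimal sketch of that diffuse (Lambertian) term, using an illustrative Vec3 type and helpers rather than the course skeleton's actual classes:

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Diffuse (Lambertian) shading at a hit point: brightness scales with the
// cosine of the angle between the surface normal and the direction toward
// the light, clamped at zero so surfaces facing away get no contribution.
double diffuse(const Vec3& normal, const Vec3& hit_point, const Vec3& light_pos) {
    Vec3 n = normalize(normal);
    Vec3 to_light = normalize({light_pos.x - hit_point.x,
                               light_pos.y - hit_point.y,
                               light_pos.z - hit_point.z});
    return std::max(0.0, dot(n, to_light));
}
```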

arilato

How would this algorithm behave if the object in question were a mirror? I.e., if we're doing raytracing in a game and the camera moves to face a mirror, would the ray bounce back behind the image plane to grab pixel colors?
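
For a perfect mirror, the usual approach is to reflect the incoming ray direction about the surface normal at the hit point and trace that new ray recursively into the scene, rather than reaching back behind the image plane. A rough sketch, reusing the illustrative Vec3 and dot helpers from the sketch above:

```cpp
// Reflect an incoming direction d about the unit surface normal n:
// r = d - 2 (d . n) n. The reflected ray is then traced recursively
// from the hit point to find what the mirror "sees".
Vec3 reflect(const Vec3& d, const Vec3& n) {
    double dn = dot(d, n);
    return {d.x - 2.0 * dn * n.x,
            d.y - 2.0 * dn * n.y,
            d.z - 2.0 * dn * n.z};
}
```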

AronisGod

This is a very concrete and clear foundational model to base all subsequent refinements on. From here we layer on complexity: supersampling multiple rays per pixel, and accounting for reflectivity, transmission, and absorption. We can also consider multiple light sources and their relative intensities when they interact with an object whose reflection/transmission/absorption and dispersion are frequency dependent.
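
As a rough sketch of the supersampling piece (illustrative names only; `trace` stands in for whatever function returns the radiance of a ray through a given image-plane point), the idea is to average several jittered samples per pixel:

```cpp
#include <functional>
#include <random>

struct Color { double r, g, b; };  // illustrative placeholder

Color supersample_pixel(int x, int y, int samples_per_pixel,
                        const std::function<Color(double, double)>& trace,
                        std::mt19937& rng) {
    std::uniform_real_distribution<double> jitter(0.0, 1.0);
    Color sum{0.0, 0.0, 0.0};
    for (int s = 0; s < samples_per_pixel; ++s) {
        // Jitter the sample location inside this pixel's footprint,
        // trace a ray through it, and accumulate the returned radiance.
        Color c = trace(x + jitter(rng), y + jitter(rng));
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    // The final pixel color is the average of all samples.
    double inv = 1.0 / samples_per_pixel;
    return {sum.r * inv, sum.g * inv, sum.b * inv};
}
```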
