Lecture 21: Light Fields (55)
melodysifry

Is this different from the earlier conceptual view of changing focus, where we essentially used ray tracing to determine where the rays for a desired plane of focus would have landed had the sensor been at a different distance? In that explanation we used the ray-tracing information to figure out which pixels on the actual sensor should be combined to produce the value of the corresponding pixel on the desired/hypothesized sensor. It's not immediately clear to me how that corresponds to this shift-and-add view, where it looks like we're superimposing shifted sub-aperture images.
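
For concreteness, here is a minimal shift-and-add sketch in Python/NumPy. The function name `refocus`, the `(U, V, H, W)` array layout, and the `alpha` parameterization are assumptions for illustration, not taken from the lecture; the point is just that translating each sub-aperture image in proportion to its aperture offset and averaging is the same "which real-sensor pixels combine into this virtual-sensor pixel" bookkeeping, carried out in image space.

```python
import numpy as np
from scipy.ndimage import shift

def refocus(light_field, alpha):
    """Synthesize a photo focused at a virtual sensor plane (sketch).

    light_field : array of shape (U, V, H, W), one sub-aperture image
                  per aperture sample (u, v).
    alpha       : ratio of the virtual focal-plane depth to the original
                  sensor depth; alpha = 1 reproduces the captured focus.
    """
    U, V, H, W = light_field.shape
    u0, v0 = (U - 1) / 2.0, (V - 1) / 2.0   # aperture center
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Each sub-aperture image is translated by an amount
            # proportional to its offset from the aperture center;
            # the (1 - 1/alpha) factor comes from similar triangles
            # between the real and virtual sensor planes. The exact
            # pixel scale would depend on the camera calibration.
            du = (u - u0) * (1.0 - 1.0 / alpha)
            dv = (v - v0) * (1.0 - 1.0 / alpha)
            out += shift(light_field[u, v], (du, dv), order=1)
    return out / (U * V)
```

Under this view, the average over (u, v) at a fixed output pixel is exactly the sum over the rays that would have converged on that point of the hypothetical sensor, so the two explanations describe the same computation.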
