Lecture 16: Light Field Cameras (14)

Why is it that every pixel on the sensor plane only records the radiance from the ray that passes through one particular microlens?


Is this a correct way of imagining it? If you had 4 pixels, then instead of each one accumulating light from the entire lens, you group the 4 of them under one microlens and have each pixel capture light from only a quarter of the lens.
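That intuition can be sketched numerically. Below is a toy example (all values are made up for illustration, not taken from the lecture): a 2x2 block of sub-pixels sits under one microlens, each seeing one quadrant of the main lens, and summing the four sub-pixels reproduces what a single conventional pixel would have measured.

```python
import numpy as np

# Hypothetical radiance arriving from each quadrant of the lens aperture.
quadrant_radiance = np.array([[1.0, 0.5],
                              [0.25, 0.25]])

# Each of the 4 sub-pixels under the microlens records only its quadrant.
sub_pixels = quadrant_radiance.copy()

# A conventional pixel integrates over the whole aperture, i.e. the sum
# of the four sub-pixel values.
conventional_pixel = sub_pixels.sum()
print(conventional_pixel)  # 2.0
```

So no light is lost relative to a normal sensor; it is just sorted by which part of the aperture it came from.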


I think the entire lens aperture generates the little disk behind each microlens (or mini-camera?), and within that disk image each pixel corresponds to a specific ray, pinpointed by one location on the aperture and one location on the sensor plane.
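This two-location indexing is exactly the 4D light field idea, and it can be sketched as array indexing. The sketch below is my own illustration, not code from the lecture: (u, v) is an assumed index over aperture locations and (s, t) over microlens (sensor-plane) locations, so one pixel stores one ray sample, summing over the aperture gives a conventional photo, and fixing the aperture location gives a pinhole-like sub-aperture view.

```python
import numpy as np

# Hypothetical 4D light field L[u, v, s, t]: (u, v) indexes a location on
# the lens aperture, (s, t) indexes a microlens on the sensor plane.
rng = np.random.default_rng(0)
U, V, S, T = 4, 4, 8, 8          # 4x4 aperture samples, 8x8 microlenses
L = rng.random((U, V, S, T))

# The pixel at offset (u, v) inside the disk under microlens (s, t) stores
# the radiance of the single ray from aperture point (u, v) to that microlens.
ray_sample = L[1, 2, 3, 3]

# Summing over all aperture samples recovers a conventional photograph.
photo = L.sum(axis=(0, 1))        # shape (S, T)

# Fixing one aperture location instead gives a sub-aperture (pinhole) view.
sub_aperture_view = L[0, 0]       # shape (S, T)
print(photo.shape, sub_aperture_view.shape)
```

Refocusing would then amount to shifting the (s, t) slices against each other before summing, but the basic bookkeeping is just this 4D array.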


A paper published in 2016 addresses a variety of benchmarking and evaluation methodologies for light fields. In other words, the authors propose a way to measure the quality of a light field, taking into account properties like robustness to noise, texture sensitivity, and foreground fattening. Check it out here:


Would a metaphor for this be looking at how the light refracts through the lenses?
