Lecture 11: Radiometry & Photometry (47)
stexus

I don't quite understand how this 2D array of cameras can produce a dataset that's equivalent (?) to the model we saw earlier with the rabbit. Where do we get the theta and phi values?

brywong

This may not be entirely correct, but I believe that each camera in the 2D array captures a different perspective of the scene (in this case, each camera gets an image of the statue, shifted slightly horizontally/vertically). Combining all of those images (shifting each toward a focal point and averaging them together) lets you refocus on a certain part of the image. Some examples are linked here (you might need to download a SWF file viewer): http://lightfield.stanford.edu/lfs.html
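The shift-and-average idea above can be sketched in a few lines of NumPy. This is a rough illustration, not the actual Stanford viewer code: the function name, the `(u, v)` grid parameterization, and the single focus parameter `alpha` are my own assumptions, and `np.roll` is used as a crude integer-pixel shift.

```python
import numpy as np

def refocus(images, grid_positions, alpha):
    """Synthetic refocusing by shift-and-average (illustrative sketch).

    images: list of HxW float arrays, one sub-aperture image per camera.
    grid_positions: list of (u, v) camera offsets from the array center.
    alpha: hypothetical focus parameter; 0 means no shift, and larger
    magnitudes shift each image more, moving the in-focus depth plane.
    """
    acc = np.zeros_like(images[0], dtype=float)
    for img, (u, v) in zip(images, grid_positions):
        # Shift each camera's image proportionally to its offset on the
        # grid, so rays from the chosen depth plane line up before averaging.
        dy = int(round(alpha * v))
        dx = int(round(alpha * u))
        acc += np.roll(img, (dy, dx), axis=(0, 1))
    return acc / len(images)
```

Averaging without any shift (`alpha = 0`) just blurs everything except the plane the cameras already agree on; sweeping `alpha` moves that in-focus plane, and averaging over only a subset of cameras mimics a smaller aperture.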

saltyminty

This was actually a project in CS 194-26, where you would take light field camera data from the Stanford dataset brywong linked above and then manipulate the data to either change the focus depth or simulate changing the camera aperture. You can see the project here: https://inst.eecs.berkeley.edu/~cs194-26/fa17/hw/proj5/
