Lecture 21: Light Fields (14)
BohanYu

I'm a little confused about these rectangles on the right-hand side. I understand that as we increase the resolution, those rectangles get divided up so that visually they appear to form a fine grid of squares. But why does decreasing the resolution disproportionately stretch u and x instead of just making them larger squares?

ethanweber

Commenting on the phrase "plenoptic": the plenoptic function is a way to describe light at any time, position, and direction. It's a function of 7 inputs: 3D position, 2D viewing direction, wavelength, and time. It outputs the intensity of the light ray arriving at that point from that direction.
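That 7-input signature can be sketched as a toy function (the names and the radiance model here are purely illustrative, not from the lecture):

```python
import math

def plenoptic(x, y, z, theta, phi, wavelength, t):
    # Radiance along the ray arriving at position (x, y, z) from
    # direction (theta, phi), at a given wavelength and time t.
    # Toy model (an assumption, just to show the 7D signature): a
    # directional light whose intensity falls off with polar angle
    # theta; a real scene would vary with all seven arguments.
    return max(0.0, math.cos(theta))
```

Fixing any of the seven arguments reduces the dimensionality; e.g. fixing wavelength and time gives the 5D function a light field samples.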

nobugnohair

Why are these rectangles instead of squares?

melodysifry

Do we deduce the u position just based on our knowledge of how the main lens refracts rays, figuring out which u position a ray landing on a particular sensor position would have had?
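That's roughly the idea: in a plenoptic camera the microlens gives the spatial coordinate x, and the pixel's offset under that microlens maps by similar triangles (a chief ray through the microlens center is undeviated) to an aperture coordinate u on the main lens. A hypothetical sketch of that mapping, where f_micro (microlens-to-sensor distance) and z_main (main-lens-to-microlens distance) are assumed parameters, not values from the lecture:

```python
def sensor_to_u(x_microlens, pixel_offset, f_micro, z_main):
    # Chief-ray approximation: a ray from aperture point u through the
    # microlens center at x_microlens travels in a straight line to the
    # sensor, so by similar triangles
    #   pixel_offset / f_micro = (x_microlens - u) / z_main
    # Solving for the aperture coordinate u:
    return x_microlens - pixel_offset * (z_main / f_micro)
```

So a pixel directly under the microlens center (offset 0) maps to u = x_microlens, and larger offsets map to points farther out on the aperture.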

kevintli

I had a lot of trouble understanding these diagrams (particularly what the ux plane and the rectangles represent), but the Discussion 11 worksheet framed it in a slightly different way that I found really intuitive and helpful. From what I understand from that worksheet, the issue is that we need some way to measure and represent rays rather than 2D sensor readings. To accomplish this, plenoptic cameras use two planes instead of one, so that they can measure the ray's intersection points on both planes (which turn out to be (u, v) and (x, y)) and thereby recover the ray's direction. Intuitively, we are taking many 2D images of the same scene from different viewpoints, which then enables the refocusing and view-changing operations we discuss later.
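The refocusing operation mentioned above can be sketched as a shift-and-add over the aperture samples: shift each u-slice of the light field by an amount proportional to its aperture offset, then average. A toy 1D version (the parameter alpha, the circular boundary handling, and the list-of-lists layout are my assumptions, not the lecture's notation):

```python
def refocus_1d(light_field, alpha):
    # light_field[u][x]: radiance of the ray through aperture sample u
    # and spatial sample x. To synthesize a different focal depth,
    # shift each u-slice proportionally to its offset from the aperture
    # center, then average over u (shift-and-add refocusing).
    n_u = len(light_field)
    n_x = len(light_field[0])
    out = [0.0] * n_x
    for u, row in enumerate(light_field):
        offset = u - (n_u - 1) / 2.0          # aperture coord about center
        shift = int(round(alpha * offset))    # integer shift, for simplicity
        for x in range(n_x):
            out[x] += row[(x - shift) % n_x]  # circular shift (toy boundary)
    return [v / n_u for v in out]
```

With alpha = 0 no slice is shifted and this is just an average over viewpoints (focus at the original plane); other values of alpha refocus at other depths.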
