Why do we ignore wavelength? I would think wavelength contains important information about the colors we see, while compressing it to a single scalar would only give us one luminance value.

JayShenoy

How is this related to the 4D light field function we learned about for light field cameras? In a sense, I suppose one of the spatial dimensions gets fixed when observing from the camera alone as opposed to in free space, but I may be wrong.

eliot1019

From this PDF of slides from Fall 2016, where Prof. Ren was a guest lecturer:
"In a region of free-space, 5D plenoptic function
simplifies to 4D because light is constant along a ray"
https://inst.eecs.berkeley.edu/~cs194-26/fa16/Lectures/light_fields.pdf
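To make the quoted claim concrete, here is a minimal sketch (not from the slides; the function name and numbers are my own) of the standard two-plane "light slab" parameterization: a ray in free space is identified by where it crosses two parallel planes, so any two observation points along the same ray index the same 4D sample.

```python
import numpy as np

def ray_to_slab(origin, direction):
    """Intersect a ray with the planes z = 0 and z = 1, returning (u, v, s, t).

    Because radiance is constant along a ray in free space, a 4D table
    indexed by (u, v, s, t) captures the 5D plenoptic function in the slab.
    """
    ox, oy, oz = origin
    dx, dy, dz = direction
    t0 = (0.0 - oz) / dz  # ray parameter where it hits the plane z = 0
    t1 = (1.0 - oz) / dz  # ray parameter where it hits the plane z = 1
    u, v = ox + t0 * dx, oy + t0 * dy
    s, t = ox + t1 * dx, oy + t1 * dy
    return (u, v, s, t)

# Two different observation points along the identical ray...
d = np.array([0.2, -0.1, 1.0])
p_near = np.array([0.5, 0.5, 0.25])
p_far = p_near + 1.7 * d

# ...map to the same 4D coordinates, hence the same stored radiance value.
print(ray_to_slab(p_near, d))
print(ray_to_slab(p_far, d))
```

This is also why the reduction only holds in free space: with an occluder between the two points, radiance would no longer be constant along the ray.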

aparikh98

Is this function just used to theorize about "the set of everything we see"? It seems too data- and computation-intensive to compute or even approximate.
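A quick back-of-envelope calculation supports this intuition. Under purely illustrative (assumed, not from the lecture) resolutions of 1000 samples per dimension, naively tabulating the 5D function is hopeless:

```python
# Hypothetical discretization of the 5D plenoptic function L(x, y, z, theta, phi).
# All resolutions here are assumptions for illustration only.
spatial = 1000          # samples along each spatial axis (x, y, z)
angular = 1000          # samples along each angular axis (theta, phi)
bytes_per_sample = 4    # one float32 radiance value per ray sample

total_bytes = spatial**3 * angular**2 * bytes_per_sample
print(f"{total_bytes / 1e15:.0f} petabytes")  # prints "4 petabytes"
```

So in practice the function is a conceptual tool; actual systems sample a tiny, structured slice of it (e.g., the 4D light field a camera array captures).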
