So basically, 4D light fields allow us to refocus by simply resampling the already captured samples (x, y, u, v). A ray can be represented as a vector pointing from (u, v) on the lens to (x, y) on the sensor. By computing new intersections between these rays and the moved sensor plane, we can resample the radiance and derive the irradiance at each pixel of the new image plane.
alexkassil
Yes, we capture all the required data in the 4D light field, and resample it accordingly to get the new image.
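The resampling described above can be sketched with the classic shift-and-add refocusing scheme: each sub-aperture image lf[u, v] is shifted in proportion to its (u, v) offset from the lens center, then all shifted images are averaged to integrate the rays on the virtual sensor. This is a minimal numpy sketch, not a production implementation; the array layout lf[u, v, y, x], the parameter alpha (ratio of new to original focal-plane depth), and the use of integer np.roll shifts in place of proper sub-pixel interpolation are all simplifying assumptions.

```python
import numpy as np

def refocus(lf, alpha):
    """Synthetically refocus a 4D light field lf[u, v, y, x] (assumed layout).

    alpha is the ratio of the new focal plane depth to the original one;
    alpha = 1 reproduces the original focus. Each sub-aperture image is
    shifted by (1 - 1/alpha) times its (u, v) offset from the lens center,
    then all images are averaged (integration over the aperture).
    """
    U, V, H, W = lf.shape
    cu, cv = (U - 1) / 2.0, (V - 1) / 2.0
    shift = 1.0 - 1.0 / alpha
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Integer shift stands in for true sub-pixel resampling.
            du = int(round((u - cu) * shift))
            dv = int(round((v - cv) * shift))
            out += np.roll(lf[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)
```

With alpha = 1 the shifts vanish and the result is just the mean of all sub-aperture images, i.e. the image focused at the original plane; other alpha values sweep the focal plane through the scene.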