Lecture 21: Image Sensors (44)

Are there any notable differences between pixel mosaicking and taking multiple shots at different exposures other than pixel mosaicking having lower resolution?


How are cameras able to capture so many exposure levels at such fine granularity (i.e., exposure T at one pixel and 64T right next to it)? If it's shot through one lens, I doubt the shutter speed can vary per pixel unless the "blinds" of the shutter are somehow very complex, and aperture adjustments don't seem right either, since they would change the depth of field. Are the different exposures achieved through ISO levels, or is something else happening?
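One answer to this question: in spatially varying exposure (SVE) designs, the different per-pixel exposures come from a fixed optical mask of neutral-density filters bonded over the sensor, not from the shutter, aperture, or ISO, so a single shot captures the whole pattern. A minimal NumPy sketch of the idea, with illustrative (not real-sensor) numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
radiance = rng.uniform(0.0, 100.0, size=(4, 4))  # "true" scene radiance

# 2x2 tile of relative exposures (T, 4T, 16T, 64T), repeated across the
# sensor -- modeling a fixed neutral-density filter mosaic, one shot.
tile = np.array([[1.0, 4.0], [16.0, 64.0]])
exposure = np.tile(tile, (2, 2))

full_well = 255.0  # sensor saturates at this raw value
raw = np.minimum(radiance * exposure, full_well)  # captured mosaic

# Reconstruction: divide out the known per-pixel exposure; saturated
# pixels carry no radiance information and are marked invalid.
valid = raw < full_well
estimate = np.where(valid, raw / exposure, np.nan)

# In this noiseless model, unsaturated pixels recover radiance exactly;
# a real pipeline would then interpolate across the invalid pixels,
# which is the resolution cost the first question mentions.
assert np.allclose(estimate[valid], radiance[valid])
```

The trade-off versus exposure bracketing is visible here: one exposure-time capture (so no motion ghosting between shots), at the cost of interpolating radiance at the pixels whose exposure level was saturated or too dark.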
