Lecture 21: Light Fields (69)
bernardmc8

Modern devices also often have multiple lenses; the newer iPhones, for example, use three lenses to let users change the perspective, such as getting a fish-eye-style shot

micahtyong

I've never seen this graphic/breakdown of the modern smartphone camera before––this is really cool! In response to the above comment, I wonder if smartphones with multiple cameras use any advanced techniques that combine the output of those cameras to generate some desired effect on an image. For example, on iPhones with multiple cameras (e.g., the 12 Pro), we can shoot a video using two cameras to get two different views of the scene. Does the phone fuse the outputs together in some way?

micahtyong

In response to my own question, I learned that iPhones with multiple cameras have the ability to fuse images together (9 images to be exact, using two of the cameras) with a technique called "Deep Fusion". According to Apple, Deep Fusion results in "images with dramatically better texture, detail, and reduced noise in lower light."
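Apple hasn't published the Deep Fusion algorithm, but the core idea behind any burst/multi-frame fusion is that averaging N aligned captures of the same scene cuts the noise standard deviation by roughly a factor of sqrt(N). Here's a minimal NumPy sketch of that principle (the `fuse_burst` helper and the synthetic flat-gray scene are my own illustration, not Apple's pipeline, which also does per-pixel alignment and detail-aware weighting):

```python
# Minimal sketch of multi-frame fusion: averaging aligned noisy frames
# reduces noise by ~sqrt(N). This is NOT Apple's Deep Fusion algorithm,
# just the basic statistical idea it builds on.
import numpy as np

def fuse_burst(frames):
    """Fuse a burst of already-aligned frames by per-pixel averaging."""
    stack = np.stack(frames).astype(np.float64)
    return stack.mean(axis=0)

rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0)  # "true" scene: a flat gray patch
# Simulate 9 noisy captures (Deep Fusion reportedly fuses 9 images)
burst = [scene + rng.normal(0.0, 10.0, scene.shape) for _ in range(9)]

single_noise = np.std(burst[0] - scene)           # noise in one frame
fused_noise = np.std(fuse_burst(burst) - scene)   # noise after fusing 9
print(f"single frame: {single_noise:.1f}, fused: {fused_noise:.1f}")
```

With 9 frames the residual noise should drop to about a third of a single exposure's, which matches the "reduced noise in lower light" claim; the hard part in a real phone is aligning handheld frames and avoiding ghosting on moving subjects.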

adityaramkumar

Facebook/Meta recently announced that they will be using "pancake lenses" in their new Oculus device, codenamed Project Cambria. While those lenses are for people to look into the VR display rather than for capturing images, it's interesting to see the amount of innovation that's still going on in this space.

shreyaskompalli

I had a very similar question to @micahtyong's, and really enjoyed reading their response about the Deep Fusion technology. To add on to that point, I'd really love to learn more about the software involved in implementing Deep Fusion, and how the developers managed to combine inputs from multiple cameras into a cohesive image.
