Is this similar to how portrait mode works on phone cameras?
modatberkeley
I think Apple takes a more ML-based approach (image segmentation) rather than physically modifying the sensor to selectively blur. This article is pretty interesting if you want to explore further: https://machinelearning.apple.com/research/panoptic-segmentation
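To make the general idea concrete, here's a minimal sketch of segmentation-based background blur. This is just the basic technique, not Apple's actual pipeline, and it assumes you already have a per-pixel person mask from some segmentation model; the `fake_portrait` helper is hypothetical:

```python
# Minimal sketch of segmentation-based "portrait mode" blur.
# Assumes a person mask is already available from a segmentation model;
# this is NOT Apple's actual pipeline, just the general compositing idea.
import numpy as np
import cv2

def fake_portrait(image: np.ndarray, person_mask: np.ndarray) -> np.ndarray:
    """image: HxWx3 uint8; person_mask: HxW float in [0, 1], 1 = subject."""
    # Blur the whole frame heavily (kernel size derived from sigma).
    blurred = cv2.GaussianBlur(image, (0, 0), 15)
    # Broadcast the mask over the color channels.
    mask3 = person_mask[..., None]
    # Alpha-composite: keep the subject sharp, use the blurred frame elsewhere.
    out = mask3 * image.astype(np.float32) + (1 - mask3) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```

Real pipelines also estimate depth so the blur strength can vary with distance instead of being a single uniform Gaussian, but the mask-then-composite step is the core trick.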
patrickrz
Thanks for sharing the article above! I've always wondered how Portrait Mode works on Apple devices. I wonder why it's necessary to use a software approach rather than camera settings (such as lowering the f-stop on a DSLR) for phone cameras today. Is it due to the size limitations of the lenses/sensors on a mobile phone?
mcallisterdavid (Staff)
Yup. Depth of field shrinks as focal length grows (roughly with the square of focal length, at a fixed aperture and subject distance), and for a given focal length, a smaller sensor gives a narrower field of view. Because phone sensors are tiny, phones need a very short focal length to get a standard field of view, which leaves them with a wide depth of field. That's what prevents optical background blur.
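If you want numbers, here's a quick back-of-the-envelope comparison using the thin-lens depth-of-field approximation DoF ≈ 2·N·c·u²/f², which holds when the subject distance u is well below the hyperfocal distance. The focal lengths, apertures, and circle-of-confusion values below are illustrative guesses, not any particular camera's specs:

```python
# Back-of-the-envelope depth-of-field comparison (thin-lens approximation).
# DoF ~ 2*N*c*u^2 / f^2, valid when u is well below the hyperfocal distance.
# All numbers are illustrative, not specs of any real camera or phone.

def total_dof_mm(f_mm: float, n: float, c_mm: float, u_mm: float) -> float:
    """Approximate total depth of field in millimeters.

    f_mm: focal length, n: f-number, c_mm: circle of confusion, u_mm: subject distance.
    """
    return 2 * n * c_mm * u_mm**2 / f_mm**2

u = 2000.0  # subject 2 m away

# Full-frame camera: 50 mm lens at f/1.8, circle of confusion ~0.030 mm.
print(total_dof_mm(50.0, 1.8, 0.030, u))  # ~173 mm: only ~17 cm in focus

# Phone: ~6 mm lens at f/1.8 with a tiny sensor (c ~0.004 mm) gives a
# comparable field of view but a far deeper depth of field.
print(total_dof_mm(6.0, 1.8, 0.004, u))   # ~1600 mm: most of the scene looks sharp
```

Even wide open, the short focal length keeps meters of the scene in acceptable focus, so the background barely blurs optically. That's why the blur has to be synthesized in software.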