In modern visual effects, some studios have the budget to create a virtual LED set, where walls of screens displaying real-time scenery surround the actors and produce the appropriate lighting conditions, eliminating the need for environment maps. This is most famously seen in The Mandalorian, where the protagonist's chrome helmet does not need to have reflections added in post-production.
keatonfs
For environment maps, how do we ensure that they stay up to date with their surroundings? For example, if something moves in the environment, how do we ensure that the change is reflected in the texture?
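Part of why this is a real issue: a standard environment map is just a texture indexed by direction, so it only "knows" what the scene looked like when it was captured; anything that moves afterward is stale until the map is re-captured or re-rendered. A minimal sketch of the lookup itself, assuming an equirectangular (latitude-longitude) map and a hypothetical helper name:

```python
import math

def dir_to_equirect_uv(d):
    """Map a 3D direction (e.g. a reflection vector) to (u, v)
    texture coordinates in an equirectangular environment map.
    This is the static lookup; a dynamic scene would need the
    underlying texture re-rendered (or re-captured) each frame."""
    x, y, z = d
    # Normalize the direction so the trig below is well-defined.
    n = math.sqrt(x * x + y * y + z * z)
    x, y, z = x / n, y / n, z / n
    # Azimuth (longitude) wraps horizontally across the texture.
    u = 0.5 + math.atan2(z, x) / (2.0 * math.pi)
    # Elevation (latitude) maps to the vertical axis.
    v = 0.5 - math.asin(y) / math.pi
    return u, v
```

In practice, engines handle dynamic surroundings by re-rendering a cube map (or a few of them) every frame or every few frames from the reflective object's position, which is exactly the per-frame update that a pre-baked map lacks.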
adham-elarabawy
A really cool application of high-quality environment lighting is in the training of machine learning models. Most deep learning models require large datasets of (in the case of computer vision) images annotated with some label/segmentation. Doing this on real images is expensive, since a labeler/annotator has to go through and manually mark and label each image. However, in some cases, if the target data can be simulated in a 3D environment with a high enough degree of photorealism (as a result of proper environment lighting), we can automatically create "perfect" training data using these methods!