Lecture 13: Global Illumination & Path Tracing (7)
RishSharma7
Maybe I'm just a bit confused on this topic, but in the environment map lighting picture, what exactly is the reflection that we're seeing on the black ball, and is it the texture of the green ball that prevents it from giving off a similar reflection? How does that texturing translate from just designing a differently textured ball to how differently textured balls reflect different types of light (point light vs environment map lighting)?
zy5476
I thought this comparison really illustrated how much better environment map lighting looks and how much it helps with realism, as the left picture looks clearly artificial and unrealistic.
brianqch
@RishSharma7 I think that environment map lighting involves projecting the environment map onto the objects in this scene. So in this case we might have some information already about what is behind this camera's POV. But I think that if we don't have any information about the environment we can just choose a neutral color that appears as the "reflection."
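For intuition, here's a rough sketch of that idea (my own toy code, not from the lecture): a mirror-like surface is shaded by reflecting the view direction about the surface normal and looking up the environment along that reflected direction. The `env_color` function here is a made-up stand-in for a real environment map.

```python
# Toy sketch of environment-map "reflection": shade a mirror-like point by
# sampling the environment along the reflected view direction.

def reflect(d, n):
    # r = d - 2 (d . n) n, for unit view direction d and unit surface normal n
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2 * dot * ni for di, ni in zip(d, n))

def env_color(direction):
    # Hypothetical environment map: bluish "sky" above, grey "ground" below.
    return (0.4, 0.6, 1.0) if direction[1] > 0 else (0.3, 0.3, 0.3)

d = (0.0, 0.0, -1.0)        # view direction toward the surface
n = (0.0, 0.7071, 0.7071)   # surface normal tilted upward
print(env_color(reflect(d, n)))  # reflected ray points up, so we see the sky
```

So the "reflection" on the black ball is just the environment map itself, sampled per reflected ray; no extra geometry behind the camera needs to exist.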
anavmehta12
Like in the previous slide, there is no single light source; all of the light originates from outside, i.e., the environment. For the spheres with environment map lighting, the light source is much more distributed, giving that image a much nicer look.
AlsonC
@brianqch Really interesting, thank you for this! I wonder how we model what is behind the camera's POV in code. I know in Blender we can set up different camera points and capture images/videos from a certain perspective; is there a way to do that in code as well?
AnikethPrasad
How do we account for the different textures in the image on the right resulting in different reflective properties? The green ball appears to be less reflective, so how do we use the reflective properties of elements in a scene to achieve this effect?
sparky-ed
To answer AnikethPrasad's question about how we account for different textures: we can use material properties. From what I understood, each object can be assigned a material, and each material has its own properties such as roughness, reflectance, etc. Based on its properties, we can set the green ball to be less shiny and less reflective! A good thing to think about is how we should assign a material to an object, and how we decide which object gets which material in this case.
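A minimal sketch of that assignment (names like `Material`, `roughness`, and `reflectance` are my own illustrative choices, not the lecture's): each scene object just carries a reference to a material record, and the shader reads those properties when shading.

```python
# Hedged sketch: per-object material assignment via a simple record type.
from dataclasses import dataclass

@dataclass
class Material:
    roughness: float    # 0 = perfect mirror, 1 = fully diffuse
    reflectance: float  # fraction of incoming light reflected

# Each object in the scene is mapped to its material:
scene = {
    "black_ball": Material(roughness=0.05, reflectance=0.9),  # shiny
    "green_ball": Material(roughness=0.8, reflectance=0.4),   # matte
}
print(scene["green_ball"])
```

With this setup, the renderer never hard-codes how a specific ball looks; it just consults the assigned material, which is why the green ball can blur or dim its environment reflection while the black ball mirrors it sharply.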
DreekFire
Environment mapping is very efficient, but one drawback is that it treats the incoming light as if it were coming from infinity, whereas a real environment is obviously at a finite distance. Regarding objects behind the camera's POV: the objects shown in the environment map are not actually modeled. The environment map is like a skybox, a precomputed image that can be obtained from a photograph or by pre-rendering a base scene.
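To make the "at infinity" point concrete, here's a small sketch (my own, assuming an equirectangular/lat-long map, one common storage format): the lookup depends only on the ray's direction, never on the shading point's position, which is exactly why nearby objects can't be represented correctly.

```python
import math

# Sketch: an environment map is indexed by direction alone (as if at infinity).
# Maps a unit direction to equirectangular (lat-long) texture coordinates.
def dir_to_latlong_uv(d):
    x, y, z = d
    u = 0.5 + math.atan2(x, -z) / (2 * math.pi)  # azimuth  -> [0, 1)
    v = 0.5 - math.asin(y) / math.pi             # elevation -> [0, 1]
    return (u, v)

print(dir_to_latlong_uv((0.0, 0.0, -1.0)))  # looking straight ahead -> (0.5, 0.5)
```

Note there is no position argument at all: two points on opposite sides of the scene that reflect the same direction fetch the same texel, which is the parallax error environment mapping accepts in exchange for speed.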