Lecture 13: Global Illumination & Path Tracing (21)
ShivanPatel2025
The BRDF concept is pretty fascinating because it’s all about understanding how light bounces off surfaces in different directions. It’s cool to think about how it factors in both where light comes from and where it goes after hitting a surface. This gets me wondering about the challenges in making simulations look real, especially with materials that have unique shines or textures. How does tweaking the BRDF change the look of an image? And is there a way to keep things looking good in complex scenes without using up too much computational power?
marilynjoyce
It's pretty interesting how popular BRDFs (Phong, Lambertian, Cook-Torrance, etc.) can account for a material and how that material looks. I also wonder how tweaking the BRDF slightly changes the material's look. For example, how much of a change in the distribution function would turn a pure mirror surface into a slightly less mirrored one? I'm also interested in what combining different BRDFs can do to material appearance.
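To play with that mirror question numerically, here is a small Python sketch (my own toy example, not the lecture's code) of a modified-Phong-style glossy lobe where a single exponent controls how mirror-like the surface is: a huge exponent concentrates all reflection around the mirror direction, and lowering it smears the lobe out into a blurrier reflection.

import numpy as np

def reflect(wi, n):
    # Mirror reflection of the (unit) incoming direction wi about the normal n.
    return 2.0 * np.dot(wi, n) * n - wi

def glossy_brdf(wi, wo, n, ks=1.0, shininess=100.0):
    # Modified-Phong glossy lobe, normalized so the total reflected energy stays
    # roughly constant while the exponent only changes how concentrated it is.
    r = reflect(wi, n)
    return ks * (shininess + 2.0) / (2.0 * np.pi) * max(np.dot(r, wo), 0.0) ** shininess

n  = np.array([0.0, 0.0, 1.0])
wi = np.array([0.0, 0.6, 0.8])                    # incoming direction (unit length)
wo_mirror = reflect(wi, n)                        # exact mirror direction
wo_offset = np.array([0.0, -0.5, np.sqrt(0.75)])  # roughly 7 degrees off the mirror direction
for s in (10.0, 100.0, 10000.0):
    print(s, glossy_brdf(wi, wo_mirror, n, shininess=s),
             glossy_brdf(wi, wo_offset, n, shininess=s))

The printout shows the peak at the mirror direction growing with the exponent while the value a few degrees away collapses; that exponent is essentially the knob that turns a perfect mirror into a slightly blurry one.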
danielhsu021202
It was quite difficult for me to understand what the BRDF is and why we need it. I stumbled upon this video on ray tracing and found that it offered a really good explanation and animation of what a BRDF is:
https://youtu.be/gsZiJeaMO48?si=S94oBeheSxcHT_1B
BRDF is discussed within the first 6 minutes of the video. It presents the concept in a simpler way, which for me was easier to comprehend.
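For anyone who wants the formula behind the video: the BRDF is defined as the ratio of differential outgoing radiance in direction \omega_o to the differential irradiance arriving from direction \omega_i (so its units are 1/sr):

f_r(\omega_i \to \omega_o) = \frac{dL_o(\omega_o)}{dE_i(\omega_i)} = \frac{dL_o(\omega_o)}{L_i(\omega_i)\,\cos\theta_i\,d\omega_i}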
Zzz212zzZ
In this function, it seems like the reflected light should never exceed the incoming light. This obeys energy conservation and matches our intuition.
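That intuition can be written as an energy-conservation constraint on the BRDF: for any outgoing direction, the BRDF weighted by the cosine and integrated over the incoming hemisphere cannot exceed one, i.e. a surface cannot reflect more energy than it receives:

\int_{H^2} f_r(\omega_i \to \omega_o)\,\cos\theta_i\,d\omega_i \le 1 \quad \text{for every outgoing direction } \omega_o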
Edge7481
I was wondering how this generalizes to transparent objects, and it turns out the BRDF is part of the BSDF (bidirectional scattering distribution function), which combines the BRDF and the BTDF (transmittance). The BTDF accounts for the refraction and absorption of light within a material.
https://en.wikipedia.org/wiki/Bidirectional_scattering_distribution_function
The intuition here is that multiple beams of light can bounce off the surface and map to the same reflected point (which is the camera).
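To make the BRDF + BTDF split concrete, here is a minimal Python sketch (not from the lecture; the index of refraction 1.5 and the Schlick approximation are my own assumptions) of the two ingredients a smooth-dielectric BSDF needs: a Fresnel term that says what fraction of the light is reflected (the BRDF side) versus transmitted (the BTDF side), and Snell's law for the refracted direction.

import numpy as np

def schlick_fresnel(cos_theta, n1=1.0, n2=1.5):
    # Schlick's approximation to the Fresnel reflectance: the fraction of
    # incident light that is reflected rather than transmitted.
    r0 = ((n1 - n2) / (n1 + n2)) ** 2
    return r0 + (1.0 - r0) * (1.0 - cos_theta) ** 5

def refract(wi, n, eta):
    # Snell's law with wi pointing away from the surface; returns None on
    # total internal reflection. eta = n_incident / n_transmitted.
    cos_i = np.dot(wi, n)
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return -eta * wi + (eta * cos_i - cos_t) * n

n  = np.array([0.0, 0.0, 1.0])
wi = np.array([0.0, 0.6, 0.8])            # unit incoming direction
F  = schlick_fresnel(np.dot(wi, n))
wt = refract(wi, n, eta=1.0 / 1.5)        # air -> glass
print(F, wt)                              # ~4% reflected, the rest refracted into the surface

A path tracer typically uses F as the probability of following the reflected ray and (1 - F) as the probability of following the transmitted one.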
litony396
How is a BRDF made? From how I understand it, different materials would reflect light differently, so they should have different BRDFs. Can this be calculated somehow?
ArjunPalkhade
For anyone looking for a simple explanation on how BRDF works, here's a concise video: https://www.youtube.com/watch?v=4QXE52mswIE
sparky-ed
To answer litony396: from what I understand, the radiance L describes how much light there is in a particular direction before and after interacting with the surface. For example, if we have some material that reflects less light, there would be a gap between the value of L before and the value after. This YouTube video (https://www.youtube.com/watch?v=4QXE52mswIE) talks about the intensity at a point, which is defined using a material function that takes the amount of incoming light and the direction the ray was shot from.
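Since this lecture is about path tracing, it might help to see where that relation between incoming and outgoing L actually gets used: the reflection equation is estimated with Monte Carlo by averaging BRDF times incoming radiance times the cosine, divided by the sampling pdf. A minimal Python sketch (my own toy example, using a cosine-weighted sampler and a made-up uniform sky):

import numpy as np

def sample_hemisphere_cosine(rng):
    # Cosine-weighted hemisphere sample in local coordinates (normal = +z).
    u1, u2 = rng.random(), rng.random()
    r, phi = np.sqrt(u1), 2.0 * np.pi * u2
    wi = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
    pdf = wi[2] / np.pi              # pdf = cos(theta_i) / pi
    return wi, pdf

def estimate_Lo(brdf, Li, wo, n_samples=1024, seed=0):
    # Monte Carlo estimate of the reflection equation:
    # Lo(wo) = integral over the hemisphere of brdf(wi, wo) * Li(wi) * cos(theta_i) dwi
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_samples):
        wi, pdf = sample_hemisphere_cosine(rng)
        total += brdf(wi, wo) * Li(wi) * wi[2] / pdf
    return total / n_samples

# Sanity check: a Lambertian BRDF (albedo / pi) under a uniform sky Li = 1
# should integrate to the albedo.
albedo = 0.7
print(estimate_Lo(lambda wi, wo: albedo / np.pi, lambda wi: 1.0, wo=np.array([0.0, 0.0, 1.0])))

With the Lambertian BRDF the estimator returns the albedo almost exactly, which is a handy sanity check before plugging in fancier BRDFs.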
aidangarde
I wonder how easy it would be to estimate the BRDF given an image. I know that BRDFs are used in computer vision to recognize some objects, but that means the function has to be estimated from raw pixel data. Why would reverse engineering this function give more insight than the raw pixel data?
GarciaEricS
I wonder how BRDFs are calculated for different material types. What I imagine is that we basically play with the BRDF values until we find some function that, when applied to a material, makes it look like the thing we are trying to model. But maybe there are tests we can run on a real material, like shining light on it in certain ways and measuring the amount of light reflected in different directions, and then using these measurements to construct our BRDF function?
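That measurement idea is essentially what a gonioreflectometer does, and measured BRDFs (e.g. the MERL dataset) are typically stored as tables indexed by the incoming and outgoing angles and looked up while rendering. A rough Python sketch of that pipeline; measure_reflectance here is a made-up stand-in for the real instrument, and a real table would also index azimuth and use interpolation:

import numpy as np

# Hypothetical measurement loop: for a grid of incoming/outgoing elevation angles,
# shine a light from theta_i, place a sensor at theta_o, and record
# reflected radiance / (incident radiance * cos(theta_i)).
N_I, N_O = 16, 16
table = np.zeros((N_I, N_O))

def measure_reflectance(theta_i, theta_o):
    # Stand-in for a real measurement; here a made-up glossy-ish material.
    return 0.2 / np.pi + 2.0 * np.exp(-8.0 * (theta_i - theta_o) ** 2)

for i, theta_i in enumerate(np.linspace(0.0, np.pi / 2, N_I)):
    for o, theta_o in enumerate(np.linspace(0.0, np.pi / 2, N_O)):
        table[i, o] = measure_reflectance(theta_i, theta_o)

def brdf_lookup(theta_i, theta_o):
    # Nearest-neighbor lookup into the measured table.
    i = int(round(theta_i / (np.pi / 2) * (N_I - 1)))
    o = int(round(theta_o / (np.pi / 2) * (N_O - 1)))
    return table[i, o]

print(brdf_lookup(0.3, 0.3), brdf_lookup(0.3, 1.2))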
grafour
tl;dr: if we have some light coming into an object of a certain material, the BRDF tells us how the object behaves and reflects that light after it hits the surface.
Zzz212zzZ
A microfacet BRDF models the surface as many tiny facets, each with its own normal; the spread of those normals is what produces a glossy (or rough) appearance.
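To make that concrete: microfacet models describe the spread of those per-facet normals with a normal distribution function such as GGX, where a single roughness parameter alpha controls whether the normals cluster tightly around the macroscopic normal (sharp, mirror-like highlights) or spread out (dull, rough look). A small Python sketch of just the GGX D term (the full BRDF also needs Fresnel and shadowing-masking terms):

import numpy as np

def ggx_D(cos_theta_h, alpha):
    # GGX / Trowbridge-Reitz normal distribution function (isotropic).
    # cos_theta_h is the cosine between the half-vector and the macro normal.
    a2 = alpha * alpha
    denom = cos_theta_h * cos_theta_h * (a2 - 1.0) + 1.0
    return a2 / (np.pi * denom * denom)

# Small roughness -> facet normals tightly clustered around the macro normal
# (sharp, mirror-like highlight); large roughness -> spread out (dull).
for alpha in (0.05, 0.3, 0.8):
    print(alpha, ggx_D(1.0, alpha), ggx_D(np.cos(np.radians(20.0)), alpha))

Printing D at the macro normal versus 20 degrees away shows how quickly the distribution falls off as alpha shrinks, which is exactly the glossy-to-rough knob.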