I am a bit confused about what this equation says: is Phi_e the quantity defined on the previous slide, or is it something different, since it takes a wavelength as an argument?
sunsarah
I believe Phi_e here is the spectral flux, which describes the distribution of energy by wavelength (though per https://en.wikipedia.org/wiki/Radiant_flux, spectral flux is a type of radiant flux, namely radiant flux per unit wavelength).
hannahmcneil
The luminosity function attempts to model human sensitivity to each wavelength (https://en.wikipedia.org/wiki/Photometry_(optics)). I find it interesting how this might relate to the concept of "just-noticeable difference", where two different light sources with quantifiable absolute differences in radiant energy may be judged as the same by the human eye because they aren't quite different enough: https://en.wikipedia.org/wiki/Just-noticeable_difference
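To make the weighting concrete, here is a minimal numerical sketch of how the luminosity function converts spectral (radiant) flux into luminous flux. The spectrum is a made-up flat 1 W/nm source, and the tabulated CIE photopic curve V(lambda) is stood in for by a Gaussian peaked at 555 nm, so the numbers are illustrative only:

```python
import numpy as np

# Hypothetical spectral flux Phi_e(lambda): flat 1 W/nm across the visible range.
wavelengths = np.arange(400, 701)                 # nm, in 1 nm steps
spectral_flux = np.ones_like(wavelengths, float)  # W/nm

# Stand-in for the CIE photopic luminosity function V(lambda): a Gaussian
# peaked at 555 nm. The real curve is tabulated, not Gaussian.
V = np.exp(-0.5 * ((wavelengths - 555) / 45.0) ** 2)

# Luminous flux: Phi_v = 683 lm/W * integral of V(lambda) * Phi_e(lambda) dlambda.
# With 1 nm spacing, the integral reduces to a sum.
luminous_flux = 683.0 * np.sum(V * spectral_flux)
print(luminous_flux, "lumens")
```

Note how equal radiant power at 555 nm (green) contributes far more to the result than the same power at 650 nm (red), which is exactly the perceptual weighting the curve encodes.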
yzyz
From the luminous efficiency curve, we can see that the human eye is most sensitive to green light. This fact is the reasoning behind the Bayer filter (https://en.wikipedia.org/wiki/Bayer_filter), where a camera sensor has twice as many green pixels as red or blue pixels. Since the human eye is more sensitive to green light, a higher-resolution green channel improves the perceived image quality.
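A quick sketch of the Bayer mosaic makes the 2:1:1 ratio easy to see: the standard RGGB pattern tiles a 2x2 cell with two green filters, one red, and one blue, so exactly half the sensor pixels sample green. (Toy sensor size chosen just for illustration.)

```python
import numpy as np

# One 2x2 Bayer (RGGB) tile: two green sites per one red and one blue.
tile = np.array([["R", "G"],
                 ["G", "B"]])

# Tile it over a toy 4x4 sensor.
sensor = np.tile(tile, (2, 2))
print(sensor)

green_fraction = np.mean(sensor == "G")
print(green_fraction)  # half the pixels sample the green channel
```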
gprechter
I think it's really important to take humans into account when considering the effectiveness of computer graphics. Learning from how human vision works can make graphical systems both more efficient and more accurate. One thing that comes to mind is a paper I read recently on using human eye tracking to make ray tracing more efficient: https://pdfs.semanticscholar.org/0545/9f453f18c4bce2a32725b1072d3ab6c34e31.pdf
This also reminds me of the fact that there are two green pixels in displays for every red and blue pixel, since our eyes are more receptive to green -- an example of how the human visual system is accounted for in the graphics pipeline, even at the hardware level.
fywu85
Does photometry model the dynamic aspects of human vision? Specifically, does it model the adaptive response of the pupils to incoming light intensity? Can photometry model different types of color blindness too?