In plainer English, I think the concept of this slide can be described as follows:
We only care about apparent luminosity insofar as humans perceive it. We can derive the apparent luminosity by weighting the actual radiance of the incoming light by how sensitively the human eye responds at each wavelength. However, since wavelength is a continuous quantity, we need to integrate this weighted radiance over all visible wavelengths.
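To make the integral concrete, here's a minimal numerical sketch. The Gaussian efficiency curve and the flat test spectrum are my own placeholder assumptions, not the real CIE data, but the structure of the computation is the same:

```python
import numpy as np

# Sketch of the slide's idea: luminance (photometric) is spectral radiance
# (radiometric) weighted by the eye's luminous efficiency curve V(lambda),
# then integrated over wavelength. The Gaussian below is a rough stand-in
# for the real CIE photopic curve (an assumption), which peaks near 555 nm.

wavelengths = np.linspace(380.0, 780.0, 401)  # visible range in nm, 1 nm steps

def V(lam):
    """Approximate luminous efficiency: Gaussian peaked at 555 nm (assumed shape)."""
    return np.exp(-0.5 * ((lam - 555.0) / 45.0) ** 2)

def spectral_radiance(lam):
    """Toy flat spectrum (assumed): 0.01 W / (sr * m^2 * nm) everywhere."""
    return np.full_like(lam, 0.01)

# Discretize the integral: Y = 683 * integral of L(lambda) * V(lambda) dlambda,
# where 683 lm/W is the standard radiometric-to-photometric scale factor.
dlam = wavelengths[1] - wavelengths[0]
luminance = 683.0 * np.sum(spectral_radiance(wavelengths) * V(wavelengths)) * dlam
print(f"luminance ~ {luminance:.0f} cd/m^2")
```

Swapping in tabulated CIE V(lambda) values for the Gaussian would turn this from a toy into the actual photometric conversion.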
How is an efficiency curve generated? Is it by heuristics or is there some model and formula that we use to show the efficiency of the human eye?
@YoungNathan, it appears from the graph that it was generated from experiments; I see a few black dots that don't lie exactly on the curve, so the curve might just be a normal (Gaussian-like) curve fit to the measured points.
I think it's really cool that light and physics equations get incorporated into computer graphics, since they're used to model both light and the human visual system. It's important to account for human perception, because values that are distinct to a computer may look exactly the same to a person. This reminds me of the concept in psychology of the "just-noticeable difference" (https://en.wikipedia.org/wiki/Just-noticeable_difference), which is the amount something must be changed in order for someone to notice it.
What are the limitations of photometry? For example, when does photometry fail to accurately represent human vision? Is there such an instance? Does it account for individual human conditions (e.g., myopia or farsightedness)?
Can the luminous efficiency curve be thought of as a measure on lambda? Similarly, can computing the photometric quantities be thought of as integrating radiometric quantities (which are a function of wavelength) with respect to the measure V(lambda) d lambda? Not sure if this comparison gives any better intuition.
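One way to make the measure analogy concrete (sketch, my own notation, not from the slide): treat V(lambda) as the density of a measure on the visible range, and the photometric quantity is the radiometric one integrated against that measure.

```latex
% Define a measure on the visible wavelength range via the efficiency curve:
\mu(A) = \int_A V(\lambda)\, d\lambda
% Then luminance is spectral radiance integrated against this measure
% (with the standard 683 lm/W normalization):
Y = 683 \int_{380}^{780} L_e(\lambda)\, d\mu(\lambda)
  = 683 \int_{380}^{780} L_e(\lambda)\, V(\lambda)\, d\lambda
```

Since V is nonnegative and bounded, mu is a finite measure, so the comparison does seem well-founded, even if it's mostly a restatement of the weighted integral.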