Lecture 21: Image Sensors (18)
arjunpat

There's a 1:1:2 ratio of red to blue to green pixels. I wonder how well that ratio matches up with human vision, and to what degree. Is this just a rough approximation, or do we actually have double sensitivity in the green portion?

KevinXu02

I think the ratio is just a rough approximation, but it makes demosaicking and circuit design much easier compared with a pattern that matches the human eye more closely.

snowshoes7

From what I can tell, it seems to match pretty well with human color perception--but it's not perfect, and this kind of thing also highlights the importance of accounting for differences in color perception (e.g., colorblindness), which affect a larger portion of the population than you might think!

colinsteidtmann

This makes me think of really old cameras from the 1800s and how everything used to be in black and white. I forget why they couldn't capture color, and now I wonder how their camera sensors were different from ours.

jacky-p

I had never thought about humans being most sensitive to green light, but, as the first bullet point states, it makes complete sense because green is in the center of the visible light spectrum. Therefore, color filter arrays being mostly green is understandable.

ericlu28

^ Agreeing with what someone said above in this thread. One of my close friends has red-green colorblindness, and they point out that a computer screen rendering images of scenes isn't identical to the real-world scenes themselves. If we are using more green pixels than red and blue, that may cause different types of perception for certain groups of people.

jasonTelanoff

How did they come to this ratio? Did they try 1:1:1 and notice it didn't look quite right, so they tried other things? Or did they know the theory and this just proved it?

noah-ku

This slide presents different types of color filter arrays, also known as mosaics, which are used in digital camera sensors to capture color images. The Bayer pattern, being the most common, has twice as many green pixels compared to red or blue. This is because the human eye is more sensitive to green light, which is also depicted by the luminosity efficiency curve. The Sony RGB+E pattern aims for a wider color gamut, capturing more color information. The Kodak RGB+W pattern includes white pixels to increase the dynamic range, allowing the sensor to capture greater detail in both bright and dark areas. The inclusion of more green pixels in these patterns mimics the human eye's sensitivity, leading to images that are closer to what we naturally perceive.
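The 2x2 tile structure noah-ku describes can be sketched in a few lines (a hypothetical illustration of the common RGGB arrangement, not any specific sensor's implementation), which makes the 1:1:2 red:blue:green ratio easy to verify:

```python
def bayer_filter(row, col):
    """Return the color filter ('R', 'G', or 'B') at a pixel position
    for the common RGGB Bayer arrangement: each 2x2 tile holds one red,
    two green, and one blue filter."""
    if row % 2 == 0:
        return 'R' if col % 2 == 0 else 'G'
    return 'G' if col % 2 == 0 else 'B'

# Build a small mosaic and count filters: green appears twice as often.
flat = [bayer_filter(r, c) for r in range(4) for c in range(4)]
counts = {color: flat.count(color) for color in 'RGB'}
print(counts)  # {'R': 4, 'G': 8, 'B': 4}
```

The RGB+E and RGB+W variants mentioned on the slide replace one of the two green sites in each tile with an emerald or white (panchromatic) filter, trading some green density for gamut or dynamic range.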

Liaminamerica2

If people are more sensitive to green light, it would seem to make more sense to have fewer green pixels, since even a small amount of green would register strongly to our eyes.

SKwon1220

Is there a more thorough optical/neuroscience-based explanation as to why humans are most sensitive to green light as opposed to red or blue? My initial guess is that green being in the middle of the visible spectrum, and sitting at the peak of the luminous efficiency curve, contributes to how well our eyes process the light reflected onto them.

cvankeuren

I had no idea that there were usually more green pixels than red or blue when we capture images. I would have assumed that since our eyes are more sensitive to green, there would be fewer green pixels and more red and blue ones, so I'm curious why the inverse is true here. I'm also curious about the process used to arrive at the ratio that most closely aligns with human vision.

TiaJain

To answer jasonTelanoff's question, the ratio of green pixels to red and blue in the Bayer pattern wasn't arrived at through trial and error but is rather based on the human eye's sensitivity to different colors. As the slide mentions, human vision is most sensitive to green, moderately to red, and least to blue, which is reflected in the "luminosity efficiency curve". The pattern was designed with this in mind to mimic the human eye's response for more accurate color reproduction in digital imaging.

yykkcc

I just have a rough guess that the fact that humans are most sensitive in the green portion of the visible spectrum is due to human evolution. So I asked ChatGPT instead of doing research. Surprisingly, it confirmed my idea: "It's thought that during human evolution, being sensitive to green light was advantageous because our ancestors lived in environments with a lot of green vegetation. Detecting shades of green could have helped in identifying edible plants and detecting predators or other dangers in the green foliage.", which makes a lot of sense.

saif-m17

In a similar vein to several of the other questions, we sort of skipped over the discussion of why there are more green pixels than blue, and it was mentioned in class that the idea that humans are more sensitive to green isn't necessarily true. I'm curious what other schools of thought there are and what the issues with that statement are, as well as how we might alter (or have altered) these arrays if it isn't true.

Alina6618

The Bayer pattern's ratio of green to red and blue pixels is indeed a rough approximation of the human visual system's sensitivity. It is designed not only to mirror our heightened sensitivity to green light, which may stem from evolutionary adaptation, but also to accommodate the technical efficiencies of image processing and hardware design. While the pattern does not account for individual variations in color perception, such as colorblindness, it offers a practical balance between biological fidelity and the practicalities of sensor design, reproducing color effectively within the limits of the technology. Ongoing research into the human visual system, along with advances in sensor technology, suggests that future CFAs could more closely align with, or even adapt to, individual color sensitivities.

GarciaEricS

It's amazing how much human biology factors into computing. It could easily have been the case that humans were more sensitive to the blue part of the color spectrum, and we would see our color filter arrays looking completely different. It's interesting to think about how, if our vision were completely different, maybe with a fourth dimension to color, then our monitors would look completely different too. Or if a dog made a monitor, it would only have two different colors of pixels (haha).

RishSharma7

To respond to Colin's comment/question, I'm pretty sure the reason those old-timey cameras couldn't capture color was that their exposure times were much longer than those of the cameras we have today (obviously), and photographic materials sensitive to the whole range of the color spectrum were not yet available. I think the 1930s is when those types of materials became commercially available.

RishSharma7

In response to Eric's comment, I think about this all the time as well. It's trippy to think that from some other being's perspective, the way we see color is straight up "wrong". A dog might think that dark purple is a shade of gray, whereas we know it to be purple. And mantis shrimp probably see something way cooler than purple, and would laugh at us if they could for calling it as such. I wonder how we would develop the Bayer pattern for dogs though, since you brought it up. Is there an alternative color ratio for them as well? How would we find that out, exactly?

jananisriram

Why are human eyes most sensitive in the green portion of the visible spectrum? Does it have something to do with wavelengths?
