All of these examples of color perception varying with the surrounding colors make me think about whether we can train vision models to emulate how human eyes perceive color. Based on what we've previously learned in this class, computers basically just represent an image as an array of RGB pixel values, but I wonder if there is some way to develop more "adaptive," context-aware embeddings of images.
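To make this concrete, here is a small sketch (my own illustration, not anything from the readings) showing why raw RGB pixels can't capture these effects: two patches with identical pixel values sit on different surrounds, so any representation that looks only at the pixel itself treats them as the same, while even a crude context-dependent normalization separates them.

```python
import numpy as np

# Two patches with *identical* RGB gray values placed on different
# surrounds. A raw-pixel representation cannot distinguish them,
# even though human observers perceive them differently.
def make_stimulus(surround_value, patch_value=128, size=64, patch=16):
    img = np.full((size, size), surround_value, dtype=np.uint8)
    lo = (size - patch) // 2
    img[lo:lo + patch, lo:lo + patch] = patch_value
    return img

dark_surround = make_stimulus(surround_value=30)
light_surround = make_stimulus(surround_value=220)

# The central patches are pixel-for-pixel identical...
center = slice(24, 40)
assert np.array_equal(dark_surround[center, center],
                      light_surround[center, center])

# ...so an "adaptive" embedding would have to draw on the surround,
# e.g. by normalizing each pixel against the image's mean luminance
# (a deliberately crude stand-in for local contrast adaptation):
def contrast_normalize(img):
    img = img.astype(float)
    return img - img.mean()

a = contrast_normalize(dark_surround)[32, 32]
b = contrast_normalize(light_surround)[32, 32]
print(a != b)  # the context-normalized values now differ
```

The point is just that the "adaptation" has to come from somewhere outside the pixel itself, whether that's a hand-designed normalization like this or something a network learns.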
s3kim2018
I think it is interesting how the surround effect is much more pronounced in images where the bars are narrower. Maybe our eyes adapt to the higher spatial frequency, and that adaptation amplifies the surround effect.
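One way to see why narrower bars might matter: center-surround receptive fields act as band-pass filters, so stimuli near their preferred spatial frequency drive them harder than broad, slowly varying ones. The sketch below models this with a difference-of-Gaussians filter (my own illustrative parameters, not fitted to any physiological data) applied to narrow versus wide bar patterns.

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Center-surround receptive field modeled as a difference of Gaussians
# (DoG), a standard band-pass model of retinal/LGN cells. The sigmas
# here are arbitrary illustrative choices.
x = np.arange(-15, 16)
dog = gaussian(x, 1.0) - gaussian(x, 3.0)

def square_wave(period, n=400):
    # Alternating light/dark bars, each period//2 samples wide.
    return np.where((np.arange(n) // (period // 2)) % 2 == 0, 1.0, -1.0)

# Peak response of the center-surround filter to narrow vs. wide bars.
narrow = np.abs(np.convolve(square_wave(8), dog, mode="valid")).max()
wide = np.abs(np.convolve(square_wave(40), dog, mode="valid")).max()

print(narrow > wide)  # narrow bars drive the filter more strongly
```

With these parameters the narrow-bar pattern sits near the filter's preferred frequency, while the wide bars are mostly "flat" over the filter's support, so the surround mechanism is engaged much more strongly by the narrow pattern.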