Seeding a question: mechanistically, why does the resulting thumbnail image at the bottom look so artifacted? For example, why do the strands of hair and eyes look the way they do in the thumbnail?
Zc0in
In my naive perspective, compared to the rest of the face, the eye is a region where pixel values change greatly over a small area, which means its spatial frequency is higher and it needs a higher sampling frequency to be fully recovered. Sampling every 16 pixels is too low.
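To make that Nyquist argument concrete (the notation here is mine, not from the thread): keeping every 16th pixel divides the sampling rate by 16, so the decimated image can only represent frequencies below half the new rate.

$$f_s' = \frac{f_s}{16}, \qquad \text{alias-free only if } f_{\max} < \frac{f_s'}{2} = \frac{f_s}{32}$$

In pixel terms, any pattern that repeats faster than once every 32 pixels of the original (a hair strand a few pixels wide, the pupil/iris boundary) cannot be represented on the coarse grid and folds back as a false low-frequency artifact.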
waleedlatif1
I agree with the above statement that the sampling frequency is far too low for something with as much variability as the eyes. When we sample, each retained pixel lands on either the intended color or some nearby color, essentially at random relative to the fine detail. In the eye, the black pupil dominates, so in the image at the bottom it appears to cover the entire surface of the eye. Increasing the sampling frequency would produce an image that better captures this fast-changing signal.
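A minimal sketch of the mechanism both replies describe, assuming NumPy and Pillow are available and a local file named "portrait.png" exists (the filename and stride are illustrative assumptions, not from the thread). It contrasts naive decimation, which produces the artifacted thumbnail, with low-pass filtering before sampling:

```python
import numpy as np
from PIL import Image, ImageFilter

STRIDE = 16  # keep one pixel out of every 16 in each dimension (assumed)

img = Image.open("portrait.png").convert("RGB")  # hypothetical input file
arr = np.asarray(img)

# Naive thumbnail: pure point sampling. Detail finer than 2*STRIDE pixels
# (hair strands, the pupil/iris boundary) violates Nyquist and shows up
# as the blocky, speckled artifacts discussed above.
naive = arr[::STRIDE, ::STRIDE]

# Anti-aliased thumbnail: apply a low-pass filter (Gaussian blur) first,
# removing frequencies the coarse sampling grid cannot represent,
# then decimate with the same stride.
blurred = img.filter(ImageFilter.GaussianBlur(radius=STRIDE / 2))
filtered = np.asarray(blurred)[::STRIDE, ::STRIDE]

Image.fromarray(naive).save("thumb_naive.png")
Image.fromarray(filtered).save("thumb_antialiased.png")
```

Comparing the two outputs side by side shows why the eye in the naive thumbnail can come out almost entirely black: whichever pixel the stride happens to land on (often the dark pupil) speaks for the whole 16x16 neighborhood, whereas the blurred version averages the neighborhood before sampling.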