Why do we prioritize low frequencies again? If I recall correctly, high frequencies correspond to edges. Don't we want to preserve the edges/shapes of objects precisely? For example, if you used this method to capture a bunch of ripples in a pond and didn't preserve where the crests are, wouldn't we just see a smooth pond?
GKohavi
I think the reason in this case is that we do preserve edges with the Y' channel, since that channel is not being downsampled. We can afford to throw away many of the high-frequency color components because people are much less sensitive to downsampling in the chroma channels than in the luminance channel.
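To make that concrete, here is a rough sketch of the idea in NumPy. The BT.601/JFIF conversion constants and the 4:2:0 "average each 2x2 block" scheme are standard, but the function names and array layout here are just illustrative, not what any particular encoder does:

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an RGB image (H x W x 3, float in [0, 255]) to Y'CbCr using BT.601/JFIF constants."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128
    return y, cb, cr

def subsample_420(channel):
    """4:2:0 chroma subsampling: average each 2x2 block down to a single sample."""
    h, w = channel.shape
    c = channel[:h - h % 2, :w - w % 2]          # crop to even dimensions
    return c.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Y' keeps full resolution, so edges survive; Cb and Cr are stored at quarter resolution.
```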
kevinliu64
I've never really understood the JPEG quality option before this and how it affects the image, but the next slide provides a great visual representation of why we get visible pixel boundaries. Choosing a lower quality setting essentially means that we are 'downsampling' and thus losing information.
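For reference, one common way the quality knob is implemented is to scale the quantization table before use. The sketch below uses the standard luminance table from Annex K of the JPEG spec and the libjpeg-style scaling rule; other encoders may use different rules, so treat this as an illustration rather than the definitive behavior:

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K of the JPEG spec), i.e. "quality 50".
Q50 = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def scaled_table(quality):
    """Scale the base table for a quality setting in [1, 100] (libjpeg-style rule)."""
    quality = min(max(quality, 1), 100)
    scale = 5000 / quality if quality < 50 else 200 - 2 * quality
    table = np.floor((Q50 * scale + 50) / 100)
    # Lower quality -> larger divisors -> more DCT coefficients rounded to zero.
    return np.clip(table, 1, 255)
```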
GitMerlin
I was wondering how the matrix division is done. Looks like it's element-wise division.
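Right, the quantization step is element-wise: each DCT coefficient is divided by the corresponding entry of the table and rounded. A minimal sketch, assuming `dct_block` is an 8x8 block of DCT coefficients and `Q` is the quantization matrix:

```python
import numpy as np

def quantize(dct_block, Q):
    """Element-wise: divide each coefficient by the matching table entry, then round."""
    return np.round(dct_block / Q).astype(np.int32)

def dequantize(quantized, Q):
    """Decoding multiplies back element-wise; the rounding loss is where JPEG is lossy."""
    return quantized * Q
```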
andrewdcampbell
This website has the quantization matrices used in JPEG compression for a large list of common consumer cameras. It's interesting to see that they all differ from one another - I wonder how they are chosen?