Lecture 22: Image Processing (14)
vivek3141

Intuitively, I don't fully understand why prioritizing the low frequencies is enough for our eyes to view the image.

Is there a slide describing the relation to the human eye that I'm missing?

anavmehta12

We prioritize low frequencies because if we threw away a lot of the low-frequency information, i.e. the coarse structure of the image, we would no longer be able to recognize it. Fine details like sharp edges and textures aren't strictly necessary to recognize the image, which is why we can afford to discard the high frequencies first.

myxamediyar

vivek3141, the way I imagine it is: high frequencies correspond to rapid changes, and low frequencies to slow changes. As an extreme example, if you blur an image a little, you can still recognize it even though some of that sharpness is lost. So I'd guess humans are more sensitive to larger patterns.
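The blur intuition above can be sketched numerically. This is a minimal illustration (not part of the lecture): a 3-tap binomial blur, a crude low-pass filter, nearly eliminates the fastest-alternating component of a 1-D signal while leaving the slow ramp largely intact.

```python
import numpy as np

# A 1-D "signal": a slow ramp (low frequency) plus a fast +/- alternation
# (the highest frequency representable at this sampling rate).
n = 64
x = np.linspace(0, 1, n) + 0.5 * np.cos(np.pi * np.arange(n))

# 3-tap binomial blur [1, 2, 1]/4 -- a simple low-pass filter.
kernel = np.array([1.0, 2.0, 1.0]) / 4.0
blurred = np.convolve(x, kernel, mode="same")

# Compare spectral magnitude at the highest frequency bin before/after.
X = np.abs(np.fft.rfft(x))
B = np.abs(np.fft.rfft(blurred))
print("high-freq energy before:", round(float(X[-1]), 2))
print("high-freq energy after: ", round(float(B[-1]), 2))
```

The highest-frequency bin collapses after blurring (this kernel's frequency response is exactly zero at the Nyquist frequency, up to boundary effects), while the ramp survives, which is why the blurred image still "reads" correctly.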

llejj

A cool way to compress images that I learned in linear algebra is to treat the image as a matrix of pixel values, compute an SVD, and keep only the largest singular values, which reduces the rank of the matrix.
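For anyone curious, here is a small sketch of that idea with NumPy (the "image" here is just a synthetic low-rank matrix plus noise, not a real picture): keep the top k singular triplets and measure the reconstruction error.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an image: a 64x64 matrix with strong rank-4 structure plus noise.
u = rng.standard_normal((64, 4))
v = rng.standard_normal((4, 64))
img = u @ v + 0.01 * rng.standard_normal((64, 64))

# SVD, then keep only the k largest singular values (rank-k approximation).
U, s, Vt = np.linalg.svd(img, full_matrices=False)
k = 4
approx = (U[:, :k] * s[:k]) @ Vt[:k, :]

# Storage drops from 64*64 values to k*(64 + 64 + 1) values.
err = np.linalg.norm(img - approx) / np.linalg.norm(img)
print(f"relative error at rank {k}: {err:.4f}")
```

By the Eckart-Young theorem, this truncation is the best rank-k approximation in the Frobenius norm, which is why keeping the largest singular values is the right choice.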

eugenek07

I might be missing something here, but where do the quantization matrix values come from? While the matrix holding the result of the DCT comes simply from the DCT equation on the other slide, I can't seem to figure the quantization matrix out. I have a feeling it will have something to do with the quantization level you set.

he-yilan

Is this process similar to low-rank approximation? I found this page, which says JPEG compression does not use low-rank approximation but the underlying ideas are similar: https://www.math.colostate.edu/~hulpke/PicProc.html

angelajyzhang

Similar to what eugenek07 mentioned, I was a little confused as to how we can come up with the numbers in the quantization matrix. Are these values given somewhere or do we have to compute some matrix operations in order to get those numbers to ensure that many of the DCT coefficients zero out as a result?

jaehayi25

Seems like the quantization matrix values are determined empirically, e.g., by looking at the signal-to-noise ratio or perceived image quality: https://stackoverflow.com/questions/29215879/how-can-i-generalize-the-quantization-matrix-in-jpeg-compression
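To make the quantization step concrete, here is a small sketch using the standard JPEG luminance quantization table (from Annex K of the JPEG spec, ITU-T T.81); the example block and the hand-built DCT matrix are my own illustration, not from the slides. Dividing the DCT coefficients by the table and rounding zeroes out most of them.

```python
import numpy as np

# Standard JPEG luminance quantization table (Annex K, ITU-T T.81).
# Larger entries toward the bottom-right quantize high frequencies more coarsely.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

# Orthonormal 8x8 DCT-II matrix, built directly from the DCT definition.
N = 8
C = np.cos(np.pi * (2 * np.arange(N)[None, :] + 1) * np.arange(N)[:, None] / (2 * N))
C *= np.sqrt(2 / N)
C[0] /= np.sqrt(2)

# A smooth 8x8 block: a vertical gradient, level-shifted to [-128, 127] as JPEG does.
block = np.outer(np.arange(8), np.ones(8)) * 10 - 128.0

coeffs = C @ block @ C.T           # 2-D DCT of the block
quantized = np.round(coeffs / Q)   # divide by the table and round -> mostly zeros
print(int(np.count_nonzero(quantized)), "of 64 coefficients survive quantization")
```

For a smooth block like this, only a handful of low-frequency coefficients survive; the long runs of zeros are what the subsequent run-length and entropy coding exploit. The user-facing "quality" setting in JPEG encoders just rescales this base table up or down.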

nicolel828

It's interesting that in images, fine details and textures are often represented by high frequencies. As a result, when filters are applied to blur an image, these aspects of the image are usually the first to disappear.
