Does more of the loss come from downsampling or from quantization? I'd imagine it would be from downsampling since it occurs first, but both steps also discard the higher frequencies of the image.
Alina6618
The JPEG compression process involves several steps that contribute to information loss, chiefly chroma downsampling and quantization. How much each step loses depends on the image content and the compression parameters. Downsampling costs detail mainly in color: it reduces the resolution of the chrominance channels, on the assumption that human vision is less sensitive to color detail than to luminance. Quantization, which follows the discrete cosine transform (DCT), reduces data size by coarsely rounding the DCT coefficients, most aggressively in the high-frequency components where subtle detail and texture live. Although downsampling happens first, quantization typically causes the larger loss, because it reduces the precision of both the luminance and the chrominance channels across the entire image. The visible impact also differs: downsampling tends to show at sharp color transitions, while heavy quantization produces blocky artifacts in detailed areas. So while downsampling contributes to the loss, quantization usually has the more pronounced effect on perceived quality, especially at higher compression levels.
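To make the comparison concrete, here is a minimal Python sketch, assuming numpy and scipy are available; the synthetic test image is an arbitrary stand-in, and the table is the quality-50 luminance quantization table from the JPEG standard. It measures, on a single channel, the error from 2x2 downsampling alone versus 8x8 DCT quantization alone:

```python
import numpy as np
from scipy.fft import dctn, idctn

# Synthetic 256x256 channel: a smooth pattern plus a little fine texture.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 256)
img = 128 + 80 * np.outer(np.sin(3 * x), np.cos(2 * x))
img += 8 * rng.standard_normal((256, 256))

def down_up(ch):
    # 2x2 averaging (as in 4:2:0 chroma subsampling), then nearest-neighbour upsample.
    small = ch.reshape(128, 2, 128, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

# Quality-50 luminance quantization table from Annex K of the JPEG standard.
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
], dtype=np.float64)

def dct_quantize(ch, q):
    # Per 8x8 block: forward DCT, divide by the table and round (the lossy
    # step), multiply back, inverse DCT.
    out = np.empty_like(ch)
    for i in range(0, ch.shape[0], 8):
        for j in range(0, ch.shape[1], 8):
            coeffs = dctn(ch[i:i+8, j:j+8] - 128, norm="ortho")
            coeffs = np.round(coeffs / q) * q
            out[i:i+8, j:j+8] = idctn(coeffs, norm="ortho") + 128
    return out

mse = lambda a, b: float(np.mean((a - b) ** 2))
print("MSE from downsampling alone:   ", mse(img, down_up(img)))
print("MSE from DCT quantization alone:", mse(img, dct_quantize(img, Q)))
```

Which error dominates depends on how much high-frequency energy the channel contains, which is exactly the content-dependence noted above.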
NothernSJTU
Downsampling refers to reducing the spatial resolution of an image, usually by averaging or discarding pixels; in JPEG it is applied to the chroma channels (e.g. 4:2:0 subsampling). Every time you reduce the resolution, you lose spatial detail, and the loss falls hardest on the higher spatial frequencies, since those are exactly what the lower resolution can no longer represent.
When you downsample an image, you're removing information before you even get to the quantization step. The extent of the loss depends on the ratio of the original to the reduced resolution: the more aggressive the downsampling, the more detail is gone, and it is gone for good, since no upsampling filter can reconstruct frequencies above what the reduced grid can carry.
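A tiny numpy-only sketch of why this loss is unrecoverable (the one-pixel checkerboard is just a worst-case illustration): the finest pattern the image can hold averages away completely, so no upsampling can bring it back.

```python
import numpy as np

# 8x8 checkerboard alternating 0/255: the highest spatial frequency possible.
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0
small = checker.reshape(4, 2, 4, 2).mean(axis=(1, 3))    # 2x2 average: all 127.5
restored = np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

print(small)                              # uniform 127.5: the pattern is gone
print(np.abs(checker - restored).max())   # 127.5 error at every single pixel
```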
NothernSJTU
Quantization, on the other hand, means representing values with fewer distinct levels. In its simplest form this is reducing the number of bits used per pixel, which shrinks the number of possible colors and shows up as banding or posterization in smooth gradients. In JPEG specifically, though, quantization is applied to the DCT coefficients rather than to raw pixel values: each coefficient is divided by an entry of a quantization table and rounded, with the coarsest divisors assigned to the high-frequency coefficients. So in JPEG, quantization discards high-frequency spatial detail as well, not only color depth.
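Here is a short numpy sketch of the bit-depth form of quantization described above; the 3-bit target is an arbitrary choice for illustration. An 8-bit ramp collapses to eight levels, which is what banding/posterization looks like numerically:

```python
import numpy as np

gradient = np.linspace(0, 255, 256)      # a smooth 8-bit ramp, values 0..255
step = 256 / 2**3                        # 3 bits -> 8 levels, step of 32
quantized = np.floor(gradient / step) * step + step / 2

print(np.unique(quantized))              # only 8 distinct values remain
print(np.abs(gradient - quantized).max())  # worst-case error is about step/2
```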
DTanxxx
With lossy compression, I wonder if there's a way to decompress such that it restores the information that is lost during compression. Perhaps with the power of generative AI this could be made possible (or maybe such technology already exists in a mature form)?