One demosaicking algorithm is the variable number of gradients (VNG) algorithm. It reduces artifacts better than the basic algorithms but is far more computationally expensive. This algorithm takes different correlations between pixels in the image into account: it computes gradients around the pixel of interest, in either the spatial domain or the spectral domain. The goal is to use the lower gradients, which correspond to smoother parts of the image, to estimate the missing color values.
Here is a website with comparisons of different demosaicking algorithms, with demos: https://thedailynathan.com/demosaic/algorithms.php?image=raw.png
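Below is a minimal sketch of that gradient-selection idea in NumPy, not the full VNG algorithm (which works on the raw mosaic and considers more directions); the function name, the neighborhood, and the threshold rule here are illustrative assumptions.

import numpy as np

def gradient_guided_estimate(channel, y, x):
    # Work in float so the differences below can go negative.
    c = channel.astype(np.float64)
    # Gradient toward each same-color neighbor two pixels away;
    # (y, x) is assumed to be at least two pixels from every border.
    grads = {
        "N": abs(c[y, x] - c[y - 2, x]), "S": abs(c[y, x] - c[y + 2, x]),
        "W": abs(c[y, x] - c[y, x - 2]), "E": abs(c[y, x] - c[y, x + 2]),
    }
    values = {
        "N": c[y - 2, x], "S": c[y + 2, x],
        "W": c[y, x - 2], "E": c[y, x + 2],
    }
    # Keep only the smoother directions (low gradient) and average them.
    # The 1.5x-of-minimum threshold is an illustrative choice, not the
    # exact VNG thresholding rule.
    thresh = 1.5 * min(grads.values()) + 1e-6
    keep = [values[d] for d in grads if grads[d] <= thresh]
    return float(np.mean(keep))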
frankieeder
It seems that demosaicing algorithms could in effect estimate color values at higher precision (e.g., if you average four neighboring pixels, you are likely to get a decimal rather than an integer). Is this effect ever used in practice? It seems like it should/would be, but I imagine a bottleneck would be compression.
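A tiny illustration of that precision point (toy numbers, just to show the effect): the average of four 8-bit neighbors is generally fractional, and the fraction is quantized away if the result is stored back at 8 bits.

import numpy as np

neighbors = np.array([100, 101, 103, 105], dtype=np.uint8)
avg_float = neighbors.astype(np.float32).mean()    # 102.25 -- sub-integer precision
avg_uint8 = np.uint8(np.round(avg_float))          # 102    -- the .25 is lost at 8 bits
print(avg_float, avg_uint8)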
mishywangiepie
The green pixels would seem to have a straightforward averaging scheme, but the blue and red pixels are spaced unevenly, so a missing value doesn't have a distinct set of 4 nearest same-color neighbors. For example, for the pixel directly between two red sensor pixels, would the simple demosaicking algorithm interpolate from the nearest 2 red pixels or the nearest 6?
arjunsrinivasan1997
What are some examples of more complicated interpolation techniques, and what makes them better than just basic bilinear interpolation?
wjgan7
Based on https://en.wikipedia.org/wiki/Demosaicing#Simple_interpolation, I think they use the nearest 2 instead of the nearest 6.
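To make the 2-vs-6 question concrete, here is a small sketch of simple bilinear interpolation of the red channel on a Bayer mosaic; the RGGB layout assumption (red at even row/column, blue at odd row/column) and the function name are mine, not from the thread. At a green site the two adjacent reds are averaged, and at a blue site the four diagonal reds are.

import numpy as np

def interp_red(raw, y, x):
    # RGGB Bayer layout assumed: red at (even, even), blue at (odd, odd),
    # green everywhere else. (y, x) is assumed to be away from the border.
    r = raw.astype(np.float64)                        # avoid integer overflow when summing
    if y % 2 == 0 and x % 2 == 0:
        return r[y, x]                                # red is measured here
    if y % 2 == 0:                                    # green site, reds to the left/right
        return (r[y, x - 1] + r[y, x + 1]) / 2.0
    if x % 2 == 0:                                    # green site, reds above/below
        return (r[y - 1, x] + r[y + 1, x]) / 2.0
    return (r[y - 1, x - 1] + r[y - 1, x + 1] +       # blue site, four diagonal reds
            r[y + 1, x - 1] + r[y + 1, x + 1]) / 4.0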
fywu85
What happens at the boundary? In practice, do people crop the CMOS data such that the edge cases will never be an issue or do they actually do some clever math at edges to make use of every sensor?
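One common way to handle this (an assumption about typical practice, not a claim about any specific camera pipeline) is to pad the mosaic, e.g. by reflection, run the demosaicking kernel, and then crop the padding back off, so every original sensor value still gets used; simply cropping a few border pixels is the other easy option. A minimal sketch:

import numpy as np

raw = np.arange(36, dtype=np.float32).reshape(6, 6)  # toy 6x6 mosaic
pad = 2                                              # an even pad keeps the Bayer phase aligned
padded = np.pad(raw, pad, mode="reflect")            # mirror the edge rows/columns outward
# demosaic() below is a hypothetical stand-in for whatever kernel is run:
# rgb = demosaic(padded)[pad:-pad, pad:-pad, :]      # crop back to the original 6x6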