In the context of modern graphics technologies like NVIDIA's Deep Learning Super Sampling (DLSS), how does the traditional supersampling method for antialiasing hold up? Does DLSS achieve the same level of quality as supersampling with less computational cost, and if so, what makes it more efficient?
weinatalie
I ended up looking into some other types of antialiasing, like multisample antialiasing. MSAA is essentially a less intensive version of supersample antialiasing, as it supersamples only the edges within the frame as opposed to the entire frame. There are even less intensive techniques, such as temporal antialiasing (TAA). Whereas MSAA samples each pixel multiple times within a single frame to produce an averaged pixel value, TAA samples each pixel once per frame but at different subpixel locations in different frames. The differences between MSAA and TAA can be observed in video game graphics: older games tended to use MSAA because they were forward rendered, meaning each object was shaded in a single pass as it was drawn. Newer games tend to use deferred rendering, where geometry, shading, lighting, effects, etc. are split across multiple passes. Applying MSAA to each pass is quite costly as a result, and TAA is favored despite producing “blurrier” results. Because MSAA only operates on edges, it’s also less effective at dealing with transparency or objects like foliage.
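To make the TAA side concrete, here's a minimal sketch of how per-frame subpixel jitter offsets could be generated. A Halton low-discrepancy sequence is a common choice, but the sequence bases and the 8-frame cycle length here are illustrative assumptions on my part, not any particular engine's implementation:

```python
# Sketch: per-frame subpixel jitter offsets for TAA-style sampling.
# The Halton sequence and 8-frame cycle are illustrative choices.

def halton(index, base):
    """Return the index-th element of the Halton sequence for a given base."""
    result = 0.0
    fraction = 1.0
    i = index
    while i > 0:
        fraction /= base
        result += fraction * (i % base)
        i //= base
    return result

# Offsets in [-0.5, 0.5) pixel units, one per frame in the jitter cycle.
# Each frame's camera is nudged by its offset; the results are blended
# over time, so each pixel effectively gets sampled at many locations.
jitter_offsets = [
    (halton(i + 1, 2) - 0.5, halton(i + 1, 3) - 0.5) for i in range(8)
]

for frame, (dx, dy) in enumerate(jitter_offsets):
    print(f"frame {frame}: jitter = ({dx:+.3f}, {dy:+.3f})")
```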
gfjvgufkt
I can't quite figure out the relationship between removing frequencies above the Nyquist limit and supersampling. Are they two different steps of antialiasing that work together, or are they actually doing the same thing?
0-0-00-0
@gfjvgufkt
I think supersampling is one of the steps in the approach to antialiasing. Supersampling is combined with averaging the supersamples to smooth/blur the picture; the averaging acts like a low-pass filter applied before resampling down to the display resolution, which is what (approximately) removes the frequencies above Nyquist. It also reduces the variance of the sampled outcome.
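Here's a minimal sketch of that idea (the hard-edge scene function and the 4x4 samples-per-pixel rate are just illustrative assumptions): sample the scene at a higher rate, then box-average each block of supersamples down to one output pixel.

```python
import numpy as np

def scene(x, y):
    """A hard diagonal edge: 1.0 above the line y = x, else 0.0."""
    return (y > x).astype(float)

W = H = 8   # target image size in pixels
S = 4       # supersamples per pixel along each axis

# Sample positions at the supersampled resolution (sample centers).
coords = (np.arange(W * S) + 0.5) / S
xs, ys = np.meshgrid(coords, coords)
hires = scene(xs, ys)

# Average each SxS block of supersamples into one output pixel.
# This is the low-pass/variance-reducing step: the staircase edge
# becomes a gradient of intermediate gray values.
image = hires.reshape(H, S, W, S).mean(axis=(1, 3))
print(image)
```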
Alescontrela
I'm curious to see how professional VFX houses optimize their antialiasing code, especially for very high resolution renders.
Zzz212zzZ
Although adding a convolution kernel seems to help reduce aliasing by producing a blurry effect, it can't filter out the high frequencies once the image has already been sampled.
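A quick numerical sketch of why (the 20 Hz sampling rate and 15 Hz signal are illustrative assumptions): once a frequency above Nyquist has been sampled, its samples are bit-for-bit identical to those of a lower-frequency alias, so a blur applied afterwards only smooths the alias rather than removing it.

```python
import numpy as np

fs = 20.0                              # sampling rate (Hz), Nyquist = 10 Hz
t = np.arange(0, 1, 1 / fs)            # one second of sample times
samples = np.sin(2 * np.pi * 15 * t)   # 15 Hz sine, above Nyquist

# The 15 Hz signal folds to -5 Hz: the two produce identical samples.
alias = np.sin(2 * np.pi * -5 * t)
print(np.allclose(samples, alias))     # True

# A post-sampling box blur just attenuates the aliased sinusoid; the
# original 15 Hz content is already gone from the samples, so no kernel
# can separate it out after the fact.
kernel = np.ones(3) / 3
blurred = np.convolve(samples, kernel, mode="same")
print(np.corrcoef(blurred, alias)[0, 1])  # ~1 (up to edge effects)
```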