Lecture 3: Antialiasing
ShonenMind

Depending on the picture, could certain parts of the pixel hold more "weight" than others? For instance, if the samples near the center of the pixel matter more than those near the boundary, we could give them more weight when we downsample: for the whole pixel, we'd take some sort of linear combination of all the samples, weighting samples in more important positions more heavily.
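
Here is a rough sketch of what that weighted downsample could look like, assuming we already have an N x N grid of supersample values per pixel; the tent-shaped weights are just one hypothetical choice (a Gaussian falloff would work too):

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Resolve one pixel from an n x n grid of supersample colors.
// Instead of a plain box average, each sample is weighted by a "tent"
// that peaks at the pixel center and falls off toward the boundary.
// (The tent is just a hypothetical weighting for illustration.)
float resolve_pixel_weighted(const std::vector<float>& samples, int n) {
    float weighted_sum = 0.0f;
    float weight_total = 0.0f;
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < n; ++i) {
            // Sample position inside the pixel, in [0, 1) x [0, 1).
            float x = (i + 0.5f) / n;
            float y = (j + 0.5f) / n;
            // Tent weight: 1 at the center (0.5, 0.5), 0 at the corners.
            float w = (1.0f - std::fabs(x - 0.5f) * 2.0f) *
                      (1.0f - std::fabs(y - 0.5f) * 2.0f);
            weighted_sum += w * samples[j * n + i];
            weight_total += w;
        }
    }
    return weighted_sum / weight_total;
}

int main() {
    // 4x4 supersamples for a pixel half-covered by a white triangle:
    // left two columns inside (1.0), right two columns outside (0.0).
    int n = 4;
    std::vector<float> samples(n * n, 0.0f);
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n / 2; ++i)
            samples[j * n + i] = 1.0f;
    std::printf("weighted resolve: %f\n", resolve_pixel_weighted(samples, n));
    // A plain box filter gives exactly 0.5 here; the tent agrees for this
    // symmetric case, but the two differ once coverage is concentrated
    // near the center or near the boundary of the pixel.
    return 0;
}
```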

muuncakez

I had a similar question and wanted to follow up: wouldn't it be better to sample just the edges, as in the red triangle example from the beginning slides (correct me if I am wrong)? That way only the edges are blurred and the rest of the photo is left as-is. Or does this method lead to other issues for more complicated photos?

jinweiwong

I wonder whether this randomized algorithm achieves better results than deterministic algorithms as we take more and more sample points, and if so, why.
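
One way to build intuition is a small coverage experiment: estimate how much of a pixel a straight edge covers using a regular sample grid versus uniformly random samples, and watch the error as the sample count grows. The specific edge and sample counts below are made up just for illustration:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Inside test for a half-plane edge cutting through the unit pixel.
// The true covered area of 2x + y < 1.3 over [0,1]^2 is exactly 0.4.
bool inside(double x, double y) { return 2.0 * x + y < 1.3; }

// Regular n x n grid of sample points (deterministic).
double coverage_grid(int n) {
    int hits = 0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            if (inside((i + 0.5) / n, (j + 0.5) / n)) ++hits;
    return double(hits) / (n * n);
}

// n*n uniformly random sample points (randomized / Monte Carlo).
double coverage_random(int n, std::mt19937& rng) {
    std::uniform_real_distribution<double> u(0.0, 1.0);
    int hits = 0;
    int total = n * n;
    for (int s = 0; s < total; ++s)
        if (inside(u(rng), u(rng))) ++hits;
    return double(hits) / total;
}

int main() {
    std::mt19937 rng(42);
    const double truth = 0.4;
    for (int n : {2, 4, 8, 16, 32, 64}) {
        double g = coverage_grid(n);
        double r = coverage_random(n, rng);
        std::printf("%2d x %2d samples  grid err %.5f  random err %.5f\n",
                    n, n, std::fabs(g - truth), std::fabs(r - truth));
    }
    // The random estimate's error shrinks roughly like 1/sqrt(#samples);
    // its real benefit is turning structured aliasing patterns into noise.
    // For a single straight edge like this, the regular grid converges too.
    return 0;
}
```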

i-geng (Staff)

@ShonenMind That's an interesting idea for supersampling! Whether certain areas of a pixel should be given more weight when averaging samples may also depend on what assumptions you make about the signal you are sampling (the pattern you are rasterizing).

i-geng (Staff)

@muuncakez Edges are typically areas where the signal is changing very quickly (high frequency), which is why we see blurring around edges when we perform antialiasing. In the triangle example, we do have very clear edges in the original signal. However, for more complicated signals (and photos), it may be difficult to make general assumptions about where the edges are.
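
For a case like the triangle, one way to act on that idea is adaptive supersampling: take a few cheap probe samples per pixel first, and only pay for a dense sample grid where the probes disagree, i.e. where the pixel straddles an edge. A minimal sketch, with a hypothetical fixed triangle standing in for whatever primitive the rasterizer is actually drawing:

```cpp
#include <cstdio>

// Hypothetical coverage test: is the point inside a fixed triangle?
// (Stands in for whatever point-in-triangle test the rasterizer uses.)
bool inside_triangle(double x, double y) {
    // Triangle with vertices (1,1), (7,2), (3,7), via edge functions.
    auto edge = [](double ax, double ay, double bx, double by,
                   double px, double py) {
        return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
    };
    double e0 = edge(1, 1, 7, 2, x, y);
    double e1 = edge(7, 2, 3, 7, x, y);
    double e2 = edge(3, 7, 1, 1, x, y);
    return (e0 >= 0 && e1 >= 0 && e2 >= 0) || (e0 <= 0 && e1 <= 0 && e2 <= 0);
}

// Adaptive supersampling: probe the four pixel corners and the center.
// If they all agree, the pixel is (almost certainly) fully inside or
// fully outside, so use that value directly. Only pixels where the
// probes disagree -- pixels straddling an edge -- get the full n x n
// supersample treatment.
double shade_pixel_adaptive(int px, int py, int n) {
    double probes[5][2] = {{0.0, 0.0}, {1.0, 0.0}, {0.0, 1.0},
                           {1.0, 1.0}, {0.5, 0.5}};
    int hits = 0;
    for (auto& p : probes)
        hits += inside_triangle(px + p[0], py + p[1]) ? 1 : 0;
    if (hits == 0) return 0.0;   // fully outside: no supersampling needed
    if (hits == 5) return 1.0;   // fully inside: no supersampling needed
    // Edge pixel: do the expensive n x n supersample average.
    int covered = 0;
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i)
            covered += inside_triangle(px + (i + 0.5) / n,
                                       py + (j + 0.5) / n) ? 1 : 0;
    return double(covered) / (n * n);
}

int main() {
    // Rasterize an 8x8 framebuffer; only edge pixels pay the 16x cost.
    for (int y = 7; y >= 0; --y) {
        for (int x = 0; x < 8; ++x)
            std::printf("%.2f ", shade_pixel_adaptive(x, y, 4));
        std::printf("\n");
    }
    return 0;
}
```

The corner-and-center probe is only a heuristic; it can miss very thin features that slip between probes, which is part of why uniform supersampling is the simpler default.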
