If we pre-filter the image, doesn't the image we rasterize simply become lower quality? I can imagine that a blurred image is easier to rasterize, but isn't that just trading away quality?
Keep in mind that in this case we're sampling at a rate below the max frequency of the true signal/image. Blurring does have the connotation of reduced quality, but here we apply the blur to remove the frequencies our lower sampling rate can't represent. Those high frequencies would be lost either way; without the pre-filter they don't just disappear, they masquerade as spurious low-frequency artifacts (aliasing). In that sense the resulting image is actually of higher quality.
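Here's a minimal numpy sketch of the idea, using a simple box filter (a real pipeline would typically use a Gaussian) and a hypothetical helper name. A 1-pixel checkerboard is the highest frequency the grid can hold; naive 2x subsampling makes the pattern alias into a solid image, while pre-filtering first yields the correct average gray:

```python
import numpy as np

def box_prefilter_downsample(img, factor):
    """Average each factor x factor block (a box pre-filter),
    keeping one sample per block -- i.e. remove frequencies above
    the new, lower Nyquist limit *before* sampling."""
    h, w = img.shape
    h2, w2 = h // factor, w // factor
    blocks = img[:h2 * factor, :w2 * factor].reshape(h2, factor, w2, factor)
    return blocks.mean(axis=(1, 3))

# 1-pixel checkerboard: the highest frequency this grid can represent.
checker = (np.indices((8, 8)).sum(axis=0) % 2).astype(float)

# Naive subsampling keeps every other pixel: it lands on all-zero
# samples, so the pattern aliases into a solid black image.
naive = checker[::2, ::2]

# Pre-filtering first averages each 2x2 block to 0.5 -- the correct
# "flat gray" appearance of the pattern at the lower resolution.
filtered = box_prefilter_downsample(checker, 2)
```

The naive result isn't just blurrier, it's wrong: the checkerboard vanishes entirely, which is exactly the kind of information corruption the pre-filter prevents.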
For more about the relationship between high frequencies and aliasing, see the lecture material on the Nyquist-Shannon sampling theorem. Alternatively, Reddit (lol?) suggested this online textbook: http://www.dspguide.com/ch3/2.htm