Lecture 3: Antialiasing (60)
muuncakez

Is the size of a picture and its components the only factor determining which method is better?

buggy213

It depends on the extent of your filter as well. For a general filter of size $k \times k$ on a signal of size $n \times n$, you'd need $O(k^2 n^2)$ multiplications, so for large filters and high-resolution images this can get quite expensive. You can do tricks for common filters (e.g. Gaussian) by noticing that they are separable and applying them along each axis in turn, which is quite a bit faster ($O(n^2 k)$). Also interesting is the fact that libraries like scipy will automatically choose whichever method is faster using empirical measurements (https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.choose_conv_method.html)
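
To make the separability point concrete, here's a minimal sketch (the sizes `n`, `k`, and `sigma` are hypothetical, just for illustration) showing that two 1D Gaussian passes give the same result as one direct 2D pass:

```python
import numpy as np
from scipy import signal

n, k, sigma = 512, 15, 2.0  # hypothetical sizes
rng = np.random.default_rng(0)
image = rng.random((n, n))

# A 2D Gaussian kernel is the outer product of two 1D Gaussians,
# which is exactly what makes it separable.
x = np.arange(k) - (k - 1) / 2
g1d = np.exp(-x**2 / (2 * sigma**2))
g1d /= g1d.sum()
g2d = np.outer(g1d, g1d)

# Direct 2D convolution: O(k^2 n^2) multiplications.
direct = signal.convolve2d(image, g2d, mode='same')

# Separable version: one 1D pass per axis, O(n^2 k) in total.
rows = signal.convolve2d(image, g1d[np.newaxis, :], mode='same')
separable = signal.convolve2d(rows, g1d[:, np.newaxis], mode='same')

assert np.allclose(direct, separable)
```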

matthewlee626

@buggy213: Thanks for the link! I was going to comment that we could do some theoretical analysis, but in practice it might just come down to constant factors specific to the implementation and the computer. I found this quote interesting:

"There is a 95% chance of this ratio being less than 1.5 for 1D signals and a 99% chance of being less than 2.5 for 2D signals."

which I thought was a fairly rigorous heuristic for something that is defined experimentally!
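
For reference, a quick sketch of using that function (array sizes here are hypothetical); passing `measure=True` times both methods on the actual inputs instead of relying on the precomputed heuristic:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
image = rng.random((512, 512))  # hypothetical sizes
kernel = rng.random((15, 15))

# Heuristic choice based on precomputed measurements: returns 'fft' or 'direct'.
print(signal.choose_conv_method(image, kernel))

# measure=True runs both methods on these exact inputs and also
# returns the measured durations.
method, times = signal.choose_conv_method(image, kernel, measure=True)
print(method, times)
```

This is the same choice `scipy.signal.convolve` makes under the hood when `method='auto'`.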
