Lecture 3: Sampling and Aliasing
reinaw1012

Empirically, do we usually prefer option 2 over option 1, and if so, why? Is the runtime of option 2 better than that of option 1 (i.e., is multiplication in the frequency domain faster than convolution in the time domain)?

jonathanlu31

Sorry Reina, not directly related to your question, but I found a cool article called "The Fourier Transform - Diagonalizing The Convolution Operator" that shows how the Fourier transform can be thought of as a change of basis, as in linear algebra, except that the operator (convolution) acts on signals/functions instead of vectors. In the time domain, the basis vectors are the Kronecker or Dirac deltas (discrete or continuous time, respectively); after the Fourier transform, the basis vectors are the sines/cosines/complex exponentials. This disentangles the frequencies, so, just as with eigenvectors, the convolution operation now simply scales each frequency individually. The article explains it much better, but I thought it was a cool parallel and that it might be fun to share.
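To make the diagonalization concrete, here is a small NumPy sketch (my own example, not from the article) checking that the DFT turns circular convolution into elementwise multiplication of the spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8
x = rng.standard_normal(N)
h = rng.standard_normal(N)

# Circular convolution computed directly in the time domain.
circ = np.array([sum(x[k] * h[(n - k) % N] for k in range(N))
                 for n in range(N)])

# Convolution theorem: the DFT diagonalizes circular convolution,
# so in the frequency domain it is just elementwise multiplication.
via_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)).real

assert np.allclose(circ, via_fft)
```

Each frequency bin of `x` is scaled by the corresponding bin of `h`, which is exactly the "eigenvector" picture: the complex exponentials are eigenfunctions of convolution.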

stexus

@reinaw1012 I did a little poking around, and both approaches are used in the real world. In general, large convolutions are faster when you transform, multiply, and inverse transform, because we have access to the fast Fourier transform (FFT). Take a look at SciPy's implementation of FFT convolution (`scipy.signal.fftconvolve`).
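As a rough illustration of the transform-multiply-inverse route, here is a minimal NumPy sketch of FFT-based linear convolution (zero-pad to the full output length, multiply spectra, inverse transform). The helper name `fft_convolve` is my own; SciPy's `fftconvolve` does the same thing with more care (e.g. choosing fast transform sizes):

```python
import numpy as np

def fft_convolve(x, h):
    """Linear convolution via the FFT: zero-pad both inputs to the
    full output length, multiply spectra, inverse transform."""
    n = len(x) + len(h) - 1          # length of the full linear convolution
    X = np.fft.rfft(x, n)            # rfft(x, n) zero-pads x to length n
    H = np.fft.rfft(h, n)
    return np.fft.irfft(X * H, n)    # real inverse transform of length n

rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
h = rng.standard_normal(50)

# Matches direct time-domain convolution up to floating-point error.
assert np.allclose(fft_convolve(x, h), np.convolve(x, h))
```

The direct method costs O(NM) multiplications for inputs of lengths N and M, while the FFT route costs O(n log n) with n = N + M - 1, which is why it wins for large kernels.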

LucasArmand

This strikes me as very interesting, since convolutional neural networks, which are traditionally used for image classification problems, use convolution to generate inputs for feed-forward networks (FFNs). Is there then some connection between multiplication in the frequency domain and the idea of image classification?
