Lecture 4: Transforms (35)
jayc809

The performance improvement gained from pre-multiplying transform matrices becomes evident when there are many points/pixels to transform. This is often the more efficient approach because computers handle one big matrix operation better than a sequence of smaller ones: matrix multiplication parallelizes well (e.g. with SIMD), and some chips are designed specifically for such workloads (e.g. GPUs, TPUs). This idea is quite similar to convolution from previous lectures, where instead of convolving the image with each filter sequentially, it is more efficient to combine the filters into one and then convolve the image with the combined result.
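
A minimal numpy sketch of this point (the specific transforms and the point count are made-up for illustration, not from the lecture):

    import numpy as np

    # Hypothetical example transforms in 2D homogeneous coordinates:
    # a uniform scale, a 45-degree rotation, and a translation.
    S = np.diag([2.0, 2.0, 1.0])
    theta = np.pi / 4
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    T = np.array([[1.0, 0.0,  3.0],
                  [0.0, 1.0, -1.0],
                  [0.0, 0.0,  1.0]])

    # One million points as a (3, N) array of homogeneous coordinates.
    pts = np.random.rand(3, 1_000_000)
    pts[2] = 1.0  # set w = 1 for every point

    # Sequential: apply each transform to every point, one pass per transform.
    out_seq = T @ (R @ (S @ pts))

    # Pre-multiplied: combine the small matrices once (cheap, 3x3 products),
    # then hit all the points with a single big multiply.
    M = T @ R @ S
    out_pre = M @ pts

    assert np.allclose(out_seq, out_pre)

Both versions compute the same result; the pre-multiplied one replaces three passes over a million points with one pass plus two 3x3 products.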

Zzz212zzZ

Because matrix multiplication satisfies the associative law, we can write it in this form. When computing the composite matrix of all the transforms, the multiplication order runs from the rightmost transformation matrix to the leftmost.
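
Spelled out (the $A_1, \dots, A_n$ notation is mine, not from the slide): if transforms $A_1, A_2, \dots, A_n$ are applied to a point $\mathbf{x}$ in that order, then

    M = A_n \cdots A_2 A_1, \qquad
    M\mathbf{x} = A_n(\cdots(A_2(A_1\mathbf{x}))\cdots)

so the first transform applied sits rightmost in the product, next to the point.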
