Is this suggesting that the transform from the image space to the texture space is a convolution? Is this generally true (can convolutions generally represent all transformations)?
To me, this slide isn't fully "transforming" image space to texture space. It's sampling the image space and interpolating between the sampled points, with the interpolation defined by some reconstruction filter.
It isn't true that convolutions can represent all transformations. A convolution can be written as multiplication of the image by a circulant matrix, and matrix multiplication is linear, so a convolution can only represent linear transformations. For example, a convolution couldn't capture a perspective transform, since perspective projection is nonlinear.
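To make the "convolution = circulant matrix" point concrete, here's a toy 1D sketch (the signal and kernel are made up for illustration): circular convolution of a signal with a kernel gives the same result as multiplying the signal by the circulant matrix built from that kernel, which is exactly why the operation is linear.

```python
import numpy as np

# Toy signal and kernel (kernel zero-padded to the signal's length).
signal = np.array([1.0, 2.0, 3.0, 4.0])
kernel = np.array([0.25, 0.5, 0.25, 0.0])

# Circulant matrix: entry C[i, j] = kernel[(i - j) mod n], i.e. each row
# is a circular shift of the reversed kernel.
n = len(signal)
C = np.array([np.roll(kernel[::-1], i + 1) for i in range(n)])

# Circular convolution computed independently via the FFT, for comparison.
conv_fft = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

print(np.allclose(C @ signal, conv_fft))  # True: matrix multiply == convolution
```

Since the whole operation is one fixed matrix `C` applied to the input, it can't express anything beyond a linear map, which is the limitation mentioned above.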
I'm not sure if I completely follow what this slide is explaining.
From what I understand, the 2D function f(x, y) gives us information about our image at (x, y), and we use the samples we get from f(x, y) to interpolate values at other locations.
What I don't quite understand is why can we draw the desired samples at (u, v), which is in texture space, with our 2D function f.
From my understanding, for each location (u, v) we construct a new value by evaluating f(x, y), but with a filter applied first (to help with aliasing?), and then we assign this new value to location (u, v) in the new image?
@jordanwyli The reason we can't directly read off samples at (u, v) is that the image doesn't provide that information. Within a rasterized triangle, we only have information at integer pixel locations. If we want the "subpixel" color, we need to estimate it by looking at the closest integer pixels (interpolating).
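A small sketch of that "estimate from the closest integer pixels" step: bilinear interpolation blends the four surrounding pixel values. The tiny 2x2 image and the lookup point here are made up for illustration.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img (H x W) at real-valued (x, y) via bilinear interpolation."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    # Clamp neighbors so lookups at the image border stay in bounds.
    x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
    tx, ty = x - x0, y - y0
    # Blend horizontally along both rows, then blend the rows vertically.
    top = (1 - tx) * img[y0, x0] + tx * img[y0, x1]
    bot = (1 - tx) * img[y1, x0] + tx * img[y1, x1]
    return (1 - ty) * top + ty * bot

img = np.array([[0.0, 1.0],
                [2.0, 3.0]])
print(bilinear_sample(img, 0.5, 0.5))  # midpoint of the four pixels -> 1.5
```

At integer (x, y) this returns the stored pixel exactly; in between, it's the weighted guess described above.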
@kevinliu I'm not sure aliasing is relevant here (which function would be confused ("aliased") with which?). Applying the continuous filter converts our discrete function into a continuous one, which matters because a continuous function can be sampled at any real value.