Ben mentioned procedurally generated textures in lecture on this slide; a (pretty cool) example of something that does use procedurally generated textures rather than stored bitmap files is .kkrieger (https://en.wikipedia.org/wiki/.kkrieger), a fully texture-mapped 3D FPS game/demo that fit in under 96 KiB thanks to extensive use of procedurally generated assets.
I'm a little confused by the reconstruction of a continuous 2D function. Is the slide referring to something like a smoothing filter such as a Gaussian, or is it talking about bilinear/bicubic interpolation? Is there a particular reason we want a continuous function instead of a discrete step (nearest-neighbor) function? I assume it's to prevent aliasing, but isn't there a tradeoff in detail when we remove high frequencies from a texture?
I think the reconstruction step is there to ensure that we can sample the texture correctly in the next step. Because the texture is given as discrete data points, it can't be sampled at arbitrary locations without some interpolation: if a screen point doesn't exactly line up with one of the texture sample points, there's no value to assign it. To fix this, we reconstruct the continuous function, and then any sample location yields a well-defined value.
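To make that concrete, here's a minimal sketch of one common reconstruction, bilinear interpolation (function names and coordinate conventions are my own, not from the slides): a sample at an arbitrary (u, v) in [0, 1]² is a weighted average of the four nearest stored texels, so every location gets a well-defined value.

```python
import numpy as np

def sample_bilinear(tex, u, v):
    """Reconstruct a continuous texture value at (u, v) in [0, 1]^2
    by bilinearly interpolating the four nearest texels.
    (Hypothetical helper for illustration, not the course's code.)"""
    h, w = tex.shape[:2]
    # Map continuous coordinates into texel space.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend horizontally along the top and bottom rows, then vertically.
    top = (1 - fx) * tex[y0, x0] + fx * tex[y0, x1]
    bot = (1 - fx) * tex[y1, x0] + fx * tex[y1, x1]
    return (1 - fy) * top + fy * bot

# A sample that falls between texels gets a weighted average
# instead of having no stored value to match.
tex = np.array([[0.0, 1.0],
                [1.0, 0.0]])
print(sample_bilinear(tex, 0.5, 0.5))  # → 0.5, the average of all four texels
```

With nearest-neighbor (a discrete step function) the same query would just snap to one texel, which is what produces the blocky artifacts bilinear filtering avoids.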