For most of this course we've engaged with anti-aliasing as a method for suppressing aliasing artifacts (a form of sampling noise) within a single time step of a signal, i.e., a single image. Are there methods for dynamically anti-aliasing corresponding image regions across frames of a video?
I know that for scenes assumed to remain relatively still, one can simply treat pixels at the same location as corresponding across frames and aggregate (e.g., average) them, as in the sketch below.
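To be concrete about the static case, here's a minimal NumPy sketch (my own illustration, not anything from the course) that stacks pre-aligned frames and averages co-located pixels:

```python
import numpy as np

def temporal_average(frames):
    """Aggregate co-located pixels across frames of a (nearly) static scene.

    frames: list of HxWx3 uint8 arrays, assumed already aligned.
    Averaging suppresses artifacts that vary from frame to frame while
    preserving the stable underlying signal.
    """
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return stack.mean(axis=0).astype(np.uint8)
```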
However, I'm wondering whether (perhaps with ML) one could find dynamic correspondences across frames in order to anti-alias. For example, optical flow or object tracking could be used to estimate corresponding pixels across frames, such as by tracking the tennis racket in this slide, and pixels determined to match could then be aggregated.
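Here's a rough sketch of what I'm imagining, using OpenCV's Farneback dense optical flow. The overall structure (warping neighbor frames onto a reference frame, then averaging) is just my guess at how this could look, and the `err_thresh` rejection test is a made-up heuristic for discarding pixels where the estimated correspondence is probably wrong (e.g., occlusions):

```python
import cv2
import numpy as np

def flow_guided_average(frames, ref_idx, err_thresh=20.0):
    """Average one frame with flow-aligned neighbors (illustrative sketch).

    frames: list of HxWx3 uint8 BGR frames; ref_idx selects the frame
    being anti-aliased. err_thresh (hypothetical) rejects pixels whose
    warped value disagrees strongly with the reference.
    """
    ref = frames[ref_idx].astype(np.float32)
    ref_gray = cv2.cvtColor(frames[ref_idx], cv2.COLOR_BGR2GRAY)
    h, w = ref_gray.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    acc = ref.copy()
    weight = np.ones((h, w, 1), np.float32)

    for i, frame in enumerate(frames):
        if i == ref_idx:
            continue
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense flow from the reference frame to this neighbor frame.
        flow = cv2.calcOpticalFlowFarneback(ref_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        # Sample the neighbor at each reference pixel's estimated match,
        # pulling it back onto the reference frame's pixel grid.
        warped = cv2.remap(frame.astype(np.float32),
                           grid_x + flow[..., 0],
                           grid_y + flow[..., 1],
                           cv2.INTER_LINEAR)
        # Only aggregate pixels whose correspondence looks plausible.
        mask = (np.abs(warped - ref).mean(axis=2, keepdims=True)
                < err_thresh).astype(np.float32)
        acc += warped * mask
        weight += mask

    return (acc / weight).astype(np.uint8)
```

Rejecting mismatched pixels rather than blindly averaging seems important here, since a bad flow estimate would otherwise smear content across object boundaries.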
I feel like this would be much more computationally intensive and error-prone than existing methods, but I'm curious whether something like it has been done before.