When I initially saw this slide I assumed it was very similar to NVIDIA's DLSS3, but I was very wrong. To clarify my misconception:
Reprojection uses motion vectors from an optical flow API. From these, an estimate is made of where parts of the image will be in the next frame, and the current frame is warped accordingly, which can produce warped lines or similar artifacts.
DLSS3 frame generation, by contrast, synthesizes a completely new frame; it is not simply the same frame repeated and repositioned.
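The warping idea behind reprojection can be sketched in a few lines. This is a minimal, illustrative example, not how DLSS3 or any shipping reprojection pass actually works: it assumes a per-pixel motion-vector field is already available (real systems get subpixel flow from hardware optical flow), uses nearest-neighbor sampling, and papers over disocclusions by clamping at the frame border. The function name `reproject` is hypothetical.

```python
import numpy as np

def reproject(frame, motion):
    """Estimate the next frame by warping the current one.

    frame:  (H, W) or (H, W, C) array of pixel values.
    motion: (H, W, 2) array of per-pixel (dy, dx) motion vectors.

    Each output pixel samples the current frame at the location it is
    estimated to have moved from (a backward warp). Sources that fall
    outside the frame are clamped to the border, which is one reason
    simple reprojection produces smeared or warped edges in practice.
    """
    h, w = frame.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # Round to the nearest source pixel; real implementations interpolate.
    src_y = np.clip(np.round(ys - motion[..., 0]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - motion[..., 1]).astype(int), 0, w - 1)
    return frame[src_y, src_x]
```

With a uniform downward motion of one pixel, every row of the output is the row above it from the input, and the top row is duplicated by the border clamp, a toy version of the edge artifacts described above.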