When demosaicking/denoising animated images such as video, another important factor is temporal stability. If each frame is denoised in isolation, temporal artifacts can appear, e.g. flickering where fine structures like those fence lines merge or deform differently from frame to frame. To prevent this, we can feed neural networks data from neighboring frames as additional inputs so they get some temporal context. It might be less directly related, but this seems to be an active topic for real-time ray tracing as well: https://research.nvidia.com/publication/2020-05_Neural-Temporal-Adaptive
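One simple way to provide that temporal context is to concatenate a small window of neighboring frames along the channel axis before passing them to the network. This is just a minimal sketch of the idea using NumPy; `stack_temporal_window` is a hypothetical helper, not from any specific paper, and real systems often add motion compensation or recurrent state on top of this.

```python
import numpy as np

def stack_temporal_window(frames, t, radius=1):
    """Build a per-frame network input by concatenating frame t
    with its temporal neighbors along the channel axis.
    frames: list of H x W x C arrays. Hypothetical helper for
    illustration only."""
    T = len(frames)
    window = []
    for dt in range(-radius, radius + 1):
        # Clamp indices at the sequence boundaries so the first and
        # last frames reuse their nearest available neighbor.
        idx = min(max(t + dt, 0), T - 1)
        window.append(frames[idx])
    # Result has C * (2 * radius + 1) channels.
    return np.concatenate(window, axis=-1)

# Three 4x4 RGB frames -> a 4x4x9 input tensor for the middle frame.
frames = [np.full((4, 4, 3), i, dtype=np.float32) for i in range(3)]
x = stack_temporal_window(frames, t=1, radius=1)
print(x.shape)  # (4, 4, 9)
```

A denoiser trained on such stacked inputs can learn to average out noise across frames and keep structures consistent over time, at the cost of a slightly larger input layer.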