I wonder if this stereo approach could be applied to raytracing, in the sense of having two separate cameras render the same scene in parallel to create the stereo effect in real time?
jacklishufan
Hi Goodell, in principle that is definitely feasible. In fact, this is how traditional CGI/animation for 3D movies is rendered: given a fixed scene, the filmmaker renders two separate camera views with a raytracer. However, in the context of VR/AR, real-time raytracing is too expensive even with top-tier consumer-grade Nvidia GPUs.
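To make the two-camera idea concrete, here is a minimal, hypothetical sketch (not from the original reply) of raytracing the same toy scene from two eye positions separated by the interpupillary distance. The scene (a single sphere), resolution, and all names are illustrative assumptions, and the per-pixel Python loop is far too slow for real-time use; it only shows that a stereo pair is simply two independent renders of one scene per frame.

```python
import numpy as np

WIDTH, HEIGHT = 160, 120
FOV = np.deg2rad(60.0)
IPD = 0.064                                   # ~64 mm interpupillary distance (assumed)
SPHERE_CENTER = np.array([0.0, 0.0, -3.0])    # toy scene: one sphere in front of the camera
SPHERE_RADIUS = 1.0
LIGHT_DIR = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)

def trace(origin, direction):
    """Shade a single ray against one Lambertian sphere; return a gray value."""
    oc = origin - SPHERE_CENTER
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - SPHERE_RADIUS ** 2
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return 0.1                            # ray misses: background shade
    t = (-b - np.sqrt(disc)) / 2.0
    if t < 0.0:
        return 0.1
    hit = origin + t * direction
    normal = (hit - SPHERE_CENTER) / SPHERE_RADIUS
    return max(float(np.dot(normal, LIGHT_DIR)), 0.0)

def render(eye_origin):
    """Render one full frame from a single eye position (camera looks down -z)."""
    aspect = WIDTH / HEIGHT
    half_h = np.tan(FOV / 2.0)
    image = np.zeros((HEIGHT, WIDTH))
    for y in range(HEIGHT):
        for x in range(WIDTH):
            # Map the pixel to normalized device coordinates, then to a view ray.
            u = (2.0 * (x + 0.5) / WIDTH - 1.0) * half_h * aspect
            v = (1.0 - 2.0 * (y + 0.5) / HEIGHT) * half_h
            direction = np.array([u, v, -1.0])
            direction /= np.linalg.norm(direction)
            image[y, x] = trace(eye_origin, direction)
    return image

# Stereo pair: the same scene rendered from two camera origins offset by half the IPD.
head = np.array([0.0, 0.0, 0.0])
left_eye = render(head + np.array([-IPD / 2.0, 0.0, 0.0]))
right_eye = render(head + np.array([+IPD / 2.0, 0.0, 0.0]))
print(left_eye.shape, right_eye.shape)        # two independent renders per frame
```

The cost concern in the reply follows directly from the last three lines: every frame requires two complete raytraced renders, so a stereo VR/AR headset roughly doubles the per-frame raytracing work at a refresh rate of 90 Hz or more.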