I think one potential way to get lower latency is to predict the user's likely movements and precompute the corresponding scenes in advance. The major challenge with this approach would be how precisely you need to predict the movements so that a precomputed scene approximately matches what the user would actually see given their exact motion.
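A minimal sketch of that idea (hypothetical, not from any real VR SDK): extrapolate head yaw with a constant-velocity model, precompute frames for a few candidate yaws bracketing the prediction, then serve whichever precomputed frame is closest once the actual pose arrives.

```python
def predict_yaw(yaw_deg, yaw_rate_dps, lookahead_s):
    """Dead-reckoning: assume the head keeps turning at its current rate."""
    return yaw_deg + yaw_rate_dps * lookahead_s

def precompute_candidates(predicted_yaw, spread_deg=2.0, n=5):
    """Pretend-render frames for n yaws bracketing the predicted yaw.

    Here a 'frame' is just a label; a real renderer would rasterize each view.
    """
    step = 2 * spread_deg / (n - 1)
    yaws = [predicted_yaw - spread_deg + i * step for i in range(n)]
    return {y: f"frame@{y:.1f}" for y in yaws}

def serve(candidates, actual_yaw):
    """Pick the precomputed frame whose yaw is closest to the actual pose."""
    best = min(candidates, key=lambda y: abs(y - actual_yaw))
    return candidates[best]

# Head at 10 deg yaw, turning at 100 deg/s, predicting 20 ms ahead -> ~12 deg.
cands = precompute_candidates(predict_yaw(10.0, 100.0, 0.020))
print(serve(cands, 12.4))  # nearest candidate to the actual 12.4 deg pose
```

The tension the comment points out shows up directly in `spread_deg` and `n`: a wider spread or more candidates tolerates worse prediction but multiplies the rendering work done speculatively.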
han20192019
So I wonder: is hardware technology one of the major difficulties in producing VR glasses? I understand that the production needs theory to back it up, but does it also impose very high requirements on the hardware side?
ja5087
This is pretty crazy considering modern display pipelines already have ~15 ms of lag from signal to render. I wonder what the current bottlenecks are.