I remember the HTC Vive had plenty more sensors embedded along its exterior than the Oculus shown here. To what extent do more sensors improve the experience of the headset?
kujjwal
I'm curious about this as well, but I think there's definitely a tradeoff. With more sensors you can build a more accurate estimate of the 3D structure of your surroundings, but the CPU cycles and processing power needed to handle the extra sensor data are substantial, especially when running some version of a SLAM algorithm. So there's a balance between how responsive you want the VR experience to be and how accurately the headset understands the user's surroundings.
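To make the tradeoff concrete, here's a toy cost model (all constants are made up for illustration, not measured): per-frame feature extraction scales roughly linearly with camera count, while cross-camera feature matching scales with the number of camera pairs, so it grows faster.

```python
# Toy model of per-frame visual-SLAM cost vs. number of cameras.
# extract_us and match_us are hypothetical per-unit costs in microseconds.

def slam_cost_per_frame(num_cameras, extract_us=500, match_us=120):
    """Rough per-frame CPU cost (microseconds) for a multi-camera SLAM front end."""
    extraction = num_cameras * extract_us          # linear in camera count
    pairs = num_cameras * (num_cameras - 1) // 2   # every pair of overlapping views
    matching = pairs * match_us                    # grows quadratically
    return extraction + matching

for n in (2, 4, 6):
    print(n, slam_cost_per_frame(n))
```

Doubling the cameras from 2 to 4 here more than doubles the per-frame cost, which is the kind of scaling that forces the responsiveness-vs-accuracy balance mentioned above.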
NothernSJTU
More sensors allow for better detection of the user's movements in three-dimensional space. This is crucial for achieving high precision in tracking the position and orientation of the headset. The HTC Vive uses a system of external "lighthouse" base stations that emit infrared signals detected by the sensors on the headset. This setup allows for very accurate tracking over a larger area and minimizes tracking errors.
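The core measurement in a Lighthouse-style system is simple: the base station emits a sync flash, then sweeps a laser plane across the room at a known rotation rate, and the delay before a photodiode on the headset sees the laser encodes one angle. A simplified sketch (the 60 Hz rotor rate matches Lighthouse v1; everything else is idealized):

```python
import math

# Simplified Lighthouse-style angle measurement: time between the sync
# flash and the laser hitting a sensor, divided by the rotation period,
# gives the sensor's angular position relative to the base station.

SWEEP_PERIOD_S = 1.0 / 60.0  # rotor spins at ~60 Hz (Lighthouse v1)

def sweep_angle(t_sync, t_hit):
    """Angle of the sensor relative to the base station, in radians."""
    dt = t_hit - t_sync
    return 2.0 * math.pi * dt / SWEEP_PERIOD_S

# A sensor hit a quarter of a period after the sync flash sits at 90 degrees.
angle = sweep_angle(0.0, SWEEP_PERIOD_S / 4.0)
print(math.degrees(angle))  # 90.0
```

With many sensors on the headset each reporting such angles from two sweep axes and two base stations, the pose solver has a heavily overdetermined system, which is why the tracking stays accurate even when some sensors are shadowed.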
pranavkolluri
Yeah, as far as I recall, the Vive is still more accurate in terms of tracking, particularly whole-body tracking, since the lighthouses can be paired with body-worn trackers so that you can effectively do motion capture. With the Quest, keep in mind that it only needs to be "good enough", since cost is a massive consideration for a device in its class. The Vive, IIRC, was $799 new without lighthouses (I think). The Quest 2 launched at $300 and can now routinely be found at $200-$250.
anavmehta12
One drawback of inside-out tracking is that it struggles with occlusion: it can lose track of the controllers if the user doesn't keep them in the cameras' view.
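A common mitigation for this (not necessarily Meta's exact implementation) is IMU dead reckoning: while the cameras can't see the controller's LEDs, the tracker integrates the controller's accelerometer readings to keep a short-lived position estimate, then snaps back to the optical fix once the LEDs reappear. Drift accumulates quickly, which is why occluded tracking only holds up for a moment. A minimal one-axis sketch:

```python
# IMU dead-reckoning sketch (one axis, gravity and bias ignored):
# integrate acceleration to velocity, then velocity to position,
# over fixed timesteps while the optical fix is unavailable.

def dead_reckon(pos, vel, accel_samples, dt):
    """Propagate position/velocity from acceleration samples (m/s^2) at step dt (s)."""
    for a in accel_samples:
        vel = vel + a * dt   # integrate acceleration -> velocity
        pos = pos + vel * dt # integrate velocity -> position
    return pos, vel

# Constant 1 m/s^2 acceleration over 100 steps of 10 ms (~1 second occluded).
pos, vel = dead_reckon(0.0, 0.0, [1.0] * 100, 0.01)
```

Because position is a double integral of acceleration, any sensor noise or bias grows quadratically in the estimate, so the fusion filter trusts this path less and less the longer the controller stays out of frame.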
dhruvchowdhary
Considering the balance between cost and performance as mentioned by pranavkolluri and the occlusion issues highlighted by anavmehta12, how do developers optimize the placement of cameras and LEDs to ensure the system is cost-effective yet minimizes blind spots? And could future designs possibly integrate environmental feedback to improve tracking when controllers are out of sight?
tom5079
I have this controller but I never noticed the infrared LEDs, this is pretty interesting
yykkcc
I think a relatively uniform environment, or one lacking distinctive features, can be a minor problem for inside-out tracking, but a larger number of infrared LEDs can compensate for that at the cost of more computation.