Lecture 23: Virtual Reality (17)
colinsteidtmann

My first thought was, "wow, how do we physically send different images into each eye?" I assumed VR headsets had one display, but I think they actually have two screens, one for each eye. Not 100% sure though (I've never used a headset myself).

el-refai

@colinsteidtmann I do believe it has to be two images. If you consider how vision works for us, each eye takes in its own "image" and our brain combines the two together. This is why what you see with your left eye closed and your right eye open is slightly different from what you see with your right eye closed and your left eye open.

el-refai

An interesting thing about parallax is that if the user is facing head on and moves straight forward, the views we get are not substantially different: features barely move relative to one another, which makes pose estimation from images alone a lot more difficult.
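
A quick way to see this numerically (a minimal sketch; the focal length and scene point are made-up values, not from the lecture):

```python
import numpy as np

# Pinhole projection: image coords = f * (X/Z, Y/Z).
f = 1000.0                      # focal length in pixels (assumed)
P = np.array([0.5, 0.0, 10.0])  # scene point 10 m ahead, 0.5 m to the side

def project(point, cam_pos):
    rel = point - cam_pos       # camera axes aligned with world axes
    return f * rel[:2] / rel[2]

p0 = project(P, np.array([0.0, 0.0, 0.0]))

# Move 0.1 m forward (along the viewing axis) vs. 0.1 m sideways.
p_fwd  = project(P, np.array([0.0, 0.0, 0.1]))
p_side = project(P, np.array([0.1, 0.0, 0.0]))

print("forward motion shifts the feature by", np.linalg.norm(p_fwd - p0), "px")   # ~0.5 px
print("sideways motion shifts it by       ", np.linalg.norm(p_side - p0), "px")  # ~10 px
```

The same 10 cm of camera motion moves the feature roughly 20x more when it is lateral, which is why forward motion gives pose estimators so little to work with.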

weinatalie

It’s interesting that VR displays must send different images into each eye in order to create an immersive view. This is because in human visual systems, the brain combines inputs from the left and right eyes to create a final view with depth. If a VR display projected the exact same image into each eye, would the VR environment look flat and 2-D instead?

carolyn-wang

@weinatalie I think if the VR display projected the same image into both eyes, there would actually be focusing issues. In real life, both eyes see the same thing only when the object is really far away and both eyes are looking straight ahead.
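
A back-of-the-envelope check of that claim (a sketch; 64 mm is a typical interpupillary distance, and the object distances are made up):

```python
import math

IPD = 0.064  # interpupillary distance in meters (typical value, assumed)

def vergence_deg(distance_m):
    # Angle between the two eyes' viewing directions when fixating a point.
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

for z in [0.5, 2.0, 10.0, 100.0]:
    print(f"{z:6.1f} m -> {vergence_deg(z):.3f} deg")
# The angle shrinks toward 0 as the object recedes, which is the only
# case where identical images in both eyes would actually be consistent.
```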

saif-m17

I think @weinatalie's point is very interesting. From what I understand, it is the slightly different perspective from each eye that allows us to perceive the world as 3D (a process called stereopsis). I do think that means if both eyes received the same image, we would see a 2D image instead.
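
This matches how stereo rendering is usually set up: the scene is drawn twice, with the camera shifted by half the interpupillary distance for each eye. A minimal sketch of the idea (the matrix convention and values here are my own, not from any particular VR SDK):

```python
import numpy as np

IPD = 0.064  # interpupillary distance in meters (assumed typical value)

def eye_view_matrix(head_view, eye):
    """Offset the head's view matrix by +/- IPD/2 along the camera x-axis."""
    sign = -0.5 if eye == "left" else 0.5
    offset = np.eye(4)
    offset[0, 3] = -sign * IPD   # translate the world opposite to the eye shift
    return offset @ head_view

head_view = np.eye(4)  # head at the origin, looking down -z
left_view  = eye_view_matrix(head_view, "left")
right_view = eye_view_matrix(head_view, "right")

# Each eye sees a nearby point at a slightly different horizontal position;
# the brain fuses that disparity into depth.
point = np.array([0.0, 0.0, -2.0, 1.0])
print("left eye space :", (left_view  @ point)[:3])   # x = +0.032
print("right eye space:", (right_view @ point)[:3])   # x = -0.032
```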

DreekFire

Conversely, although VR headsets tend to match the baseline of the human eyes, I've heard that viewing images through cameras with a much wider baseline can produce interesting effects.
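
That effect falls out of the standard disparity relation d = f·B/Z: widening the baseline B scales up disparity, which reads as exaggerated depth ("hyperstereo"). A quick sketch with made-up numbers:

```python
f = 1000.0   # focal length in pixels (assumed)
Z = 50.0     # distance to the object in meters (assumed)

for B in [0.064, 0.5, 2.0]:   # human baseline vs. much wider camera rigs
    d = f * B / Z             # horizontal disparity in pixels
    print(f"baseline {B:5.3f} m -> disparity {d:6.1f} px")
# At 50 m the human baseline gives only ~1.3 px of disparity (nearly flat),
# while a 2 m baseline gives ~40 px, making distant scenes pop in depth.
```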

noah-ku

This slide outlines how 3D visual cues are generated on displays. Traditional panels use occlusion, perspective, and shading to suggest depth, utilizing techniques like z-buffering and lighting calculations. VR/AR displays take this further by offering stereoscopic images to each eye and changing views with head movement, enhancing the 3D effect through head-tracking technology. These advances provide a more immersive and realistic experience.
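
To make the occlusion/z-buffering point concrete, here is the core of a z-buffer test (a toy sketch with made-up fragments, not a real renderer): a fragment is kept only if it is closer than whatever is already stored for that pixel.

```python
import numpy as np

W, H = 4, 3
depth_buffer = np.full((H, W), np.inf)   # start infinitely far away
color_buffer = np.zeros((H, W), dtype=int)

def write_fragment(x, y, z, color):
    # Occlusion: accept the fragment only if it is nearer than the stored depth.
    if z < depth_buffer[y, x]:
        depth_buffer[y, x] = z
        color_buffer[y, x] = color

write_fragment(1, 1, z=5.0, color=1)  # far surface drawn first
write_fragment(1, 1, z=2.0, color=2)  # nearer surface overwrites it
write_fragment(1, 1, z=9.0, color=3)  # farther surface is rejected

print(color_buffer[1, 1])  # -> 2, the nearest surface wins
```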

dhruvchowdhary

It's interesting to see how monocular cues like shading and perspective can provide a 3D experience. In terms of rendering efficiency, do techniques that simulate depth perception, like occlusion and perspective, require significantly more computational power compared to 2D rendering? Also, for VR/AR applications, how do head-tracking algorithms differentiate between intentional head movements and accidental shakes to ensure consistent perspective rendering?
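
On the second question: as far as I know, tracking pipelines don't explicitly classify movements as intentional; instead they suppress jitter with speed-adaptive smoothing (the "one euro filter" is a well-known example). A simplified exponential-smoothing sketch, with illustrative constants rather than tuned values:

```python
class PoseSmoother:
    """Exponential smoothing that trusts fast motion more than slow jitter."""
    def __init__(self):
        self.state = None

    def update(self, measured_angle, dt):
        if self.state is None:
            self.state = measured_angle
            return self.state
        speed = abs(measured_angle - self.state) / dt
        # Small motions (likely sensor noise or tremor) get heavy smoothing;
        # large, fast motions (likely intentional) pass through almost raw.
        # The 0.05 and 5.0 constants are illustrative, not tuned values.
        alpha = min(1.0, 0.05 + speed / 5.0)
        self.state += alpha * (measured_angle - self.state)
        return self.state

smoother = PoseSmoother()
for angle in [0.0, 0.01, -0.01, 0.02, 30.0]:  # jitter, then a real head turn
    print(round(smoother.update(angle, dt=1 / 90), 3))
# The tiny oscillations are damped toward zero, while the 30-degree
# turn is tracked almost immediately.
```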

TiaJain

Given the visual cues provided by panel displays and enhanced by VR/AR, how do these methods compare with respect to the user's perception of depth and space in a virtual environment?
