Lecture 23: Virtual Reality (84)
eugenek07

In order to calibrate user gaze, do they need to have the user look at certain spots in their VR headset, for instance? It feels like it would be a challenge to accurately gauge where a person's eyes are actually looking, since everyone's face shape is different and thus the distance of someone's eyes from the display might be different.
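
That's essentially what commercial headsets do: the user fixates on a sequence of known on-screen targets, and the system fits a per-user mapping from raw eye-tracker output to display coordinates. Here's a toy sketch of that idea (the numbers and the simple affine model are illustrative; real calibration pipelines fit richer per-user models):

```python
import numpy as np

# Toy calibration sketch: the user fixates on known on-screen targets
# while we record raw eye-tracker output, then we fit an affine map
# from raw coordinates to screen coordinates via least squares.

# Known calibration targets on the display (normalized [0, 1] coords).
targets = np.array([[0.1, 0.1], [0.9, 0.1], [0.5, 0.5], [0.1, 0.9], [0.9, 0.9]])

# Raw gaze readings recorded while the user looked at each target
# (made-up numbers; in practice these vary per user and per headset fit).
raw = np.array([[0.18, 0.22], [0.83, 0.19], [0.52, 0.55], [0.15, 0.88], [0.86, 0.91]])

# Augment raw readings with a bias column so that screen ~= [raw, 1] @ params.
X = np.hstack([raw, np.ones((len(raw), 1))])
params, *_ = np.linalg.lstsq(X, targets, rcond=None)

def raw_to_screen(raw_xy):
    """Map a raw gaze sample to calibrated screen coordinates."""
    return np.append(raw_xy, 1.0) @ params

print(raw_to_screen(np.array([0.5, 0.5])))  # calibrated estimate for a raw sample
```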

KevinXu02

It should not be that hard to know the range the user is looking at; Apple's Vision Pro can actually track which spot the user is looking at after some calibration.

MillerHollinger

Is this related to the effect where you stare at an inverted-color image for 30 seconds, and then when you view a black-and-white image the colors flip? Or is that more about the human eye and less about the technology?

caelinsutch

Is this used often in normal non-3D games too?

llejj

This reminds me of a video that tests your awareness; it demonstrates the idea behind foveated rendering really well. Our eyes focus on a really small range: https://www.youtube.com/watch?v=xNSgmm9FX2s
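
For intuition, here's a toy sketch of how a renderer might pick shading resolution based on angular distance from the gaze point (the zone boundaries below are made up; real systems use smoother falloffs matched to measured visual acuity):

```python
def shading_rate(eccentricity_deg: float) -> float:
    """Toy foveation schedule: fraction of full shading resolution as a
    function of eccentricity (angle from the gaze direction, in degrees).
    Zone boundaries are illustrative, not from any real headset."""
    if eccentricity_deg < 5.0:     # foveal region: full detail
        return 1.0
    elif eccentricity_deg < 20.0:  # near periphery: half resolution
        return 0.5
    else:                          # far periphery: quarter resolution
        return 0.25

for angle in (2, 10, 40):
    print(f"{angle} deg from gaze -> {shading_rate(angle):.2f}x resolution")
```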

rcorona

Relatedly, there's a paper from Meta where they compress videos to contain only ~10% of the pixels, with the majority concentrated around the user's gaze point. They then use an image generation network to infer the rest of the image.

Something I'm still confused about with this method, though, is where the high-fidelity image region comes from. If one doesn't know a priori where the user will gaze, how does the system avoid the need to store the full uncompressed video?

https://research.facebook.com/publications/deepfovea-neural-reconstruction-for-foveated-rendering-and-video-compression-using-learned-statistics-of-natural-videos/
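
For what it's worth, here's my own toy illustration of the gaze-contingent sampling idea (not the paper's actual sampler): each pixel is kept with a probability that decays with distance from the tracked gaze point, targeting roughly 10% of pixels overall. Since the mask is computed from the live gaze signal at render/transmission time, the sparse stream can in principle be produced on the fly:

```python
import numpy as np

# Toy gaze-contingent sparse sampling: keep each pixel with a probability
# that decays with distance from the current gaze point. All constants
# (falloff scale, floor probability) are made-up illustrative values.

H, W = 240, 320
gaze = np.array([120, 160])           # current gaze point (row, col)

rows, cols = np.mgrid[0:H, 0:W]
dist = np.hypot(rows - gaze[0], cols - gaze[1])

# Keep probability: near 1 at the fovea, decaying with eccentricity,
# with a small floor so the periphery is still sparsely sampled.
sigma = 40.0                          # falloff scale, in pixels
p_keep = 0.02 + 0.98 * np.exp(-(dist / sigma) ** 2)

rng = np.random.default_rng(0)
mask = rng.random((H, W)) < p_keep    # True = transmit this pixel

print(f"fraction of pixels kept: {mask.mean():.2%}")  # roughly 10%
```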

yykkcc

I'm not sure whether combining multiple techniques is achievable. For example, when the user's viewpoint starts to move, could we predict its trajectory, pre-render the views it might reach at low resolution, and then render the user's gaze area at high resolution? And when the frame rate drops, could we use AI to synthesize intermediate frames to reduce the sense of dissonance?
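
A minimal sketch of the prediction part, assuming a constant-velocity model over recent gaze samples (purely illustrative; a real system would use something like a Kalman filter or a learned predictor):

```python
import numpy as np

def predict_gaze(samples: np.ndarray, dt: float, lookahead: float) -> np.ndarray:
    """Extrapolate the next gaze point from the last two samples using a
    constant-velocity model. `samples` is an (N, 2) array of gaze points
    spaced `dt` seconds apart; `lookahead` is how far ahead to predict."""
    velocity = (samples[-1] - samples[-2]) / dt
    return samples[-1] + velocity * lookahead

# Recent gaze history in normalized screen coordinates (made-up values).
history = np.array([[0.40, 0.50], [0.42, 0.50], [0.45, 0.51]])
print(predict_gaze(history, dt=1 / 90, lookahead=2 / 90))  # two frames ahead at 90 Hz
```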

Alina6618

@MillerHollinger brings up a very interesting point about visual aftereffects. The inverted-color image effect is more about the fatigue of photoreceptors in the eye and subsequent neural processes. Foveated rendering is about optimizing what is rendered based on gaze, while visual aftereffects are more related to how the brain processes an image after prolonged viewing.

keeratsingh2002

Given that foveated rendering reduces the image quality in the peripheral vision to save on computing resources, how does this technology handle rapid eye movements without the user noticing any delay or blur?
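
Part of the answer is saccadic suppression: perception is largely blanked during rapid eye movements, which gives the renderer a short window to relocate the high-detail region before the eye lands. Trackers typically detect saccades with a velocity threshold; here's a toy sketch (the threshold value is illustrative):

```python
import math

SACCADE_THRESHOLD_DEG_PER_S = 100.0   # illustrative; real detectors tune this

def is_saccade(prev_gaze, curr_gaze, dt):
    """Flag a saccade when angular gaze velocity exceeds a threshold.
    Gaze points are (x, y) in degrees of visual angle; dt is in seconds."""
    dx = curr_gaze[0] - prev_gaze[0]
    dy = curr_gaze[1] - prev_gaze[1]
    speed = math.hypot(dx, dy) / dt
    return speed > SACCADE_THRESHOLD_DEG_PER_S

# At 120 Hz tracking, a 3-degree jump in one sample is ~365 deg/s: a saccade.
print(is_saccade((0.0, 0.0), (3.0, 0.5), dt=1 / 120))  # True
```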

OnceLim

When I was developing an app for MagicLeap, which has eye tracking, I had to create a way to let users point at objects with their eyes. When testing it, I realized that although it works in most cases, when the gaze shifts quickly the tracking becomes less precise and the wrong objects get selected. Because of this, foveated rendering may seem like a cool feature, but it still needs time to work correctly.
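
A common mitigation for exactly this problem is a dwell filter: only commit a selection after the gaze has rested on the same object for a minimum time, which screens out spurious hits during fast gaze shifts. A minimal sketch (the 0.3 s dwell time is a made-up value, not from any SDK):

```python
class DwellSelector:
    """Select an object only after gaze has dwelled on it continuously,
    filtering out spurious hits during fast gaze shifts."""

    def __init__(self, dwell_time: float = 0.3):
        self.dwell_time = dwell_time  # seconds of continuous gaze required
        self.candidate = None
        self.elapsed = 0.0

    def update(self, gazed_object, dt: float):
        """Feed the currently gazed-at object (or None) each frame; returns
        the selected object once the dwell threshold is met, else None."""
        if gazed_object != self.candidate:
            self.candidate, self.elapsed = gazed_object, 0.0
            return None
        self.elapsed += dt
        if self.candidate is not None and self.elapsed >= self.dwell_time:
            selected, self.candidate, self.elapsed = self.candidate, None, 0.0
            return selected
        return None

selector = DwellSelector()
selections = []
for _ in range(30):                      # ~0.33 s of frames at 90 Hz
    hit = selector.update("button_A", dt=1 / 90)
    if hit:
        selections.append(hit)
print(selections)                        # ['button_A'] once the dwell is reached
```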

TiaJain

Foveated rendering can clearly improve performance in VR applications by lowering graphical fidelity in peripheral vision areas, but how does this technology impact the user experience, particularly with respect to immersion and presence within the virtual environment?

nickjiang2378

Interesting idea. It'd give the appearance of depth, and it would probably have to factor in peripheral vision, because a person can see outside of where they're focusing; it should just be blurrier.

Songbird94

Is it still cost-effective, given that eye movements could be hard to follow?

marilynjoyce

But how can you adjust this for different perceptual ranges? Some people's peripheral perception is pretty good.
