Lecture 9: Ray Tracing & Acceleration Structures (42)
noah-ku

It's interesting to see cases where ray tracing acceleration fails. Since the Utah teapot is small yet highly detailed, placing it in a large, mostly empty space like a stadium leads to a lot of unneeded work. For the teapot to get a fine enough grid resolution, the grid has to cover the whole stadium at that resolution, so rays end up traversing a lot of empty cells before hitting anything. This is very inefficient and unnecessarily burns computing power.
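
To make that cost concrete, here's a minimal sketch (my own toy Python, not from the slides) of the 3D DDA used to walk a ray through a uniform grid. The loop runs once per cell crossed, so empty stadium cells cost real work even though they contain nothing:

```python
import math

def traverse_grid(origin, direction, grid_min, cell_size, resolution):
    """Walk a ray through a uniform grid with a 3D DDA.

    Yields the index of every cell the ray visits, so the number of
    yields is the per-ray traversal cost -- even if every cell is empty.
    """
    # Current cell index of the ray origin.
    cell = [int((origin[i] - grid_min[i]) / cell_size) for i in range(3)]
    step, t_max, t_delta = [], [], []
    for i in range(3):
        if direction[i] > 0:
            step.append(1)
            boundary = grid_min[i] + (cell[i] + 1) * cell_size
            t_max.append((boundary - origin[i]) / direction[i])
            t_delta.append(cell_size / direction[i])
        elif direction[i] < 0:
            step.append(-1)
            boundary = grid_min[i] + cell[i] * cell_size
            t_max.append((boundary - origin[i]) / direction[i])
            t_delta.append(-cell_size / direction[i])
        else:
            step.append(0)
            t_max.append(math.inf)
            t_delta.append(math.inf)
    while all(0 <= cell[i] < resolution for i in range(3)):
        yield tuple(cell)
        axis = t_max.index(min(t_max))  # advance across the nearest boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]

# A ray crossing a 100^3 grid diagonally visits ~300 cells even when
# all of them are empty -- the "teapot in a stadium" cost.
visited = list(traverse_grid((0.0, 0.0, 0.0), (1.0, 1.0, 1.0),
                             (0.0, 0.0, 0.0), 1.0, 100))
print(len(visited))
```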

jayc809

This slide kinda reminded me of how VR headsets still face computation and complexity limits when simulating an entire environment around the user with ray tracing. I heard one issue is that VR requires rendering the scene once per eye, while a regular computer screen only needs one view. Moreover, like the problem mentioned on this slide, we can kinda imagine the VR user as the teapot inside a stadium-like environment: to achieve photorealism, you have to simulate the entire sphere surrounding you, which requires more efficient algorithms than just the uniform grid approach.
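
Just to illustrate the "once per eye" point, here's a minimal sketch (my own toy code; the 64 mm interpupillary distance is an assumed typical value, not from the slide): primary-ray generation runs twice per frame, roughly doubling the work before tracing even begins:

```python
import numpy as np

def primary_rays(eye_pos, width, height, fov_deg=90.0):
    """Generate one primary ray per pixel from a pinhole camera at eye_pos."""
    aspect = width / height
    half = np.tan(np.radians(fov_deg) / 2.0)
    xs = (np.arange(width) + 0.5) / width * 2.0 - 1.0
    ys = 1.0 - (np.arange(height) + 0.5) / height * 2.0
    dx, dy = np.meshgrid(xs * half * aspect, ys * half)
    dirs = np.stack([dx, dy, -np.ones_like(dx)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    return eye_pos, dirs

# Assumed 64 mm interpupillary distance: two full ray batches per frame.
ipd = 0.064
head = np.array([0.0, 1.7, 0.0])
for eye_offset in (-ipd / 2.0, ipd / 2.0):
    eye = head + np.array([eye_offset, 0.0, 0.0])
    origin, directions = primary_rays(eye, 1920, 1080)
    # trace(origin, directions)  # each eye pays the full tracing cost
```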

AlsonC

@jayc809 I heard about this issue too! Although we've yet to see how the Vision Pro will be adopted by consumers, I remember from an investing perspective that one reason VR headsets, and by extension VR games, still aren't more popular is that the devices lack the rendering and computational power to stay comfortable. For example, modern VR headsets are still very heavy and run hot, and beyond that there's no perfect solution for computing everything. Recently, though, there have been innovative solutions. For example, I know the Vision Pro only renders at full detail where your eyes are focusing, similar to how your eyes work: everything else is sort of like 'peripheral vision,' not perfectly rendered and kind of blurry (I believe this is called foveated rendering).
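
Here's a minimal sketch of that idea (my own toy Python with made-up constants, not Apple's actual algorithm): spend full samples near the gaze point and taper off toward the periphery:

```python
import math

def foveated_samples_per_pixel(px, py, gaze, width, base_spp=16, min_spp=1,
                               fovea_radius=0.1):
    """Toy foveated sampling schedule: full sample count near the gaze
    point, falling off exponentially to a floor in the periphery."""
    # Distance from the gaze point, normalized by image width.
    d = math.hypot(px - gaze[0], py - gaze[1]) / width
    if d <= fovea_radius:
        return base_spp
    falloff = math.exp(-(d - fovea_radius) * 8.0)  # arbitrary decay rate
    return max(min_spp, int(base_spp * falloff))

# Example: gaze at the center of a 1920x1080 frame.
gaze = (960, 540)
print(foveated_samples_per_pixel(960, 540, gaze, 1920))  # 16 in the fovea
print(foveated_samples_per_pixel(100, 100, gaze, 1920))  # 1 in the periphery
```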

koizura

@AlsonC that sounds like a really interesting optimization! I recently watched a YouTube video on how game engines optimize large, vast worlds by replacing far-away 3D geometry with image planes. For example, a distant tree will be rendered as a single image plane that faces the camera rather than as a full 3D model, but since it's so far away the user can't tell the difference (kinda like mipmaps?). I believe this technique is called billboarding, or using impostors. I'd be intrigued to see whether real-time VR games push these optimization techniques even further.
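
A minimal sketch of that distance-based swap (my own toy code; the 50-unit threshold and the asset names are made up):

```python
from dataclasses import dataclass

@dataclass
class Tree:
    position: tuple   # world-space position
    full_mesh: str    # stand-in for the real 3D model
    billboard: str    # stand-in for the camera-facing image plane

def select_lod(tree, camera_pos, billboard_distance=50.0):
    """Toy distance-based LOD pick: past a threshold, swap the full mesh
    for a single camera-facing quad (an 'impostor')."""
    dx = tree.position[0] - camera_pos[0]
    dy = tree.position[1] - camera_pos[1]
    dz = tree.position[2] - camera_pos[2]
    dist_sq = dx * dx + dy * dy + dz * dz
    if dist_sq > billboard_distance ** 2:
        return tree.billboard  # one textured quad, always facing the camera
    return tree.full_mesh      # near the camera, use real geometry

tree = Tree((0.0, 0.0, -200.0), "tree_mesh_10k_tris", "tree_billboard_2_tris")
print(select_lod(tree, (0.0, 0.0, 0.0)))  # far away -> billboard
```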

carolyn-wang

I'm curious in what ways uniform grids can improve the accuracy/efficiency of real-time object tracking in video streams. Maybe if a uniform grid partitions the video frame into cells, tracking algorithms can focus on the cells where objects or motion are detected, speeding up tracking while shrinking the search space. Building off of this, how would uniform grids interact with different tracking algorithms like optical flow, template matching, or machine-learning-based approaches?
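
Here's a minimal sketch of that gating idea (my own toy Python; the 8x8 grid and the threshold are arbitrary): mark the cells whose frame difference is large, and let the tracker search only those:

```python
import numpy as np

def active_cells(prev_frame, frame, grid=(8, 8), threshold=10.0):
    """Toy grid-gated tracking: split the frame into a uniform grid and
    return only the cells whose mean frame difference exceeds a
    threshold, so a tracker can skip everything else."""
    h, w = frame.shape
    gh, gw = grid
    ch, cw = h // gh, w // gw
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    cells = []
    for i in range(gh):
        for j in range(gw):
            block = diff[i * ch:(i + 1) * ch, j * cw:(j + 1) * cw]
            if block.mean() > threshold:
                cells.append((i, j))  # only these cells get searched
    return cells

# Synthetic example: motion only in the top-left corner of a 480x640 frame.
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[:60, :80] = 255
print(active_cells(prev, curr))  # -> [(0, 0)]
```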

antony-zhao

https://www.historyofinformation.com/detail.php?id=2449 The history behind this is mildly amusing (it's somewhat of an inside joke in a lot of graphics software and applications) but also quite interesting. It's also a neat illustration of the problem on this slide: the teapot demands far finer resolution than the scene around it.

brianqch

Is the algorithm we use to partition spatial hierarchies meant to optimize scenes like this one, where the finer details are far from the camera? I was talking to a classmate about this after class, and we came up with the idea that we partition more in areas with more objects, so that the nodes contain roughly equal numbers of objects. That situation often arises when the finer details are far from the camera's POV. Can I assume this is true?
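
To sketch that idea (my own toy Python, not necessarily the exact scheme from lecture): a median split along the widest axis gives each child about half the primitives, so dense clusters like the teapot naturally end up with deeper subtrees:

```python
import random

def build_bvh(primitives, max_leaf_size=4):
    """Toy median-split hierarchy: split at the median centroid along the
    widest axis, so each child gets about half the objects -- dense
    regions simply get subdivided more times."""
    if len(primitives) <= max_leaf_size:
        return {"leaf": primitives}
    # Choose the axis along which the centroids are most spread out.
    axis = max(range(3), key=lambda a: (
        max(p["centroid"][a] for p in primitives)
        - min(p["centroid"][a] for p in primitives)))
    ordered = sorted(primitives, key=lambda p: p["centroid"][axis])
    mid = len(ordered) // 2
    return {
        "axis": axis,
        "left": build_bvh(ordered[:mid], max_leaf_size),
        "right": build_bvh(ordered[mid:], max_leaf_size),
    }

# 1000 "teapot" triangles clustered near the origin plus 10 "stadium"
# triangles spread far apart: the teapot side of the tree gets much deeper.
teapot = [{"centroid": (random.uniform(0, 1),) * 3} for _ in range(1000)]
stadium = [{"centroid": (random.uniform(0, 500),) * 3} for _ in range(10)]
tree = build_bvh(teapot + stadium)
```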

sparky-ed

It is really cool to see how light reflection can be computed with math equations, and many applications actually use this concept to make games look better. One example I can think of is Minecraft, where shaders make the ocean reflect light. Here is a ray-tracing video you can look at: https://www.youtube.com/shorts/n091fN2fd9M.
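
The core equation behind a mirror bounce is short: for an incoming direction d hitting a surface with unit normal n, the reflected direction is r = d - 2(d.n)n. A minimal sketch:

```python
def reflect(d, n):
    """Mirror-reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray heading down at 45 degrees bounces off a flat water surface (n = +y).
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # -> (1.0, 1.0, 0.0)
```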
