Lecture 9: Ray Tracing & Acceleration Structures (24)
srikartalluri
I wonder if you can preprocess the scene so that ray intersection with every object is faster. As the later slides mention, a BVH is one method to do this, but I think some smart preprocessing might allow even faster computation. I also wonder if we can skip or greatly reduce the number of computations for some pixels, such as ones that represent opaque or dark surfaces that don't interact much with light. These pixels don't need the same number of ray casts to achieve a high-quality image, so sampling techniques like this could greatly reduce the number of ray computations needed.
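One way the "fewer rays for dark pixels" idea could look in practice is a per-pixel sample budget scaled by estimated luminance. This is a minimal sketch under assumed names and thresholds (`samples_for_pixel`, the linear ramp, and the sample counts are all illustrative, not from the lecture):

```python
def samples_for_pixel(luminance, base_samples=64, min_samples=4):
    """Hypothetical heuristic: scale the ray count with a pixel's
    estimated luminance, so dark, weakly lit pixels get fewer rays.

    luminance is assumed to be a rough estimate in [0, 1], e.g. from a
    cheap low-sample prepass."""
    # Linear ramp between the minimum and full sample budgets.
    n = int(min_samples + (base_samples - min_samples) * luminance)
    # Clamp to the allowed range.
    return max(min_samples, min(base_samples, n))
```

A bright pixel (`luminance=1.0`) gets the full 64 samples here, while a nearly black one drops to 4, cutting ray casts where extra samples would be visually wasted.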
sritejavij
Following up on what the other comment says, I think it would be an interesting approach to store ray-direction information based on the general area of the frame a ray is in; with some preprocessing, we could quickly retrieve the general form of the ray we want based on its location. It would work almost like a hashmap storing coordinate-vector pairs: by precomputing lots of information about directions, surfaces, and more, we could save time and compute by accessing exactly what we need with minimal extra computation.
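The hashmap idea above could be sketched as a cache keyed on coarse screen cells, so pixels in the same region share one precomputed direction. Everything here (the `RayCache` name, the cell size, the quantization scheme) is a hypothetical illustration, not an established technique from the slides:

```python
def quantize(x, y, cell=16):
    """Bucket pixel coordinates into coarse screen cells
    (hypothetical scheme: one cache entry per 16x16 block)."""
    return (x // cell, y // cell)

class RayCache:
    """Hashmap of coarse screen cells -> precomputed ray directions."""

    def __init__(self):
        self.cache = {}

    def get(self, x, y, compute_dir):
        """Return the cached direction for this pixel's cell,
        computing it once on first access."""
        key = quantize(x, y)
        if key not in self.cache:
            self.cache[key] = compute_dir(x, y)  # computed once per cell
        return self.cache[key]
```

Note the trade-off this exposes: nearby pixels get an approximate shared direction, so the cell size controls how much accuracy is exchanged for fewer computations.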
amritamo
Reflecting on the first comment: adaptive sampling dynamically adjusts the number of rays cast based on properties of the scene, such as surface reflectivity, opacity, or lighting conditions. It allocates more rays to complex regions and fewer to simpler ones, and it could complement BVH acceleration structures and other optimization techniques to further improve the performance of ray-tracing algorithms.
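A common way to realize adaptive sampling is to keep casting rays for a pixel until the sample variance drops below a threshold. This is an illustrative loop, assuming `trace_ray` returns a scalar radiance value; the thresholds and sample bounds are placeholder choices:

```python
import statistics

def adaptive_sample(trace_ray, min_samples=8, max_samples=128,
                    var_threshold=0.01):
    """Illustrative adaptive-sampling loop: cast rays for one pixel
    until the sample variance falls below var_threshold, so noisy
    (complex) pixels get more rays and smooth pixels get fewer."""
    samples = [trace_ray() for _ in range(min_samples)]
    while len(samples) < max_samples:
        if statistics.variance(samples) < var_threshold:
            break  # estimate has converged; stop spending rays here
        samples.append(trace_ray())
    return sum(samples) / len(samples)
```

On a flat, evenly lit pixel the loop stops at `min_samples`, while a pixel on a glossy edge keeps sampling toward `max_samples`, which is exactly the "more rays for complex regions" allocation described above.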