I wonder whether there are numerical-analysis-style methods that can reduce computation as an alternative to sampling. I looked into this a bit: for instance, sparse grid techniques aim to mitigate the curse of dimensionality by concentrating computational effort on the most important regions of the domain. These methods, which are less well known than Monte Carlo simulation, construct a mesh whose size grows only polynomially with dimension rather than exponentially.
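Here's a rough sketch I put together (my own illustration, not from the course) that just counts nodes, assuming nested Clenshaw-Curtis-style 1-D levels with m_1 = 1 and m_l = 2^(l-1) + 1, a common choice in the sparse-grid literature:

```python
# Compare node counts: Smolyak sparse grid vs. full tensor-product grid.
# Assumes nested 1-D levels m_1 = 1, m_l = 2^(l-1) + 1 for l >= 2.

def new_points(level):
    """Points that 1-D level `level` adds on top of level `level - 1`."""
    if level == 1:
        return 1
    if level == 2:
        return 2
    return 2 ** (level - 2)

def sparse_grid_size(d, L):
    """Nodes in the level-L Smolyak grid: sum the product of new 1-D points
    over all level multi-indices l with |l|_1 <= L + d - 1."""
    counts = {0: 1}  # counts[s] = total weight of multi-indices summing to s
    for _ in range(d):
        nxt = {}
        for s, w in counts.items():
            for l in range(1, L + 1):
                nxt[s + l] = nxt.get(s + l, 0) + w * new_points(l)
        counts = nxt
    return sum(w for s, w in counts.items() if s <= L + d - 1)

def full_grid_size(d, L):
    """Nodes in the full tensor-product grid with m_L points per axis."""
    m = 1 if L == 1 else 2 ** (L - 1) + 1
    return m ** d

for d in (2, 5, 10):
    print(f"d={d:2d}: sparse={sparse_grid_size(d, 5):,} vs full={full_grid_size(d, 5):,}")
```

At d = 10 the full tensor grid is already in the trillions of nodes while the sparse grid stays manageable, which is the point of the construction.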
yauwilliam69
@zinengtang I liked how you mentioned sparse grid techniques, which reduce computational cost by focusing on the most important regions of the domain. To speed up convergence, we can also use low-discrepancy sequences instead of random sampling. Unlike random samples, which tend to cluster, a low-discrepancy sequence distributes its points as uniformly as possible across the domain, so the estimate converges much faster than with naive random number generation. Methods based on this idea are called quasi-Monte Carlo techniques.
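As a quick illustration (my own toy example, using a hand-rolled Halton sequence rather than any particular library):

```python
# Quasi-Monte Carlo vs. plain Monte Carlo: estimate the 2-D integral of
# f(x, y) = x * y over [0,1]^2 (exact value 1/4) with a Halton sequence.

import random

def radical_inverse(i, base):
    """Reflect the base-`base` digits of i about the radix point (van der Corput)."""
    inv, denom = 0.0, 1.0
    while i > 0:
        denom *= base
        inv += (i % base) / denom
        i //= base
    return inv

def halton(n, bases=(2, 3)):
    """First n points of the Halton sequence in len(bases) dimensions."""
    return [[radical_inverse(i, b) for b in bases] for i in range(1, n + 1)]

f = lambda x, y: x * y
n = 4096

qmc_est = sum(f(x, y) for x, y in halton(n)) / n
mc_est = sum(f(random.random(), random.random()) for _ in range(n)) / n

# The QMC estimate typically lands noticeably closer to the exact value.
print(f"exact 0.25 | QMC {qmc_est:.6f} | MC {mc_est:.6f}")
```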
adam2451
What is the reasoning behind the fact that Monte Carlo integration requires fewer samples in higher dimensions than quadrature methods?
adam2451
Also, in global illumination, what dimension is typically used for the integrals, since integrating over infinitely many dimensions wouldn't be possible?
kujjwal
I'm not 100% sure, but I think for global illumination it's possible to take the limit of the integral as we approach higher and higher dimensional spaces, which gives a fairly reasonable approximation. If that's not the case, I'd imagine we use a preset bound, say a maximum dimension of 100k, so that the integral can actually be computed, but I believe this would depend on the resolution and output dimensions we want.
Mehvix
@adam2451 The convergence rate of Monte Carlo (random sampling) error doesn't depend on dimension, unlike deterministic numerical integration, so intuitively we don't need to scale the number of samples as the dimension increases.
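A quick empirical sanity check (my own sketch; the constant in front still depends on the integrand's variance, but the rate doesn't):

```python
# Check that plain Monte Carlo RMS error shrinks like N^(-1/2) in both
# 2-D and 10-D. Integrand: f(x) = sum(x_i) over [0,1]^d, exact integral d/2.

import random
from statistics import mean

def mc_estimate(d, n):
    """Average of f over n uniform random points in [0,1]^d."""
    return mean(sum(random.random() for _ in range(d)) for _ in range(n))

def rms_error(d, n, trials=100):
    """RMS error of the N-sample estimator over repeated trials."""
    exact = d / 2
    return mean((mc_estimate(d, n) - exact) ** 2 for _ in range(trials)) ** 0.5

for d in (2, 10):
    e1, e2 = rms_error(d, 500), rms_error(d, 2000)
    # Quadrupling N should roughly halve the RMS error in either dimension.
    print(f"d={d:2d}: rms(N)={e1:.4f}  rms(4N)={e2:.4f}  ratio={e1 / e2:.2f}  (expect ~2)")
```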
llejj
It's interesting that the Monte Carlo integral converges at the same rate regardless of dimension. Here's a resource that explains why: https://www.ime.usp.br/~jmstern/wp-content/uploads/2020/04/EricVeach2.pdf
jananisriram
I'm curious about the derivation of the random sampling error; does the MC integration error have something to do with variance^(1/2)?
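Sketching the standard iid-variance argument as far as I understand it (please correct me if I'm off):

```latex
% Standard error of the basic Monte Carlo estimator.
% Samples X_1, ..., X_N drawn iid from density p; estimator of I = \int f(x)\,dx:
\[
  F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)},
  \qquad
  \operatorname{Var}[F_N]
    = \frac{1}{N^2} \sum_{i=1}^{N}
      \operatorname{Var}\!\left[\frac{f(X_i)}{p(X_i)}\right]
    = \frac{\sigma^2}{N}.
\]
% Hence the RMS error is \sqrt{\operatorname{Var}[F_N]} = \sigma / \sqrt{N}:
% that variance^{1/2} is exactly where the dimension-independent O(N^{-1/2})
% convergence rate comes from.
```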