How much more costly is it to calculate with more bounces in terms of how long it actually took to render this picture?
I'm not sure exactly how long it took to render this picture, but every extra bounce requires another set of scene intersection and radiance calculations, so each bounce adds significant computation time.
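A toy sketch of why the cost grows this way (this is a made-up recursive loop, not the renderer from the lecture): each bounce performs one more scene intersection plus one more shading evaluation, so the work per path is roughly linear in the bounce count.

```python
import random

def radiance(depth, max_depth, work_counter):
    """Toy path-tracer recursion: each bounce costs one intersection
    test and one radiance calculation, so work grows linearly with depth."""
    if depth >= max_depth:
        return 0.0
    work_counter[0] += 1  # one scene intersection + radiance evaluation
    emitted = 1.0 if random.random() < 0.1 else 0.0  # hypothetical light hit
    # attenuate and recurse for the next bounce
    return emitted + 0.5 * radiance(depth + 1, max_depth, work_counter)

for max_depth in (1, 2, 4, 8):
    work = [0]
    radiance(0, max_depth, work)
    print(max_depth, work[0])  # work equals max_depth: cost is linear in bounces
```

So going from 1 to 8 bounces multiplies the per-sample work by about 8x, on top of however many samples per pixel are taken.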
I'm not entirely sure about this, but do you literally just add up all the light arriving at that location in each of the previous pictures? How would you get that super bright light in the back? I feel like if we just added instead of averaged, it would be really easy for everything to become overly bright. Is that correct?
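On the adding-versus-averaging point, a small sketch may help (the numbers here are invented): a Monte Carlo pixel estimate is the *average* of the per-sample radiance values, so the expected brightness stays fixed as samples accumulate; more samples only reduce the noise. The bright light stays bright because any individual sample that reaches it returns a large radiance value.

```python
import random

def estimate_pixel(num_samples, light_radiance=50.0, hit_prob=0.2):
    """Average (not sum) of per-sample radiance estimates.
    light_radiance and hit_prob are made-up illustrative values.
    The expected value is fixed regardless of num_samples; only the
    variance (noise) shrinks as samples are added."""
    total = 0.0
    for _ in range(num_samples):
        # a sample that happens to hit the bright light returns a big value
        total += light_radiance if random.random() < hit_prob else 1.0
    return total / num_samples  # dividing by N keeps brightness consistent

random.seed(0)
print(estimate_pixel(10))       # noisy estimate
print(estimate_pixel(100_000))  # converges toward 0.2*50 + 0.8*1 = 10.8
```

If the estimates were summed instead of averaged, the image would indeed get brighter with every added sample, which is exactly the problem the question anticipates.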
Adding on: since every extra bounce of light is so expensive to compute, could you not just add another light source at a different position and use only single-bounce global illumination to achieve the same brightness with reduced computation time?
It's important to note that in this rendering, all materials are assumed to be perfectly reflective. Is there an alternative to Russian Roulette where we simply cut off light rays once their contribution drops below a certain value? It seems like this process would converge as i -> infinity, since energy is eventually lost. Or is it because Russian Roulette gives us an unbiased estimate of when termination should occur?
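A minimal sketch of the difference between the two termination strategies (the throughput values and survival-probability choice here are assumptions for illustration): a hard cutoff silently discards the energy of every dim path, which biases the image darker, while Russian Roulette kills paths randomly but divides survivors by the survival probability so the expected value is unchanged.

```python
import random

def trace_fixed_cutoff(throughput, threshold=0.05):
    """Biased: paths below the threshold are killed and their
    energy is simply lost."""
    return 0.0 if throughput < threshold else throughput

def trace_russian_roulette(throughput):
    """Unbiased: kill paths randomly, but boost survivors by 1/p
    so the expected contribution is unchanged."""
    p = min(1.0, throughput)  # one common heuristic for survival probability
    if random.random() < p:
        return throughput / p  # survivor compensates for killed paths
    return 0.0

random.seed(1)
n = 200_000
t = 0.02  # a dim path, below the hard cutoff
rr_mean = sum(trace_russian_roulette(t) for _ in range(n)) / n
print(trace_fixed_cutoff(t))  # 0.0 -> this energy is lost forever (bias)
print(rr_mean)                # ~0.02 -> energy preserved in expectation
```

So the answer to "can we just cut rays below a threshold" is: you can, but it systematically darkens the result; Russian Roulette terminates just as aggressively while keeping the estimator unbiased, at the cost of some extra variance.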
Adding on: this last bounce appeared to add a lot more noise to the ceiling. Is it sometimes better to terminate after a certain number of bounces to reduce noise and computational cost, rather than including more bounces and dealing with the added noise?