Lecture 12: Integration (9)
arjunsrinivasan1997

Why does Monte Carlo Integration require fewer samples than quadrature based integration only in higher dimensions?

julialuo

Wondering the same thing. Seems from the variance formulas that as soon as $d > 2$, Monte Carlo results in less variance while still being unbiased. Are there other tradeoffs that we should be concerned about?
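For reference, a quick recap of the formulas being compared (this is the standard textbook comparison, not copied from the slide, so treat the exact form as an assumption): with $X_i$ drawn uniformly from the unit cube $[0,1]^d$, the Monte Carlo estimator and its variance are

$$
F_N = \frac{1}{N}\sum_{i=1}^{N} f(X_i), \qquad
\operatorname{Var}[F_N] = \frac{\operatorname{Var}[f(X)]}{N},
$$

so the RMS error is $O(N^{-1/2})$ regardless of $d$. A quadrature rule of order $k$ with $N$ total points (about $N^{1/d}$ per axis) has error $O(N^{-k/d})$. The crossover dimension depends on the rule's order; for a first-order rule the rates cross exactly at $d > 2$.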

hershg

To summarize, is this slide saying that in high dimensions it is very expensive to integrate over a full grid of samples, so instead sampling via the Monte Carlo method (averaging random samples until the estimate converges) is more computationally feasible?
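A minimal sketch of that idea, assuming we just average a function at uniformly random points in the unit cube (the function and names here are made up for illustration, not from the lecture):

```python
import random

def mc_estimate(f, d, n_samples):
    """Estimate the integral of f over the unit cube [0,1]^d by
    averaging f at uniformly random points (the cube has volume 1)."""
    total = 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(d)]
        total += f(x)
    return total / n_samples

# Example: integrate f(x) = x_1 + x_2 + x_3 over [0,1]^3 (exact value 1.5).
print(mc_estimate(lambda x: sum(x), d=3, n_samples=100_000))
```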

hershg

Are there any other methods of reducing the dimensionality of our sampling set? Something kinda along the lines of SVD/PCA, where we identify which dimensions contribute the most variance and determine whether we can safely drop 'weaker' dimensions that don't contribute much to our sampling variance?

afang-story

I think it's more that this difference grows as dimension increases: for quadrature-based numerical integration, the convergence rate gets exponentially worse with dimension, while Monte Carlo integration improves as the number of samples increases and its convergence rate is independent of the number of dimensions.
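A small experiment consistent with that claim (again just a sketch with made-up helper names), using the average-coordinate integrand over $[0,1]^d$, whose exact value is 0.5: quadrupling the sample count should roughly halve the RMS error at every dimension, illustrating the dimension-independent $O(1/\sqrt{N})$ rate.

```python
import math
import random

def mc_rms_error(d, n_samples, n_trials=300):
    """RMS error of Monte Carlo estimates of the average coordinate
    over [0,1]^d (exact answer 0.5), measured over many trials."""
    sq_err = 0.0
    for _ in range(n_trials):
        est = sum(sum(random.random() for _ in range(d)) / d
                  for _ in range(n_samples)) / n_samples
        sq_err += (est - 0.5) ** 2
    return math.sqrt(sq_err / n_trials)

for d in (1, 4, 16):
    errs = [mc_rms_error(d, n) for n in (100, 400, 1600)]
    # Quadrupling N roughly halves the error at every d, i.e. the
    # Monte Carlo convergence rate does not depend on dimension.
    print(f"d={d:2d}: " + "  ".join(f"{e:.4f}" for e in errs))
```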
