Lecture 12: Integration (24)
EmmmaTao

Just to further clarify the intuition behind the factor of (b-a) in the basic Monte Carlo estimator equation: we take the average of the N sampled values of f(x) and use it as the "height", take the range of integration (b-a) as the "width", and compute the area of a rectangle with those dimensions (see the sketch below).
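
A minimal Python sketch of that rectangle intuition, i.e. the estimator F_N = (b-a) * (1/N) * sum of f(X_i) with X_i uniform on [a, b]; the integrand, interval, and sample count below are just made-up example values:

```python
import random

def mc_estimate(f, a, b, n):
    """Basic Monte Carlo estimator: (b - a) times the average of f at n uniform samples."""
    total = 0.0
    for _ in range(n):
        x = random.uniform(a, b)   # X_i ~ Uniform(a, b)
        total += f(x)
    return (b - a) * total / n     # "width" * "height" of the rectangle

# Example: integral of x^2 over [0, 2]; exact value is 8/3 ~ 2.667
print(mc_estimate(lambda x: x * x, 0.0, 2.0, 100_000))
```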

raghav-cs184

To add on to what was said above from a more probabilistic perspective: given N samples of a function, the best estimate of its integral is the (appropriately scaled) sample mean. However, if we have some prior knowledge about where the function takes large values, we can draw the samples from a distribution that reflects that knowledge and normalize each sample by that distribution's density before computing the mean, and that gives a better (lower-variance) estimate of the true value IF we have a good prior. So good Monte Carlo estimation is largely a problem of picking a good prior (see the sketch below).
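
A rough sketch of that idea, assuming "normalize by the prior" means weighting each sample by 1 / p(X_i), where p is the chosen sampling distribution; the pdf and integrand here are made-up example values:

```python
import math
import random

def mc_importance(f, p_pdf, p_sample, n):
    """Monte Carlo estimator with a non-uniform sampling distribution p:
    the average of f(X_i) / p(X_i), where X_i ~ p."""
    total = 0.0
    for _ in range(n):
        x = p_sample()              # draw X_i from p
        total += f(x) / p_pdf(x)    # weight each sample by 1 / p(X_i)
    return total / n

# Example: integrate x^2 on [0, 2] using p(x) = x / 2, which places more
# samples where the integrand is large. Inversion sampling: X = 2 * sqrt(U).
est = mc_importance(lambda x: x * x,
                    lambda x: x / 2.0,
                    lambda: 2.0 * math.sqrt(random.random()),
                    100_000)
print(est)   # close to 8/3
```

Choosing the uniform density p(x) = 1/(b-a) recovers the basic estimator, with its (b-a) factor, from the comment above.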

fywu85

I really like the explanations above. The question remains: what constitutes a good prior? How do we quantifiably determine whether prior A is better than prior B?

henryzxu

From what I've read here, it looks like there's no one-size-fits-all method for determining priors. There are those who believe in choosing priors subjectively using personal knowledge (subjective Bayesians), and those who believe in objectivity (objective Bayesians). Even within the objective camp there is no perfect solution, according to this paper. Personally, from the little reading I've done, it sounds like the subjective approach may produce "better" priors in the sense that it incorporates extra information that may afford additional insight into the problem, information that isn't present when one resorts to an objective prior.
