Lecture 12: Integration (21)
peterqiu1997

Helpful reminder for me from lecture, in response to "won't this be biased towards outliers?": we don't know what f is, and it's through selecting our PDF that we decide which regions are important to sample from (where the function is large).
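
A minimal sketch of that idea (the integrand e^{-10x} and both PDFs below are made up purely for illustration, not from lecture): sampling from a PDF shaped like f gives the same mean as uniform sampling, but with far lower variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy integrand on [0, 1]: almost all of its mass is near x = 0.
def f(x):
    return np.exp(-10.0 * x)

N = 10_000

# Uniform sampling: p(x) = 1 on [0, 1].
x_uni = rng.uniform(0.0, 1.0, N)
est_uni = f(x_uni) / 1.0

# Importance sampling: choose p shaped like f, here p(x) = 10 e^{-10x} / (1 - e^{-10}),
# sampled with the inverse-CDF method.
u = rng.uniform(0.0, 1.0, N)
x_imp = -np.log(1.0 - u * (1.0 - np.exp(-10.0))) / 10.0
p_imp = 10.0 * np.exp(-10.0 * x_imp) / (1.0 - np.exp(-10.0))
est_imp = f(x_imp) / p_imp

# Both sample means converge to the true integral (1 - e^{-10}) / 10 ≈ 0.1,
# but the importance-sampled estimator has much lower variance.
print(est_uni.mean(), est_uni.std())
print(est_imp.mean(), est_imp.std())
```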

emilyzhong

Note from lecture: by placing p(X_i) in the denominator, we are "downweighting" more probable samples and "upweighting" less probable ones.

kavimehta

I am still unclear on how this does not bias towards outliers...if we "upweight" the unlikely samples and "downweight" the likely samples, then won't we be sampling disproportionately?

moridin22

The key is that we upweight the unlikely samples exactly according to how unlikely they are, and similarly downweight the likely samples exactly according to how likely they are, so that the net result is that the sampling produces an unbiased estimator.
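
Spelling that cancellation out (assuming p(x) > 0 wherever f(x) is nonzero):

E[ (1/N) * sum_i f(X_i)/p(X_i) ] = E[ f(X)/p(X) ] = ∫ (f(x)/p(x)) p(x) dx = ∫ f(x) dx,

which is exactly the integral we want, so the estimator is unbiased regardless of which valid p we sample from.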

jeromylui

I'm a little confused on one thing. I understand that f(X_i) is referring to the height of the curve at some random position, but when we apply this to graphics, what determines what the curve will look like? Basically, in terms of graphics, where are we taking the samples from?

kevinliu64

The Monte Carlo estimator places more weight on unlikely values and less weight on likely values. This makes sense: values that are more likely to appear in our sample shouldn't take over the entire estimate. Instead, the samples that appear more often are given less weight each, so that the rarer samples still contribute their fair share to the estimate.

dtseng

I was a bit confused about this in the beginning. p(X) is the probability distribution from which we sample X; it may have nothing to do with f itself. If you don't divide by p, you get a biased estimator, since you're biasing the result towards the f(X_i)'s that you tend to sample more often. By dividing each sample by p(X_i), you weight the samples correctly.
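
A quick numerical check of this (the integrand 3x^2 and the PDF p(x) = 2x are just illustrative choices, not anything from lecture): averaging f(X_i) alone overestimates the integral because the samples cluster where p is large, while averaging f(X_i)/p(X_i) recovers the true value.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy integrand on [0, 1]; the true integral of 3x^2 is 1.
def f(x):
    return 3.0 * x**2

# Sample from a non-uniform PDF p(x) = 2x on [0, 1] via the inverse CDF x = sqrt(u).
N = 100_000
x = np.sqrt(rng.uniform(0.0, 1.0, N))
p = 2.0 * x

print(np.mean(f(x)))      # biased: ignores p, comes out around 1.5
print(np.mean(f(x) / p))  # unbiased: close to 1.0, each sample reweighted by 1/p(x)
```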

letrangg

As Ren said in lecture, dividing by the probability of a random sample balances out how often that sample shows up in the final summation. This is kind of the reverse of the intuition for expected value, where the values that appear more often carry more weight in the final sum.
