Lecture 12: Monte Carlo Integration (25)
colinsteidtmann

I'm surprised the Monte Carlo estimator's bias doesn't depend on N. I feel like it would only become unbiased as N -> infinity (or maybe that's what the expectation implies?)

jerrymby

@colinsteidtmann, yes, that is what the expectation implies. The estimator is unbiased on average for any N; that is exactly what taking the expectation captures. As N -> infinity, the variance becomes smaller and smaller, which is what intuitively makes it feel "more unbiased".
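Concretely, unbiasedness holds for every N and only the variance shrinks. A quick sketch (notation assumed here, not taken from the lecture: i.i.d. samples X_i drawn from a density p):

```latex
% Monte Carlo estimator of I = \int f(x)\,dx using samples X_i \sim p
F_N = \frac{1}{N} \sum_{i=1}^{N} \frac{f(X_i)}{p(X_i)}

% Unbiased for every N, not just in the limit:
\mathbb{E}[F_N] = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}\!\left[\frac{f(X_i)}{p(X_i)}\right]
               = \int \frac{f(x)}{p(x)}\, p(x)\, dx = \int f(x)\, dx = I

% Only the variance depends on N:
\mathrm{Var}[F_N] = \frac{1}{N}\,\mathrm{Var}\!\left[\frac{f(X)}{p(X)}\right] = \frac{\sigma^2}{N}
```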

kujjwal

This may be beyond the scope of this lecture, but is there a specific value of N we choose as scientists that allows us a reasonable degree of comfort with how good our Monte Carlo estimator is? More specifically, is there an industry standard for calculating our desired sample size in order to have an estimator with a desired level of confidence (similar to having a confidence interval or p-value in statistics)? Put concisely, what kind of metric should we use when choosing a value for N?

zy5476

I was also interested in whether there is a rule or guideline for the number of samples, and how this would be validated, maybe with a confidence interval?
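One common approach (a sketch, not something stated in the lecture): since the estimator's variance scales as 1/N, the central limit theorem gives an approximate confidence interval from the sample standard error, and you can grow N until that interval is as tight as you need. A minimal Python illustration, where the integrand, tolerance, and helper names are all made up for the example:

```python
import numpy as np

def mc_estimate_with_ci(f, sampler, n, z=1.96):
    """Monte Carlo estimate of E[f(X)] with an approximate 95% CI (CLT-based)."""
    xs = sampler(n)                          # draw n i.i.d. samples
    vals = f(xs)
    mean = vals.mean()
    stderr = vals.std(ddof=1) / np.sqrt(n)   # sample standard error
    return mean, (mean - z * stderr, mean + z * stderr)

# Example: estimate \int_0^1 x^2 dx = 1/3 by sampling X ~ Uniform(0, 1).
rng = np.random.default_rng(0)
f = lambda x: x**2
sampler = lambda n: rng.uniform(0.0, 1.0, size=n)

# Keep doubling N until the half-width of the CI falls below a chosen tolerance.
n, tol = 1_000, 1e-3
while True:
    est, (lo, hi) = mc_estimate_with_ci(f, sampler, n)
    if (hi - lo) / 2 < tol:
        break
    n *= 2

print(f"N = {n}, estimate = {est:.5f}, 95% CI = ({lo:.5f}, {hi:.5f})")
```

In other words, there is no single industry-standard N; the usual metric is the estimated standard error (or CI half-width) relative to the accuracy you need, which shrinks like 1/sqrt(N).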

danielhsu021202

Asked ChatGPT and here's the answer:

We want unbiased estimators because they provide accuracy and reliability. Unbiased estimators simplify interpretation, as they do not introduce systematic errors, and facilitate the application of statistical methods that assume unbiasedness.
