# Physics 212, 2019: Lecture 20


Back to the main Teaching page.

In today's lecture, we will introduce the first example of a Monte-Carlo simulation.

Follow the Random Numbers notebook in this lecture.

## Monte-Carlo area estimation

We use random numbers to perform computations that are fundamentally random, and we will focus on those in the upcoming weeks. However, some computations are fundamentally deterministic but may be easier to perform with random numbers. Suppose we want to calculate the integral under a curve ${\displaystyle y=y(x)}$ over the limits ${\displaystyle x\in [a,b]}$. We can, of course, do it analytically for some functions ${\displaystyle y}$, or numerically by subdividing the interval of ${\displaystyle x}$ into small segments ${\displaystyle \Delta x}$ and adding up the areas of all the small rectangles, ${\displaystyle y(x)\times \Delta x}$. An alternative is as follows. Suppose we know that ${\displaystyle y(x)}$ is positive and smaller than ${\displaystyle f_{\rm {max}}}$. Then the area under the curve can be written as ${\displaystyle (b-a)f_{\rm {max}}\alpha }$, where ${\displaystyle \alpha }$ is the fraction of the rectangle of width ${\displaystyle b-a}$ and height ${\displaystyle f_{\rm {max}}}$ that lies under the curve. How do we estimate this fraction? Suppose we generate random points (dart throws) uniformly distributed over this rectangle. The fraction of those that end up under the curve (and it is easy to check which ones do!) is an estimate of ${\displaystyle \alpha }$. The provided script performs such a calculation for an arbitrary ${\displaystyle y=y(x)}$. This is called Monte-Carlo integration. Note that our implementation is incomplete -- we do not check whether the function ever drops below 0 or rises above ${\displaystyle f_{\rm {max}}}$, which a better implementation should do.
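The notebook's script is not reproduced here, but a minimal sketch of the dart-throwing estimator might look like the following (the function name `mc_integrate` and all parameter choices are illustrative, and, as noted above, the bounds on ${\displaystyle y(x)}$ are assumed rather than checked):

```python
import random

def mc_integrate(y, a, b, f_max, n, seed=None):
    """Estimate the area under y(x) on [a, b] by uniform dart throws.

    Assumes 0 <= y(x) <= f_max on [a, b]; a fuller implementation
    would verify these bounds for every sampled x.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        x = rng.uniform(a, b)        # horizontal position of the dart
        d = rng.uniform(0, f_max)    # vertical position of the dart
        if d < y(x):                 # dart landed under the curve
            hits += 1
    alpha = hits / n                 # fraction of darts under the curve
    return (b - a) * f_max * alpha

# Integrate y = x^2 on [0, 1]; the exact answer is 1/3.
estimate = mc_integrate(lambda x: x**2, 0.0, 1.0, 1.0, 100_000, seed=0)
print(estimate)
```

With 100,000 throws the estimate lands within about 0.005 of 1/3 for typical seeds.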

## How precise are MC approaches?

A problem, of course, is that such a calculation is probabilistic -- every time we perform it, the results are different. But how different? We can verify, first of all, that our probabilistic estimate of the area converges to the true one when the latter is known, e.g., when we integrate the function ${\displaystyle x^{2}}$. We can also plot the distribution of the area estimates obtained with ${\displaystyle N}$ dart throws. We see that the larger ${\displaystyle N}$ is, the more concentrated the estimates are. To quantify this, we use the provided script to calculate the standard deviation of the estimates as a function of ${\displaystyle N}$. We find that ${\displaystyle \sigma ^{2}=A/N}$, where ${\displaystyle A}$ is some constant. The claim is that this general ${\displaystyle 1/N}$ law for the variance holds true for essentially any function being integrated; only the constant changes -- and we can verify this by integrating other functions.
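A sketch of such a convergence check follows (the helper `mc_area`, the trial count, and the values of ${\displaystyle N}$ are illustrative choices, not the notebook's): repeat the estimate many times for each ${\displaystyle N}$, and confirm that ${\displaystyle \sigma ^{2}N}$ stays roughly constant.

```python
import random
import statistics

def mc_area(y, a, b, f_max, n, rng):
    """One Monte-Carlo area estimate with n dart throws."""
    hits = 0
    for _ in range(n):
        x = rng.uniform(a, b)
        if rng.uniform(0, f_max) < y(x):
            hits += 1
    return (b - a) * f_max * hits / n

rng = random.Random(1)
trials = 200                         # repeat the estimate many times per N
for n in [100, 400, 1600, 6400]:
    estimates = [mc_area(lambda x: x**2, 0.0, 1.0, 1.0, n, rng)
                 for _ in range(trials)]
    sigma = statistics.stdev(estimates)
    # sigma^2 * N should be roughly constant (the A in sigma^2 = A/N)
    print(n, sigma, sigma**2 * n)
```

Quadrupling ${\displaystyle N}$ should halve ${\displaystyle \sigma }$, up to the statistical noise of estimating ${\displaystyle \sigma }$ itself from a finite number of trials.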

Verify that MC integration of a different function still converges as ${\displaystyle 1/{\sqrt {N}}}$.
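One way to attempt this check is sketched below, using ${\displaystyle \sin x}$ on ${\displaystyle [0,\pi ]}$ purely as an example (the helper `mc_area` and the chosen values of ${\displaystyle N}$ are again illustrative): if ${\displaystyle \sigma \propto 1/{\sqrt {N}}}$, then ${\displaystyle \sigma {\sqrt {N}}}$ should stay roughly constant.

```python
import math
import random
import statistics

def mc_area(y, a, b, f_max, n, rng):
    """One Monte-Carlo area estimate with n dart throws."""
    hits = 0
    for _ in range(n):
        x = rng.uniform(a, b)
        if rng.uniform(0, f_max) < y(x):
            hits += 1
    return (b - a) * f_max * hits / n

rng = random.Random(2)
for n in [100, 400, 1600, 6400]:
    ests = [mc_area(math.sin, 0.0, math.pi, 1.0, n, rng)
            for _ in range(200)]
    # sigma * sqrt(N) should stay roughly constant if sigma ~ 1/sqrt(N)
    print(n, statistics.stdev(ests) * math.sqrt(n))
```

The constant differs from the ${\displaystyle x^{2}}$ case, but the ${\displaystyle 1/{\sqrt {N}}}$ scaling is the same.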