Wednesday 17 December 2014

Random walks on surfaces

Here I describe a procedure for simulating random walks on surfaces (that is, manifolds of dimension two). The procedure generalises easily to higher dimensions, but surfaces are the case most commonly encountered in practice.

Let $\vec x(u,v) = (x(u,v),y(u,v),z(u,v))$ represent a parameterised surface, and $(u_0,v_0)$ the coordinates of the initial position of a diffusing particle. We illustrate with the example of a torus of major radius $R$ and minor radius $r$, which can be parameterised by \[ \vec x = [\cos{u}(R+r\cos{v}),\sin{u}(R+r\cos{v}),r\sin{v}]. \] Of course, we cannot simply perform a random walk in the coordinate space $(u,v)$. Not only would this depend on the choice of parameterisation, but in general there exists no parameterisation for which a random walk in the coordinate space coincides with a random walk on the actual surface.
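
To make the example concrete, here is a minimal Python sketch of this parameterisation (the radii $R = 2$ and $r = 0.5$ and the function name `torus` are my own choices for illustration):

```python
import numpy as np

R, r = 2.0, 0.5  # major and minor radii (values chosen only for illustration)

def torus(u, v):
    # Maps surface coordinates (u, v) to a point (x, y, z) in 3D.
    return np.array([
        np.cos(u) * (R + r * np.cos(v)),
        np.sin(u) * (R + r * np.cos(v)),
        r * np.sin(v),
    ])
```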

Instead, in order to perform a random walk on a surface, we need to find a pair of vectors $e_1(u,v), e_2(u,v)$ in the coordinate space which represent orthonormal directions on the surface. To do this, we begin by computing the Riemannian metric, which is defined by the following matrix: \[ g_{ij} = \begin{bmatrix} g_{11} & g_{12} \\ g_{21} & g_{22} \end{bmatrix} = \begin{bmatrix} \frac{\partial \vec x}{\partial u} \cdot \frac{\partial \vec x}{\partial u} & \frac{\partial \vec x}{\partial u} \cdot \frac{\partial \vec x}{\partial v} \\ \frac{\partial \vec x}{\partial v} \cdot \frac{\partial \vec x}{\partial u} & \frac{\partial \vec x}{\partial v} \cdot \frac{\partial \vec x}{\partial v} \end{bmatrix} = \begin{bmatrix} (R+r\cos{v})^2 & 0 \\ 0 & r^2 \end{bmatrix}. \] Using the Gram-Schmidt process, we can take our vectors to be \begin{align*} e_1 & = \frac{1}{\sqrt{g_{11}}}(1,0),\\ e_2 & = \frac{1}{\sqrt{g}\sqrt{g_{11}}}(-g_{21},g_{11}), \end{align*} where $g$ is the determinant of the above matrix. In the specific case of the torus, this works out to be \begin{align*} e_1 & = ((R+r\cos{v})^{-1},0), \\ e_2 & = (0,r^{-1}). \end{align*} Now, all we have to do is choose at random a vector $e_0$ from the four vectors $\{\pm e_1, \pm e_2\}$, and the new coordinates of the particle will be given by \[ (u_1,v_1) = (u_0,v_0) + \delta e_0, \] where $\delta$ is a small number representing the step size. Repeating this step from the new position (recomputing $e_1$ and $e_2$ there) produces the walk.
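
Putting the pieces together, here is a minimal Python sketch of the whole procedure for the torus, using the frame $e_1, e_2$ derived above (the function names, the default step size, and the use of `numpy` are illustrative choices, not a prescribed implementation):

```python
import numpy as np

rng = np.random.default_rng()
R, r = 2.0, 0.5  # major and minor radii, as in the sketch above

def frame(u, v):
    # Orthonormal frame for the torus metric: g_11 = (R + r cos v)^2,
    # g_12 = g_21 = 0, g_22 = r^2, giving the e_1 and e_2 derived above.
    e1 = np.array([1.0 / (R + r * np.cos(v)), 0.0])
    e2 = np.array([0.0, 1.0 / r])
    return e1, e2

def random_walk(u0, v0, n_steps, delta=0.01):
    # At each step, pick one of the four directions +/- e1, +/- e2 at random
    # and move by delta * e0 in coordinate space, which corresponds to a
    # step of length (approximately) delta on the surface itself.
    u, v = u0, v0
    path = [(u, v)]
    for _ in range(n_steps):
        e1, e2 = frame(u, v)
        e0 = (e1, -e1, e2, -e2)[rng.integers(4)]
        u, v = u + delta * e0[0], v + delta * e0[1]
        path.append((u, v))
    return np.array(path)
```

Mapping each coordinate pair in the returned path through the parameterisation `torus` above then traces the walk as a curve on the surface in 3D.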

Monday 15 December 2014

A simple and intuitive proof of the central limit theorem

Statisticians are often tasked with estimating the average of a quantity based on only a "sample" of the true population. For example, if we want to determine the average height of a person, it would be impractical to measure the height of everybody on Earth. Instead, we measure the heights of a finite sample of people, chosen at random, and compute the average of this sample. The law of large numbers tells us that as the sample size increases, the sample average approaches the population average. But just how large does our sample need to be to achieve a desired accuracy in our estimate?

This is where the central limit theorem comes in. It is an invaluable tool, used daily by practising statisticians, that tells us how close to the true average our sample average is likely to be. We assume that we have a random sample $\{X_1,\dots,X_n\}$ of independent random variables, each distributed identically to a random variable $X$ with average $\mu$ and variance $\sigma^2$. For convenience, we will normalise $X$ so that $\mu = 0$. From this sample we may compute the sample average \[ S_n = \frac{X_1+\dots+X_n}{n}. \] If we were to repeat this procedure with a different sample of $n$ individuals, we would get a slightly different value for $S_n$. Thus, $S_n$ can itself be viewed as a random variable with a certain distribution.

Roughly speaking, the central limit theorem says that, as $n$ gets larger, $S_n$ approaches a normal distribution with average $\mu = 0$ and variance $\frac{\sigma^2}{n}$. Using this fact and the 68–95–99.7 rule for normal distributions, we can then determine the probability that our sample average lies within a certain number of standard deviations of the true value.
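
To make this concrete (with invented numbers): if heights have standard deviation $\sigma = 10$ cm and we sample $n = 400$ people, then the sample average is approximately normal with standard deviation \[ \frac{\sigma}{\sqrt{n}} = \frac{10}{20} = 0.5 \text{ cm}, \] so with roughly $95\%$ probability our estimate lies within two standard deviations, i.e. within $1$ cm, of the true average height.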

A search for an explanation of the central limit theorem will typically yield abstract proofs that rely on the Fourier transform. However, the purpose of this post is to show that the central limit theorem is in fact quite simple and intuitive, and can be derived in an elementary way. The key insight is to recall that a normal distribution describes the probability of finding a particle at a given distance from its starting point when it undergoes diffusion, that is, a random walk. It follows that to demonstrate the central limit theorem, all we have to do is figure out what our scenario has to do with a random walk! As luck would have it, we can view the sample average as the endpoint of an $n$-step random walk \[ S_n = \frac{X_1}{n}+\dots+\frac{X_n}{n}. \] By assumption, $E(X) = 0$, so the steps are unbiased and the walk has no drift. As we increase $n$, the number of steps grows while the step size shrinks to zero, so $S_n$ approaches the continuum limit of a random walk, namely a normal distribution. And that's really all there is to it!
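
We can also check this numerically. Below is a minimal Python sketch (the choice of a uniform distribution for $X$ and the sample sizes are arbitrary) that draws many samples, computes their averages, and compares the spread of those averages against the 68–95–99.7 rule:

```python
import numpy as np

rng = np.random.default_rng()

# Simulate the sample average S_n for many independent samples and check
# that it follows the 68-95-99.7 rule of a normal distribution, even though
# the underlying X (uniform on [-1, 1]) is far from normal.
n, trials = 1000, 10_000
X = rng.uniform(-1.0, 1.0, size=(trials, n))  # mu = 0, sigma^2 = 1/3
S = X.mean(axis=1)                            # one sample average per trial

sd = np.sqrt(1.0 / 3.0) / np.sqrt(n)  # predicted standard deviation sigma / sqrt(n)
for k in (1, 2, 3):
    # Fraction of sample averages within k predicted standard deviations of 0;
    # should come out near 0.68, 0.95, and 0.997 respectively.
    print(k, np.mean(np.abs(S) < k * sd))
```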

We still need to show, though, that the variance of this distribution is $\frac{\sigma^2}{n}$. Fix $n$ and write $Z_k = \frac{X_k}{n}$ for the $k$-th step of the walk, and let $T_k = Z_1 + \dots + Z_k$ be the position of the walk after $k$ steps, so that $T_n = S_n$. Then \begin{align*} E(T_k^2) & = E(T_{k-1}^2 +2T_{k-1}Z_k + Z_k^2) \\ & = E(T_{k-1}^2) + 2E(T_{k-1})E(Z_k) + E(Z_k^2) \\ & = E(T_{k-1}^2) + E(Z_k^2), \end{align*} since $T_{k-1}$ and $Z_k$ are independent variables and $E(Z_k)=0$. So each step adds the quantity $E(Z_k^2)$ to the mean squared displacement, and since every step has the same distribution, \[ E(S_n^2) = E(T_n^2) = nE(Z_1^2). \] But $E(Z_1^2) = E(\frac{1}{n^2}X^2) = \frac{\sigma^2}{n^2}$. It follows that $E(S_n^2) = \frac{\sigma^2}{n}$, which, since $E(S_n) = 0$, is precisely the variance of $S_n$.
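
As a final sanity check, the following sketch (again with arbitrary choices: a shifted exponential distribution for $X$, so that $\mu = 0$ and $\sigma^2 = 1$) estimates $nE(S_n^2)$ for several values of $n$; each estimate should come out close to $\sigma^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng()

# Empirical check that E(S_n^2) = sigma^2 / n: estimate E(S_n^2) over many
# trials and multiply by n; the products should all hover near sigma^2 = 1.
trials = 10_000
for n in (10, 100, 1000):
    X = rng.exponential(1.0, size=(trials, n)) - 1.0  # mean 0, variance 1
    S = X.mean(axis=1)
    print(n, n * np.mean(S**2))  # each line should print approximately 1
```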