Tuesday, January 23, 2018

The Brownian motion as scaling limit of a random walk in discrete time

I start with an infinite set of random variables $X_n$, $n = 1, 2, \ldots$, with probability density function proportional to \begin{equation*} \exp\left( -\frac{1}{2} \sum_{n =0}^{+\infty} (x_{n+1} - x_n)^2 \right) \end{equation*} with $x_0 = 0$. Random paths of $X_n$ can be generated by \begin{equation*} X_{n+1} = X_n + G_n \end{equation*} starting at $X_0 = 0$ and with $G_n$ independent standard Gaussian random variables. The $X_n$ form a random walk in discrete time ("the lattice"). The two-point correlation function is \begin{equation}\label{eq:20180123} \mathbb{E} [ X_m X_n ] = n \quad\text{if}\quad m \ge n \end{equation} Now I move to the real line: I imagine that the distance between the lattice points is $a$. Therefore, the ratio $t/a$, rounded to an integer, is the number of lattice points between $0$ and $t$. I define $B_t = a^h X_{t/a}$ for $a$ very small. The factor $a^h$ is an extra scaling to make everything work, see below. I calculate the limit \begin{equation*} \mathbb{E} [ B_s B_t] = \lim_{a \to 0}\mathbb{E} [ a^h X_{s/a} \ a^h X_{t/a}] \end{equation*} Using \eqref{eq:20180123}, this is equal to $\lim_{a \to 0} a^{2 h} \frac{t}{a}$ for $s \ge t$. This limit exists if $h = 1/2$, and is then equal to $t$. Therefore $B_t$ is the standard Brownian motion: it is a Gaussian process and has the correct two-point correlation function.
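The two-point function \eqref{eq:20180123} can be checked numerically. A minimal NumPy sketch (the number of paths and the indices $m$, $n$ are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate many random-walk paths X_{n+1} = X_n + G_n with X_0 = 0.
n_paths, n_steps = 100_000, 20
G = rng.standard_normal((n_paths, n_steps))
X = np.cumsum(G, axis=1)          # X[:, k] is X_{k+1}

# Check the two-point function E[X_m X_n] = min(m, n), here with m = 15, n = 6.
m, n = 15, 6
estimate = np.mean(X[:, m - 1] * X[:, n - 1])
print(estimate)  # close to min(m, n) = 6
```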

From the definition of the Brownian motion as a scaling limit, one can also prove the following formula. Suppose $b > 0$; then \begin{equation}\label{eq:20180122} \mathbb{E}[B_{b t_1}\cdots B_{b t_n}] = b^{n/2} \mathbb{E}[B_{t_1}\cdots B_{t_n}] \end{equation} Indeed, the left-hand side is equal to \begin{equation*} \lim_{a \to 0}\mathbb{E} \left[ a^{1/2} X_{b t_1/a}\ \cdots\ a^{1/2} X_{bt_n/a}\right] \end{equation*} Write $a = b a'$; then the limit is equal to \begin{equation*} \lim_{a' \to 0}\mathbb{E} \left[ (ba')^{1/2} X_{t_1/a'}\ \cdots \ (ba')^{1/2} X_{t_n/a'}\right] \end{equation*} This is equal to the right-hand side of \eqref{eq:20180122}.
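The $n = 2$ case of \eqref{eq:20180122} can be verified numerically with the lattice approximation $B_t = a^{1/2} X_{t/a}$. A sketch; the values of $a$, $b$, $s$, $t$ are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Approximate Brownian paths as B_t = sqrt(a) * X_{t/a} with small a.
a = 0.01
n_paths = 50_000
t_max = 2.0
X = np.cumsum(rng.standard_normal((n_paths, int(t_max / a))), axis=1)

def B(t):
    """Lattice approximation of the Brownian motion at time t."""
    return np.sqrt(a) * X[:, int(round(t / a)) - 1]

# Check E[B_{b s} B_{b t}] = b^{2/2} E[B_s B_t] with b = 2, s = 0.5, t = 1.
b, s, t = 2.0, 0.5, 1.0
lhs = np.mean(B(b * s) * B(b * t))
rhs = b * np.mean(B(s) * B(t))
print(lhs, rhs)  # both close to min(b*s, b*t) = b * min(s, t) = 1
```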

This is all quite similar to the scaling limit discussed in conformal field theory (CFT), see for example [1]. Here $h$ plays the role of the scaling dimension, and formula \eqref{eq:20180122} is the analogue of scale invariance in CFT. For my job I use the Brownian motion and more general stochastic processes. As a hobby I wanted to do something different and study CFT. These topics are related after all: CFT seems to be some kind of two-dimensional generalization of stochastic processes.

References and comments

[1] Conformal field theory and statistical mechanics (Lecture - 01) by John Cardy


Thursday, October 5, 2017

Girsanov for dummies

In quantitative finance, one uses the change of numéraire, a technique based on Girsanov's theorem. In this post I explain this concept in a very simple situation. In practice, I find that this "Girsanov's theorem for dummies" version is good enough for most calculations.

Friday, July 14, 2017

Covariant Taylor Series

I recently saw for the first time formulas that are covariant versions of Taylor series. Because they are not easy to find on the internet, I write some down here. Suppose $x_0$ and $x_1$ are two points on a manifold; then the covariant Taylor series are formulas like \begin{align*} f(x_1) &= f(x_0) + f_{;\mu}(x_0)\, \eta^{\mu} + \dfrac{1}{2}f_{;\mu\nu}(x_0)\, \eta^{\mu}\eta^{\nu} + O(\eta^3)\\ T_{\mu}(x_1) &= T_{\mu}(x_0) + T_{\mu;\alpha}(x_0)\, \eta^{\alpha} + \dfrac{1}{2}\left( T_{\mu;\alpha\beta}(x_0)+\dfrac{1}{3} R^{\sigma}_{\ \ \alpha\beta\mu}(x_0)\, T_{\sigma}(x_0)\right) \eta^{\alpha}\eta^{\beta} + O(\eta^3) \end{align*} The semicolon denotes the covariant derivative with the Levi-Civita connection. The vector $\eta^{\mu}$ is defined as follows: take a geodesic $\gamma(t)$ such that $\gamma(0) = x_0$ and $\gamma(1) = x_1$; then $\eta^{\mu} = \dot\gamma^{\mu}(0)$. The higher coefficients in the series expansion become more and more complicated formulas involving the Riemann tensor and its covariant derivatives. The formulas can be proved using normal coordinates.
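For the scalar case, a sketch of this argument is short. In Riemann normal coordinates centered at $x_0$, the geodesic is $\gamma(t) = t \eta$, so the coordinates of $x_1$ are $\eta^{\mu}$, and the Christoffel symbols vanish at $x_0$. The ordinary Taylor series \begin{equation*} f(x_1) = f(x_0) + \partial_{\mu} f(x_0)\, \eta^{\mu} + \dfrac{1}{2} \partial_{\mu}\partial_{\nu} f(x_0)\, \eta^{\mu}\eta^{\nu} + O(\eta^3) \end{equation*} then coincides term by term with the covariant one, because \begin{equation*} f_{;\mu}(x_0) = \partial_{\mu} f(x_0), \qquad f_{;\mu\nu}(x_0) = \partial_{\mu}\partial_{\nu} f(x_0) - \Gamma^{\lambda}_{\ \mu\nu}(x_0)\, \partial_{\lambda} f(x_0) = \partial_{\mu}\partial_{\nu} f(x_0) \end{equation*} For tensors, the same computation brings in derivatives of the Christoffel symbols at $x_0$, which is where the Riemann tensor terms come from.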

More information can be found in L. Alvarez-Gaumé, D. Freedman and S. Mukhi, "The Background Field Method and the Ultraviolet Structure of the Supersymmetric Nonlinear Sigma Model", 1981.

Monday, June 26, 2017

Wu-Yang monopole: numerical calculation

I have been reading the paper by Wu and Yang [1] in which they find the famous Wu-Yang monopole. In the paper there are solutions for three types of monopoles: one has an analytical form, which is the one most often quoted, but there are also two other monopoles with numerical solution only. In this post I use Python/numpy to perform numerical analysis on the latter solution. I use the same notation as in [1].
Wu and Yang obtain the following system of ordinary differential equations: \begin{align} \frac{d\Phi}{d \xi} &= \psi\label{eq:20170625a}\\ \frac{d\psi}{d \xi} &= \psi + \Phi(\Phi^2-1)\label{eq:20170626a} \end{align} Here $\xi$ is given by $r = e^{\xi}$, with $r$ the distance to the origin. The right-hand side of \eqref{eq:20170625a}-\eqref{eq:20170626a} defines the vector field $(d\Phi/d\xi, d\psi/d\xi)$ in the $(\Phi, \psi)$ plane. Its integral curves can be seen in the next figure.
The integral curves of the vector field defined by \eqref{eq:20170625a}-\eqref{eq:20170626a}.
The stationary points are marked in red.
I calculate the integral curve from the point $(\Phi,\psi) = (0,0)$ to $(1,0)$ using the SciPy function solve_bvp [2].
The integral curves of the vector field defined by \eqref{eq:20170625a}-\eqref{eq:20170626a}.
The integral curve from the stationary point $(0,0)$ to $(1,0)$ is added in red.
$\Phi(\xi)$ can be seen in the next graph. One sees that $\Phi(\xi) \to 0$ for $\xi \to -\infty$ and $\Phi(\xi) \to 1$ for $\xi \to +\infty$.
In the rest of this post I reproduce part of Table 1 in [1].
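A minimal sketch of the boundary-value computation with scipy.integrate.solve_bvp. The finite interval $[-10, 10]$, the grid size and the tanh initial guess are my own choices, not taken from [1]:

```python
import numpy as np
from scipy.integrate import solve_bvp

# Wu-Yang system: dPhi/dxi = psi, dpsi/dxi = psi + Phi*(Phi^2 - 1).
def rhs(xi, y):
    phi, psi = y
    return np.vstack([psi, psi + phi * (phi**2 - 1.0)])

# Boundary conditions Phi -> 0 (xi -> -inf) and Phi -> 1 (xi -> +inf),
# imposed at the endpoints of a large finite interval.
def bc(ya, yb):
    return np.array([ya[0], yb[0] - 1.0])

xi = np.linspace(-10.0, 10.0, 200)
y_guess = np.zeros((2, xi.size))
y_guess[0] = 0.5 * (1.0 + np.tanh(0.5 * xi))  # smooth guess going from 0 to 1

sol = solve_bvp(rhs, bc, xi, y_guess)
print(sol.status, sol.sol(10.0)[0])  # status 0 means the solver converged
```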

Monday, May 1, 2017

Variance of Markov Chain Monte Carlo

In a previous post, I discussed the bias of Markov Chain Monte Carlo (MCMC) simulation. In this post I will discuss the variance. Please see the previous post for information about the notation that I use.
If \begin{equation*} S =\frac{1}{N} \sum_{t=1}^N f(X_t) \end{equation*} then for large $N$, the variance of $S$ is approximately \begin{equation*} \operatorname{Var} S \approx \frac{1}{N} \left( C_0 + 2 \sum_{t=1}^{\infty} C_t \right) \end{equation*} where $C_t = \operatorname{Cov}\left( f(X_0), f(X_t) \right)$ is the autocovariance of the stationary chain.
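The effect of autocorrelation on the variance can be seen in a simple simulation. A sketch; the two-state chain and all parameters below are hypothetical, not from the post:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical example: a two-state chain on {0, 1} with P(stay) = 0.9.
# The stationary distribution is uniform, but samples are strongly correlated.
def run_chain(n_steps, p_stay=0.9):
    x = np.empty(n_steps, dtype=int)
    x[0] = 0
    for t in range(1, n_steps):
        x[t] = x[t - 1] if rng.random() < p_stay else 1 - x[t - 1]
    return x

# Estimate Var(S) for f(a) = a by repeating the simulation many times.
N = 2000
estimates = np.array([run_chain(N).mean() for _ in range(200)])
var_mcmc = estimates.var()

# For i.i.d. samples from the stationary distribution, Var(S) = Var(f)/N.
var_iid = 0.25 / N
print(var_mcmc / var_iid)  # roughly (1 + rho)/(1 - rho) with rho = 2*p_stay - 1
```

The ratio shows how many times larger the MCMC variance is than the naive i.i.d. estimate, which is exactly the contribution of the autocovariance terms $C_t$ for $t \ge 1$.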

Friday, April 28, 2017

Bias in Markov Chain Monte Carlo

Markov Chain Monte Carlo (MCMC) simulation can be used to calculate sums \begin{equation}\label{eq:20170427a} I = \sum_a \pi_a f(a) \end{equation} One finds a Markov process $X_t$ with stationary distribution $\pi_a$; then the sum \eqref{eq:20170427a} is approximated by \begin{equation*} S =\frac{1}{N} \sum_{t=1}^N f(X_t) \end{equation*} One can prove that under certain assumptions, \begin{equation*} \lim_{N \to \infty} \frac{1}{N} \sum_{t=1}^N f(X_t) = \sum_a \pi_a f(a) \end{equation*} This is Birkhoff's ergodic theorem. In this post I illustrate the behaviour of $\mathbb{E} S$ for large $N$.
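A minimal illustration; the three-state target distribution and the uniform Metropolis proposal are my own choices, not from the post. The chain has stationary distribution $\pi$, so $S$ should approach the exact sum \eqref{eq:20170427a}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target on {0, 1, 2} with f(a) = a, so the exact sum is
# I = 0*0.2 + 1*0.3 + 2*0.5 = 1.3.
pi = np.array([0.2, 0.3, 0.5])

def metropolis(n_steps):
    x = 0
    samples = np.empty(n_steps, dtype=int)
    for t in range(n_steps):
        proposal = rng.integers(0, 3)            # symmetric uniform proposal
        if rng.random() < pi[proposal] / pi[x]:  # Metropolis acceptance rule
            x = proposal
        samples[t] = x
    return samples

N = 200_000
S = metropolis(N).mean()
print(S)  # approaches 1.3 as N grows
```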