Tuesday, December 22, 2015

Examples of the Stieltjes transformation

I give the definition and two examples of the Stieltjes transformation.

Monday, December 14, 2015

Duality in Random Matrix Theory

I provide examples of a duality between the Gaussian symplectic ensemble and the Gaussian orthogonal ensemble.

Saturday, December 12, 2015

Gaussian Symplectic Ensemble

In this post I define the Gaussian Symplectic Ensemble (GSE). Often the GSE is defined using quaternions; here I use only complex numbers. I give precise definitions of the symplectic group, of self-dual matrices, and of the GSE, and present an algorithm to draw random matrices from the GSE.
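The complex-matrix point of view can be sketched as follows: a quaternion \( a + b\,\mathrm{j} \) corresponds to the \( 2 \times 2 \) complex block \( \begin{pmatrix} a & b \\ -\bar b & \bar a \end{pmatrix} \), so a self-dual Hermitian matrix becomes a \( 2n \times 2n \) complex matrix built from a Hermitian block \( A \) and a complex antisymmetric block \( B \). Here is a minimal Python sketch of such a sampler (the overall normalization is illustrative and not necessarily the one used in the post); the doubled spectrum is Kramers degeneracy:

```python
import numpy as np

def sample_gse(n, rng):
    """Draw a 2n x 2n Hermitian matrix with GSE (self-dual) symmetry.
    Normalization is illustrative."""
    g1 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    g2 = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    a = (g1 + g1.conj().T) / 2          # Hermitian block A
    b = (g2 - g2.T) / 2                 # complex antisymmetric block B
    return np.block([[a, b], [-b.conj(), a.conj()]])

rng = np.random.default_rng(0)
h = sample_gse(4, rng)
assert np.allclose(h, h.conj().T)       # Hermitian
ev = np.linalg.eigvalsh(h)
# every eigenvalue appears twice (Kramers degeneracy)
assert np.allclose(ev[0::2], ev[1::2])
```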

Thursday, December 3, 2015

A virial theorem in the Gaussian Unitary Ensemble

In the Gaussian Unitary Ensemble of \( n \times n \) matrices, one can calculate the following expectation value \begin{equation}\label{eq:20151201a} \mathbb{E} \left[ \sum_{ i \neq j} \dfrac{1}{ \left( \lambda_i - \lambda_j \right)^2} \right] = \dfrac{1}{2} n (n-1) \end{equation} with \( \lambda_1 , \ldots, \lambda_n \) the eigenvalues of the random matrix \( H \). I have normalized the GUE such that \( \mathbb{E} [ H_{ij} H_{kl} ] = \delta_{il} \delta_{jk} \). In this blog post, I check \eqref{eq:20151201a} with a Monte Carlo simulation in Mathematica.
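The same check can be sketched in Python (the post itself uses Mathematica). For \( n = 2 \) the right-hand side is \( 1 \); note that the estimator has heavy tails, so the sample mean converges slowly:

```python
import numpy as np

# Monte Carlo estimate of E[sum_{i != j} 1/(l_i - l_j)^2] for n = 2,
# with the normalization E[H_ij H_kl] = delta_il delta_jk.
rng = np.random.default_rng(0)
n, trials = 2, 200_000
g = (rng.standard_normal((trials, n, n))
     + 1j * rng.standard_normal((trials, n, n))) / np.sqrt(2)
h = (g + g.conj().transpose(0, 2, 1)) / np.sqrt(2)   # stacked GUE samples
lam = np.linalg.eigvalsh(h)
diff = lam[:, :, None] - lam[:, None, :]
off = ~np.eye(n, dtype=bool)                         # i != j pairs
estimate = (1.0 / diff[:, off] ** 2).sum(axis=1).mean()
print(estimate)   # should be close to n(n-1)/2 = 1
```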

Friday, November 27, 2015

Illustration of the Dyson Ornstein-Uhlenbeck process

I define the Dyson Ornstein-Uhlenbeck process as \begin{equation}\label{eq:20151125a} dX_t = -\alpha X_t dt + H \sqrt{dt} \end{equation} with \( \alpha > 0 \) and \( H \) a random matrix from the Gaussian Unitary Ensemble of \( n \times n \) Hermitian matrices. The eigenvalues \( \lambda_i(t) \) of \( X_t \) then have the following dynamics \begin{equation}\label{eq:20151125b} d\lambda_i = -\alpha \lambda_i dt + \sum_{ j \neq i} \frac{1}{\lambda_i - \lambda_j} dt + dB_i \end{equation} where \( B_1, \ldots, B_n \) are independent Brownian processes. In this post I illustrate the process \eqref{eq:20151125b} numerically.

Wednesday, November 25, 2015

Illustration of Dyson Brownian Motion

The Dyson Brownian motion is defined as \begin{equation}\label{eq:20151124a} X_{t + dt} = X_t + H \sqrt{dt} \end{equation} with \( H \) a random matrix from the Gaussian Unitary Ensemble of \( n \times n \) Hermitian matrices. It is then well-known that the dynamics of the eigenvalues \( \lambda_i(t) \) of \( X_t \) is described by the process \begin{equation}\label{eq:20151124b} d\lambda_i = \sum_{ j \neq i} \frac{1}{\lambda_i - \lambda_j} dt + dB_i \end{equation} where \( B_1, \ldots, B_n \) are independent Brownian processes. In this post I illustrate the process \eqref{eq:20151124b} numerically.
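A quick sanity check on the normalization: iterating \eqref{eq:20151124a} from \( X_0 = 0 \) gives \( \mathbb{E}[\mathrm{tr}\, X_t^2] = n^2 t \), since \( \mathbb{E}[\mathrm{tr}\, H^2] = n^2 \) with the convention \( \mathbb{E}[H_{ij} H_{kl}] = \delta_{il}\delta_{jk} \). A Python sketch of the Euler scheme (the post itself uses Mathematica):

```python
import numpy as np

# Euler scheme for X_{t+dt} = X_t + H sqrt(dt); check E[tr X_t^2] = n^2 t.
rng = np.random.default_rng(1)
n, dt, steps, trials = 10, 0.01, 100, 200    # t = steps * dt = 1

vals = []
for _ in range(trials):
    x = np.zeros((n, n), dtype=complex)
    for _ in range(steps):
        g = (rng.standard_normal((n, n))
             + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
        h = (g + g.conj().T) / np.sqrt(2)    # one GUE increment
        x = x + h * np.sqrt(dt)
    vals.append(np.trace(x @ x).real)
m = np.mean(vals)
print(m)   # should be close to n^2 * t = 100
```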

Thursday, November 19, 2015

Proof of a determinantal integration formula

While reading about random matrices I encountered the following formula in a blog post by Terence Tao.

If \( K ( x,y) \) is such that
  1. \( \int\! dx \ K(x,x) = \alpha \) 
  2. \( \int\! dy \ K(x,y) K(y,z) = K(x,z) \)
then \begin{equation}\label{eq:20151118a} \int dx_{n+1} \det_{i,j \le n+1} \left( K(x_i , x_j ) \right) = (\alpha - n) \det_{i,j \le n} \left( K(x_i , x_j ) \right) \end{equation} For simplicity I have written \( \int \) instead of \( \int_{\mathbb{R}} \). This formula is used when calculating \( n \)-point functions in the Gaussian Unitary Ensemble (GUE). Tao gives a short proof of \eqref{eq:20151118a} based on induction and the Laplace expansion of determinants. In this post, I give a proof using integration over Grassmann variables. The reason I am interested in this alternative proof is that I want to compress the calculation of \( n \)-point functions in the GUE as much as possible.
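In a discrete setting, where the integral is replaced by a sum over a finite set, a projection kernel satisfies both hypotheses exactly (with \( \alpha \) the rank), and the identity can be verified directly. A small Python check; the choice of points and of the rank is arbitrary:

```python
import numpy as np

# Discrete analogue on {0, ..., N-1}: a rank-r projection kernel K
# satisfies K @ K = K and trace(K) = r = alpha, so the identity is exact.
rng = np.random.default_rng(0)
N, r, n = 8, 3, 2
q, _ = np.linalg.qr(rng.standard_normal((N, N)))
K = q[:, :r] @ q[:, :r].T                  # rank-r projection

x = [1, 4]                                 # fixed points x_1, ..., x_n
rhs = np.linalg.det(K[np.ix_(x, x)])
lhs = sum(np.linalg.det(K[np.ix_(x + [y], x + [y])]) for y in range(N))
assert np.isclose(lhs, (r - n) * rhs)      # (alpha - n) * det_n
```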

Sunday, November 15, 2015

Spectral Density in the Gaussian Unitary Ensemble

In this post I perform numerical experiments on the spectral density in the Gaussian Unitary Ensemble (GUE).

Thursday, November 12, 2015

Proof of the Christoffel–Darboux formula without induction

In this post I prove the Christoffel–Darboux formula without using induction. The Christoffel–Darboux formula is often proved by induction; however, I find that the inductive proof does not give insight into why the formula is correct. I found the proof below in a paper by Barry Simon.

Saturday, October 31, 2015

On the moments of the Gaussian orthogonal ensemble

I am currently reading Mehta's book on random matrices and decided to implement a Mathematica program to calculate expectation values in the Gaussian orthogonal ensemble (GOE). This is the set of symmetric \( n \times n \) matrices \( H \) with probability measure \begin{equation*} P(H) dH = \mathcal{N} \prod_{ i \le j} dH_{ij} \ \exp \left( -\frac{1}{2}\ \mathrm{tr}(H^2)\right) \end{equation*} My program uses recursion based on Wick's theorem (also called Isserlis' theorem, according to Wikipedia) and some rules for summing over indices in \( n \) dimensions. I used ideas from Derevianko's program Wick.m.
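With this measure, \( H_{ii} \sim N(0,1) \) and \( H_{ij} \sim N(0,\tfrac12) \) for \( i < j \), so Wick's theorem gives for example \( \mathbb{E}[\mathrm{tr}\, H^2] = n + 2 \cdot \frac{n(n-1)}{2} \cdot \frac12 = \frac{n(n+1)}{2} \). A quick Monte Carlo cross-check in Python (my program itself is in Mathematica):

```python
import numpy as np

# Monte Carlo check of E[tr H^2] = n(n+1)/2 for the GOE measure
# exp(-tr(H^2)/2): symmetrizing a standard Gaussian matrix gives
# H_ii ~ N(0,1) and H_ij ~ N(0,1/2) for i < j.
rng = np.random.default_rng(0)
n, trials = 4, 20_000
g = rng.standard_normal((trials, n, n))
h = (g + g.transpose(0, 2, 1)) / 2
m = np.einsum('tij,tji->t', h, h).mean()   # average of tr(H^2)
print(m)   # should be close to n(n+1)/2 = 10
```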

Friday, October 30, 2015

Some expectation values in the Gaussian orthogonal ensemble

I calculate some expectation values when the probability measure is given by \begin{equation*} P(H) dH = \mathcal{N} \prod_{ i \le j} dH_{ij} \ \exp \left( -\frac{1}{2}\ \mathrm{tr}(H^2)\right) \end{equation*} Here \( H \) is a symmetric \( n \times n \) matrix and \(\mathcal{N}\) is the normalization factor. This is a special case of the Gaussian orthogonal ensemble (GOE).

Sunday, October 25, 2015

Invariance of the Gaussian orthogonal ensemble

On page 17 of his book, Mehta proves the following result about the ensemble of symmetric \( n \times n \) matrices \( H \):
  1. If the ensemble is invariant under every transformation \( H \mapsto R H R^T \) with \( R \) an orthogonal matrix
  2. and if all components \( H_{ij}, i \le j \) are independent
then the probability measure has the form \begin{equation}\label{eq:20151025e} \prod_{ i \le j} dH_{ij} \ \exp \left( -a\ \mathrm{tr}(H^2) + b\ \mathrm{tr} H + c \right) \end{equation} with \( a, b \) and \(c \) constants.

I prove here the converse, namely, the probability measure \eqref{eq:20151025e} is invariant under transformations \( H \mapsto R H R^T \).

Gaussian orthogonal ensemble

I am currently reading Mehta's book on random matrices (the first edition because it is thinner than the third). I plan to write some blog posts while studying this book. In chapter 2, Mehta defines the Gaussian orthogonal ensemble. This is the set of symmetric \( n \times n \) matrices \( H \) with probability density \begin{equation}\label{eq:20151025a} \prod_{ i \le j} dH_{ij} \ \exp \left( -a\ \mathrm{tr}(H^2) + b\ \mathrm{tr} H + c \right) \end{equation} with \( a, b \) and \(c \) constants. It can be calculated that this density function is invariant under transformations \begin{equation}\label{eq:20151025b} H \mapsto R H R^T \end{equation} with \( R \) an orthogonal matrix.
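The invariance is easy to check numerically: orthogonal conjugation preserves \( \mathrm{tr}\, H^2 \) and \( \mathrm{tr}\, H \), so the density \eqref{eq:20151025a} is unchanged. A small Python sketch (I use without proof that the Jacobian of \( H \mapsto R H R^T \) on symmetric matrices equals 1):

```python
import numpy as np

# Check that tr((R H R^T)^2) = tr(H^2) and tr(R H R^T) = tr(H)
# for a random symmetric H and a random orthogonal R.
rng = np.random.default_rng(0)
n = 5
g = rng.standard_normal((n, n))
h = (g + g.T) / 2                                   # symmetric matrix
r, _ = np.linalg.qr(rng.standard_normal((n, n)))    # random orthogonal R
hr = r @ h @ r.T
assert np.isclose(np.trace(hr @ hr), np.trace(h @ h))
assert np.isclose(np.trace(hr), np.trace(h))
```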

This is completely analogous to the vector case. In the vector case the probability density is \begin{equation}\label{eq:20151025c} \prod_{ i} dx_i \ \exp \left( -a\ \sum_i x^2_i + c \right) \end{equation} This density is invariant under rotations \begin{equation}\label{eq:20151025d} x \mapsto R x \end{equation}

One can see that \eqref{eq:20151025d} is the vector representation of the orthogonal group and \eqref{eq:20151025b} is the representation on symmetric matrices. Because symmetric matrices do not form an irreducible representation of the orthogonal group (one can subtract the trace), I wonder at this point whether one also studies something like a "Gaussian orthogonal ensemble on traceless symmetric matrices".

Thursday, October 8, 2015

Sylvester, On Arithmetical Series, 1892

In his Mémoire, a famous paper in the history of number theory, Chebyshev obtains bounds on the function \begin{equation*} \psi(x) = \sum_{p^{\alpha} \le x} \log p \end{equation*} where the sum is over prime powers. Sylvester improved Chebyshev's bounds in his paper On Arithmetical Series by applying two strategies:
  1. Whereas Chebyshev analyzed the expression \begin{equation*} T(x) - T\left( \frac{x}{2} \right)- T\left( \frac{x}{3} \right)- T\left( \frac{x}{5} \right)+ T\left( \frac{x}{30} \right) \end{equation*} where \( T(x) = \sum_{ 1 \le n \le x} \log n\ \), Sylvester analyzed more complex expressions, for example \begin{equation*} T(x) - T\left( \frac{x}{2} \right)- T\left( \frac{x}{3} \right)- T\left( \frac{x}{5} \right)+ T\left( \frac{x}{6} \right)-T\left( \frac{x}{7} \right)+ T\left( \frac{x}{70} \right)- T\left( \frac{x}{210} \right) \end{equation*}
  2. Sylvester also applied an iterative method so that he could strengthen bounds he obtained in a recursive manner.
Method (1) is explained in part II, §1 of the paper; method (2) in part II, §2. Sylvester obtained the best bounds by combining (1) and (2). The whole exercise is now mostly of historical interest because the prime number theorem, which is the statement \( \psi(x) \sim x \), was proved four years after the publication of Sylvester's paper. Nevertheless, in this post I will see how method (2) works in a simple situation. I will of course obtain bounds that are weaker than Sylvester's. More elaborations on method (1) can be found in this post.
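The leading behaviour of Chebyshev's five-term combination is easy to see numerically: \( T(y) = y \log y - y + O(\log y) \), the \( x \log x \) terms cancel, and what remains is \( A x \) with \( A = \frac{1}{2}\log 2 + \frac{1}{3}\log 3 + \frac{1}{5}\log 5 - \frac{1}{30}\log 30 \approx 0.92129 \). A short Python check:

```python
import numpy as np

# Evaluate T(x) - T(x/2) - T(x/3) - T(x/5) + T(x/30) at x = 10^5
# and compare with A*x, A = Chebyshev's constant ~ 0.92129.
logs = np.log(np.arange(1, 100_001))
T = lambda y: logs[:int(y)].sum()      # T(y) = sum_{n <= y} log n
x = 100_000.0
val = T(x) - T(x/2) - T(x/3) - T(x/5) + T(x/30)
A = np.log(2)/2 + np.log(3)/3 + np.log(5)/5 - np.log(30)/30
print(val / x, A)   # the two numbers agree to several decimal places
```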

Tuesday, September 29, 2015

A quite short proof of Bertrand's postulate

In the Wikipedia article about Bertrand's postulate, three proofs are mentioned: Chebyshev's proof, Ramanujan's proof and Erdős's proof. In this post, I give a simplified version of Chebyshev's proof.

Friday, September 25, 2015

A proof of Bertrand's postulate by Sylvester

In his paper "On Tchebycheff's Theory of the Totality of the Prime Numbers Comprised within Given Limits" from 1881, Sylvester gives an outline of a proof of Bertrand's postulate. Namely, he writes "the formulae above indicated would suffice to prove M. Bertrand's postulate, and would lead to an equation somewhat simpler in form than that led to by M. Tchebycheff's process". In this post, I provide more detail on this proof.

Wednesday, September 16, 2015

Numerical sum over primes

In his Mémoire Chebyshev uses the bounds he found on the function \( \theta(x) =\sum_{p\le x} \log p\) to numerically evaluate sums over primes \begin{equation*} \sum_{p =2 }^{\infty} f(p) \end{equation*} I explain his method in this post; to simplify the presentation, I use less sophisticated bounds on \( \theta(x) \) than Chebyshev did. As an application, I numerically evaluate the sum \begin{equation*} \sum_{p =2 }^{\infty} \frac{1}{p \log p} \end{equation*} with an explicit bound on the error.
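For orientation, here is a naive Python computation of the partial sums. Chebyshev's point is precisely that direct summation converges too slowly (the tail beyond \( x \) is roughly \( 1/\log x \)), which is why bounds on \( \theta(x) \) are needed to control the tail:

```python
import numpy as np

# Partial sum of sum_p 1/(p log p) over primes p <= 10^5.
N = 100_000
sieve = np.ones(N + 1, dtype=bool)
sieve[:2] = False
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = False
p = np.nonzero(sieve)[0].astype(float)
partial = (1.0 / (p * np.log(p))).sum()
print(partial)   # roughly 1.55; the full sum is reported to be ~1.6366
```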

Thursday, September 10, 2015

Sign patterns of the Möbius function

I test some of the statements in the paper "Sign patterns of the Liouville and Möbius function". On page 6 in the paper one can find "the three events \( \mu(n) = +1, \mu(n) = 0, \mu(n) = -1 \) occur with asymptotic probability \( \frac{1}{2 \zeta(2)}, 1 - \frac{1}{\zeta(2)}, \frac{1}{2 \zeta(2)} \) respectively". Here, \( \mu \) is the Möbius function and \(\zeta \) is Riemann's zeta function. I interpret the concept of asymptotic probability in a naive way and just count these events in Mathematica for \( n \le 100 \).
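A minimal version of this count can also be done in a few lines of Python (the post uses Mathematica); the predicted fractions are \( \frac{1}{2\zeta(2)} \approx 0.304 \) and \( 1 - \frac{1}{\zeta(2)} \approx 0.392 \):

```python
def mobius(n):
    """Möbius function by trial-division factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # squared prime factor
            result = -result
        d += 1
    return -result if n > 1 else result

vals = [mobius(n) for n in range(1, 101)]
plus, zero, minus = vals.count(1), vals.count(0), vals.count(-1)
print(plus, zero, minus)   # fractions near 0.304, 0.392, 0.304
```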

Saturday, September 5, 2015

Variations on a theme by Chebyshev

In his paper "Mémoire sur les nombres premiers", Chebyshev calculates a lower and upper bound on the function \begin{equation*} \psi(x) = \sum_{n \le x } \Lambda(n) \end{equation*} from an analysis of the expression \begin{equation}\label{eq:2} T(x) - T\left( \frac{x}{2} \right)- T\left( \frac{x}{3} \right)- T\left( \frac{x}{5} \right)+ T\left( \frac{x}{30} \right) \end{equation} with \( T(x) = \sum_{ 1 \le n \le x} \log n\ \). In this post, I quickly review Chebyshev's argument and then calculate which bounds on \(\psi(x)\) one can get from expressions that are similar to \eqref{eq:2}.

Wednesday, August 26, 2015

Lower and upper bound for Chebyshev's psi function

In this post, I prove the inequalities \begin{equation}\label{eq:1} \log 2 \cdot x - 3 \log x \le \psi(x) \le 2 \log 2 \cdot x + \frac{3}{\log 2} \log^2 x \end{equation} for all real \( x \ge 2 \). Here \( \psi(x) = \sum_{n \le x } \Lambda(n) \) with \( \Lambda \) the von Mangoldt function. This is a second post motivated by Chebyshev's paper "Mémoire sur les nombres premiers" from 1852. In that paper, Chebyshev uses a more complex variant of the calculation below to obtain stronger inequalities. Similar inequalities can also be found on page 50 of Murty's book.
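The inequalities are straightforward to test numerically; here is a Python sketch that checks them at integer \( x \) up to 2000:

```python
import math

# Check log(2)*x - 3*log(x) <= psi(x) <= 2*log(2)*x + (3/log 2)*log(x)^2.
N = 2000
lam = [0.0] * (N + 1)                  # von Mangoldt function Lambda(n)
for p in range(2, N + 1):
    if all(p % q for q in range(2, int(p ** 0.5) + 1)):   # p prime
        pk = p
        while pk <= N:                 # Lambda(p^k) = log p
            lam[pk] = math.log(p)
            pk *= p

psi = 0.0
for x in range(2, N + 1):
    psi += lam[x]                      # psi(x) = sum_{n <= x} Lambda(n)
    lo = math.log(2) * x - 3 * math.log(x)
    hi = 2 * math.log(2) * x + 3 / math.log(2) * math.log(x) ** 2
    assert lo <= psi <= hi
```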

Sunday, August 23, 2015

How not to prove Bertrand's postulate

This is a first post motivated by Chebyshev's paper "Mémoire sur les nombres premiers" from 1852. In this paper Chebyshev proves Bertrand's postulate that there is always a prime between \( a \) and \( 2a \) for all \( a \ge 2 \). Chebyshev bases his proof on inequalities for the function \begin{equation}\label{HNTPeq:1} T(x) - T\left( \frac{x}{2} \right)- T\left( \frac{x}{3} \right)- T\left( \frac{x}{5} \right)+ T\left( \frac{x}{30} \right) \end{equation} with \begin{equation*} T(x) = \sum_{ 1 \le n \le x} \log n \end{equation*} Chebyshev's paper is quite easy to read and can be found at this link. In this post I will follow Chebyshev's reasoning on an easier version of \eqref{HNTPeq:1}.

Saturday, July 25, 2015

Neue Empirische Daten Über Die Zahlentheoretische Funktion sigma(n), von Sterneck, 1913. Part II

This post is about a formula that von Sterneck used to calculate values of \( M(n) \) for \( n \) up to 5,000,000. The formula is
\begin{equation} \sum_{d \le \sqrt{x}}\!\!\phantom{}^{\prime}\ M\left( \frac{x}{d} \right) + \sum_{d \le \sqrt{x}} \mu(d) \omega_P\left( \frac{x}{d} \right) - \omega_P(\sqrt{x})M(\sqrt{x}) =0 \end{equation}
where
  • \( \mu \) the Möbius function
  • \( M(x) = \sum_{ n \le x} \mu(n) \) the Mertens function
  • \( \omega_P(x) = \# \left\{ k \in\mathbb{N} \,\middle|\, 1 \le k \le x \text{ and } p_1 \nmid k \text{ and } \cdots \text{ and } p_j \nmid k\right\} \) with \(P = \left\{ p_1, \ldots, p_j \right\}\) a non-empty set of primes
  • \(\sum_{k \le x}^{\prime}\) the sum over integers \(k\) with \( p_1 \nmid k \text{ and } \cdots \text{ and } p_j \nmid k\)
  • \( x \ge p_1 p_2 \cdots\ p_j \)
An example
I calculate an example for \( x = 50 \) and \( P = \{ 2,3 \} \). The first term is
\begin{equation}
\sum_{d \le 7}\!\phantom{}^{\prime}\ M\left( \frac{50}{d} \right) = M(50) + M(10) + M(7)
\end{equation}
The second term is
\begin{equation}
\sum_{d \le 7} \mu(d) \omega_P\left( \frac{50}{d} \right)
\end{equation}
If I use
\begin{equation}
\omega_P(x) = \lfloor x \rfloor - \left\lfloor \frac{x}{2} \right\rfloor -  \left\lfloor \frac{x}{3}\right\rfloor + \left\lfloor \frac{x}{6} \right\rfloor
\end{equation}
then it can be calculated that the second term is zero. The third term is \( \omega_P(\sqrt{50}) M(\sqrt{50}) = 3 M(7) \). Altogether, von Sterneck's formula gives the recursive relation
\begin{equation}
M(50) + M(10) +M(7) + 0 - 3 M(7) =0
\end{equation}
Because \( M(10) = -1 \) and \( M(7) = -2 \) this gives \( M(50) = -3 \).
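The example can be verified in a few lines of Python (the names `mu`, `M` and `w` below are ad-hoc helpers, not from the paper):

```python
import math

def mu(n):
    """Möbius function by trial division."""
    res, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0
            res = -res
        d += 1
    return -res if n > 1 else res

M = lambda x: sum(mu(k) for k in range(1, int(x) + 1))     # Mertens function
w = lambda x: int(x) - int(x/2) - int(x/3) + int(x/6)      # omega_P, P = {2,3}

x = 50
sq = math.isqrt(x)                                         # sq = 7
t1 = sum(M(x / d) for d in range(1, sq + 1) if d % 2 and d % 3)
t2 = sum(mu(d) * w(x / d) for d in range(1, sq + 1))
t3 = w(math.sqrt(x)) * M(math.sqrt(x))
assert t1 + t2 - t3 == 0                                   # von Sterneck's formula
assert M(50) == -3 and M(10) == -1 and M(7) == -2 and t2 == 0
```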

Sketch of the proof of the formula
The proof proceeds as follows. Define the function \( e_P \)
\begin{equation}
e_P(m) = \begin{cases} 1 \quad\text{if}\quad p_1 \nmid m \text{ and } \cdots \text{ and } p_j \nmid m\\ 0 \quad\text{otherwise} \end{cases}
\end{equation}
then calculate the Dirichlet convolution 
\begin{equation} e_P \ast\mu = 1_{\langle P \rangle} \cdot \mu
\end{equation} where \( 1_{\langle P \rangle}\) is the characteristic function on the set \( \langle P \rangle = \left\{ n \in \mathbb{N} \middle|\quad \forall p: p | n \implies p \in P \right\} \). Because \( \sum_{ k \le x}e_P(k) = \omega_P(x) \), Dirichlet's hyperbola method gives
\begin{equation}
\sum_{n \le x} 1_{\langle P \rangle} (n) \mu(n) = \sum_{d \le \sqrt{x}}e_P(d) \ M\left( \frac{x}{d} \right) + \sum_{d \le \sqrt{x}} \mu(d) \omega_P\left( \frac{x}{d} \right) - \omega_P(\sqrt{x})M(\sqrt{x})
\end{equation}
The result then follows because the left-hand side is zero for \( x \ge p_1 p_2 \cdots\ p_j \).

Von Sterneck used this formula with \( P = \{ 2,3,5,7 \} \). In that way he could calculate values of \( M(n) \) for \( n \) up to 5,000,000 based on a pre-computed table of \( M(n) \) for \( n \) up to 500,000.

Sunday, July 19, 2015

Neue Empirische Daten Über Die Zahlentheoretische Funktion sigma(n), von Sterneck, 1913. Part I

In the paper in the title, von Sterneck performs numerical tests on the inequality
\begin{equation}\label{Neueeq:1}
| M (n) | \le \sqrt{n}
\end{equation}
where \( M(n) = \displaystyle\sum_{k=1}^n \mu(k) \) is the Mertens function. The inequality \eqref{Neueeq:1} is of historical interest, because it implies the correctness of the Riemann hypothesis. In 1985, however, Odlyzko and te Riele showed that \eqref{Neueeq:1} is false, although an explicit number \( n \) for which it fails is still not known. Nevertheless, I find von Sterneck's paper still interesting because I want to see how \( M(n) \) was calculated for large values of \( n \) before the advent of (electronic) computers.

Friday, June 26, 2015

On Kronecker's lemma

While reading some parts of the nice lecture notes on Analytic Number Theory by Hildebrand, I encountered Kronecker's lemma on page 58.
If \( f : \mathbb N \to \mathbb C \) is a function, and \( s \in \mathbb C \) with \( \Re(s) > 0 \) is a complex number such that the Dirichlet series \( L(f,s) = \displaystyle\sum_{n=1}^{\infty} \frac{f(n)}{n^s} \) converges, then \( \displaystyle\sum_{n \le x } f(n) = o(x^s) \quad\text{for}\quad x \to + \infty \)

As an example, take \( f(n) = 1 \), then \( \displaystyle\sum_{n=1}^{\infty} \frac{1}{n^{s}} \) converges for \( s > 1 \) and indeed \( \displaystyle\sum_{n \le x } 1 = \lfloor x \rfloor = o(x^s) \) for all \( s > 1 \).

As a second example, take \( f(n) = \frac{1}{n} \); then \( \displaystyle\sum_{n=1}^{\infty} \frac{1}{n^{1+s}} \) converges for \( s > 0 \) and indeed \( \displaystyle\sum_{n \le x } \frac{1}{n} = \log x + O(1) = o(x^s) \) for all \( s > 0 \).
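The second example is easy to illustrate numerically; here is a small Python check with \( s = \tfrac12 \):

```python
import numpy as np

# For f(n) = 1/n the Dirichlet series converges for every s > 0, and
# Kronecker's lemma predicts H(x) = sum_{n <= x} 1/n = o(x^s); take s = 1/2.
s = 0.5
ratios = []
for x in [10**2, 10**4, 10**6]:
    H = (1.0 / np.arange(1, x + 1)).sum()   # harmonic sum H(x)
    ratios.append(H / x**s)
print(ratios)   # decreasing toward 0
```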

What I like about Kronecker's lemma is that its statement and proof involve real analysis only; one does not need to know anything about the behaviour of the function \(L(f,s) \) for complex numbers \( s \). On the other hand, in analytic number theory, one often uses Perron's formula to estimate sums \( \sum_{n \le x } f(n) \) for number theoretic functions \( f \). One then needs to have information about the zeros of the Dirichlet series, and its behaviour for large imaginary values of \( s \) to make Perron's formula work. All this information is thus not needed when using Kronecker's lemma.

Although a proof of Kronecker's lemma can be found in Hildebrand's lecture notes, I write here a proof for the case \( s \in \mathbb R \). This proof is based on the French Wikipedia article about Kronecker's lemma and is very transparent.

Friday, June 12, 2015

Tauberian Theorem for Cesàro mean

Because the proof of the prime number theorem is related to Tauberian theorems of various kinds (see for example this paper by Mueger), I decided to prove a Tauberian theorem in a very simple case, namely that of Cesàro means. It seems difficult to find the proof online, so I write one down here.

Sunday, May 31, 2015

Variation on a theme by Mertens

One of the formulas that Mertens proves in his paper is
\begin{equation}\label{eq1} \sum_{ p \le x } \frac{ \log p}{p} = \log x + R \quad\text{ with }\quad | R | \le 2
\end{equation}

I use Mertens's method to prove the variant

\begin{equation}\label{eq2} \sum_{ n \le x } \frac{ \Lambda(n)}{n} = \log x + R \quad \text{ with } \quad -1 \le R \le 2
\end{equation}

Here, \( \Lambda \) is the von Mangoldt function. Equation \eqref{eq2} is thus similar to \eqref{eq1}, except that the sum is over prime powers instead of primes. It turns out that \eqref{eq2} is easier to prove than \eqref{eq1}, because including the prime powers actually reduces the number of estimates one has to make. The proof of \eqref{eq2} serves as a light version of the proof of \eqref{eq1} and gives insight into how the proof of \eqref{eq1} is organized.
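The claimed bounds on \( R \) can be tested numerically; here is a Python sketch that checks \( -1 \le R \le 2 \) at integer \( x \) up to \( 10^4 \):

```python
import math

# Track R(x) = sum_{n <= x} Lambda(n)/n - log x and check -1 <= R <= 2.
N = 10_000
lam = [0.0] * (N + 1)                  # von Mangoldt function
for p in range(2, N + 1):
    if all(p % q for q in range(2, int(p ** 0.5) + 1)):   # p prime
        pk = p
        while pk <= N:
            lam[pk] = math.log(p)
            pk *= p

acc = 0.0
for x in range(2, N + 1):
    acc += lam[x] / x
    R = acc - math.log(x)
    assert -1 <= R <= 2
```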

Tuesday, May 19, 2015

Mertens, Ein Beitrag zur analytischen Zahlentheorie, 1874

I started reading the article in the title to see how Mertens proves his famous theorems.

Introduction

Mertens says he will prove the formulas
$$\sum_{ p \le x } \frac{1}{p}= \log\log x + C$$ and $$\prod_{ p \le x } \frac{1}{1 - \frac{1}{p}} = C' \log x$$ The sum and the product are over primes \( p \). Mertens will also calculate the constants \( C \) and \( C' \). Mertens also says that the formulas already appear in a paper by Chebyshev, but with a doubtful proof. These formulas are now known as Mertens' theorems.

Monday, May 11, 2015

Upper and lower bounds on the totient summatory function

In this post I use manipulations as in Ramanujan's proof of Bertrand's postulate to calculate explicit upper and lower bounds on the totient summatory function \( \Phi(x) = \sum_{n \le x} \phi(n) \). This shows how Ramanujan's proof works in a simpler situation.
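Since \( \Phi(x) \sim \frac{3}{\pi^2} x^2 \), any such bounds can be sanity-checked against a direct evaluation. A short Python sketch using a sieve for \( \phi \):

```python
import numpy as np

# Phi(x) = sum_{n <= x} phi(n) ~ 3 x^2 / pi^2; check the ratio at x = 10^5.
N = 100_000
phi = np.arange(N + 1, dtype=np.int64)
for p in range(2, N + 1):
    if phi[p] == p:                    # p is prime (value still untouched)
        phi[p::p] -= phi[p::p] // p    # multiply multiples by (1 - 1/p)
ratio = phi[1:].sum() / N**2
print(ratio, 3 / np.pi**2)             # both about 0.3040
```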

Sunday, May 10, 2015

Ramanujan's proof of Bertrand's postulate, 1919

I read Ramanujan's proof of Bertrand's postulate. I liked the paper because, at only two pages, it is very short. Moreover, the paper contains explicit upper and lower bounds on some arithmetical functions; such bounds can be tested in Mathematica, whereas the more common statements involving big-O notation cannot. Murty refers to Ramanujan's proof on page 38 of his book, but rephrases it using big-O notation. I find it refreshing to read the original version of the proof instead. This post contains thoughts on the structure of Ramanujan's proof.

Monday, April 27, 2015

"On Liouville's Function" by Lehman, 1960

I calculate some of the results in the paper On Liouville's Function, R. Sherman Lehman, Math. Comp. 14 (1960), 311-320, with Mathematica. I find this paper interesting because it is an old paper that used a computer to find an explicit counterexample to a conjecture of Pólya. I also find it interesting because thinking about how to compute number theoretic functions helps in understanding the formulas better. The names of the sections below are the same as in the paper; equation numbers refer to the equations in the paper.