Brownian Motion

3 Gaussian distributions

3.1 Gaussian measures

3.1.1 Real Gaussian measures

Definition 3.1 Real Gaussian measure

The real Gaussian measure with mean \(\mu \in \mathbb {R}\) and variance \(\sigma ^2 {\gt} 0\) is the measure on \(\mathbb {R}\) with density \(\frac{1}{\sqrt{2 \pi \sigma ^2}} \exp \left(-\frac{(x - \mu )^2}{2 \sigma ^2}\right)\) with respect to the Lebesgue measure. The real Gaussian measure with mean \(\mu \in \mathbb {R}\) and variance \(0\) is the Dirac measure \(\delta _\mu \). We denote this measure by \(\mathcal{N}(\mu , \sigma ^2)\).
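For example, the standard Gaussian measure \(\mathcal{N}(0, 1)\) has density \(x \mapsto \frac{1}{\sqrt{2 \pi }} e^{-x^2/2}\) with respect to the Lebesgue measure.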

Lemma 3.2

The characteristic function of a real Gaussian measure with mean \(\mu \) and variance \(\sigma ^2\) is given by \(x \mapsto \exp \left(i \mu x - \frac{\sigma ^2 x^2}{2}\right)\).
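For example, the characteristic function of \(\mathcal{N}(0, 1)\) is \(x \mapsto e^{-x^2/2}\). Taking \(\sigma ^2 = 0\) in the formula gives \(x \mapsto e^{i \mu x}\), the characteristic function of \(\delta _\mu \), so the formula is consistent with the degenerate case of Definition 3.1.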

Proof
Lemma 3.3

For \(X \sim \mathcal{N}(\mu , \sigma ^2)\), the central moment of order \(2n\) is given by

\begin{align*} \mathbb {E}[(X - \mu )^{2n}] = \sigma ^{2n} (2n - 1)!! \: , \end{align*}

in which \((2n - 1)!! = (2n - 1)(2n - 3) \cdots 3 \cdot 1\) is the double factorial of \(2n - 1\).

Proof
After translating to reduce to the centered case, we use the change of variables \(x = \sqrt{2 \sigma ^2 u}\) (renaming \(u\) back to \(x\)) and recognize the Gamma function:

\begin{align*} \mathbb {E}[(X - \mu )^{2n}] & = \int _{-\infty }^\infty (x - \mu )^{2n} \frac{1}{\sqrt{2 \pi \sigma ^2}} e^{-\frac{(x - \mu )^2}{2 \sigma ^2}} \mathrm dx \\ & = \int _{-\infty }^\infty x^{2n} \frac{1}{\sqrt{2 \pi \sigma ^2}} e^{-\frac{x^2}{2 \sigma ^2}} \mathrm dx \\ & = 2 \int _{0}^\infty x^{2n} \frac{1}{\sqrt{2 \pi \sigma ^2}} e^{-\frac{x^2}{2 \sigma ^2}} \mathrm dx \\ & = 2 \int _{0}^\infty \left(\sqrt{2 \sigma ^2 x}\right)^{2n} \frac{1}{\sqrt{2 \pi \sigma ^2}} e^{-x} \frac{\sigma ^2}{\sqrt{2 \sigma ^2 x}} \mathrm dx \\ & = \frac{\sigma ^{2n} 2^n}{\sqrt{\pi }} \int _{0}^\infty x^{n - 1/2} e^{-x} \mathrm dx \\ & = \frac{\sigma ^{2n} 2^n}{\sqrt{\pi }} \Gamma (n + 1/2) \\ & = \frac{\sigma ^{2n} 2^n}{\Gamma (1/2)} \left( \prod _{k=0}^{n-1} (k + 1/2) \right) \Gamma (1/2) \\ & = \sigma ^{2n} \prod _{k=0}^{n-1} (2k + 1) \\ & = \sigma ^{2n} (2n - 1)!! \: . \end{align*}
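For the first two even orders this recovers the familiar values: \(n = 1\) gives \(\mathbb {E}[(X - \mu )^2] = \sigma ^2\), the variance, and \(n = 2\) gives \(\mathbb {E}[(X - \mu )^4] = 3 \sigma ^4\). The odd central moments all vanish by symmetry of the density around \(\mu \).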

3.1.2 Gaussian measures on a Banach space

That level of generality is not needed for this project, but Mathlib already contains results about Gaussian measures on a Banach space, so we will use them. The main reference for this section is [Hai09].

Let \(F\) be a separable Banach space.

Definition 3.4 Gaussian measure

A measure \(\mu \) on \(F\) is Gaussian if for every continuous linear form \(L \in F^*\), the pushforward measure \(L_* \mu \) is a Gaussian measure on \(\mathbb {R}\).
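For example, every Dirac measure \(\delta _x\) with \(x \in F\) is Gaussian: for any \(L \in F^*\), \(L_* \delta _x = \delta _{L(x)} = \mathcal{N}(L(x), 0)\).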

Lemma 3.5

A Gaussian measure is a probability measure.

Proof

Theorem 3.6

A finite measure \(\mu \) on \(F\) is Gaussian if and only if for every continuous linear form \(L \in F^*\), the characteristic function of \(\mu \) at \(L\) is

\begin{align*} \hat{\mu }(L) = \exp \left(i \mu [L] - \mathbb {V}_\mu [L] / 2\right) \: , \end{align*}

in which \(\mu [L]\) denotes the integral of \(L\) with respect to \(\mu \) and \(\mathbb {V}_\mu [L]\) is the variance of \(L\) with respect to \(\mu \).

Proof

Transformations of Gaussian measures

Lemma 3.7

Let \(F, G\) be two Banach spaces, let \(\mu \) be a Gaussian measure on \(F\) and let \(T : F \to G\) be a continuous linear map. Then \(T_*\mu \) is a Gaussian measure on \(G\).

Proof
Lemma 3.8

Let \(\mu \) be a Gaussian measure on \(F\) and let \(c \in F\). Then the translation of \(\mu \) by \(c\) (the pushforward of \(\mu \) by the map \(x \mapsto x + c\)) is a Gaussian measure on \(F\).

Proof
Lemma 3.9

The convolution of two Gaussian measures is a Gaussian measure.
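On \(\mathbb {R}\), convolution adds means and variances: by Lemma 3.2, the characteristic function of \(\mathcal{N}(\mu _1, \sigma _1^2) \ast \mathcal{N}(\mu _2, \sigma _2^2)\) is the product

\begin{align*} x \mapsto \exp \left(i \mu _1 x - \frac{\sigma _1^2 x^2}{2}\right) \exp \left(i \mu _2 x - \frac{\sigma _2^2 x^2}{2}\right) = \exp \left(i (\mu _1 + \mu _2) x - \frac{(\sigma _1^2 + \sigma _2^2) x^2}{2}\right) \: , \end{align*}

which is the characteristic function of \(\mathcal{N}(\mu _1 + \mu _2, \sigma _1^2 + \sigma _2^2)\).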

Proof

Fernique’s theorem

Theorem 3.10

Let \(\mu \) be a finite measure on \(F\) such that \(\mu \times \mu \) is invariant under the rotation of angle \(-\frac{\pi }{4}\). Then there exists \(C {\gt} 0\) such that the function \(x \mapsto \exp (C \Vert x \Vert ^2)\) is integrable with respect to \(\mu \).
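Concretely, with the standard convention for rotations, the rotation of angle \(-\frac{\pi }{4}\) is the map \((x, y) \mapsto \left(\frac{x + y}{\sqrt{2}}, \frac{y - x}{\sqrt{2}}\right)\) on \(F \times F\).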

Proof
Lemma 3.11

For a Gaussian measure \(\mu \), \(\mu \times \mu \) is invariant under rotation.

Proof
Theorem 3.12 Fernique’s theorem

For a Gaussian measure, there exists \(C {\gt} 0\) such that the function \(x \mapsto \exp (C \Vert x \Vert ^2)\) is integrable.
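As a sanity check on \(\mathbb {R}\): for \(\mu = \mathcal{N}(0, \sigma ^2)\) with \(\sigma ^2 {\gt} 0\), the product of \(e^{C x^2}\) with the density is proportional to \(e^{-\left(\frac{1}{2 \sigma ^2} - C\right) x^2}\), so \(x \mapsto e^{C x^2}\) is integrable with respect to \(\mu \) exactly when \(C {\lt} \frac{1}{2 \sigma ^2}\).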

Proof
Lemma 3.13

A Gaussian measure \(\mu \) has finite moments of all orders. In particular, the mean \(m_\mu := \mu [\mathrm{id}]\) is well-defined, and for all \(L \in F^*\), \(\mu [L] = L(m_\mu )\).

Proof
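One possible argument, from Fernique's theorem (Theorem 3.12): for every \(n \in \mathbb {N}\) and every \(x \in F\), \(e^{C \Vert x \Vert ^2} \ge \frac{C^n \Vert x \Vert ^{2n}}{n!}\), hence

\begin{align*} \mu [\Vert \cdot \Vert ^{2n}] \le \frac{n!}{C^n} \, \mu \left[e^{C \Vert \cdot \Vert ^2}\right] {\lt} \infty \: , \end{align*}

and the odd moments are bounded by the even ones.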

A Gaussian measure has finite second moment by Lemma 3.13, hence its covariance bilinear form is well-defined. On a Hilbert space \(E\) we denote it by \(C'_\mu \), so that \(C'_\mu (s, t)\) is the covariance of \(\langle s, \cdot \rangle \) and \(\langle t, \cdot \rangle \) with respect to \(\mu \).

3.1.3 Gaussian measures on a finite dimensional Hilbert space

We specialize directly from Banach spaces to finite dimensional Hilbert spaces since that is what we need in this project, although there are results for Gaussian measures on infinite dimensional Hilbert spaces that would be worth stating.

Lemma 3.14

A finite measure \(\mu \) on a Hilbert space \(E\) is Gaussian if and only if for every \(t \in E\), the characteristic function of \(\mu \) at \(t\) is

\begin{align*} \hat{\mu }(t) = \exp \left(i \mu [\langle t, \cdot \rangle ] - \mathbb {V}_\mu [\langle t, \cdot \rangle ] / 2\right) \: . \end{align*}
Proof

By Theorem 3.6, \(\mu \) is Gaussian iff for every continuous linear form \(L \in E^*\), the characteristic function of \(\mu \) at \(L\) is

\begin{align*} \hat{\mu }(L) = \exp \left(i \mu [L] - \mathbb {V}_\mu [L] / 2\right) \: . \end{align*}

By the Riesz representation theorem, every continuous linear form \(L \in E^*\) can be written as \(L(x) = \langle t, x \rangle \) for some \(t \in E\), hence we have that \(\mu \) is Gaussian iff for every \(t \in E\),

\begin{align*} \hat{\mu }(t) = \exp \left(i \mu [\langle t, \cdot \rangle ] - \mathbb {V}_\mu [\langle t, \cdot \rangle ] / 2\right) \: . \end{align*}

Let \(E\) be a separable Hilbert space. We denote by \(\langle \cdot , \cdot \rangle \) the inner product on \(E\) and by \(\Vert \cdot \Vert \) the associated norm.

Lemma 3.15

The characteristic function of a Gaussian measure \(\mu \) on \(E\) is given by

\begin{align*} \hat{\mu }(t) = \exp \left(i \langle t, m_\mu \rangle - \frac{1}{2} C'_\mu (t, t)\right) \: . \end{align*}
Proof

By Lemma 3.14, for every \(t \in E\),

\begin{align*} \hat{\mu }(t) = \exp \left(i \mu [\langle t, \cdot \rangle ] - \mathbb {V}_\mu [\langle t, \cdot \rangle ] / 2\right) \: . \end{align*}

By Lemma 3.13, \(\mu \) has finite first moment and \(\mu [\langle t, \cdot \rangle ] = \langle t, m_\mu \rangle \). By the same lemma, \(\mu \) has finite second moment and for any \(t\) we have \(\mathbb {V}_\mu [\langle t, \cdot \rangle ] = C'_\mu (t, t)\).

Lemma 3.16

A finite measure \(\mu \) on \(E\) is Gaussian if and only if there exist \(m \in E\) and a symmetric positive semidefinite bilinear form \(C\) on \(E\) such that for all \(t \in E\), the characteristic function of \(\mu \) at \(t\) is

\begin{align*} \hat{\mu }(t) = \exp \left(i \langle t, m \rangle - \frac{1}{2} C(t, t)\right) \: , \end{align*}

If that’s the case, then \(m = m_\mu \) and \(C = C'_\mu \).

Note that this lemma does not say that there exists a Gaussian measure for any such \(m\) and \(C\). We will prove that later.

Proof

Lemma 3.15 states that the characteristic function of a Gaussian measure has the desired form, with \(m = m_\mu \) and \(C = C'_\mu \).

Suppose now that there exist \(m \in E\) and \(C\) positive semidefinite such that for all \(t \in E\), \(\hat{\mu }(t) = \exp \left(i \langle t, m \rangle - \frac{1}{2} C(t, t)\right)\).

We need to show that for all \(L \in E^*\), \(L_*\mu \) is a Gaussian measure on \(\mathbb {R}\). Such an \(L\) can be written as \(\langle u, \cdot \rangle \) for some \(u \in E\), so let \(u \in E\). We compute the characteristic function of \(\langle u, \cdot \rangle _*\mu \) at \(x \in \mathbb {R}\) with Lemma 1.5:

\begin{align*} \widehat{\langle u, \cdot \rangle _*\mu }(x) & = \hat{\mu }(x \cdot u) \\ & = \exp \left(i x \langle u, m \rangle - \frac{1}{2} x^2 C(u, u)\right) \: . \end{align*}

This is the characteristic function of a Gaussian measure on \(\mathbb {R}\) with mean \(\langle u, m \rangle \) and variance \(C(u, u)\). By Theorem 1.4, \(\langle u, \cdot \rangle _*\mu \) is Gaussian, hence \(\mu \) is Gaussian.

By Lemma 3.15, we deduce that for any \(t \in E\) we have

\[ \exp \left(i\langle t, m \rangle - \frac{1}{2} C(t, t)\right) = \exp \left(i\langle t, m_\mu \rangle - \frac{1}{2} C'_\mu (t, t)\right). \]

In particular, for any \(t\) there exists \(n_t \in \mathbb {Z}\) such that

\[ i\langle t, m \rangle - \frac{1}{2} C(t, t) = i\langle t, m_\mu \rangle - \frac{1}{2} C'_\mu (t, t) + 2i\pi n_t. \]

We deduce that \(t \mapsto n_t\) is a continuous map from \(E\) to \(\mathbb {Z}\), and thus must be constant because \(E\) is connected. By looking at the value at \(t = 0\), we deduce that \(n_t = 0\) for all \(t\). Taking real and imaginary parts, we obtain that for any \(t\),

\[ \langle t, m \rangle = \langle t, m_\mu \rangle \quad \text{and} \quad C(t, t) = C'_\mu (t, t). \]

We immediately deduce that \(m = m_\mu \). Moreover, because \(C\) and \(C'_\mu \) are symmetric, they are characterized by their values on the diagonal. Indeed, for any \(x, y\),

\[ C(x, y) = \frac{1}{2} (C(x + y, x + y) - C(x, x) - C(y, y)). \]

We deduce that \(C = C'_\mu \).

Lemma 3.17

Two Gaussian measures \(\mu \) and \(\nu \) on a separable Hilbert space are equal if and only if they have the same mean and the same covariance.

Proof

The forward direction is immediate.

For the converse direction, it is enough to show that \(\mu \) and \(\nu \) have the same characteristic function by Theorem 1.4. As they are both Gaussian, their characteristic functions only depend on their mean and covariance by Lemma 3.15. Thus they are equal.

Definition 3.18 Standard Gaussian measure

Let \((e_1, \ldots , e_d)\) be an orthonormal basis of \(E\) and let \(\mu = \mathcal{N}(0, 1)\) be the standard Gaussian measure on \(\mathbb {R}\). The standard Gaussian measure on \(E\) is the pushforward measure of the product measure \(\mu \times \ldots \times \mu \) by the map \(x \mapsto \sum _{i=1}^d x_i \cdot e_i\).

The fact that this definition does not depend on the choice of basis will be a consequence of the fact that its characteristic function does not depend on the basis.
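For instance, on \(E = \mathbb {R}^d\) with the standard basis, the map \(x \mapsto \sum _{i=1}^d x_i \cdot e_i\) is the identity, so the standard Gaussian measure on \(\mathbb {R}^d\) is exactly the product measure \(\mu \times \ldots \times \mu \) of \(d\) copies of the standard Gaussian measure on \(\mathbb {R}\).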

Lemma 3.19

For \(\mu _1, \ldots , \mu _d\) probability measures on \(\mathbb {R}\) and \(f : \mathbb {R} \to \mathbb {R}\) integrable with respect to \(\mu _i\), we have

\begin{align*} \int _x f(x_i) \, d(\mu _1 \times \ldots \times \mu _d)(x) = \int _x f(x) \, d\mu _i(x) \: . \end{align*}
Proof

As \(f\) is integrable, we can use Fubini's theorem to obtain

\[ \int f(x_i) \, d(\mu _1 \times \ldots \times \mu _d)(x) = \int f(x) \, d\mu _i(x) \times \prod _{j \ne i} \int 1 \, d\mu _j(x) = \int f(x) \, d\mu _i(x) \]

because the \(\mu _j\)s are probability measures.

Lemma 3.20

The standard Gaussian measure on \(E\) is centered, i.e., \(\mu [L] = 0\) for every \(L \in E^*\).

Proof
Lemma 3.21

The standard Gaussian measure is a probability measure.

Proof
Lemma 3.22

The characteristic function of the standard Gaussian measure on \(E\) is given by

\begin{align*} \hat{\mu }(t) = \exp \left(-\frac{1}{2} \Vert t \Vert ^2 \right) \: . \end{align*}
Proof

Denote by \(\nu \) the standard Gaussian measure on \(\mathbb {R}\). This is a straightforward computation:

\begin{align*} \hat{\mu }(t) = \int \exp \left(i\langle t, \sum _{j=1}^d x_j \cdot e_j \rangle \right) d(\nu \times \ldots \times \nu )(x) & = \int \exp \left(\sum _{j=1}^d ix_j\langle t, e_j \rangle \right) d(\nu \times \ldots \times \nu )(x) \\ & = \int \prod _{j=1}^d \exp \left(ix_j\langle t, e_j \rangle \right) d(\nu \times \ldots \times \nu )(x) \\ & = \prod _{j=1}^d \int \exp \left(ix\langle t, e_j \rangle \right) d\nu (x) \\ & = \prod _{j=1}^d \exp \left(-\frac{\langle t, e_j \rangle ^2}{2}\right) \\ & = \exp \left(-\frac{1}{2} \Vert t \Vert ^2 \right) \: , \end{align*}

where the factorization into a product of integrals follows from Fubini's theorem as in Lemma 3.19, the Gaussian integrals are evaluated with Lemma 3.2, and the last equality is Parseval's identity \(\sum _{j=1}^d \langle t, e_j \rangle ^2 = \Vert t \Vert ^2\) for the orthonormal basis \((e_1, \ldots , e_d)\).

Lemma 3.23

The standard Gaussian measure on \(E\) is a Gaussian measure.

Proof

Since the standard Gaussian measure is a probability measure (hence finite) by Lemma 3.21, we can apply Lemma 3.16, which states that it suffices to show that the characteristic function has a particular form. That form is given by Lemma 3.22, taking \(m = 0\) and \(C = \langle \cdot , \cdot \rangle \).

Lemma 3.24

The mean of the standard Gaussian measure is \(0\).

Proof

Lemma 3.25

The covariance matrix of the standard Gaussian measure is the identity matrix.

Proof

From Lemma 3.22, we know that for all \(t \in E\),

\[ \hat{\mu }(t) = \exp \left(-\frac{\| t\| ^2}{2}\right) = \exp \left(-\frac{\langle t, \mathrm{I}t\rangle }{2}\right). \]

As the identity is positive semidefinite, we deduce from Lemma 3.16 that the covariance matrix \(\Sigma _\mu \) is the identity matrix.

Definition 3.26 Multivariate Gaussian

The multivariate Gaussian measure on \(\mathbb {R}^d\) with mean \(m \in \mathbb {R}^d\) and covariance matrix \(\Sigma \in \mathbb {R}^{d \times d}\), with \(\Sigma \) positive semidefinite, is the pushforward measure of the standard Gaussian measure on \(\mathbb {R}^d\) by the map \(x \mapsto m + \Sigma ^{1/2} x\). We denote this measure by \(\mathcal{N}(m, \Sigma )\).
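Here \(\Sigma ^{1/2}\) denotes the positive semidefinite square root of \(\Sigma \). For \(d = 1\) and \(\Sigma = (\sigma ^2)\), the map is \(x \mapsto m + \sigma x\), and the pushforward of the standard Gaussian measure on \(\mathbb {R}\) by this map is the real Gaussian measure \(\mathcal{N}(m, \sigma ^2)\) of Definition 3.1.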

Lemma 3.27

The mean of the multivariate Gaussian measure \(\mathcal{N}(m, \Sigma )\) is \(m\).

Proof

Lemma 3.28

The covariance matrix of the multivariate Gaussian measure \(\mathcal{N}(m, \Sigma )\) is \(\Sigma \).

Proof

Lemma 3.29

A multivariate Gaussian measure is a Gaussian measure.

Proof

The multivariate Gaussian measure is the pushforward of the standard Gaussian measure by an affine map, and is thus Gaussian by Lemma 3.8 and Lemma 3.7.

Lemma 3.30

The characteristic function of a multivariate Gaussian measure \(\mathcal{N}(m, \Sigma )\) is given by

\begin{align*} \hat{\mu }(t) = \exp \left(i \langle m, t \rangle - \frac{1}{2} \langle t, \Sigma t \rangle \right) \: . \end{align*}
Proof

Since the multivariate Gaussian measure is a Gaussian measure by Lemma 3.29, we can apply Lemma 3.15 to it. It then suffices to show that the mean and the covariance matrix of the multivariate Gaussian measure are \(m\) and \(\Sigma \), respectively. This is given by Lemma 3.27 and Lemma 3.28.

3.2 Gaussian processes

Definition 3.31 Gaussian process

A process \(X : T \to \Omega \to E\) is Gaussian if for all \(t_1, \ldots , t_n \in T\), the random vector \((X_{t_1}, \ldots , X_{t_n})\) has a Gaussian distribution.
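For example, a family \((X_t)_{t \in T}\) of independent random variables with \(X_t \sim \mathcal{N}(0, 1)\) for all \(t\) is a Gaussian process: for distinct \(t_1, \ldots , t_n \in T\), the random vector \((X_{t_1}, \ldots , X_{t_n})\) has distribution \(\mathcal{N}(0, \mathrm{I}_n)\), the standard Gaussian measure on \(\mathbb {R}^n\), which is a Gaussian measure by Lemma 3.23.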

Lemma 3.32

Let \(X, Y : T \to \Omega \to E\) be two stochastic processes that are modifications of each other (that is, for all \(t \in T\), \(X_t =_{a.e.} Y_t\)). If \(X\) is a Gaussian process, then \(Y\) is a Gaussian process as well.

Proof

Being a Gaussian process is defined in terms of the distributions of finite-dimensional random vectors. By Lemma 2.5, the random vector \((Y_{t_1}, \ldots , Y_{t_n})\) has the same distribution as the random vector \((X_{t_1}, \ldots , X_{t_n})\) for all \(t_1, \ldots , t_n \in T\). Since the latter is Gaussian, so is the former, hence \(Y\) is a Gaussian process.