Brownian Motion

2 Gaussian distributions

2.1 Gaussian measures

2.1.1 Real Gaussian measures

Definition 2.1 Real Gaussian measure

The real Gaussian measure with mean \(\mu \in \mathbb {R}\) and variance \(\sigma ^2 {\gt} 0\) is the measure on \(\mathbb {R}\) with density \(\frac{1}{\sqrt{2 \pi \sigma ^2}} \exp \left(-\frac{(x - \mu )^2}{2 \sigma ^2}\right)\) with respect to the Lebesgue measure. The real Gaussian measure with mean \(\mu \in \mathbb {R}\) and variance \(0\) is the Dirac measure \(\delta _\mu \). We denote this measure by \(\mathcal{N}(\mu , \sigma ^2)\).
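As a sanity check, the density integrates to one: substituting \(y = (x - \mu)/\sigma\) and using the classical Gaussian integral \(\int_{\mathbb{R}} e^{-y^2/2} \, dy = \sqrt{2 \pi}\),

\begin{align*} \int_{\mathbb{R}} \frac{1}{\sqrt{2 \pi \sigma ^2}} \exp \left(-\frac{(x - \mu )^2}{2 \sigma ^2}\right) dx = \frac{1}{\sqrt{2 \pi }} \int_{\mathbb{R}} e^{-y^2/2} \, dy = 1 \: . \end{align*}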

Lemma 2.2

The characteristic function of a real Gaussian measure with mean \(\mu \) and variance \(\sigma ^2\) is given by \(x \mapsto \exp \left(i \mu x - \frac{\sigma ^2 x^2}{2}\right)\).

Proof
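A standard computation, sketched here for \(\sigma^2 {\gt} 0\): writing \(X = \mu + \sigma Z\) with \(Z \sim \mathcal{N}(0, 1)\) and completing the square in the exponent,

\begin{align*} \mathbb{E}[e^{i x X}] = e^{i \mu x} \, \mathbb{E}[e^{i \sigma x Z}] = e^{i \mu x} \cdot \frac{1}{\sqrt{2 \pi }} \int_{\mathbb{R}} e^{i \sigma x y - y^2/2} \, dy = \exp \left(i \mu x - \frac{\sigma ^2 x^2}{2}\right) \: . \end{align*}

For \(\sigma^2 = 0\) the measure is \(\delta_\mu\) and the formula reduces to \(e^{i \mu x}\).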
Lemma 2.3

The central moment of order \(2n\) of a real Gaussian measure \(\mathcal{N}(\mu , \sigma ^2)\) is given by

\begin{align*} \mathbb {E}[(X - \mu )^{2n}] = \sigma ^{2n} (2n - 1)!! \: , \end{align*}

in which \((2n - 1)!! = (2n - 1)(2n - 3) \cdots 3 \cdot 1\) is the double factorial of \(2n - 1\).

Proof
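For instance, \(\mathbb{E}[(X - \mu)^2] = \sigma^2\) for \(n = 1\) and \(\mathbb{E}[(X - \mu)^4] = 3 \sigma^4\) for \(n = 2\). The general formula follows by induction on \(n\): integrating by parts gives the recurrence \(\mathbb{E}[(X - \mu)^{2n}] = (2n - 1) \, \sigma^2 \, \mathbb{E}[(X - \mu)^{2n - 2}]\).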

2.1.2 Gaussian measures on a Banach space

That kind of generality is not needed for this project, but Mathlib already contains results about Gaussian measures on Banach spaces, so we will use them. The main reference for this section is [Hai09].

Let \(F\) be a separable Banach space.

Definition 2.4 Gaussian measure

A measure \(\mu \) on \(F\) is Gaussian if for every continuous linear form \(L \in F^*\), the pushforward measure \(L_* \mu \) is a Gaussian measure on \(\mathbb {R}\).

Lemma 2.5

A Gaussian measure is a probability measure.

Proof
Definition 2.6 Centered measure

A measure \(\mu \) on \(F\) is centered if for every continuous linear form \(L \in F^*\), \(\mu [L] = 0\).

Theorem 2.7

A finite measure \(\mu \) on \(F\) is Gaussian if and only if for every continuous linear form \(L \in F^*\), the characteristic function of \(\mu \) at \(L\) is

\begin{align*} \hat{\mu }(L) = \exp \left(i \mu [L] - \mathbb {V}_\mu [L] / 2\right) \: , \end{align*}

in which \(\mathbb {V}_\mu [L]\) is the variance of \(L\) with respect to \(\mu \).

Proof

Transformations of Gaussian measures

Lemma 2.8

Let \(F, G\) be two Banach spaces, let \(\mu \) be a Gaussian measure on \(F\) and let \(T : F \to G\) be a continuous linear map. Then \(T_*\mu \) is a Gaussian measure on \(G\).

Proof
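Indeed, for every continuous linear form \(L \in G^*\) we have \(L_*(T_*\mu) = (L \circ T)_*\mu\), and \(L \circ T \in F^*\), so this pushforward is a real Gaussian measure by Definition 2.4.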
Lemma 2.9

Let \(\mu \) be a Gaussian measure on \(F\) and let \(c \in F\). Then the translation of \(\mu \) by \(c\) (the pushforward of \(\mu \) by \(x \mapsto x + c\)) is a Gaussian measure on \(F\).

Proof
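Indeed, for \(L \in F^*\), the pushforward of the translated measure by \(L\) is \(L_*\mu \) translated by \(L(c)\), and translating \(\mathcal{N}(m, \sigma^2)\) by a constant \(a \in \mathbb{R}\) gives \(\mathcal{N}(m + a, \sigma^2)\), as can be checked on the density (or on the Dirac measure when \(\sigma^2 = 0\)).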
Lemma 2.10

The convolution of two Gaussian measures is a Gaussian measure.

Proof
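Sketch: for \(L \in F^*\), \(L_*(\mu \ast \nu) = (L_*\mu) \ast (L_*\nu)\), and the characteristic function of a convolution is the product of the characteristic functions, so by Lemma 2.2,

\begin{align*} \mathcal{N}(m_1, \sigma_1^2) \ast \mathcal{N}(m_2, \sigma_2^2) = \mathcal{N}(m_1 + m_2, \sigma_1^2 + \sigma_2^2) \: . \end{align*}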

Fernique’s theorem

Theorem 2.11

Let \(\mu \) be a finite measure on \(F\) such that \(\mu \times \mu \) is invariant under the rotation of angle \(-\frac{\pi }{4}\). Then there exists \(C {\gt} 0\) such that the function \(x \mapsto \exp (C \Vert x \Vert ^2)\) is integrable with respect to \(\mu \).

Proof
Lemma 2.12

For a Gaussian measure \(\mu \), the product measure \(\mu \times \mu \) is invariant under rotation.

Proof
Theorem 2.13 Fernique’s theorem

For a Gaussian measure, there exists \(C {\gt} 0\) such that the function \(x \mapsto \exp (C \Vert x \Vert ^2)\) is integrable.

Proof
Lemma 2.14

A Gaussian measure \(\mu \) has finite moments of all orders. In particular, there is a well defined mean \(m_\mu := \mu [\mathrm{id}]\), and for all \(L \in F^*\), \(\mu [L] = L(m_\mu )\).

Proof
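The finiteness of all moments follows from Fernique's theorem: with \(C\) as in Theorem 2.13, the bound \(e^{C \Vert x \Vert ^2} \ge \frac{(C \Vert x \Vert ^2)^n}{n!}\) gives, for every \(n\),

\begin{align*} \mu \left[\Vert x \Vert ^{2n}\right] \le \frac{n!}{C^n} \, \mu \left[e^{C \Vert x \Vert ^2}\right] {\lt} \infty \: . \end{align*}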

A Gaussian measure has finite second moment by Lemma 2.14, hence its covariance bilinear form is well defined.

2.1.3 Gaussian measures on a finite dimensional Hilbert space

We specialize directly from Banach spaces to finite dimensional Hilbert spaces since that is what we need in this project, although there are results about Gaussian measures on infinite dimensional Hilbert spaces that would be worth stating.

Lemma 2.15

A finite measure \(\mu \) on a separable Hilbert space \(E\) is Gaussian if and only if for every \(t \in E\), the characteristic function of \(\mu \) at \(t\) is

\begin{align*} \hat{\mu }(t) = \exp \left(i \mu [\langle t, \cdot \rangle ] - \mathbb {V}_\mu [\langle t, \cdot \rangle ] / 2\right) \: . \end{align*}
Proof

By Theorem 2.7, \(\mu \) is Gaussian iff for every continuous linear form \(L \in E^*\), the characteristic function of \(\mu \) at \(L\) is

\begin{align*} \hat{\mu }(L) = \exp \left(i \mu [L] - \mathbb {V}_\mu [L] / 2\right) \: . \end{align*}

Every continuous linear form \(L \in E^*\) can be written as \(L(x) = \langle t, x \rangle \) for some \(t \in E\), hence we have that \(\mu \) is Gaussian iff for every \(t \in E\),

\begin{align*} \hat{\mu }(t) = \exp \left(i \mu [\langle t, \cdot \rangle ] - \mathbb {V}_\mu [\langle t, \cdot \rangle ] / 2\right) \: . \end{align*}

Let \(E\) be a finite dimensional Hilbert space. We denote by \(\langle \cdot , \cdot \rangle \) the inner product on \(E\) and by \(\Vert \cdot \Vert \) the associated norm.

Lemma 2.16

The characteristic function of a Gaussian measure \(\mu \) on \(E\) is given by

\begin{align*} \hat{\mu }(t) = \exp \left(i \langle t, m_\mu \rangle - \frac{1}{2} \langle t, \Sigma _\mu t \rangle \right) \: . \end{align*}
Proof

By Lemma 2.15, for every \(t \in E\),

\begin{align*} \hat{\mu }(t) = \exp \left(i \mu [\langle t, \cdot \rangle ] - \mathbb {V}_\mu [\langle t, \cdot \rangle ] / 2\right) \: . \end{align*}

By Lemma 2.14, \(\mu \) has finite first moment and \(\mu [\langle t, \cdot \rangle ] = \langle t, m_\mu \rangle \).

TODO: the second moment is also finite and we can get to the covariance matrix.
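The missing step should be routine: by Lemma 2.14 the second moment is finite, so the covariance \(\Sigma_\mu\) is well defined by \(\langle s, \Sigma_\mu t \rangle = \mathrm{Cov}_\mu(\langle s, \cdot \rangle, \langle t, \cdot \rangle)\), and in particular \(\mathbb{V}_\mu[\langle t, \cdot \rangle] = \langle t, \Sigma_\mu t \rangle\), which yields the stated form.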

Lemma 2.17

A finite measure \(\mu \) on \(E\) is Gaussian if and only if there exist \(m \in E\) and a positive semidefinite \(\Sigma \) such that for all \(t \in E\), the characteristic function of \(\mu \) at \(t\) is

\begin{align*} \hat{\mu }(t) = \exp \left(i \langle t, m \rangle - \frac{1}{2} \langle t, \Sigma t \rangle \right) \: . \end{align*}

If that is the case, then \(m = m_\mu \) and \(\Sigma = \Sigma _\mu \).

Note that this lemma does not say that there exists a Gaussian measure for any such \(m\) and \(\Sigma \). We will prove that later.

Proof

Lemma 2.16 states that the characteristic function of a Gaussian measure has the desired form, with \(m = m_\mu \) and \(\Sigma = \Sigma _\mu \).

Suppose now that there exist \(m \in E\) and a positive semidefinite \(\Sigma \) such that for all \(t \in E\), \(\hat{\mu }(t) = \exp \left(i \langle t, m \rangle - \frac{1}{2} \langle t, \Sigma t \rangle \right)\).

We need to show that for all \(L \in E^*\), \(L_*\mu \) is a Gaussian measure on \(\mathbb {R}\). Every such \(L\) can be written as \(L = \langle u, \cdot \rangle \) for some \(u \in E\), so let \(u \in E\). We compute the characteristic function of \(\langle u, \cdot \rangle _*\mu \) at \(x \in \mathbb {R}\) with Lemma 1.5:

\begin{align*} \widehat{\langle u, \cdot \rangle _*\mu }(x) & = \hat{\mu }(x \cdot u) \\ & = \exp \left(i x \langle u, m \rangle - \frac{1}{2} x^2 \langle u, \Sigma u \rangle \right) \: . \end{align*}

This is the characteristic function of a Gaussian measure on \(\mathbb {R}\) with mean \(\langle u, m \rangle \) and variance \(\langle u, \Sigma u \rangle \). By Theorem 1.4, \(\langle u, \cdot \rangle _*\mu \) is Gaussian, hence \(\mu \) is Gaussian.

Definition 2.18 Standard Gaussian measure

Let \((e_1, \ldots , e_d)\) be an orthonormal basis of \(E\) and let \(\mu = \mathcal{N}(0, 1)\) be the standard Gaussian measure on \(\mathbb {R}\). The standard Gaussian measure on \(E\) is the pushforward of the product measure \(\mu \times \ldots \times \mu \) (\(d\) factors) by the map \(x \mapsto \sum _{i=1}^d x_i \cdot e_i\).
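For \(E = \mathbb{R}^d\) with the standard basis, this is simply the product measure \(\mathcal{N}(0, 1)^{\otimes d}\), i.e., the measure on \(\mathbb{R}^d\) with density \((2\pi)^{-d/2} e^{-\Vert x \Vert ^2 / 2}\) with respect to the Lebesgue measure.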

The fact that this definition does not depend on the choice of basis will be a consequence of the fact that its characteristic function does not depend on the basis.

Lemma 2.19

The standard Gaussian measure on \(E\) is centered, i.e., \(\mu [L] = 0\) for every \(L \in E^*\).

Proof
Lemma 2.20

The standard Gaussian measure is a probability measure.

Proof

Lemma 2.21

The characteristic function of the standard Gaussian measure on \(E\) is given by

\begin{align*} \hat{\mu }(t) = \exp \left(-\frac{1}{2} \Vert t \Vert ^2 \right) \: . \end{align*}
Proof
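Writing \(t = \sum_{i=1}^d t_i \cdot e_i\) with \(t_i = \langle t, e_i \rangle\), the product structure of Definition 2.18 together with Lemma 2.2 (with mean \(0\) and variance \(1\)) gives

\begin{align*} \hat{\mu }(t) = \prod_{i=1}^d \exp \left(-\frac{t_i^2}{2}\right) = \exp \left(-\frac{1}{2} \Vert t \Vert ^2 \right) \: . \end{align*}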

Lemma 2.22

The standard Gaussian measure on \(E\) is a Gaussian measure.

Proof

Since the standard Gaussian measure is a probability measure (Lemma 2.20), hence finite, we can apply Lemma 2.17: it suffices to show that the characteristic function has the required form. That form is given by Lemma 2.21, with \(m = 0\) and \(\Sigma = I\).

Lemma 2.23

The mean of the standard Gaussian measure is \(0\).

Proof
Lemma 2.24

The covariance matrix of the standard Gaussian measure is the identity matrix.

Proof
Definition 2.25 Multivariate Gaussian

The multivariate Gaussian measure on \(\mathbb {R}^d\) with mean \(m \in \mathbb {R}^d\) and covariance matrix \(\Sigma \in \mathbb {R}^{d \times d}\), with \(\Sigma \) positive semidefinite, is the pushforward measure of the standard Gaussian measure on \(\mathbb {R}^d\) by the map \(x \mapsto m + \Sigma ^{1/2} x\). We denote this measure by \(\mathcal{N}(m, \Sigma )\).
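Here \(\Sigma^{1/2}\) denotes the positive semidefinite square root of \(\Sigma\). For \(d = 1\) this recovers Definition 2.1: \(\mathcal{N}(m, \sigma^2)\) is the pushforward of \(\mathcal{N}(0, 1)\) by \(x \mapsto m + \sigma x\).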

Lemma 2.26

The mean of the multivariate Gaussian measure \(\mathcal{N}(m, \Sigma )\) is \(m\).

Proof
Lemma 2.27

The covariance matrix of the multivariate Gaussian measure \(\mathcal{N}(m, \Sigma )\) is \(\Sigma \).

Proof
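Both computations follow from Lemma 2.23 and Lemma 2.24 together with the affine change of variables: if \(X\) has law \(\mathcal{N}(0, I)\), then \(m + \Sigma^{1/2} X\) has mean \(m + \Sigma^{1/2} \, \mathbb{E}[X] = m\) and covariance matrix \(\Sigma^{1/2} \, I \, (\Sigma^{1/2})^\top = \Sigma\).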

Lemma 2.28

A multivariate Gaussian measure is a Gaussian measure.

Proof

The multivariate Gaussian measure is the pushforward of the standard Gaussian measure by an affine map, and is thus Gaussian by Lemma 2.9 and Lemma 2.8.

Lemma 2.29

The characteristic function of a multivariate Gaussian measure \(\mathcal{N}(m, \Sigma )\) is given by

\begin{align*} \hat{\mu }(t) = \exp \left(i \langle m, t \rangle - \frac{1}{2} \langle t, \Sigma t \rangle \right) \: . \end{align*}
Proof

Since the multivariate Gaussian measure is a Gaussian measure, we can apply Lemma 2.16 to it. It suffices then to show that the mean and the covariance matrix of the multivariate Gaussian measure are equal to \(m\) and \(\Sigma \), respectively. This is given by Lemma 2.26 and Lemma 2.27.

2.2 Gaussian processes

Definition 2.30 Gaussian process
#

A process \(X : T \to \Omega \to E\) is Gaussian if for every finite family \(t_1, \ldots , t_n\) of indices in \(T\), the random vector \((X_{t_1}, \ldots , X_{t_n})\) has a Gaussian distribution.
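For example, taking \(E = \mathbb{R}\), if \(\xi_1, \ldots , \xi_m\) are independent standard Gaussian random variables and \(f_1, \ldots , f_m : T \to \mathbb{R}\) are deterministic functions, then \(X_t = \sum_{j=1}^m f_j(t) \, \xi_j\) defines a Gaussian process: every vector \((X_{t_1}, \ldots , X_{t_n})\) is the image of the Gaussian vector \((\xi_1, \ldots , \xi_m)\) under a linear map, hence Gaussian by Lemma 2.8.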

Lemma 2.31

Let \(X, Y : T \to \Omega \to E\) be two stochastic processes that are versions of each other (that is, for all \(t \in T\), \(X_t =_{a.e.} Y_t\)). Then for all \(t_1, \ldots , t_n \in T\), the random vector \((X_{t_1}, \ldots , X_{t_n})\) has the same distribution as the random vector \((Y_{t_1}, \ldots , Y_{t_n})\).

Proof

For all measurable sets \(A \subseteq E^n\), we have

\begin{align*} \vert \mathbb {P}((X_{t_1}, \ldots , X_{t_n}) \in A) - \mathbb {P}((Y_{t_1}, \ldots , Y_{t_n}) \in A) \vert & \le \mathbb {P}(\exists i \in [n], X_{t_i} \ne Y_{t_i}) \\ & \le \sum _{i=1}^n \mathbb {P}(X_{t_i} \ne Y_{t_i}) \\ & = 0 \: . \end{align*}
Lemma 2.32

Let \(X, Y : T \to \Omega \to E\) be two stochastic processes that are versions of each other (that is, for all \(t \in T\), \(X_t =_{a.e.} Y_t\)). If \(X\) is a Gaussian process, then \(Y\) is a Gaussian process as well.

Proof

Being a Gaussian process is defined in terms of the distribution of finite-dimensional random vectors. By Lemma 2.31, the random vector \((Y_{t_1}, \ldots , Y_{t_n})\) has the same distribution as the random vector \((X_{t_1}, \ldots , X_{t_n})\) for all \(t_1, \ldots , t_n \in T\).

Lemma 2.33

Let \(X, Y : T \to \Omega \to E\) be two stochastic processes that are versions of each other (that is, for all \(t \in T\), \(X_t =_{a.e.} Y_t\)) and are almost surely continuous. Then \(X\) and \(Y\) are indistinguishable. That is, almost surely, \(X_t = Y_t\) for all \(t \in T\) simultaneously.

TODO: hypotheses on \(T, E\)?

Proof