Formalization of a Brownian motion and of stochastic integrals in Lean

13 Stochastic integral

The lecture notes at this link, as well as chapter 18 of [?], are good references for this chapter. Some of the proofs are taken from [?].

13.1 Total variation and Lebesgue-Stieltjes integral

TODO: in Mathlib, we can integrate with respect to the measure given by a right-continuous monotone function (StieltjesFunction.measure). This will be useful to integrate against the quadratic variation of a local martingale. However, we will also want to integrate with respect to a signed measure given by a càdlàg function with finite variation. We need to investigate what’s already in Mathlib. See Mathlib.Topology.EMetricSpace.BoundedVariation.

13.2 Square integrable martingales

In this section, \(E\) denotes a complete normed space.

Definition 13.1 Square integrable martingales

Let \(\mathcal{M}^2\) be the set of square integrable continuous martingales with respect to a filtration \(\mathcal{F}\) indexed by \(\mathbb {R}_+\),

\begin{align*} \mathcal{M}^2 = \{ M : \mathbb {R}_+ \to \Omega \to \mathbb {R} \mid M \text{ continuous martingale with } \sup _{t}\mathbb {E}[M_t^2] {\lt} \infty \} \: . \end{align*}
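As a rough illustration only (not the project's actual definition), membership in \(\mathcal{M}^2\) could be phrased in Lean using Mathlib's MeasureTheory.Martingale; the names MemM2, ℱ and the choice of index type \(\mathbb{R}_{\ge 0}\) below are illustrative assumptions.

```lean
import Mathlib

open MeasureTheory
open scoped NNReal

variable {Ω : Type*} {m0 : MeasurableSpace Ω} (μ : Measure Ω)
  (ℱ : Filtration ℝ≥0 m0)

/-- Hypothetical sketch: `M` belongs to `𝓜²` if it is a martingale with
continuous paths and uniformly bounded second moments. -/
def MemM2 (M : ℝ≥0 → Ω → ℝ) : Prop :=
  Martingale M ℱ μ ∧ (∀ ω, Continuous fun t => M t ω) ∧
    ∃ C, ∀ t, ∫ ω, (M t ω) ^ 2 ∂μ ≤ C
```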

TODO: add results about \(M_\infty \) for \(M \in \mathcal{M}^2\) .

The space \(\mathcal{M}^2\) is a Hilbert space with the inner product defined by

\begin{align*} \langle M, N \rangle = \mathbb {E}[M_\infty N_\infty ] \: . \end{align*}
Proof

13.3 Local martingales

TODO: filtrations should be assumed right-continuous and complete whenever needed.

If \(M\) is a càdlàg local martingale, then \(\Vert M \Vert ^2\) is a càdlàg local sub-martingale.

Proof
Definition 13.4 Quadratic variation

For any continuous local martingale \(M\), there exists a continuous process \([M]\) with \([M]_0 = 0\) such that \(\Vert M \Vert ^2 - [M]\) is a local martingale. That process is a.s. unique and is called the quadratic variation of \(M\). \([M]\) is defined as the predictable part of the Doob-Meyer decomposition of the local sub-martingale \(\Vert M \Vert ^2\) .

Definition 13.5 Covariation

For any continuous local martingales \(M\) and \(N\), there exists a continuous process \([M,N]\) with \([M,N]_0 = 0\) such that \(MN - [M,N]\) is a local martingale. That process is a.s. unique and is called the covariation of \(M\) and \(N\).

It can be defined by \([M, N]_t = \frac{1}{4}\left([M+N]_t - [M-N]_t \right)\) .
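As a sanity check, this polarization formula is consistent with the bracket being symmetric and bilinear: under those properties,

\begin{align*} [M+N]_t - [M-N]_t & = [M]_t + 2[M,N]_t + [N]_t - \left([M]_t - 2[M,N]_t + [N]_t\right) = 4 [M,N]_t \: . \end{align*}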

Let \(M\) and \(N\) be continuous square integrable martingales. Then

\begin{align*} \mathbb {E}\left[[M,N]_\infty \right] = \langle M - M_0, N - N_0 \rangle _{\mathcal{M}^2} \: . \end{align*}
Proof
Lemma 13.7

Let \(B\) be a standard Brownian motion. Then the quadratic variation of \(B\) is given by \([B]_t = t\) .

Proof
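A standard sketch of the argument (not necessarily the route the formalization will take): for \(s \le t\), writing \(B_t^2 = (B_t - B_s)^2 + 2 B_s (B_t - B_s) + B_s^2\) and using independence and stationarity of Brownian increments,

\begin{align*} \mathbb {E}[B_t^2 - t \mid \mathcal{F}_s] & = \mathbb {E}[(B_t - B_s)^2 \mid \mathcal{F}_s] + 2 B_s \, \mathbb {E}[B_t - B_s \mid \mathcal{F}_s] + B_s^2 - t \\ & = (t - s) + 0 + B_s^2 - t = B_s^2 - s \: , \end{align*}

so \(B_t^2 - t\) is a martingale. Since \(t \mapsto t\) is continuous, non-decreasing and zero at \(0\), a.s. uniqueness of the quadratic variation gives \([B]_t = t\).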
Definition 13.8 Continuous semi-martingale

A continuous semi-martingale is a process that can be decomposed into a local martingale and a finite variation process. More formally, a process \(X : \mathbb {R}_+ \to \Omega \to E\) is a continuous semi-martingale if there exists a continuous local martingale \(M\) and a continuous adapted process \(A\) with locally finite variation and \(A_0 = 0\) such that

\begin{align*} X_t = M_t + A_t \end{align*}

for all \(t \ge 0\). The decomposition is a.s. unique.

13.4 Stochastic integral

TODO: relax continuity of the martingales, be clear about continuous quadratic variation vs general càdlàg quadratic variation.

For \(M\) a continuous local martingale and \(X\) a stochastic process, we will use integrals of the form \(\int _0^t X_s \: d[M]_s\) . Let us explain what those integrals mean. For each \(\omega \in \Omega \), \([M](\omega )\) is a right-continuous non-decreasing function (called a Stieltjes function in Mathlib), so it defines a measure on \(\mathbb {R}_+\), denoted by \(d[M](\omega )\) . Then, for each fixed \(\omega \in \Omega \), if the function \(s \mapsto X_s(\omega )\) is integrable with respect to the measure \(d[M](\omega )\), the Bochner integral \(\int _0^t X_s(\omega ) \: d[M]_s(\omega )\) is well-defined. By \(\int _0^t X_s \: d[M]_s\), we mean the random variable \(\omega \mapsto \int _0^t X_s(\omega ) \: d[M]_s(\omega )\) . If we also vary \(t\), we get a stochastic process.
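For a fixed path, such an integral can already be expressed in Mathlib via StieltjesFunction.measure; the following is a hedged sketch (the name stieltjesIntegral is ours, not Mathlib's, and the choice of the interval \((0, t]\) is one possible convention).

```lean
import Mathlib

open MeasureTheory

/-- Hypothetical helper: the pathwise integral `∫_0^t X_s dA_s` of a
function `X` against the measure induced by a Stieltjes function `A`,
taken over the interval `(0, t]`. -/
noncomputable def stieltjesIntegral (A : StieltjesFunction) (X : ℝ → ℝ) (t : ℝ) : ℝ :=
  ∫ s in Set.Ioc 0 t, X s ∂A.measure
```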

13.4.1 Itô isometry

Let \(M \in \mathcal{M}^2\) be a continuous square integrable martingale. We define

\begin{align*} L^2(M) = L^2(\Omega \times \mathbb {R}_+, \mathcal{P}, \mathbb {P} \times d[M]) \end{align*}

in which \(\mathcal{P}\) is the predictable \(\sigma \)-algebra and \(d[M]\) is the measure induced by the quadratic variation of \(M\). The norm on that Hilbert space is \(\Vert X \Vert ^2 = \mathbb {E}\left[ \int _0^{\infty } X_t^2 \: d[M]_t \right]\) .

TODO the sources don’t use the same assumptions: predictable vs progressive (MeasureTheory.ProgMeasurable). Progressive would be more general.

Let \(V \in \mathcal{E}\) be a simple process and let \(M \in \mathcal{M}^2\). Then \(V \bullet M \in \mathcal{M}^2\) and

\begin{align*} \Vert V \bullet M \Vert _{\mathcal{M}^2}^2 & = \Vert V \Vert _{L^2(M)}^2 \: . \end{align*}
Proof

There are two steps to the proof.

First, in order to make sense of \(\Vert V \Vert _{L^2(M)}\), we define the natural linear map from \(\mathcal{E}\) to \(L^2(M)\) via SimpleProcess.toFun. (Informally, this is identifying \(\mathcal{E}\) as a subset \(\mathcal{E} \subseteq L^2(M)\).) This induces the \(L^2(M)\)-norm on \(\mathcal{E}\), using something like NormedSpace.induced.

Next, we show that integration \(V \mapsto V \bullet M\) is an isometry from \(\mathcal{E}\), equipped with the \(L^2(M)\)-norm, to \(\mathcal{M}^2\). The proof is TODO.
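For intuition, here is the standard computation behind the isometry (a sketch, not the formal proof): for a simple process \(V = \sum _i V_i \, 1_{(t_i, t_{i+1}]}\) with \(V_i\) bounded and \(\mathcal{F}_{t_i}\)-measurable, the cross terms in the expansion of \((V \bullet M)_\infty ^2\) vanish because \(M\) is a martingale, so

\begin{align*} \mathbb {E}\left[(V \bullet M)_\infty ^2\right] & = \sum _i \mathbb {E}\left[V_i^2 (M_{t_{i+1}} - M_{t_i})^2\right] = \sum _i \mathbb {E}\left[V_i^2 \left([M]_{t_{i+1}} - [M]_{t_i}\right)\right] = \mathbb {E}\left[\int _0^\infty V_t^2 \: d[M]_t\right] \: , \end{align*}

which is exactly \(\Vert V \Vert _{L^2(M)}^2\). The middle step uses \(\mathbb {E}[(M_{t_{i+1}} - M_{t_i})^2 \mid \mathcal{F}_{t_i}] = \mathbb {E}[[M]_{t_{i+1}} - [M]_{t_i} \mid \mathcal{F}_{t_i}]\), which holds since \(M^2 - [M]\) is a martingale.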

Lemma 13.11

Let \(X\in L^2(M)\) be such that \(\int _0^t X_s \: d[M]_s = 0\) for all \(t \ge 0\) a.s. Then \(X = 0\) \((\mathbb {P} \times d[M])\)-almost everywhere.

Proof

For \(B\) a measurable set of \(\mathbb {R}_+\) and \(\omega \in \Omega \), let \(\nu _\omega (B) = \int _B X_s(\omega ) \: d[M]_s(\omega )\) . This defines a signed measure on \(\mathbb {R}_+\) . If \(\int _0^t X_s(\omega ) \: d[M]_s(\omega ) = 0\) for all \(t\), then \(\nu _\omega ([0,t]) = 0\) for all \(t\). The intervals \([0,t]\) form a \(\pi \)-system generating the Borel \(\sigma \)-algebra on \(\mathbb {R}_+\), so \(\nu _\omega \) is the zero measure. Thus, for almost all \(\omega \), \(\nu _\omega \) is the zero measure.

The measure \(\nu _\omega \) is absolutely continuous with respect to the measure \(d[M](\omega )\) , and its Radon-Nikodym derivative is \(X(\omega )\) . Since \(\nu _\omega \) is the zero measure for almost all \(\omega \), we have that \(X(\omega ) = 0\) \(d[M](\omega )\)-almost everywhere for almost all \(\omega \) . Equivalently, \(X = 0\) \((\mathbb {P} \times d[M])\)-almost everywhere.

Lemma 13.12 Injectivity of the integral

Let \(X, Y \in L^2(M)\) be such that \(\int _0^t X_s \: d[M]_s = \int _0^t Y_s \: d[M]_s\) for all \(t \ge 0\) a.s. Then \(X = Y\) \((\mathbb {P} \times d[M])\)-almost everywhere.

Proof

By linearity of the integral, we have \(\int _0^t (X_s - Y_s) \: d[M]_s = 0\) for all \(t \ge 0\) a.s. Then apply Lemma 13.11 to \(X - Y\) .

Lemma 13.13

Let \(X \in L^2(M)\) be such that \(\langle X, V\rangle = 0\) for any simple process \(V\). Let \(A_t = \int _0^t X_s \: d[M]_s\). Then \(A\) is a martingale.

Proof

By Lemma 10.15, it is enough to show that for any bounded real simple process \(V\), \(\mathbb {E}[(V \bullet A)_\infty ] = 0\) .

\begin{align*} \mathbb {E}\left[(V \bullet A)_\infty \right] & = \mathbb {E}\left[ \int _0^{\infty } V_t X_t \: d[M]_t \right] \\ & = \langle X, V \rangle \\ & = 0 \: . \end{align*}

Lemma 13.14

Let \(X \in L^2(M)\) be such that \(\langle X, V\rangle = 0\) for any simple process \(V\). Then, almost surely, \(\int _0^t X_s \: d[M]_s = 0\) for all \(t\).

Proof

\(A_t := \int _0^t X_s \: d[M]_s\) is a finite variation process such that \(A_t\) is integrable for all \(t \ge 0\) . By Theorem 9.29, it is enough to show that \(A\) is a local martingale. We have by Lemma 13.13 that \(A\) is a martingale, and hence a local martingale (Lemma 9.26).

Lemma 13.15

The set of simple processes is dense in \(L^2(M)\).

Proof

Since \(L^2(M)\) is a Hilbert space, it is enough to show that if \(X \in L^2(M)\) is orthogonal to all simple processes, then \(X = 0\) . Let \(X \in L^2(M)\) be such that \(\mathbb {E}\left[ \int _0^{\infty } X_t V_t \: d[M]_t \right] = 0\) for any simple process \(V\), and let \(A_t = \int _0^t X_s \: d[M]_s\). By Lemma 13.11, it suffices to show that \(A = 0\), and this is exactly the conclusion of Lemma 13.14 .

In Lean, this is stated as: the natural linear map toFun from \(\mathcal{E}\) to \(L^2(M)\) is IsDenseInducing. The Itô isometry is then defined using IsDenseInducing.extend.
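A minimal Lean sketch of the shape of that construction, with placeholder types (𝓔 for the simple processes, L2M for \(L^2(M)\), M2 for \(\mathcal{M}^2\)) and placeholder maps (toL2, elemIntegral) standing in for the project's real definitions:

```lean
import Mathlib

variable {𝓔 L2M M2 : Type*} [TopologicalSpace 𝓔] [TopologicalSpace L2M]
  [TopologicalSpace M2] (toL2 : 𝓔 → L2M) (elemIntegral : 𝓔 → M2)

/-- Hypothetical sketch: once `toL2` is shown to be `IsDenseInducing`,
the elementary stochastic integral extends from the simple processes
to all of `L2M` via `IsDenseInducing.extend`. -/
noncomputable def itoIntegral (hd : IsDenseInducing toL2) : L2M → M2 :=
  hd.extend elemIntegral
```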

Definition 13.16 Itô isometry

Let \(M \in \mathcal{M}^2\). Then the elementary stochastic integral map \(\mathcal{E} \to \mathcal{M}^2\) defined by \(V \mapsto V \cdot M\) extends to an isometry \(L^2(M) \to \mathcal{M}^2\).

Lemma 13.17

Let \(M, N \in \mathcal{M}^2\), \(X \in L^2(M)\) and \(Y \in L^2(N)\). Then \([X \cdot M, Y \cdot N] = (XY) \cdot [M, N]\).

Proof

13.4.2 Local martingales

Definition 13.18 \(L^2_{loc}(M)\)

Let \(M\) be a continuous local martingale. We define \(L^2_{loc}(M)\) as the space of predictable processes \(X\) such that for all \(t \ge 0\), almost surely, \(\int _0^t X_s^2 \: d[M]_s {\lt} \infty \).

Definition 13.19 Stochastic integral for continuous local martingales

Let \(M\) be a continuous local martingale and let \(X \in L^2_{loc}(M)\). We define the local stochastic integral \(X \cdot M\) as the unique continuous local martingale with \((X \cdot M)_0 = 0\) such that for any continuous local martingale \(N\), almost surely,

\begin{align*} [X \cdot M, N] = X \cdot [M, N] \: . \end{align*}
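As a sanity check on this characterization, taking \(N = X \cdot M\) and applying the defining property twice (together with symmetry of the bracket) gives the expected quadratic variation of the integral:

\begin{align*} [X \cdot M] = X \cdot [M, X \cdot M] = X \cdot (X \cdot [M]) = X^2 \cdot [M] \: . \end{align*}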

13.4.3 Semi-martingales

Definition 13.20

For a continuous semi-martingale \(X = M + A\) and \(V \in L^2_{semi}(X)\) (to be defined) we define the stochastic integral as

\begin{align*} V \cdot X = V \cdot M + V \cdot A \: , \end{align*}

in which \(V \cdot M\) is the local stochastic integral of Definition 13.19 and \(V \cdot A\) is the Lebesgue-Stieltjes integral with respect to the locally finite variation process \(A\).

For \(X = M + A\) and \(Y = N + B\), we define the covariation as

\begin{align*} [X, Y] = [M, N] \: . \end{align*}

13.5 Itô formula

Theorem 13.21 Integration by parts

Let \(X\) and \(Y\) be two continuous semi-martingales. Then we have almost surely

\begin{align*} X_t Y_t - X_0 Y_0 = (X \cdot Y)_t + (Y \cdot X)_t + [X,Y]_t \: . \end{align*}
Proof
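For example, taking \(Y = X\), the formula specializes to

\begin{align*} X_t^2 - X_0^2 = 2 (X \cdot X)_t + [X]_t \: , \end{align*}

which, for \(X\) a continuous local martingale, recovers the fact that \(X^2 - [X]\) is a local martingale.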
Theorem 13.22 Itô’s formula

Let \(X^1, \ldots , X^d\) be continuous semi-martingales and let \(f : \mathbb {R}^d \to \mathbb {R}\) be a twice continuously differentiable function. Then, writing \(X = (X^1, \ldots , X^d)\), the process \(f(X)\) is a semi-martingale and we have

\begin{align*} f(X_t) & = f(X_0) + \sum _{i=1}^d \int _0^t \frac{\partial f}{\partial x_i}(X_s) \: dX^i_s + \frac{1}{2} \sum _{i,j=1}^d \int _0^t \frac{\partial ^2 f}{\partial x_i \partial x_j}(X_s) \: d[X^i, X^j]_s \: . \end{align*}
Proof
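As a standard example, take \(d = 1\), \(X = B\) a standard Brownian motion and \(f(x) = x^2\). Then \(f'(x) = 2x\), \(f''(x) = 2\), and Itô's formula together with \([B]_t = t\) (Lemma 13.7) gives

\begin{align*} B_t^2 & = 2 \int _0^t B_s \: dB_s + \frac{1}{2} \int _0^t 2 \: d[B]_s = 2 \int _0^t B_s \: dB_s + t \: . \end{align*}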