Formalization of a Brownian motion and of stochastic integrals in Lean

10 Simple processes and elementary integrals

Definition 10.1 Simple process

Let \((s_k, t_k)_{k \in \{ 1, \ldots , n\} }\) with \(s_k {\lt} t_k\) be pairs of points in a linear order \(T\) with a bottom element 0. Let \((\eta _k)_{0 \le k \le n}\) be bounded random variables with values in a normed real vector space \(E\) such that \(\eta _0\) is \(\mathcal{F}_0\)-measurable and \(\eta _k\) is \(\mathcal{F}_{s_k}\)-measurable for \(k \ge 1\). Then the simple process for that sequence is the process \(V : T \to \Omega \to E\) defined by

\begin{align*} V_t = \eta _0 \mathbb {1}_{\{ 0\} }(t) + \sum _{k=1}^{n} \eta _k \mathbb {1}_{(s_k, t_k]}(t) \: . \end{align*}

Let \(\mathcal{E}_{T, E}\) be the set of simple processes indexed by \(T\) with values in \(E\).
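
To make the data concrete, here is a minimal standalone Lean sketch of one such sum representation. The names `SimpleProcessData` and `SimpleProcessData.toFun` are hypothetical, not the blueprint's actual declarations; the measurability and boundedness conditions on the \(\eta _k\) are omitted, and the bottom element is written `⊥`.

```lean
import Mathlib

/-- Hypothetical packaging of the data of Definition 10.1: one sum
representation of a simple process. Measurability and boundedness of the
random variables are omitted in this sketch. -/
structure SimpleProcessData (T Ω E : Type*) where
  n  : ℕ              -- number of stochastic intervals
  s  : Fin n → T      -- left endpoints
  t  : Fin n → T      -- right endpoints (intended: `s k < t k`)
  η₀ : Ω → E          -- value at the bottom element (`𝓕 ⊥`-measurable in the text)
  η  : Fin n → Ω → E  -- value carried on `(s k, t k]` (`𝓕 (s k)`-measurable)

/-- The process of Definition 10.1:
`V_u = η₀ · 1_{{⊥}}(u) + ∑ k, η k · 1_{(s k, t k]}(u)`. -/
def SimpleProcessData.toFun {T Ω E : Type*} [LinearOrder T] [OrderBot T]
    [AddCommMonoid E] (V : SimpleProcessData T Ω E) (u : T) (ω : Ω) : E :=
  (if u = ⊥ then V.η₀ ω else 0) +
    ∑ k : Fin V.n, if V.s k < u ∧ u ≤ V.t k then V.η k ω else 0
```
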

Definition 10.2 Elementary predictable set

A set \(A \subseteq T \times \Omega \) is an elementary predictable set if it is a finite union of sets of the form \(\{ 0\} \times B\) for \(B \in \mathcal{F}_0\) or of the form \((s, t] \times B\) with \(0 \le s {\lt} t\) in \(T\) and \(B \in \mathcal{F}_s\).
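
A possible Lean rendering, continuing the sketch above (hypothetical names again; `Filtration` is Mathlib's `MeasureTheory.Filtration`, `×ˢ` is the set product, and the time-zero rectangles are merged into a single one, which gives an equivalent definition):

```lean
open MeasureTheory

/-- Hypothetical sketch of Definition 10.2: `A` is a finite union of one
rectangle `{⊥} ×ˢ B₀` with `B₀ ∈ ℱ ⊥` and of rectangles
`Ioc (s k) (t k) ×ˢ B k` with `B k ∈ ℱ (s k)`. -/
def IsElemPredictableSet {T Ω : Type*} [LinearOrder T] [OrderBot T]
    {mΩ : MeasurableSpace Ω} (ℱ : Filtration T mΩ) (A : Set (T × Ω)) : Prop :=
  ∃ (n : ℕ) (s t : Fin n → T) (B₀ : Set Ω) (B : Fin n → Set Ω),
    MeasurableSet[ℱ ⊥] B₀ ∧
    (∀ k, s k < t k ∧ MeasurableSet[ℱ (s k)] (B k)) ∧
    A = ({⊥} : Set T) ×ˢ B₀ ∪ ⋃ k, Set.Ioc (s k) (t k) ×ˢ B k
```
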

Lemma 10.3

An elementary predictable set is measurable with respect to the predictable \(\sigma \)-algebra.

Proof

It is a finite union of sets of the form \(\{ 0\} \times B\) with \(B \in \mathcal{F}_0\) or of the form \((s, t] \times B\) with \(B \in \mathcal{F}_s\), which are measurable by Lemma 7.47.

Lemma 10.4

A simple process is predictable.

Proof

Lemma 10.5

Real simple processes generate the predictable \(\sigma \)-algebra, i.e., the predictable \(\sigma \)-algebra is the supremum of the comaps by simple processes (seen as maps from \(T \times \Omega \) to \(\mathbb {R}\)) of the Borel \(\sigma \)-algebra.

Proof

Lemma 10.6

A set \(A \subseteq T \times \Omega \) is an elementary predictable set if and only if the indicator function \(\mathbb {1}_A\) is a simple process.

Proof

Currently only the forward direction is proved, because the backward direction is not easy to state.

Lemma 10.7

The simple processes \(\mathcal{E}_{T, F}\) form an additive commutative group.

Proof
Lemma 10.8

The simple processes \(\mathcal{E}_{T, F}\) form a module over the scalars \(\mathbb {R}\).

Proof
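
As an illustration of Lemmas 10.7 and 10.8, addition of two representations in the sketch above just concatenates the interval data. This is a hypothetical sketch; the blueprint's Lean development instead equips \(\mathcal{E}_{T, F}\) with genuine `AddCommGroup` and `Module ℝ` instances.

```lean
/-- Sum of two representations: concatenate the interval data with
`Fin.append` and add the time-zero values. The associated function of the
result is the pointwise sum of the two processes, since the defining sums
simply concatenate. -/
def SimpleProcessData.add {T Ω E : Type*} [Add E]
    (V W : SimpleProcessData T Ω E) : SimpleProcessData T Ω E where
  n  := V.n + W.n
  s  := Fin.append V.s W.s
  t  := Fin.append V.t W.t
  η₀ := fun ω => V.η₀ ω + W.η₀ ω
  η  := Fin.append V.η W.η
```

Negation and scalar multiplication act on the \(\eta _k\) alone, leaving the intervals unchanged, which gives the module structure of Lemma 10.8.
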
Definition 10.9 Elementary stochastic integral

Let \(V \in \mathcal{E}_{T, E}\) be a simple process and let \(X\) be a stochastic process with values in a normed space \(F\). Let \(B\) be a continuous bilinear map from \(E \times F\) to another normed space \(G\). The general elementary stochastic integral process \(V \bullet _B X : T \to \Omega \to G\) is defined by

\begin{align*} (V \bullet _B X)_t & = \sum _{k=1}^{n} B (\eta _k, X^t_{t_k} - X^t_{s_k}) \: . \end{align*}
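
In the sketch notation from Definition 10.1, the defining formula can be written as follows (a hypothetical `elemIntegral`; the stopped process \(X^t\) is implemented by truncating the time argument with `min`, and `B` is passed as a bare function, its bilinearity being a side assumption here):

```lean
/-- Hypothetical sketch of Definition 10.9:
`(V ∙_B X)_u = ∑ k, B (η k) (X^u_{t k} - X^u_{s k})`,
where the stopped process is `X^u_v = X (min v u)`. -/
def elemIntegral {T Ω E F G : Type*} [LinearOrder T]
    [AddCommGroup F] [AddCommMonoid G] (B : E → F → G)
    (V : SimpleProcessData T Ω E) (X : T → Ω → F) (u : T) (ω : Ω) : G :=
  ∑ k : Fin V.n, B (V.η k ω) (X (min (V.t k) u) ω - X (min (V.s k) u) ω)
```
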
Lemma 10.10

For \(V, W\) two simple processes with values in \(E\) and \(F\) respectively, the process defined by \(B(V, W)_t(\omega ) = B(V_t(\omega ), W_t(\omega ))\) is a simple process.

Note: in Lean, the above is not a lemma but a definition of a simple process, together with a lemma stating that its coercion to a function is given by the formula above.

Proof

If \(V\) and \(W\) are given by the sequences \((s_i, t_i, \eta _i)\) and \((u_j, v_j, \xi _j)\) respectively, then by bilinearity of \(B\),

\begin{align*} B(V, W)_t & = B(\eta _0, \xi _0) \mathbb {1}_{\{ 0\} }(t) + \sum _{i, j} B(\eta _i, \xi _j) \mathbb {1}_{(s_i, t_i] \cap (u_j, v_j]}(t) \: , \end{align*}

and each \(B(\eta _i, \xi _j)\) is bounded and \(\mathcal{F}_{\max \{ s_i, u_j\} }\)-measurable, where \(\max \{ s_i, u_j\} \) is the left endpoint of the interval \((s_i, t_i] \cap (u_j, v_j]\) when it is nonempty.

Definition 10.11

Let \(E\) and \(F\) be normed real vector spaces. We call linear elementary stochastic integral the general elementary stochastic integral of Definition 10.9 with \(V\) being \(L(E, F)\)-valued and \(B(L, x) = L(x)\) the evaluation map. We denote it by \(V \bullet _L X\).

Definition 10.12

We call scalar elementary stochastic integral the general elementary stochastic integral of Definition 10.9 with \(V\) being \(\mathbb {R}\)-valued and \(B(r, x) = r \cdot x\) the scalar multiplication. We denote it by \(V \bullet _{\mathbb {R}} X\).
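
In the sketch notation, the two specializations of Definitions 10.11 and 10.12 read as follows (hypothetical names; `E →L[ℝ] F` is Mathlib's type of continuous \(\mathbb {R}\)-linear maps):

```lean
/-- Definition 10.11 in the sketch: `V` is `E →L[ℝ] F`-valued and `B` is
evaluation of a continuous linear map. -/
def linearElemIntegral {T Ω E F : Type*} [LinearOrder T]
    [NormedAddCommGroup E] [NormedSpace ℝ E]
    [NormedAddCommGroup F] [NormedSpace ℝ F]
    (V : SimpleProcessData T Ω (E →L[ℝ] F)) (X : T → Ω → E) : T → Ω → F :=
  elemIntegral (fun L x => L x) V X

/-- Definition 10.12 in the sketch: `V` is real-valued and `B` is scalar
multiplication. -/
def scalarElemIntegral {T Ω F : Type*} [LinearOrder T]
    [NormedAddCommGroup F] [NormedSpace ℝ F]
    (V : SimpleProcessData T Ω ℝ) (X : T → Ω → F) : T → Ω → F :=
  elemIntegral (fun r x => r • x) V X
```
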

Lemma 10.13

The general elementary stochastic integral \(V \bullet _B X\) is compatible with addition, subtraction and scalar multiplication in each of its arguments \(V\) and \(X\).

Proof

(In Lean, this is split into several lemmas about each argument and add/sub/smul.)

Immediate from the definition.

Lemma 10.14

\((V \bullet _B X)_0 = 0\) for every simple process \(V\) and stochastic process \(X\).

Proof
\begin{align*} (V \bullet _B X)_0 & = \sum _{k=1}^{n} B (\eta _k, X^0_{t_k} - X^0_{s_k}) \\ & = \sum _{k=1}^{n} B (\eta _k, X_{0} - X_{0}) \\ & = \sum _{k=1}^{n} B (\eta _k, 0) \\ & = 0 \: . \end{align*}
Lemma 10.15

Let \(X_c\) be a constant process (equal to the same random variable for all times). Then for every simple process \(V\),

\begin{align*} V \bullet _B X_c = 0 \: . \end{align*}
Proof

Every increment of the constant process vanishes, \((X_c)^t_{t_k} - (X_c)^t_{s_k} = 0\), so each term of the defining sum is \(B(\eta _k, 0) = 0\).

Lemma 10.16

Let \(E, F, G, H, I, J\) be normed real vector spaces. Let \(B_1 : E \times H \to I\), \(B_2 : F \times G \to H\), \(B_3 : E \times F \to J\) and \(B_4 : J \times G \to I\) be continuous bilinear maps such that for all \(x \in E\), \(y \in F\) and \(z \in G\), \(B_1(x, B_2(y, z)) = B_4(B_3(x, y), z)\). Then for every simple process \(V\) with values in \(E\), simple process \(W\) with values in \(F\), and stochastic process \(X\) with values in \(G\), we have

\begin{align*} V \bullet _{B_1} (W \bullet _{B_2} X) & = B_3(V, W) \bullet _{B_4} X \: . \end{align*}
Proof

Unfold definitions, use linearity.

Corollary 10.17

Let \(E, F, G\) be normed real vector spaces. Let \(B : E \times F \to G\) be a continuous bilinear map. Then for every simple process \(V\) with values in \(\mathbb {R}\), simple process \(W\) with values in \(E\), and stochastic process \(X\) with values in \(F\), we have

\begin{align*} V \bullet _{\mathbb {R}} (W \bullet _{B} X) & = (V \cdot W) \bullet _{B} X \: . \end{align*}
Proof

Use Lemma 10.16.

Lemma 10.18

\(V \bullet _{\mathbb {R}} (W \bullet _{\mathbb {R}} X) = (V \times W) \bullet _{\mathbb {R}} X\) for real-valued simple processes \(V, W\) and a stochastic process \(X\), where \(V \times W\) denotes the pointwise product.

Proof

Apply Corollary 10.17 with \(B\) the scalar multiplication.

Lemma 10.19

For simple process \(W\) with values in \(L(E, F)\), simple process \(V\) with values in \(L(F, G)\), and stochastic process \(X\) with values in \(E\), we have

\begin{align*} V \bullet _L (W \bullet _L X) & = (V \circ W) \bullet _L X \: , \end{align*}

in which \(V \circ W\) is the simple process defined by \((V \circ W)_t = V_t \circ W_t\).

Proof

Use Lemma 10.16 with \(B_1, B_2, B_4\) equal to the evaluation maps and \(B_3\) the composition of continuous linear maps.

Lemma 10.20

For \(V \in \mathcal{E}_{T, F}\), \(X : T \to \Omega \to E\) a càdlàg process and a continuous bilinear map \(B: E \times F \to G\), the elementary stochastic integral \(V \bullet _B X\) is càdlàg.

Proof

Each summand \(t \mapsto B(\eta _k, X^t_{t_k} - X^t_{s_k})\) is càdlàg since \(X\) is, and a finite sum of càdlàg processes is càdlàg.

Lemma 10.21

Let \(X\) be a stochastic process, \(V\) a simple process and \(\tau \) a stopping time. Then

\begin{align*} (V \bullet _B X)^\tau = V \bullet _B (X^\tau ) \: . \end{align*}
Proof
\begin{align*} (V \bullet _B X)^\tau _t & = (V \bullet _B X)_{t \wedge \tau } \\ & = \sum _{k=1}^{n} B (\eta _k, X^{t \wedge \tau }_{t_k} - X^{t \wedge \tau }_{s_k}) \\ & = \sum _{k=1}^{n} B (\eta _k, (X^{\tau })^t_{t_k} - (X^{\tau })^t_{s_k}) \\ & = (V \bullet _B (X^{\tau }))_t \: . \end{align*}

Lemma 10.22

For \(V\) a simple process and \(X\) a stochastic process, \((V \bullet _B X)_t\) tends to its limit process \((V \bullet _B X)_\infty \) as \(t\) goes to infinity.

Proof

That process is eventually constant.

Lemma 10.23
For \(V\) a simple process and \(X\) a stochastic process,

\begin{align*} (V \bullet _B X)_\infty & = \sum _{k=1}^{n} B (\eta _k, X_{t_k} - X_{s_k}) \: . \end{align*}
Proof

Immediate from the definition.

Lemma 10.24

An adapted integrable process \(X\) is a submartingale if and only if for every bounded simple process \(V\) with values in \(\mathbb {R}_+\), \(\mathbb {E}[(V \bullet _{\mathbb {R}} X)_\infty ] \ge 0\).

Note that in Lean \(V\) comes with the data of a sum representation: by “values in \(\mathbb {R}_+\)” we mean that the random variables in that particular sum representing \(V\) are nonnegative.

Proof

First suppose that \(X\) is a submartingale. The simple process \(V\) can be written as \(V_t = \eta _0 \mathbb {1}_{\{ 0\} }(t) + \sum _{k=1}^{n} \eta _k \mathbb {1}_{(s_k, t_k]}(t)\) for nonnegative \(\eta _k\). Then

\begin{align*} \mathbb {E}[(V \bullet _{\mathbb {R}} X)_\infty ] & = \mathbb {E}\left[\sum _{k=1}^{n} \eta _k (X_{t_k} - X_{s_k})\right] \\ & = \sum _{k=1}^{n} \mathbb {E}[\eta _k (X_{t_k} - X_{s_k})] \: . \end{align*}

It suffices to show that each term of the sum is nonnegative. Since \(\eta _k\) is \(\mathcal{F}_{s_k}\)-measurable and nonnegative, by the submartingale property we have

\begin{align*} \mathbb {E}[\eta _k (X_{t_k} - X_{s_k})] & = \mathbb {E}[\eta _k \mathbb {E}[X_{t_k} - X_{s_k} \mid \mathcal{F}_{s_k}]] \ge 0 \: . \end{align*}

This concludes the proof in one direction. Suppose now that for every bounded simple process \(V\) with values in \(\mathbb {R}_+\), \(\mathbb {E}[(V \bullet _{\mathbb {R}} X)_\infty ] \ge 0\). To show that \(X\) is a submartingale, let \(s {\lt} t\) in \(T\) and let \(A = \{ \mathbb {E}[X_t \mid \mathcal{F}_s] {\lt} X_s\} \in \mathcal{F}_s\). Define the simple process \(V\) by \(V_r = \mathbb {1}_A \mathbb {1}_{(s, t]}(r)\). Then

\begin{align*} \mathbb {E}[\mathbb {1}_A (X_t - X_s)] & = \mathbb {E}[(V \bullet _{\mathbb {R}} X)_\infty ] \ge 0 \: . \end{align*}

But, conditioning on \(\mathcal{F}_s\) and using the definition of \(A\), we also have

\begin{align*} \mathbb {E}[\mathbb {1}_A (X_t - X_s)] & = \mathbb {E}[\mathbb {1}_A (\mathbb {E}[X_t \mid \mathcal{F}_s] - X_s)] = \mathbb {E}[\min \{ \mathbb {E}[X_t \mid \mathcal{F}_s] - X_s, 0\} ] \le 0 \: . \end{align*}

Combining both inequalities, we get \(\mathbb {E}[\min \{ \mathbb {E}[X_t \mid \mathcal{F}_s] - X_s, 0\} ] = 0\). Since \(\min \{ \mathbb {E}[X_t \mid \mathcal{F}_s] - X_s, 0\} \) is nonpositive and has expectation zero, it is almost surely zero. Thus \(\mathbb {E}[X_t \mid \mathcal{F}_s] \ge X_s\) almost surely, which concludes the proof.

Lemma 10.25

An adapted integrable process \(X\) is a martingale if and only if for every bounded simple process \(V\) with values in \(\mathbb {R}\), \(\mathbb {E}[(V \bullet _{\mathbb {R}} X)_\infty ] = 0\).

Proof

There might not be an order on the type \(E\) of values of \(X\), so we cannot simply say that \(X\) is a martingale if and only if both \(X\) and \(-X\) are submartingales and conclude with Lemma 10.24. Instead, we use almost the same proof as in Lemma 10.24, replacing inequalities by equalities.

Lemma 10.26

Let \(X\) be a submartingale and \(A\) be an elementary predictable set. Then for all \(t \in T\),

\begin{align*} \mathbb {E}[(\mathbb {1}_A \bullet _{\mathbb {R}} X)_t] \le \mathbb {E}[X_t - X_0] \: . \end{align*}
Proof

Let \(A\) be an elementary predictable set and let \(t \in T\).

\begin{align*} \mathbb {E}[(\mathbb {1}_A \bullet _{\mathbb {R}} X)_t] & = \mathbb {E}[X_t - X_0] - \mathbb {E}[(\mathbb {1}_{A^c} \bullet _{\mathbb {R}} X)_t] \le \mathbb {E}[X_t - X_0] \: . \end{align*}

The inequality follows from Lemma 10.24 applied to the nonnegative bounded simple process \(\mathbb {1}_{A^c}\).

Lemma 10.27

Let \(X\) be a submartingale and \(V\) be a nonnegative bounded simple process. Then the elementary stochastic integral \(V \bullet _{\mathbb {R}} X\) is a submartingale.

Proof

We use Lemma 10.24 and have to show that for every nonnegative bounded simple process \(W\), \(\mathbb {E}[(W \bullet _{\mathbb {R}} (V \bullet _{\mathbb {R}} X))_\infty ] \ge 0\). By Lemma 10.18, this is \(\mathbb {E}[((W \times V) \bullet _{\mathbb {R}} X)_\infty ]\). Since \(W \times V\) is a nonnegative bounded simple process, this is nonnegative by Lemma 10.24 applied to \(X\).

Lemma 10.28

Let \(X\) be a martingale and \(V\) be a bounded simple process. Then the elementary stochastic integral \(V \bullet _B X\) is a martingale.

Proof

We use Lemma 10.25 and have to show that for every bounded simple process \(W\), \(\mathbb {E}[(W \bullet _{\mathbb {R}} (V \bullet _B X))_\infty ] = 0\). By Corollary 10.17, this is \(\mathbb {E}[((W \cdot V) \bullet _B X)_\infty ]\). Since \(W \cdot V\) is a bounded simple process, this is zero by Lemma 10.25 applied to \(X\).

Lemma 10.29

Let \(X\) be a stochastic process and \(\tau \) be a stopping time taking finitely many values. Then \(\mathbb {1}_{[0, \tau ]}\) is a simple process and

\begin{align*} X^\tau = X_0 + \mathbb {1}_{[0, \tau ]} \bullet _{\mathbb {R}} X \: . \end{align*}
Proof