Formalization of a Brownian motion and of stochastic integrals in Lean

12 Doob-Meyer Theorem

This chapter starts with the derivation of a Komlós lemma, which is a useful tool to extract converging sequences of convex combinations from bounded sequences of functions. Then, we give a short review of the properties of the Doob decomposition of an adapted process indexed by a discrete set, and finally we follow [BSV12], which gives an elementary and short proof of the Doob-Meyer theorem.

12.1 Komlós Lemma

Lemma 12.1
#

Let \((f_n)_{n\in \mathbb {N}}\) be a sequence in a vector space \(E\) and let \(\phi : E \to \mathbb {R}_+\) be a convex function such that \((\phi (f_n))_{n\in \mathbb {N}}\) is a bounded sequence. For \(\delta {\gt} 0\), let \(S_\delta = \{ (f, g) \mid \phi (f)/2 + \phi (g)/2 - \phi ((f+g)/2) \ge \delta \} \). Then there exist \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that for all \(\delta {\gt} 0\), for \(N\) large enough and \(n, m \ge N\), \((g_n, g_m) \notin S_\delta \).

Proof

Let \(B\) be a bound of \((\phi (f_n))_{n\in \mathbb {N}}\). Then for all \(n\in \mathbb {N}\) and \(g\in convex(f_n,f_{n+1},\cdots )\) we have \(\phi (g)\le B\) by convexity of \(\phi \). Let \(r_n = \inf \{ \phi (g) \mid g\in convex(f_n, f_{n+1},\ldots )\} \). By construction \((r_n)_{n\in \mathbb {N}}\) is nondecreasing. Let \(A = \sup _{n \ge 1} r_n\), which is finite (as \(A \le B\)), and for each \(n \ge 1\) we may pick some \(g_n\in convex(f_n, f_{n+1},\ldots )\) such that \(\phi (g_n) \le A+1/n\), by the definitions of \(\inf \) and \(\sup \).

Let \(\delta {\gt} 0\) and \(\varepsilon \in (0, \delta /4)\). By properties of \(\sup \) there exists \(\bar{n} \ge 1\) such that \(r_{\bar{n}} \ge A-\varepsilon \) and such that \(\frac{1}{\bar{n}} \le \varepsilon \). Let \(m \ge k \ge \bar{n}\). We have \((g_k+g_m)/2 \in convex(f_k,f_{k+1},\ldots )\), and since \((r_n)_{n\in \mathbb {N}}\) is nondecreasing it follows that \(\phi ((g_k+g_m)/2) \ge r_k \ge r_{\bar{n}} \ge A - \varepsilon \). Hence, since \(\phi (g_k), \phi (g_m) \le A + 1/\bar{n}\),

\begin{align*} \phi (g_k)/2 + \phi (g_m)/2 - \phi ((g_k+g_m)/2) & \le (A + \tfrac{1}{\bar{n}}) - (A - \varepsilon ) \\ & = \tfrac{1}{\bar{n}} + \varepsilon \le 2 \varepsilon \\ & {\lt} \delta \: . \end{align*}

Thus, for \(n, m \ge \bar{n}\), \((g_n, g_m) \notin S_\delta \).
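In Lean, the statement of this lemma could be sketched as follows. The theorem name `komlos_aux`, the encoding of \(convex(f_n, f_{n+1}, \cdots )\) via `convexHull`, and the typeclass assumptions are illustrative guesses, not the project's actual declarations.

```lean
-- Sketch of a possible Lean statement of Lemma 12.1; names are hypothetical.
theorem komlos_aux {E : Type*} [AddCommGroup E] [Module ℝ E]
    (f : ℕ → E) (φ : E → ℝ) (hφ : ConvexOn ℝ Set.univ φ)
    (B : ℝ) (hB : ∀ n, φ (f n) ≤ B) :
    ∃ g : ℕ → E,
      -- `g n` is a convex combination of the tail `f n, f (n+1), …`
      (∀ n, g n ∈ convexHull ℝ (Set.range fun k => f (n + k))) ∧
      -- eventually, no pair `(g n, g m)` lies in `S δ`
      ∀ δ : ℝ, 0 < δ → ∃ N, ∀ n ≥ N, ∀ m ≥ N,
        φ (g n) / 2 + φ (g m) / 2 - φ ((2⁻¹ : ℝ) • (g n + g m)) < δ := by
  sorry
```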

Lemma 12.2
#

Let \(H\) be a Hilbert space and \((f_n)_{n\in \mathbb {N}}\) a bounded sequence in \(H\). Then there exist functions \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(H\).

Proof

Consider \(\phi : H \to \mathbb {R}_+\) defined by \(\phi (f) = \| f\| _2^2\), which is convex. Then Lemma 12.1 applied to \((f_n)_{n\in \mathbb {N}}\) and \(\phi \) gives us functions \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that for every \(\delta {\gt}0\) there exists \(N\) such that for \(n,m\geq N\), \((g_n,g_m)\notin S_\delta \). Thus for every \(\delta {\gt}0\) there exists \(N\) such that for \(n,m\geq N\),

\begin{align*} \| g_n\| _2^2/2 + \| g_m\| _2^2/2 - \| (g_n+g_m)/2\| _2^2 & {\lt} \delta \: . \end{align*}

But the left-hand side is equal to \(\| g_n - g_m\| _2^2/4\) by the parallelogram identity, hence \((g_n)_{n\in \mathbb {N}}\) is a Cauchy sequence in \(H\) and thus converges in \(H\) by completeness.
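Explicitly, the parallelogram identity \(\Vert a+b\Vert ^2 + \Vert a-b\Vert ^2 = 2\Vert a\Vert ^2 + 2\Vert b\Vert ^2\) gives

\begin{align*} \left\Vert \frac{g_n+g_m}{2}\right\Vert _2^2 & = \frac{1}{4}\left(2\| g_n\| _2^2 + 2\| g_m\| _2^2 - \| g_n-g_m\| _2^2\right) \: , \end{align*}

so that \(\| g_n\| _2^2/2 + \| g_m\| _2^2/2 - \| (g_n+g_m)/2\| _2^2 = \| g_n - g_m\| _2^2/4\).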

Lemma 12.3

Let \((x_n)_{n \in \mathbb {N}}\) be a sequence in a real normed space converging to \(x\). Let \(\mathcal{C}((x_n))\) be the set of sequences \((y_n)_{n \in \mathbb {N}}\) such that for all \(n\), \(y_n \in convex(x_n, x_{n+1}, \ldots )\). Then we have

  1. uniform convergence over \(\mathcal{C}((x_n))\): for all \(\varepsilon {\gt} 0\), there exists \(\bar{n}\) such that for all \(n \ge \bar{n}\), for all \((y_n)_{n \in \mathbb {N}} \in \mathcal{C}((x_n))\), \(\Vert y_n - x \Vert \le \varepsilon \);

  2. pointwise convergence: for all \((y_n)_{n \in \mathbb {N}} \in \mathcal{C}((x_n))\), \((y_n)_{n \in \mathbb {N}}\) converges to \(x\).

Proof

The second point is a direct consequence of the first one, so we only prove the first one. Let \(\varepsilon {\gt}0\). By convergence of \((x_n)_{n \in \mathbb {N}}\), there exists \(\bar{n}\) such that for all \(n \ge \bar{n}\), \(\Vert x_n-x \Vert \le \varepsilon \). For each \(n\), let \(a_{n, m}\) be convex weights such that \(y_n = \sum _{m = n}^{N_n} a_{n, m} x_m\). By the triangle inequality it follows that for \(n \ge \bar{n}\),

\begin{align*} \Vert y_n - x \Vert = \left\Vert \sum _{m = n}^{N_n} a_{n, m} x_m - x \right\Vert = \left\Vert \sum _{m = n}^{N_n} a_{n, m} (x_m - x) \right\Vert \le \sum _{m = n}^{N_n} a_{n, m} \Vert x_m - x \Vert \le \varepsilon \: . \end{align*}

By convex weights on \(\mathbb {N}\), we mean a sequence of non-negative real numbers \((a_n)_{n \in \mathbb {N}}\) with finitely many nonzero entries such that \(\sum _{n \in \mathbb {N}} a_n = 1\). If \((a_m)_{m \in \mathbb {N}}\) are convex weights and \((b^n_m)_{n,m \in \mathbb {N}}\) is such that for all \(n\), the \((b^n_m)\) are convex weights, then we denote by \((a_\cdot ) * (b^\cdot _\cdot )\) the convex weights defined by \(((a_\cdot ) * (b^\cdot _\cdot ))_m = \sum _{k} a_k b^k_m\).
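The product \((a_\cdot ) * (b^\cdot _\cdot )\) is again a family of convex weights: the entries are nonnegative, only finitely many are nonzero, and the total mass is

\[ \sum _{m} \left((a_\cdot ) * (b^\cdot _\cdot )\right)_m = \sum _{m} \sum _{k} a_k b^k_m = \sum _{k} a_k \sum _{m} b^k_m = \sum _{k} a_k = 1 \: . \]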

Lemma 12.4

Let \(E\) be a Hilbert space and for \(i \in \mathbb {N}\), let \((x_n^{(i)})_{n \in \mathbb {N}}\) be a bounded sequence in \(E\). Then there exists a sequence of convex weights \((\lambda ^{k,n}_\cdot )_{k, n \in \mathbb {N}}\) with \(\lambda ^{k,n}_m = 0\) for \(m {\lt} n\) such that for all \(k \in \mathbb {N}\), \(\left(\sum _{m \ge n} \left((\lambda ^{k,n}_\cdot ) * \ldots * (\lambda ^{1,\cdot }_\cdot )\right)_m x_m^{(k)}\right)_{n \in \mathbb {N}}\) converges.

Proof

First by lemma 12.2 applied to \((x_n^{(1)})_{n\in \mathbb {N}}\) in the Hilbert space \(E\), there exist \(g_n^1 \in convex(x_n^{(1)}, x_{n+1}^{(1)}, \ldots )\) (call its weights \(\lambda ^{1,n}_n,\cdots ,\lambda ^{1,n}_{N^1_n}\)) such that \(g_n^1\) converges to some \(g^1\).

Secondly, define \(\tilde{g}_n^2\) as the convex combination of \(x_n^{(2)}, x_{n+1}^{(2)}, \ldots \) with weights \(\lambda ^{1,n}_n,\cdots ,\lambda ^{1,n}_{N^1_n}\). Applying Lemma 12.2 to \((\tilde{g}_n^2)_{n\in \mathbb {N}}\) gives us \(g_n^2 \in convex(\tilde{g}_n^2, \tilde{g}_{n+1}^2, \ldots )\) (call its weights \(\lambda ^{2,n}_n,\cdots ,\lambda ^{2,n}_{N^2_n}\)) such that \(g_n^2\) converges to some \(g^2\). Then \(g_n^2\) is a convex combination of \(x_n^{(2)}, x_{n+1}^{(2)}, \ldots \) with weights \((\lambda ^{2,n}_\cdot ) * (\lambda ^{1,\cdot }_\cdot )\).

We continue iterating this process inductively. At iteration \(k\) we have weights \((\lambda ^{k,n}_\cdot * \ldots * \lambda ^{1,\cdot }_\cdot )\). We define \(\tilde{g}_n^{k+1}\) as the convex combination of \(x_n^{(k+1)}, x_{n+1}^{(k+1)}, \ldots \) with those weights. We apply Lemma 12.2 to \((\tilde{g}_n^{k+1})_{n\in \mathbb {N}}\) to get \(g_n^{k+1} \in convex(\tilde{g}_n^{k+1}, \tilde{g}_{n+1}^{k+1}, \ldots )\) such that \(g_n^{k+1}\) converges to some \(g^{k+1}\). We denote its weights by \(\lambda ^{k+1,n}_n,\cdots ,\lambda ^{k+1,n}_{N^{k+1}_n}\).

We have thus defined, for all \(k, n \in \mathbb {N}\), convex weights \((\lambda ^{k,n}_m)\) (that are zero for \(m {\lt} n\)) such that \(\sum _{m \ge n}((\lambda ^{k,n}_\cdot * \ldots * \lambda ^{1, \cdot }_\cdot ))_m x_m^{(k)}\) converges to \(g^k\).

Lemma 12.5

Let \(E\) be a Hilbert space and for \(i \in \mathbb {N}\), let \((x_n^{(i)})_{n \in \mathbb {N}}\) be a bounded sequence in \(E\). Let \((\lambda ^{k,n}_\cdot )_{k, n \in \mathbb {N}}\) be convex weights satisfying the conclusion of Lemma 12.4, and let \((g^i)_{i\in \mathbb {N}}\) be the sequence of limits of the sums.

Then for every \(k \ge i\), the sequence \(\left(\sum _{m \ge n} \left((\lambda ^{k,n}_\cdot ) * \ldots * (\lambda ^{1,\cdot }_\cdot )\right)_m x_m^{(i)}\right)_{n \in \mathbb {N}}\) converges to \(g^i\), uniformly in \(k\).

Proof

Let \(i \in \mathbb {N}\). By Lemma 12.3, there is uniform convergence to \(g^i\) over the set \(\mathcal{C}\) of sequences of convex combinations of the tails of \(\left(\sum _{m \ge n} \left((\lambda ^{i,n}_\cdot ) * \ldots * (\lambda ^{1,\cdot }_\cdot )\right)_m x_m^{(i)}\right)_{n \in \mathbb {N}}\). For \(k \ge i\), each sum \(\sum _{m \ge n} \left((\lambda ^{k,n}_\cdot ) * \ldots * (\lambda ^{1,\cdot }_\cdot )\right)_m x_m^{(i)}\) is such a convex combination, hence these sequences converge to \(g^i\) uniformly in \(k\).

Lemma 12.6

Let \(E\) be a Hilbert space and for \(i \in \mathbb {N}\), let \((x_n^{(i)})_{n \in \mathbb {N}}\) be a bounded sequence in \(E\). Then there exists a sequence of convex weights \((\eta ^n_\cdot )_{n \in \mathbb {N}}\) with \(\eta ^n_m = 0\) for \(m {\lt} n\) such that for all \(i \in \mathbb {N}\), the sequence \(\left(\sum _{m \ge n} \eta ^n_m x_m^{(i)}\right)_{n \in \mathbb {N}}\) converges.

Proof

Let \((\lambda ^{k,n}_\cdot )_{k, n \in \mathbb {N}}\) be convex weights satisfying the conclusion of Lemma 12.4, and let \((g^i)_{i\in \mathbb {N}}\) be the sequence of limits of the sums. Let \(\eta ^n_m = (\lambda ^{n,n}_\cdot * \ldots * \lambda ^{1,\cdot }_\cdot )_m\). We show that for all \(i \in \mathbb {N}\), the sequence \(\left(\sum _{m \ge n} \eta ^n_m x_m^{(i)}\right)_{n \in \mathbb {N}}\) converges to \(g^i\).

Let \(i \in \mathbb {N}\). By Lemma 12.5, for all \(\varepsilon {\gt} 0\), there exists \(\bar{n}\) such that for all \(n \ge \bar{n}\), for all \(k \ge i\), \(\left\Vert \sum _{m \ge n} \left((\lambda ^{k,n}_\cdot ) * \ldots * (\lambda ^{1,\cdot }_\cdot )\right)_m x_m^{(i)} - g^i\right\Vert \le \varepsilon \). Hence for \(n \ge \max (\bar{n}, i)\),

\begin{align*} \left\Vert \sum _{m \ge n} \eta ^n_m x_m^{(i)} - g^i\right\Vert & = \left\Vert \sum _{m \ge n} \left((\lambda ^{n,n}_\cdot ) * \ldots * (\lambda ^{1,\cdot }_\cdot )\right)_m x_m^{(i)} - g^i\right\Vert \le \varepsilon \: . \end{align*}
Lemma 12.7

Let \(E\) be a Hilbert space and let \((f_n)_{n \in \mathbb {N}}\) be a sequence in \(\Omega \to E\). For \(i \in \mathbb {N}\), set \(f_n^{(i)} = f_n \mathbb {1}_{(\Vert f_n \Vert \le i)}\), so that \(f_n^{(i)} \in L^2(E)\). Then there exists a sequence of convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the functions \(\left(\lambda _n^{n} f_n^{(i)} + \ldots + \lambda _{N_n}^{n} f_{N_n}^{(i)} \right)_{n\in \mathbb {N}}\) converge in \(L^2(E)\) for every \(i \in \mathbb {N}\).

Proof

Use Lemma 12.6 in the Hilbert space \(L^2(E)\) with the sequence of sequences \((f_n^{(i)})\), which are bounded in \(L^2(E)\) for each \(i \in \mathbb {N}\) since \(\Vert f_n^{(i)} \Vert \le i\).

Lemma 12.8 Komlós Lemma
#

Let \((f_n)_{n\in \mathbb {N}}\) be a uniformly integrable sequence of functions \(\Omega \to E\), for \(E\) a Hilbert space. Then there exist functions \(g_n \in convex(f_n, f_{n+1}, \cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(L^1\).

Proof

For \(i,n\in \mathbb {N}\) set \(f_{n}^{(i)}:=f_n \mathbb {1}_{(\Vert f_n\Vert \leq i)}\), so that \(f_{n}^{(i)}\in L^2\). Using Lemma 12.7, there exist for every \(n\) convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the functions \( \lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)}\) converge in \(L^2\) for every \(i\in \mathbb {N}\). By uniform integrability, \(\lim _{i\to \infty }\| f^{(i)}_n- f_n\| _1=0\), uniformly with respect to \(n\). Hence, once again uniformly with respect to \(n\),

\[ \textstyle \lim _{i\to \infty }\| (\lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)})-(\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n})\| _1= 0. \]

Thus \((\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n})_{n\geq 1}\) is a Cauchy sequence in \(L^1\), and therefore converges in \(L^1\) by completeness.
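The uniform integrability step can be made explicit: for every \(n\),

\[ \| f^{(i)}_n - f_n\| _1 = \mathbb {E}\left[\Vert f_n\Vert \mathbb {1}_{(\Vert f_n\Vert {\gt} i)}\right] \le \sup _{m} \mathbb {E}\left[\Vert f_m\Vert \mathbb {1}_{(\Vert f_m\Vert {\gt} i)}\right] \: , \]

and the right-hand side tends to \(0\) as \(i \to \infty \) by definition of uniform integrability.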

Komlós lemma for nonnegative random variables

Lemma 12.9 Komlós lemma - nonnegative, a.e. convergence
#

Let \((f_n)_{n\in \mathbb {N}}\) be a sequence of random variables with values in \([0, \infty ]\). Then there exist random variables \(g_n \in convex( f_n, f_{n+1}, \cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges almost surely to a random variable \(g\).

Proof

Let \(\phi : (\Omega \to [0, \infty ]) \to [0, \infty ]\) be defined by \(\phi (X) = \mathbb {E}[e^{-X}]\). Then \(\phi \) is convex and \(\phi (f_n) \le 1\) for all \(n\). By Lemma 12.1, there exist \(g_n \in convex( f_n, f_{n+1}, \cdots )\) such that for all \(\delta {\gt}0\), for \(N\) large enough and \(n, m \ge N\),

\begin{align*} \mathbb {E}[e^{-g_n}]/2 + \mathbb {E}[e^{-g_m}]/2 - \mathbb {E}[e^{-(g_n + g_m)/2}] {\lt} \delta \: . \end{align*}

For \(\varepsilon {\gt} 0\), let \(B_\varepsilon = \{ (x, y) \in [0, \infty ]^2 \mid \vert x - y \vert \ge \varepsilon \text{ and } \min \{ x, y\} \le 1/\varepsilon \} \). Then for all \(x, y\),

\begin{align*} \left\vert e^{-x} - e^{-y} \right\vert & \le \varepsilon + 2 e^{-1/\varepsilon } + 2 \mathbb {1}_{B_\varepsilon }(x, y) \: . \end{align*}
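This pointwise bound can be checked by distinguishing three cases. If \(\vert x - y \vert {\lt} \varepsilon \), then since \(u \mapsto e^{-u}\) is \(1\)-Lipschitz on \([0, \infty ]\), \(\left\vert e^{-x} - e^{-y} \right\vert \le \vert x - y \vert \le \varepsilon \). If \(\min \{ x, y\} {\gt} 1/\varepsilon \), then \(\left\vert e^{-x} - e^{-y} \right\vert \le e^{-x} + e^{-y} \le 2 e^{-1/\varepsilon }\). In the remaining case \((x, y) \in B_\varepsilon \), and the trivial bound \(\left\vert e^{-x} - e^{-y} \right\vert \le 1 \le 2 \mathbb {1}_{B_\varepsilon }(x, y)\) applies.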

Hence for any pair of random variables \((X, Y)\) with values in \([0, \infty ]\),

\begin{align*} \mathbb {E}\left[\left\vert e^{-X} - e^{-Y} \right\vert \right] & \le \varepsilon + 2 e^{-1/\varepsilon } + 2 P((X, Y) \in B_\varepsilon ) \: . \end{align*}

On the other hand, there exists \(\delta _\varepsilon {\gt} 0\) such that for all \((x, y) \in B_\varepsilon \),

\begin{align*} e^{-x}/2 + e^{-y}/2 - e^{-(x + y)/2} \ge \delta _\varepsilon \: . \end{align*}

Thus,

\begin{align*} P((X, Y) \in B_\varepsilon ) & \le \frac{1}{\delta _\varepsilon } \mathbb {E}\left[ e^{-X}/2 + e^{-Y}/2 - e^{-(X + Y)/2} \right] \: . \end{align*}

For \(n, m \ge N\) large enough so that we can apply the first inequality of this proof with \(\delta = \varepsilon \delta _\varepsilon \), we deduce that

\begin{align*} \mathbb {E}\left[\left\vert e^{-g_n} - e^{-g_m} \right\vert \right] & \le \varepsilon + 2 e^{-1/\varepsilon } + \frac{2}{\delta _\varepsilon } \mathbb {E}\left[ e^{-g_n}/2 + e^{-g_m}/2 - e^{-(g_n + g_m)/2} \right] \\ & \le \varepsilon + 2 e^{-1/\varepsilon } + 2 \varepsilon \: . \end{align*}

As \(\varepsilon \) is arbitrary, we deduce that \((e^{-g_n})_{n\in \mathbb {N}}\) is a Cauchy sequence in \(L^1\) and thus converges in \(L^1\) to some random variable \(h\). Therefore, it has a subsequence \((e^{-g_{n_k}})_{k\in \mathbb {N}}\) converging almost surely to \(h\). Since \(n_k \ge k\), we have \(g_{n_k} \in convex(f_k, f_{k+1}, \cdots )\), so after relabeling, the subsequence \((g_{n_k})_{k\in \mathbb {N}}\) is as required and converges almost surely to \(g = -\log (h)\).

12.2 Doob decomposition in discrete time

Definition 12.10 Predictable part
#

Let \(X : \mathbb {N} \to \Omega \to E\) be a process indexed by \(\mathbb {N}\), for \(E\) a Banach space. Let \((\mathcal{F}_n)_{n\in \mathbb {N}}\) be a filtration on \(\Omega \). The predictable part of \(X\) is the process \(A : \mathbb {N} \to \Omega \to E\) defined for \(n \ge 0\) by

\[ A_n = \sum _{k=0}^{n-1} \mathbb {E}[X_{k+1}-X_k \mid \mathcal{F}_k]. \]
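A Lean definition in this spirit could be sketched as follows (close to Mathlib's `MeasureTheory.predictablePart`, though the exact name and typeclass assumptions used here are illustrative); `μ[f|m]` denotes Mathlib's conditional expectation of `f` given the σ-algebra `m`.

```lean
import Mathlib

open MeasureTheory

-- Sketch of Definition 12.10; the name `predictablePart` as written here
-- is illustrative of Mathlib's declaration, not a verified reference to it.
noncomputable def predictablePart {Ω E : Type*} {m0 : MeasurableSpace Ω}
    [NormedAddCommGroup E] [NormedSpace ℝ E] [CompleteSpace E]
    (X : ℕ → Ω → E) (ℱ : Filtration ℕ m0) (μ : Measure Ω) (n : ℕ) : Ω → E :=
  -- sum over k < n of the conditional expectations of the increments
  ∑ k ∈ Finset.range n, μ[X (k + 1) - X k | ℱ k]
```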
Definition 12.11 Martingale part
#

Let \(X : \mathbb {N} \to \Omega \to E\) be a process indexed by \(\mathbb {N}\), for \(E\) a Banach space. Let \((\mathcal{F}_n)_{n\in \mathbb {N}}\) be a filtration on \(\Omega \) and let \(A\) be the predictable part of \(X\) for that filtration. The martingale part of \(X\) is the process \(M : \mathbb {N} \to \Omega \to E\) defined by \(M_n = X_n - A_n\).

In what follows, we fix a process \(X : \mathbb {N} \to \Omega \to E\) and a filtration \((\mathcal{F}_n)_{n \in \mathbb {N}}\), and denote by \(A\) the predictable part of \(X\) and by \(M\) its martingale part.

Lemma 12.12

We have \(A_0 = 0\).

Proof

By definition.

Lemma 12.13

\(M_0 = X_0\).

Proof

By definition of the martingale part, \(M = X - A\). By Lemma 12.12, \(A_0 = 0\), thus \(M_0 = X_0 - A_0 = X_0\).

Lemma 12.14

For any integer \(n \ge 0\), \(A_{n+1} = A_n + \mathbb {E}[X_{n+1} - X_n \mid \mathcal{F}_n]\).

Proof

Let \(n \in \mathbb {N}\). Then

\begin{align*} A_{n+1} & = \sum _{k=0}^n \mathbb {E}[X_{k+1}-X_k \mid \mathcal{F}_k] \\ & = \sum _{k=0}^{n-1} \mathbb {E}[X_{k+1}-X_k \mid \mathcal{F}_k] + \mathbb {E}[X_{n+1}-X_n \mid \mathcal{F}_n] \\ & = A_n + \mathbb {E}[X_{n+1} - X_n \mid \mathcal{F}_n], \end{align*}

which concludes the proof.

Lemma 12.15

For any integer \(n \ge 0\), \(M_{n+1} = M_n + X_{n+1} - X_n - \mathbb {E}[X_{n+1} - X_n \mid \mathcal{F}_n]\).

Proof

Using Lemma 12.14, we have for \(n \in \mathbb {N}\),

\begin{align*} M_{n+1} & = X_{n+1} - A_{n+1} \\ & = X_{n+1} - A_n - \mathbb {E}[X_{n+1} - X_n \mid \mathcal{F}_n] \\ & = M_n + X_{n+1} - X_n - \mathbb {E}[X_{n+1} - X_n \mid \mathcal{F}_n] \: . \end{align*}

Lemma 12.16

If \(X\) is a martingale, then \(A = 0\) almost surely.

Proof

By the martingale property, each conditional expectation in the definition of \(A\) is zero.

Lemma 12.17

If \(X\) is predictable, then \(A = X - X_0\) almost surely.

Proof

Since \(X\) is predictable, for all \(n \in \mathbb {N}\), \(X_{n+1}\) is \(\mathcal{F}_n\)-measurable and thus \(\mathbb {E}[X_{n+1} - X_n \mid \mathcal{F}_n] = X_{n+1} - X_n\) a.s.. We get a telescoping sum in the definition of \(A\) and thus \(A_n = X_n - X_0\) a.s. for all \(n \in \mathbb {N}\).
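Explicitly, the telescoping sum gives, almost surely,

\[ A_n = \sum _{k=0}^{n-1} \mathbb {E}[X_{k+1}-X_k \mid \mathcal{F}_k] = \sum _{k=0}^{n-1} (X_{k+1}-X_k) = X_n - X_0 \: . \]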

Lemma 12.18

If \(X\) is predictable, then \(M = X_0\) almost surely.

Proof

By definition of the martingale part, \(M = X - A\). By Lemma 12.17, \(A = X - X_0\) almost surely, thus \(M = X_0\) almost surely.

Lemma 12.19

If \(X\) is a martingale, then \(M = X\) almost surely.

Proof

By definition of the martingale part, \(M = X - A\). By Lemma 12.16, \(A = 0\) almost surely, thus \(M = X\) almost surely.

Lemma 12.20

For any scalar \(c\), the predictable part of \(c X\) is \(c A\).

Proof

Linearity of the conditional expectation.

Lemma 12.21

For any scalar \(c\), the martingale part of \(c X\) is \(c M\).

Proof

By definition of the martingale part, \(M = X - A\). By Lemma 12.20, the predictable part of \(c X\) is \(c A\). Therefore, the martingale part of \(c X\) is \(c X - c A = c M\).

Lemma 12.22

Let \(Y : \mathbb {N} \to \Omega \to E\) be another process and let \(B\) be its predictable part. The predictable part of \(X + Y\) is \(A + B\).

Proof

Linearity of the conditional expectation.

Lemma 12.23

Let \(Y : \mathbb {N} \to \Omega \to E\) be another process, with predictable part \(B\) and martingale part \(N\). The martingale part of \(X + Y\) is \(M + N\).

Proof

By definition of the martingale part, \(M = X - A\). By Lemma 12.22, the predictable part of \(X + Y\) is \(A + B\). Therefore, the martingale part of \(X + Y\) is \((X + Y) - (A + B) = M + N\).

Lemma 12.24

For all \(n \in \mathbb {N}\), \(A_{n+1}\) is \(\mathcal{F}_n\)-measurable, i.e. the shifted process \((A_{n+1})_{n \in \mathbb {N}}\) is adapted to \((\mathcal{F}_{n})_{n \in \mathbb {N}}\).

Proof

Lemma 12.25

The predictable part of a process is predictable.

Proof

By Lemma 7.48, the process \(A\) is predictable if \(A_0\) is \(\mathcal{F}_0\)-measurable and for all integer \(n\), \(A_{n+1}\) is \(\mathcal{F}_n\)-measurable. As \(A_0 = 0\) from Lemma 12.12, it is \(\mathcal{F}_0\)-measurable. Lemma 12.24 allows to conclude the proof.

Lemma 12.26

Suppose that the filtration is \(\sigma \)-finite and that \(X\) is adapted, with \(X_n\) integrable for all \(n\). Then the martingale part of \(X\) is a martingale.

Proof

Lemma 12.27

The predictable part of a real-valued submartingale is an almost surely nondecreasing process.

Proof

Let \(X\) be a submartingale and let \(A\) be its predictable part. Then for all \(n \geq 0\), from Lemma 7.9 we have that almost surely

\begin{align*} A_{n+1} & = A_n + \mathbb {E}\left[ X_{n+1} - X_n | \mathcal{F}_n \right] \ge A_n \: . \end{align*}

The first equality comes from Lemma 12.14. As \(\mathbb {N}\) is countable, we deduce that almost surely, for all \(n \in \mathbb {N}\), \(A_{n+1} \ge A_n\). Thus, \((A_n)_{n \in \mathbb {N}}\) is almost surely nondecreasing.

Lemma 12.28

The predictable part of a real-valued submartingale is almost surely nonnegative.

Proof

By Lemma 12.27, the predictable part \(A\) is almost surely nondecreasing. By Lemma 12.12, \(A_0 = 0\). Therefore, \(A_n \ge 0\) almost surely for all \(n \in \mathbb {N}\).

Lemma 12.29

Let \(X\) be a real adapted process and let \(A\) be its predictable part. Let \(c \in \mathbb {R}\). The hitting time \(\tau _{A_{n + 1} {\gt} c}\) of Definition 8.62 is a stopping time.

Proof

Since \(A\) is predictable, the process \((A_{n + 1})_{n \in \mathbb {N}}\) is adapted. The hitting time of an adapted process is a stopping time (we use the discrete time version of that result here, not the full Début theorem).

Lemma 12.30

Let \(X\) be a real adapted process and let \(A\) be its predictable part. Let \(c \in \mathbb {R}\). Then \(A_{\tau _{A_{n + 1} {\gt} c}} \le c\) and if \(\tau _{A_{n + 1} {\gt} c} {\lt} \infty \) then \(A_{\tau _{A_{n + 1} {\gt} c} + 1} {\gt} c\).

Proof

Lemma 12.31

Let \(X\) be a real adapted process and let \(A\) be its predictable part. Let \(a, b \in \mathbb {R}\) with \(a \le b\). If \(\tau _{A_{n + 1} {\gt} b} {\lt} \infty \) then \(A_{\tau _{A_{n + 1} {\gt} b}+1} - A_{\tau _{A_{n + 1} {\gt} a}} \ge b - a\).

Proof

Lemma 12.32

Let \(T \in \mathbb {N}\) and let \(X\) be an adapted process with \(X_n\) integrable for all \(n\) and such that \(X_T = 0\). Then for a stopping time \(\tau \le T\), almost surely, \(M_\tau = -\mathbb {E}[A_T \mid \mathcal{F}_\tau ]\) and \(X_\tau = A_\tau - \mathbb {E}[A_T \mid \mathcal{F}_\tau ]\).

Proof

By definition and since \(X_T = 0\), \(M_T = X_T - A_T = - A_T\). Since \(M\) is a martingale it follows by optional sampling that for any stopping time \(\tau \le T\)

\begin{align*} M_\tau = \mathbb {E}[M_T \mid \mathcal{F}_\tau ] = -\mathbb {E}[A_T \mid \mathcal{F}_\tau ] \: . \end{align*}

And then \(X_\tau = M_\tau + A_\tau = A_\tau - \mathbb {E}[A_T \mid \mathcal{F}_\tau ]\).

TODO: define \(\tau ^T_{A_{n+1}{\gt}c}\) as the hitting time of \(A_{n+1} {\gt} c\) but with the convention that it is equal to \(T\) if the hitting time is greater than \(T\) (hittingBtwn).

In the next lemmas, we write \(\tau ^T(c) = \tau ^T_{A_{n+1}{\gt}c}\) for brevity.

Lemma 12.33

Suppose that \(X\) is a submartingale and let \(T \ge 1\), \(c \in \mathbb {R}\). Then \(A_T {\gt} c \iff \tau ^T(c) {\lt} T\).

Proof

By Lemma 8.64, \(\tau ^T(c) {\lt} T \iff \exists s {\lt} T, A_{s+1} {\gt} c\), which is equivalent to \(\exists s \in \{ 1, \ldots , T\} , A_s {\gt} c\). By monotonicity of \(A\) (Lemma 12.27), this is equivalent to \(A_T {\gt} c\).

Lemma 12.34

Suppose that \(X\) is a submartingale with \(X_T = 0\). Then

\begin{align*} \mathbb {E}\left[A_T \mathbb {I}\{ A_T {\gt} c\} \right] \le c \mathbb {P}(\tau ^T(c) {\lt} T) - \int _{\tau ^T(c) {\lt} T} X_{\tau ^T(c)} dP \: . \end{align*}
Proof

By Lemma 12.33,

\begin{align*} \mathbb {E}\left[A_T \mathbb {I}\{ A_T {\gt} c\} \right] = \mathbb {E}\left[A_T \mathbb {I}\{ \tau ^T(c) {\lt} T\} \right] \end{align*}

Since that last event is \(\mathcal{F}_{\tau ^T(c)}\)-measurable, we can apply the tower property of conditional expectation to get

\begin{align*} \mathbb {E}\left[A_T \mathbb {I}\{ \tau ^T(c) {\lt} T\} \right] = \int _{\tau ^T(c) {\lt} T} \mathbb {E}[A_T \mid \mathcal{F}_{\tau ^T(c)}] dP \end{align*}

Now by Lemma 12.32 and Lemma 12.30,

\begin{align*} \mathbb {E}\left[A_T \mathbb {I}\{ \tau ^T(c) {\lt} T\} \right] & = \int _{\tau ^T(c) {\lt} T} A_{\tau ^T(c)} - X_{\tau ^T(c)} dP \\ & \le c \mathbb {P}(\tau ^T(c) {\lt} T) - \int _{\tau ^T(c) {\lt} T} X_{\tau ^T(c)} dP \: . \end{align*}

Lemma 12.35

Suppose that \(X\) is a submartingale with \(X_T = 0\). Then

\begin{align*} \mathbb {P}(\tau ^T(c) {\lt} T) & \le - \frac{2}{c} \int _{\tau ^T(c/2) {\lt} T} X_{\tau ^T(c/2)} dP \: . \end{align*}
Proof

Notice that \(\{ \tau ^T(c){\lt}T\} \subseteq \{ \tau ^T(c/2){\lt}T\} \), thus

\begin{align*} \int _{\tau ^T(c/2){\lt}T} -X_{\tau ^T(c/2)}dP & =\int _{\tau ^T(c/2){\lt}T}\mathbb {E}[A_T \mid \mathcal{F}_{\tau ^T(c/2)}] - A_{\tau ^T(c/2)} dP \\ & = \int _{\tau ^T(c/2){\lt}T}A_T - A_{\tau ^T(c/2)}dP \\ & \geq \int _{\tau ^T(c){\lt}T}A_T - A_{\tau ^T(c/2)}dP \\ \intertext {(over the event $\{ \tau ^T(c){\lt}T\} $ $A_T\geq c$ and $A_{\tau ^T(c/2)}\leq c/2$, thus $A_T - A_{\tau ^T(c/2)}\geq c/2$)} & \ge \frac{c}{2}P(\tau ^T(c){\lt}T) \: . \end{align*}

Lemma 12.36

Suppose that \(X\) is a submartingale with \(X_T = 0\). Then

\begin{align*} \mathbb {E}\left[A_T \mathbb {I}\{ A_T {\gt} c\} \right] \le - 2 \int _{\tau ^T(c/2) {\lt} T} X_{\tau ^T(c/2)} dP - \int _{\tau ^T(c) {\lt} T} X_{\tau ^T(c)} dP \: . \end{align*}
Proof

Put together the bounds of Lemma 12.34 and Lemma 12.35.

Lemma 12.37

Suppose that \(X\) is a submartingale with \(X_T = 0\). Then

\begin{align*} P(\tau ^T(c) {\lt} T) \le -\frac{\mathbb {E}[X_0]}{c} \: . \end{align*}
Proof

Starting with Lemma 12.33, and using that \(A_T \ge 0\) (Lemma 12.28) to apply Markov’s inequality,

\begin{align*} P(\tau ^T(c){\lt}T) =P(A_T{\gt}c) \stackrel{Markov}{\leq }\frac{\mathbb {E}[A_T]}{c} =-\frac{\mathbb {E}[M_T]}{c} \stackrel{mg}{=}-\frac{\mathbb {E}[X_0]}{c} \: . \end{align*}

12.3 Doob-Meyer decomposition

For the uniqueness part of the Doob-Meyer decomposition we will need Theorem 9.30.

We now start the construction for the existence part.

Definition 12.38 Dyadics
#

For \(T{\gt}0\), let \(\mathcal{D}_n^T = \left\lbrace \frac{k}{2^n}T \mid k=0,\cdots ,2^n\right\rbrace \) be the set of dyadics at scale \(n\) and let \(\mathcal{D}^T=\bigcup _{n\in \mathbb {N}}\mathcal{D}_n^T\) be the set of all dyadics of \([0,T]\).
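A possible Lean encoding of these dyadic meshes, with illustrative names (`dyadicMesh`, `dyadics` are not claimed to exist in the project):

```lean
import Mathlib

-- Sketch of Definition 12.38; names are hypothetical.
-- `dyadicMesh T n` is the finite set {(k / 2^n) * T | k = 0, …, 2^n}.
noncomputable def dyadicMesh (T : ℝ) (n : ℕ) : Finset ℝ :=
  (Finset.range (2 ^ n + 1)).image fun k => (k : ℝ) / 2 ^ n * T

-- The set of all dyadic points of [0, T], as a union over all scales.
noncomputable def dyadics (T : ℝ) : Set ℝ :=
  ⋃ n, (dyadicMesh T n : Set ℝ)
```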

Let \(S : [0,T] \to \Omega \to \mathbb {R}\) be a cadlag submartingale of class D on \([0,T]\).

Definition 12.39 S, A, M

For \(n \in \mathbb {N}\), the restriction of \(S\) to \(\mathcal{D}_n^T\) is a discrete time submartingale \(S^n : \mathbb {N} \to \Omega \to \mathbb {R}\) with respect to the filtration \(\mathcal{F}^n_k = \mathcal{F}_{k2^{-n}T}\), with \(S^n_k = S_{k2^{-n}T}\) (and constant equal to \(S_T\) for \(k {\gt} 2^n\); similarly for the filtration), and we can apply the construction of the previous section to it. Let \(A^n\) and \(M^n\) be the predictable and martingale parts of \(S^n\).

Lemma 12.40

The sequence \((A^n_{2^n})_{n\in \mathbb {N}}\) is uniformly integrable (in particular, bounded in \(L^1\) norm).

Remark: \(A^n_{2^n}\) is the predictable part of \(S\) at time \(T\) for the discrete time filtration given by the dyadics at scale \(n\).

Proof

WLOG \(S^n_{2^n} = S_T=0\) and \(S_t\leq 0\) (else consider \(S_t-\mathbb {E}\left[S_T\vert \mathcal{F}_{t}\right]\)).

We write \(\tau _n(c)\) for the hitting time \(\tau ^{2^n}_{A^n_{k+1}{\gt}c}\), capped at the discrete horizon \(2^n\).

By Lemma 12.36,

\[ \int _{(A^n_{2^n}{\gt}c)} A^n_{2^n} dP \le -2 \int _{\tau _n(c/2){\lt} 2^n} S^n_{\tau _n(c/2)} dP - \int _{\tau _n(c) {\lt} 2^n} S^n_{\tau _n(c)} dP. \]

On the other hand, by Lemma 12.37,

\[ P(\tau _n(c){\lt}2^n) \le -\frac{\mathbb {E}[S_0]}{c} \]

which goes to \(0\) uniformly in \(n\) as \(c\) goes to infinity.

The integrals in the upper bound on \(\int _{(A^n_{2^n}{\gt}c)} A^n_{2^n} dP\) are integrals of uniformly integrable random variables (by the class D assumption) over sets whose probability goes to zero uniformly. Therefore, these integrals go to zero uniformly in \(n\) as \(c\) goes to infinity.

This implies that \(\int _{(A^n_{2^n}{\gt}c)} A^n_{2^n} dP\) goes to \(0\) uniformly in \(n\) as \(c \to +\infty \), which is uniform integrability; in particular, the \(L^1\) norms are uniformly bounded.

Lemma 12.41

The sequence \((M^n_{2^n})_{n\in \mathbb {N}}\) is uniformly integrable (in particular, bounded in \(L^1\) norm).

Proof

\(M^n_{2^n} = S^n_{2^n} - A^n_{2^n} = S_T - A^n_{2^n}\); \(S\) is of class D, hence uniformly integrable, and \((A^n_{2^n})_{n\in \mathbb {N}}\) is uniformly integrable by Lemma 12.40.

Lemma 12.42

The martingale on \([0, T]\) defined by \(t \mapsto \mathbb {E}[M^n_{2^n}\vert \mathcal{F}_t]\) admits a modification which is a cadlag martingale.

Proof

By Theorem 11.11.

Definition 12.43
#

For \(t\in [0,T]\), let \(\overline{M}^n_t\) be the cadlag modification of \(t \mapsto \mathbb {E}[M^n_{2^n} \mid \mathcal{F}_t]\) from Lemma 12.42.

Lemma 12.44

There exist a random variable \(M : \Omega \to \mathbb {R}\) and convex weights \(\lambda ^n_n,\cdots ,\lambda ^n_{N_n}\) such that \(\mathcal{M}^n_T\stackrel{L^1}{\rightarrow }M\), where \(\mathcal{M}^n:=\lambda ^n_n \overline{M}^n+\cdots +\lambda ^n_{N_n} \overline{M}^{N_n}.\)

Proof

By Lemma 12.41, \((\overline{M}^n_T)_{n\in \mathbb {N}} = (M^n_{2^n})_{n\in \mathbb {N}}\) is uniformly integrable, thus by Lemma 12.8 there are convex weights \(\lambda ^n_n,\cdots ,\lambda ^n_{N_n}\) such that \(\mathcal{M}^n_T\stackrel{L^1}{\rightarrow }M\), where \(\mathcal{M}^n:=\lambda ^n_n \overline{M}^n+\cdots +\lambda ^n_{N_n} \overline{M}^{N_n}.\)

Definition 12.45

For \(t\in [0,T]\) let \(\mathcal{M}^n_t=\lambda ^n_n \overline{M}^n_t+\cdots +\lambda ^n_{N_n} \overline{M}^{N_n}_t\) be the convex combination of the \(\overline{M}^n_t\)’s from Lemma 12.44 and let \(M\) be its limit.

Lemma 12.46

\(\mathcal{M}^n\) is cadlag.

Proof

By construction and Lemma 12.42.

Definition 12.47

Let \(M_t\) be a cadlag modification of \(\mathbb {E}[M \mid \mathcal{F}_t]\), obtained by applying Theorem 11.11 to the martingale \(t \mapsto \mathbb {E}[M \mid \mathcal{F}_t]\).

Lemma 12.48

For every \(t\in [0,T]\) we have \(\mathcal{M}^n_t\stackrel{L^1}{\rightarrow }M_t\).

Proof

By Jensen’s inequality, the tower property of conditional expectation and Lemma 12.44,

\begin{gather} \nonumber \mathbb {E}[|\mathcal{M}^n_t-M_t|] = \mathbb {E}[|\mathbb {E}[\mathcal{M}^n_T-M\vert \mathcal{F}_t]|] \le \mathbb {E}[|\mathcal{M}^n_T-M|]\rightarrow 0 \: , \\ \Rightarrow \mathcal{M}^n_t\stackrel{L^1}{\rightarrow } M_t,\quad \forall t\in [0,T].\label{equation_DM_e7} \end{gather}

Define

  • a left continuous process \(\overline{A}^n_s:=\sum _{t\in \mathcal{D}^T_n}A^n_t\mathbb {1}_{(t-2^{-n}T,t]}(s)\)

  • \(\mathcal{A}^n=\lambda ^n_n \overline{A}^n+\cdots +\lambda ^n_{N_n}\overline{A}^{N_n}\)

  • \(A_t=S_t-M_t\)

TODO: add proper def, add details.

Lemma 12.49

If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), then \(\limsup _n f_n(t) \leq f (t)\) for all \(t \in [0, T]\).

Proof

Let \(t\in [0,T]\) and \(s\in \mathcal{D}^T\) such that \(t{\lt}s\). We have

\[ \limsup _n f_n(t)\leq \limsup _n f_n(s)=f(s). \]

The above holds for every \(s\in \mathcal{D}^T\) with \(s{\gt}t\); since \(f\) is right-continuous, letting \(s\rightarrow t^+\) along \(\mathcal{D}^T\) gives

\[ \limsup _n f_n(t)\leq \lim _{\stackrel{s\rightarrow t^+}{s\in \mathcal{D}^T}}f(s)=f(t). \]
Lemma 12.50

If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), and if \(f\) is continuous at \(t\in [0,T]\), then \(\lim _n f_n(t) = f (t)\).

Proof

By Lemma 12.49 it is enough to show that \(\liminf _n f_n(t)\geq f(t)\). Let \(s\in \mathcal{D}^T\) such that \(s{\lt}t\). We have

\[ \liminf _n f_n(t)\geq \liminf _n f_n(s)=f(s). \]

The above holds for every \(s\in \mathcal{D}^T\) with \(s{\lt}t\); since \(f\) is continuous at \(t\), letting \(s\rightarrow t^-\) along \(\mathcal{D}^T\) gives

\[ \liminf _n f_n(t)\geq \lim _{\stackrel{s\rightarrow t^-}{s\in \mathcal{D}^T}}f(s)=f(t). \]
Lemma 12.51

There exists a set \(E\subseteq \Omega \), \(P(E)=0\) and a subsequence \(k_n\) such that \(\lim _n\mathcal{A}^{k_n}_t(\omega )=A_t(\omega )\) for every \(t\in \mathcal{D}^T,\omega \in \Omega \setminus E\).

Proof

By Lemma 12.48

\[ \mathcal{A}^n_t=S_t-\mathcal{M}^n_t\stackrel{L^1}{\rightarrow }S_t-M_t=A_t,\quad \forall t\in \mathcal{D}^T. \]

\(\mathcal{D}^T\) is countable, so we can enumerate its elements as \((t_n)_{n\in \mathbb {N}}\). Since \(\mathcal{A}^{n}_{t_0}\) converges to \(A_{t_0}\) in \(L^1\), there exists a subsequence \(k^{0}_n\) along which \(\mathcal{A}^{k^{0}_n}_{t_0}\) converges to \(A_{t_0}\) on the set \(\Omega \setminus E_{0}\), where \(P(E_{0})=0\). Suppose now that we have a subsequence \(k^m_n\) along which, for each \(j=0,\cdots ,m\), \(\mathcal{A}^{k^m_n}_{t_j}\) converges to \(A_{t_j}\) on the set \(\Omega \setminus E_{j}\), where \(P(E_{j})=0\). From this subsequence we may extract a further subsequence \(k^{m+1}_n\) along which \(\mathcal{A}^{k^{m+1}_n}_{t_{m+1}}\) converges to \(A_{t_{m+1}}\) on the set \(\Omega \setminus E_{m+1}\), where \(P(E_{m+1})=0\); by construction, along this subsequence the convergence at \(t_0,\cdots ,t_m\) still holds. With a diagonal argument we obtain the final result, with \(E=\bigcup _n E_n\).

Lemma 12.52

\((A_t)_{t\in [0,T]}\) is an increasing process.

Proof

Since each \(\mathcal{A}^n\) is increasing on \(\mathcal{D}^T\), by Lemma 12.51 \(A\) is almost surely increasing on \(\mathcal{D}^T\). Since \(S\) and \(M\) are cadlag, \(A\) is also cadlag (thus right-continuous). It follows that \(A\) must be increasing on \([0,T]\).

Lemma 12.53

Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. We have \(\lim _n\mathbb {E}[A^n_\tau ]=\mathbb {E}[A_\tau ]\).

Proof

Let \(\sigma _n:=\inf \{ t\in \mathcal{D}^T_n \mid t{\gt}\tau \} \). By construction of \(A^n\) we have \(A^n_\tau =A^n_{\sigma _n}\). Moreover \(\sigma _n\searrow \tau \). Since \(S\) is of class D and cadlag we have

\begin{align*} \mathbb {E}[A^n_\tau ]& =\mathbb {E}[A^n_{\sigma _n}]=\mathbb {E}[S_{\sigma _n}]-\mathbb {E}[M^n_{\sigma _n}]=\mathbb {E}[S_{\sigma _n}]-\mathbb {E}[M^n_0]=\\ & =\mathbb {E}[S_{\sigma _n}]-\mathbb {E}[S_0]\rightarrow \mathbb {E}[S_\tau ]-\mathbb {E}[M_0]=\mathbb {E}[S_\tau ]-\mathbb {E}[M_\tau ]=\mathbb {E}[A_\tau ]. \end{align*}
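The limit \(\mathbb {E}[S_{\sigma _n}]\rightarrow \mathbb {E}[S_\tau ]\) in the chain above combines two facts: right-continuity of \(S\) together with \(\sigma _n\searrow \tau \) gives \(S_{\sigma _n}\rightarrow S_\tau \) almost surely, while class \(D\) makes the family \((S_{\sigma _n})_n\) uniformly integrable, so that Vitali's convergence theorem applies:

\[ S_{\sigma _n}\xrightarrow {a.s.} S_\tau \quad \text{and}\quad (S_{\sigma _n})_n \text{ uniformly integrable} \implies \mathbb {E}[S_{\sigma _n}]\rightarrow \mathbb {E}[S_\tau ]. \]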
Lemma 12.54

Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. We have \(\limsup _n \mathcal{A}_\tau ^n = A_\tau \) almost surely.

Proof

First we note that

\[ \liminf _n \mathbb {E}[A_\tau ^n] \leq \limsup _n \mathbb {E} [\mathcal{A}_\tau ^n ] \leq \mathbb {E}[\limsup _n \mathcal{A}_\tau ^n ] \leq \mathbb {E}[ A_\tau ], \]

where the first inequality follows from the definitions of \(\limsup \) and \(\liminf \) together with the fact that, for every \(k\geq n\),

\[ \sup _{j\geq n}\mathbb {E}[\mathcal{A}^j_\tau ]\geq \mathbb {E}[\mathcal{A}^k_\tau ]=\sum _{m=k}^{N_k}\lambda ^k_m\mathbb {E}[A^m_\tau ]\geq \sum _{m=k}^{N_k}\lambda ^k_m\inf _{j\geq n}\mathbb {E}[A^j_\tau ]=\inf _{j\geq n}\mathbb {E}[A^j_\tau ], \]

and the third inequality follows from Lemma 12.49. Let us prove the second inequality. Observe that

\[ \mathcal{A}^n_\tau = A_T+\mathcal{A}^n_\tau -A_T\leq A_T+(\mathcal{A}^n_\tau -A_T)_+, \]

thus \(\mathcal{A}^n_\tau - (\mathcal{A}^n_\tau -A_T)_+\leq A_T\); since \(A_T\) is an integrable dominating function, the reverse Fatou lemma, together with standard properties of \(\limsup \), shows that

\begin{align*} \limsup _n\mathbb {E}[\mathcal{A}^n_\tau ]+0 & = \limsup _n\mathbb {E}[\mathcal{A}^n_\tau ]+\liminf _n\left(-\mathbb {E}[(\mathcal{A}^n_\tau -A_T)_+]\right) \leq \limsup _n\mathbb {E}[\mathcal{A}^n_\tau -(\mathcal{A}^n_\tau -A_T)_+]\leq \\ & \leq \mathbb {E}[\limsup _n(\mathcal{A}^n_\tau -(\mathcal{A}^n_\tau -A_T)_+)]\leq \mathbb {E}[\limsup _n\mathcal{A}^n_\tau ]-\mathbb {E}[\liminf _n(\mathcal{A}^n_\tau -A_T)_+]\leq \mathbb {E}[\limsup _n\mathcal{A}^n_\tau ], \end{align*}

where the first equality is justified by the fact that \(\mathcal{A}^n_\tau \leq \mathcal{A}^n_T\rightarrow A_T\) almost surely. By Lemmas 12.53 and 12.49 the first chain of inequalities is in fact a chain of equalities; hence \(A_\tau - \limsup _n \mathcal{A}_\tau ^n \) is an a.s. nonnegative random variable with zero expected value, and therefore almost surely zero.
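In summary, combining Lemma 12.53 with the two estimates above, the full chain reads

\[ \mathbb {E}[A_\tau ]=\lim _n\mathbb {E}[A^n_\tau ]=\liminf _n\mathbb {E}[A^n_\tau ]\leq \limsup _n\mathbb {E}[\mathcal{A}^n_\tau ]\leq \mathbb {E}[\limsup _n\mathcal{A}^n_\tau ]\leq \mathbb {E}[A_\tau ], \]

so every inequality is an equality; in particular \(\mathbb {E}[A_\tau -\limsup _n\mathcal{A}^n_\tau ]=0\), and since the integrand is a.s. nonnegative it is a.s. zero.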

Theorem 12.55 Doob-Meyer decomposition

Let \(S = (S_t )_{0\leq t\leq T}\) be a cadlag submartingale of class \(D\). Then, \(S\) can be written in a unique way in the form \(S = M + A\) where \(M\) is a cadlag martingale and \(A\) is a predictable increasing process starting at \(0\).

Proof

By construction \(M\) is a cadlag martingale and \(A_0=0\), and by Lemma 12.52 \(A\) is increasing. It remains to show that \(A\) is predictable. The processes \(A^n,\mathcal{A}^n\) are left-continuous and adapted, hence predictable (Lemma 7.49). It is therefore enough to show that for a.e. \(\omega \) and all \(t\in [0,T]\), \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\).

By Lemma 12.50 this holds at every continuity point of \(A\). Since \(A\) is increasing, for each \(k\in \mathbb {N}\) it can have only finitely many jumps larger than \(1/k\). Let \(\tau _{q,k}\) denote the \(q\)-th time at which the process \(A\) has a jump larger than \(1/k\); this defines a countable family of stopping times. Given a time \(t\) and a trajectory \(\omega \), there are only two possibilities: either \(A(\omega )\) is continuous at \(t\) or it jumps there. If \(A(\omega )\) is continuous at \(t\) we have \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\); if it jumps, there exist \(q(\omega ),k(\omega )\) such that \(t=\tau _{q(\omega ),k(\omega )}(\omega )\). By Lemma 12.54 we know that \(\limsup _n \mathcal{A}^n_{\tau _{q,k}} = A_{\tau _{q,k}}\) almost surely for each \(q,k\). Since a countable intersection of almost sure events is almost sure, there exists \(\Omega '\) with \(P(\Omega ')=1\) such that for every \(\omega \in \Omega '\) and every \(q,k\), \(\limsup _n \mathcal{A}^n_{\tau _{q,k}}(\omega ) = A_{\tau _{q,k}}(\omega )\) (the set \(\Omega '\) does not depend on \(q,k\)). Consequently, for every \(\omega \in \Omega '\) we have \(\limsup _n\mathcal{A}^n_t(\omega )=\limsup _n\mathcal{A}^n_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_t(\omega )\).

12.4 Local version of the Doob-Meyer decomposition

An adapted process \(X\) is a cadlag local submartingale if and only if \(X = M + A\), where \(M\) is a cadlag local martingale and \(A\) is a predictable, cadlag, locally integrable and increasing process starting at \(0\). The processes \(M\) and \(A\) are uniquely determined by \(X\) a.s.

Proof