Formalization of a Brownian motion and of stochastic integrals in Lean

8 Doob-Meyer Theorem

This chapter follows [?], which gives an elementary and short proof of the result.

8.1 Cadlag modifications of (local) martingales

Definition 8.1 Dyadics

For \(T{\gt}0\), let \(\mathcal{D}_n^T = \left\lbrace \frac{k}{2^n}T \mid k=0,\cdots ,2^n\right\rbrace \) be the set of dyadics at scale \(n\) and let \(\mathcal{D}^T=\bigcup _{n\in \mathbb {N}}\mathcal{D}_n^T\) be the set of all dyadics of \([0,T]\).
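As a quick sanity check outside the formalization, the nesting \(\mathcal{D}_n^T\subseteq \mathcal{D}_{n+1}^T\) can be illustrated numerically (the function name `dyadics` is ours, not part of any library):

```python
from fractions import Fraction

def dyadics(T, n):
    """The dyadic grid D_n^T = {k*T/2^n : k = 0, ..., 2^n} at scale n."""
    return {Fraction(k, 2 ** n) * T for k in range(2 ** n + 1)}

T = Fraction(3)
D2, D3 = dyadics(T, 2), dyadics(T, 3)
assert D2 <= D3                 # the grids are nested: D_n^T is contained in D_{n+1}^T
assert len(D3) == 2 ** 3 + 1    # 2^n + 1 points at scale n
```

Since the grids are nested, \(\mathcal{D}^T\) is a countable dense subset of \([0,T]\).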

Lemma 8.2

Let \(X=(X_t)_{t\in \mathcal{D}^T}\) be a martingale indexed by the dyadics. Then almost surely, for every \(t\in (0,T]\) the limit

\[ \lim _{\stackrel{s\rightarrow t^-}{s\in \mathcal{D}^T}}X_s(\omega ) \]

exists and is finite.

Proof

See 8.2.1 of Pascucci.

Lemma 8.3

Let \(X=(X_t)_{t\in \mathcal{D}^T}\) be a martingale indexed by the dyadics. Then almost surely, for every \(t\in [0,T)\) the limit

\[ \lim _{\stackrel{s\rightarrow t^+}{s\in \mathcal{D}^T}}X_s(\omega ) \]

exists and is finite.

Proof

See 8.2.1 of Pascucci.

Theorem 8.4

Let the filtered probability space satisfy the usual conditions. Then every martingale \(X\) admits a modification that is still a martingale with cadlag trajectories.

Proof

See 8.2.3 of Pascucci.

Theorem 8.5

Let the filtered probability space satisfy the usual conditions. Then every nonnegative submartingale \(X\) admits a modification that is still a nonnegative submartingale with cadlag trajectories.

Proof

See 8.2.3 of Pascucci.

Lemma 8.6

Let the filtered probability space satisfy the usual conditions. Then every local martingale \(X\) admits a modification that is still a local martingale with cadlag trajectories.

Proof

8.2 Komlós Lemma

First we will need Komlós' lemma.

Lemma 8.7

Let \(H\) be a Hilbert space and \((f_n)_{n\in \mathbb {N}}\) a bounded sequence in \(H\). Then there exist functions \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(H\).

Proof

Let \(r_n = \inf \{ \| g\| _2 \mid g\in convex(f_n, f_{n+1},\ldots )\} \) and let \(A=\sup _{n\geq 1} r_n\). Then \(A\) is finite by boundedness of \((f_n)_{n\in \mathbb {N}}\), and for each \(n\) we may pick some \(g_n\in convex(f_n, f_{n+1},\ldots )\) such that \( \| g_n\| _2\leq r_n+1/n\leq A+1/n\). Let \(\epsilon {\gt}0\). By construction \((r_n)_{n\in \mathbb {N}}\) is increasing, so there exists \(\bar{n}\) such that \(r_{\bar{n}}\geq A-\epsilon \) and \(\frac{1}{\bar{n}}\leq \epsilon \). Let \(m\geq k\geq \bar{n}\). Then \((g_k+g_m)/2 \in convex(f_k,f_{k+1},\ldots )\), and since \((r_n)_{n\in \mathbb {N}}\) is increasing, \(\| (g_k+g_m)/2\| _2\geq r_k\geq r_{\bar{n}}\geq A-\epsilon \). Hence, by the parallelogram law and the ordering of \(m,k,\bar{n}\),

\[ \| g_k-g_m\| _2^2=2 \| g_k\| _2^2+2\| g_m\| _2^2- \| g_k+g_m\| _2^2 \leq 4(A+\frac{1}{\bar{n}})^2-4(A-\epsilon )^2\leq 16A\epsilon . \]

By completeness, \((g_n)_{n\geq 1}\) converges in \(\| .\| _2\).
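The two ingredients of this proof, the parallelogram law and the fact that convex combinations may converge even when the original sequence does not, can be checked numerically (a toy sketch in \(\mathbb {R}^3\); all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=3), rng.normal(size=3)

# Parallelogram law used in the proof (valid in any Hilbert space):
# ||x - y||^2 = 2||x||^2 + 2||y||^2 - ||x + y||^2
lhs = np.dot(x - y, x - y)
rhs = 2 * np.dot(x, x) + 2 * np.dot(y, y) - np.dot(x + y, x + y)
assert abs(lhs - rhs) < 1e-12

# A bounded sequence can oscillate forever, yet have convergent convex
# combinations: f_n = (-1)^n e_1 does not converge, but the midpoints
# g_n = (f_n + f_{n+1})/2 are identically 0.
e1 = np.array([1.0, 0.0, 0.0])
f = [(-1) ** n * e1 for n in range(10)]
g = [(f[n] + f[n + 1]) / 2 for n in range(9)]
assert all(np.linalg.norm(gn) == 0 for gn in g)
```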

Lemma 8.8

Let \(X\) be a normed vector space over \(\mathbb {R}\). Let \((x_n)_{n\in \mathbb {N}}\) be a sequence in \(X\) converging to \(x\) in the topology of \(X\). Let \((N_n)_{n\in \mathbb {N}}\) be a sequence in \(\mathbb {N}\) such that \(n\leq N_n\) for every \(n\in \mathbb {N}\) (we could perhaps assume \(N_n\) increasing WLOG). Let \((a_{n,m})_{n\in \mathbb {N},m\in \left\lbrace n,\cdots ,N_n\right\rbrace }\) be a triangular array in \(\mathbb {R}\) such that \(0\leq a_{n,m}\leq 1\) and \(\sum _{m=n}^{N_n}a_{n,m}=1\). Then \((\sum _{m=n}^{N_n}a_{n,m}x_m)_{n\in \mathbb {N}}\) converges to \(x\), uniformly over all such triangular arrays.

Proof

Let \(\epsilon {\gt}0\). By convergence of \((x_n)_{n\in \mathbb {N}}\) there exists \(\bar{n}\) such that \(\| x_n-x\| \leq \epsilon \) for all \(n\geq \bar{n}\). By the triangle inequality and \(\sum _{m=n}^{N_n}a_{n,m}=1\), it follows that for every \(n\geq \bar{n}\)

\[ \Big\| \sum _{m=n}^{N_n}a_{n,m}x_m - x\Big\| \leq \sum _{m=n}^{N_n}a_{n,m}\| x_m-x\| \leq \epsilon . \]
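A numerical illustration of the lemma (toy data; the helper `tail_combination` is ours): a convex combination of the tail \(x_n,\ldots ,x_{N_n}\) of a convergent sequence stays within the tail's distance from the limit, whatever the weights:

```python
import numpy as np

# x_n -> 2, with |x_n - 2| = 1/(n+1)
x = np.array([2.0 + (-1) ** n / (n + 1) for n in range(200)])

rng = np.random.default_rng(1)
def tail_combination(n, N_n):
    w = rng.random(N_n - n + 1)
    w /= w.sum()                      # convex weights a_{n,m}, m = n, ..., N_n
    return float(np.dot(w, x[n:N_n + 1]))

# Each combination error is bounded by the sup of |x_m - 2| over the tail,
# uniformly in the (random) weights.
errs = [abs(tail_combination(n, n + 5) - 2.0) for n in range(100, 150)]
tail_sup = max(abs(x[m] - 2.0) for m in range(100, 156))
assert all(e <= tail_sup + 1e-12 for e in errs)
```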
Lemma 8.9

For \(i,n\in \mathbb {N}\) set \(f_{n}^{(i)}:=f_n \mathbb {1}_{(|f_n|\leq i)}\), so that \(f_{n}^{(i)}\in L^2\). There exists a sequence of convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the sequence \( (\lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)})_{n\in \mathbb {N}}\) converges in \(L^2\) for every \(i\in \mathbb {N}\), uniformly in \(i\).

Proof

First, by lemma 8.7 applied to \((f_n^{(1)})_{n\in \mathbb {N}}\), there exist convex weights \(\prescript {1}{}{\lambda }^n_n,\cdots ,\prescript {1}{}{\lambda }^n_{N^1_n}\) such that \(g^1_n=\sum _{m=n}^{N^1_n}\prescript {1}{}{\lambda }^n_mf_m^{(1)}\) converges to some \(g^1\). Next, apply the lemma to \((\tilde{g}^2_n)_{n\in \mathbb {N}}\), where \(\tilde{g}^2_n=\sum _{m=n}^{N^1_n}\prescript {1}{}{\lambda }^n_mf^{(2)}_m\): there exist convex weights \(\tilde{\lambda }^n_n,\cdots ,\tilde{\lambda }^n_{\tilde{N}_n}\) such that \(g^2_n=\sum _{m=n}^{\tilde{N}_n}\tilde{\lambda }^n_m\tilde{g}^2_m=\sum _{m=n}^{N^2_n}\prescript {2}{}{\lambda }^n_mf_m^{(2)}\) converges to some \(g^2\). Notice that \(\sum _{m=n}^{N^2_n}\prescript {2}{}{\lambda }^n_mf_m^{(1)}=\sum _{m=n}^{\tilde{N}_n}\tilde{\lambda }^n_m g^1_m\), so by lemma 8.8 this sequence still converges to \(g^1\). Iterating, we obtain convex weights \(\prescript {i}{}{\lambda }^n_n,\cdots ,\prescript {i}{}{\lambda }^n_{N^i_n}\) which, applied to \((f^{(j)}_n)_{n\in \mathbb {N}}\), make the sequence convergent for every \(1\leq j\leq i\). Now consider the diagonal weights \(\lambda ^n_m=\prescript {n}{}{\lambda }^n_m\). For every \(m\geq i\) we have \(\sum _{j=n}^{N^m_n}\prescript {m}{}{\lambda }^n_j f^{(i)}_j\rightarrow g^i\) as \(n\to \infty \), and the convergence is uniform in \(m\): for every \(\epsilon {\gt}0\) there exists \(\bar{n}\) such that for all \(n\geq \bar{n}\) and all \(m\geq i\), \(\| \sum _{j=n}^{N^m_n}\prescript {m}{}{\lambda }^n_j f^{(i)}_j - g^i\| _2\leq \epsilon \) (this follows from the uniformity w.r.t. the triangular array in lemma 8.8). This concludes the proof.

Lemma 8.10 Komlós Lemma

Let \(( f_n)_{n\in \mathbb {N}}\) be a uniformly integrable sequence of functions on a probability space \((\Omega , \mathcal{F} , P)\). Then there exist functions \(g_n \in convex( f_n, f_{n+1}, \cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(L^1 (\Omega )\).

Proof

For \(i,n\in \mathbb {N}\) set \(f_{n}^{(i)}:=f_n \mathbb {1}_{(|f_n|\leq i)}\), so that \(f_{n}^{(i)}\in L^2\). Using lemma 8.9, there exist for every \(n\) convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the functions \( \lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)}\) converge in \(L^2\) for every \(i\in \mathbb {N}\). By uniform integrability, \(\lim _{i\to \infty }\| f^{(i)}_n- f_n\| _1=0\), uniformly with respect to \(n\). Hence, once again uniformly with respect to \(n\),

\[ \textstyle \lim _{i\to \infty }\| (\lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)})-(\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n})\| _1= 0. \]

Thus \((\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n})_{n\geq 1}\) is a Cauchy sequence in \(L^1\).
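The truncation step of this proof can be illustrated on a finite sample space (a toy sketch with simulated data; `trunc` and `l1` are our names): the \(L^1\) truncation error \(\| f^{(i)}-f\| _1\) is decreasing in \(i\), uniformly over the family:

```python
import numpy as np

# Finite sample space with uniform measure; a family of simulated functions.
rng = np.random.default_rng(2)
omega = 1000
fs = [rng.standard_normal(omega) for _ in range(20)]

def trunc(f, i):
    # f^(i) = f * 1_{|f| <= i}, the truncation used in the proof
    return np.where(np.abs(f) <= i, f, 0.0)

def l1(f):
    # L^1 norm w.r.t. the uniform measure
    return float(np.mean(np.abs(f)))

# ||f^(i) - f||_1 = E[|f| ; |f| > i] shrinks pointwise in the family as i grows
errs_i3 = [l1(trunc(f, 3) - f) for f in fs]
errs_i5 = [l1(trunc(f, 5) - f) for f in fs]
assert all(a <= b + 1e-15 for a, b in zip(errs_i5, errs_i3))
```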

8.3 Doob-Meyer decomposition

For uniqueness of Doob-Meyer Decomposition we will need theorem 7.27.

We now start the construction for the existence part. Let \(T{\gt}0\) and recall that \(\mathcal{D}_n^T=\left\lbrace \frac{k}{2^n}T \mid k=0,\cdots ,2^n\right\rbrace \).

TODO: everywhere below, \(S\) is a cadlag submartingale of class D on \([0,T]\)?

Definition 8.11

\(D\) is the class of all adapted processes \((S_t)_{0\leq t\leq T}\) such that the set \(\{ S_\tau \mid \tau \text{ is a stopping time with values in } [0,T]\} \) is uniformly integrable.

Definition 8.12 A

Define \(A^n_0=0\) and, for positive \(t\in \mathcal{D}_n^T\),

\begin{align*} A^n_t & =A^n_{t-T2^{-n}} + \mathbb {E}\left[ S_t-S_{t-T2^{-n}}|\mathcal{F}_{t-T2^{-n}}\right] \: . \end{align*}
Definition 8.13 M

For \(t\in \mathcal{D}_n^T\), define \(M^n_t = S_t-A^n_t\) .

Lemma 8.14

\((A^n_t)_{t\in \mathcal{D}_n^T}\) is a predictable process.

Proof

Trivial

Lemma 8.15

\((M^n_t)_{t\in \mathcal{D}_n^T}\) is a martingale.

Proof

Trivial

Lemma 8.16

\((A^n_t)_{t\in \mathcal{D}_n^T}\) is an increasing process.

Proof

Since \(S\) is a submartingale:

\begin{align*} A^n_{t+T2^{-n}} - A^n_t & = \mathbb {E}\left[ S_{t+T2^{-n}}-S_t|\mathcal{F}_t\right] \ge 0 \: . \end{align*}
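Definitions 8.12 and 8.13 and the three properties just stated (predictability, the martingale property, monotonicity) can be sanity-checked on the finite filtration generated by \(N\) fair coin flips, where conditional expectation is an average over paths sharing a prefix (a toy sketch outside the formalization; all function names are ours):

```python
from itertools import product
from statistics import mean

N = 6
paths = list(product([-1, 1], repeat=N))   # fair-coin sample space, uniform measure

def S(path, k):
    # |random walk|: a submartingale, since x -> |x| is convex and the walk is a martingale
    return abs(sum(path[:k]))

def cond_exp(f, path, k):
    # E[f | F_k] along `path`: average of f over all paths sharing the first k coins
    return mean(f(p) for p in paths if p[:k] == path[:k])

def A(path, k):
    # Definition 8.12 (discrete predictable part): A_0 = 0 and
    # A_{j+1} = A_j + E[S_{j+1} - S_j | F_j]
    return sum(cond_exp(lambda p, j=j: S(p, j + 1) - S(p, j), path, j)
               for j in range(k))

def M(path, k):
    # Definition 8.13: the martingale part M = S - A
    return S(path, k) - A(path, k)

w = paths[0]
# A has nonnegative increments because S is a submartingale
assert all(A(w, k + 1) >= A(w, k) - 1e-12 for k in range(N))
# M is a martingale: E[M_{k+1} | F_k] = M_k
for k in range(N):
    assert abs(cond_exp(lambda p, k=k: M(p, k + 1), w, k) - M(w, k)) < 1e-9
```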
Definition 8.17 Hitting time for \(A\)

Let \(c{\gt}0\). Define the hitting time on \(\mathcal{D}^T_n\)

\begin{align*} \tau _n(c) & = \inf \{ t \in \mathcal{D}^T_n \mid A^n_{t + 2^{-n}T} {\gt} c\} \wedge T \: . \end{align*}

\(\tau _n(c)\) is a stopping time.

Proof

Since \(A^n_{t}\) is predictable, \(A^n_{t + 2^{-n}T}\) is adapted. The hitting time of an adapted process is a stopping time (we use the discrete time version of that result here, not the full Début theorem).

Lemma 8.19

\(A^n_{\tau _n(c)} \le c\) and if \(\tau _n(c) {\lt} T\) then \(A^n_{\tau _n(c)+T2^{-n}} {\gt} c\).

Proof
Lemma 8.20

Let \(a, b {\gt} 0\) with \(a \le b\). If \(\tau _n(b) {\lt} T\) then \(A^n_{\tau _n(b)+T2^{-n}} - A^n_{\tau _n(a)} \ge b - a\).

Proof

Lemma 8.21

The sequence \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable (in particular bounded in \(L^1\) norm).

Proof

WLOG \(S_T=0\) and \(S_t\leq 0\) (else consider \(S_t-\mathbb {E}\left[S_T\vert \mathcal{F}_{t}\right]\)).

We have that \(0=S_T=M^n_T+A^n_T\). Thus

\begin{equation} \label{equation_DM_e1} M^n_T=-A^n_T. \end{equation}

Since \(M^n\) is a martingale, it follows by optional sampling that for any \((\mathcal{F}_t)_{t\in \mathcal{D}^T_n}\) stopping time \(\tau \)

\begin{equation} \label{equation_DM_e2} S_\tau =M^n_\tau +A^n_\tau = \mathbb {E}[M^n_T\vert \mathcal{F}_\tau ]+A^n_\tau \stackrel{\eqref{equation_DM_e1}}{=} -\mathbb {E}[A^n_T\vert \mathcal{F}_\tau ]+A^n_\tau . \end{equation}

Let \(c{\gt}0\). By lemma 8.18, \(\tau _n(c)\) (definition 8.17) is a stopping time, and by lemma 8.19 \(A^n_{\tau _n(c)}\leq c\). It follows that

\begin{equation} \label{equation_DM_e3} S_{\tau _n(c)}\stackrel{\eqref{equation_DM_e2}}{=}-\mathbb {E}[A^n_T\vert \mathcal{F}_{\tau _n(c)}]+A^n_{\tau _n(c)}\leq -\mathbb {E}[A^n_T\vert \mathcal{F}_{\tau _n(c)}]+c. \end{equation}

Since \((A^n_T{\gt}c)=(\tau _n(c){\lt}T)\) we have

\begin{align} \nonumber \int _{(A^n_T{\gt}c)}A^n_TdP& =\int _{(\tau _n(c){\lt}T)}A^n_TdP\stackrel{\mathrm{Tower}}{=}\int _{(\tau _n(c){\lt}T)}\mathbb {E}[A^n_T\vert \mathcal{F}_{\tau _n(c)}]dP\\ & \stackrel{\eqref{equation_DM_e3}}{\leq } cP(\tau _n(c){\lt}T)-\int _{\tau _n(c){\lt}T}S_{\tau _n(c)}dP.\label{equation_DM_e4} \end{align}

Now we notice that \((\tau _n(c){\lt}T)\subseteq (\tau _n(c/2){\lt}T)\), thus

\begin{align} \nonumber \int _{\tau _n(c/2){\lt}T}-S_{\tau _n(c/2)}dP & \stackrel{\eqref{equation_DM_e2}}{=}\int _{(\tau _n(c/2)){\lt}T}\mathbb {E}[A^n_T\vert \mathcal{F}_{\tau _n(c/2)}]-A^n_{\tau _n(c/2)}dP \nonumber \\ & \stackrel{\mathrm{Tower}}{=}\int _{(\tau _n(c/2){\lt}T)}A^n_T-A^n_{\tau _n(c/2)}dP\nonumber \\ & \geq \int _{(\tau _n(c){\lt}T)}A^n_T-A^n_{\tau _n(c/2)}dP\nonumber \\ \intertext {(over the event $(\tau _n(c){\lt}T)$ we have $A^n_T\geq c$ and $A^n_{\tau _n(c/2)}\leq c/2$, thus $A^n_T-A^n_{\tau _n(c/2)}\geq c/2$)} & \geq \frac{c}{2}P(\tau _n(c){\lt}T).\label{equation_DM_e5} \end{align}

It follows

\[ \int _{(A^n_T{\gt}c)}A^n_TdP\stackrel{\eqref{equation_DM_e4}}{\leq }cP(\tau _n(c){\lt}T)-\int _{\tau _n(c){\lt}T}S_{\tau _n(c)}dP\stackrel{\eqref{equation_DM_e5}}{\leq }-2\int _{\tau _n(c/2){\lt}T}S_{\tau _n(c/2)}dP-\int _{\tau _n(c){\lt}T}S_{\tau _n(c)}dP. \]

We may notice that

\[ P(\tau _n(c){\lt}T)=P(A^n_T{\gt}c)\stackrel{Markov}{\leq }\frac{\mathbb {E}[A^n_T]}{c}=-\frac{\mathbb {E}[M^n_T]}{c}\stackrel{mg}{=}-\frac{\mathbb {E}[S_0]}{c} \]

which goes to \(0\) uniformly in \(n\) as \(c\) goes to infinity. Since \(S\) is of class \(D\), the family \(\{ S_\tau \} \) is uniformly integrable, so the integrals \(\int _{\tau _n(c/2){\lt}T}S_{\tau _n(c/2)}dP\) and \(\int _{\tau _n(c){\lt}T}S_{\tau _n(c)}dP\) tend to \(0\) uniformly in \(n\) as \(c\to \infty \). Hence \(\int _{(A^n_T{\gt}c)}A^n_TdP\to 0\) uniformly in \(n\), i.e. \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable, and in particular bounded in \(L^1\) norm.

Lemma 8.22

The sequence \((M^n_T)_{n\in \mathbb {N}}\) is uniformly integrable (bounded in \(L^1\) norm).

Proof

\(M^n_T=S_T-A^n_T\), also \(S\) is of class \(D\) and \(A^n_T\) is uniformly integrable.

Lemma 8.23

If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), then \(\limsup _n f_n(t) \leq f (t)\) for all \(t \in [0, T]\).

Proof

Let \(t\in [0,T]\) and \(s\in \mathcal{D}^T\) such that \(t{\lt}s\). We have

\[ \limsup _n f_n(t)\leq \limsup _n f_n(s)=f(s). \]

Since the above holds for every \(s\in \mathcal{D}^T\) with \(s{\gt}t\), and \(f\) is right-continuous,

\[ \limsup _n f_n(t)\leq \lim _{\stackrel{s\rightarrow t^+}{s\in \mathcal{D}^T}}f(s)=f(t). \]
Lemma 8.24

If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), and if \(f\) is continuous at \(t\in [0,T]\), then \(\lim _n f_n(t) = f (t)\).

Proof

By lemma 8.23 it is enough to show that \(\liminf _n f_n(t)\geq f(t)\). Let \(s\in \mathcal{D}^T\) such that \(t{\gt}s\). We have

\[ \liminf _n f_n(t)\geq \liminf _n f_n(s)=f(s). \]

Since the above holds for every \(s\in \mathcal{D}^T\) with \(s{\lt}t\), and \(f\) is continuous at \(t\),

\[ \liminf _n f_n(t)\geq \lim _{\stackrel{s\rightarrow t^-}{s\in \mathcal{D}^T}}f(s)=f(t). \]
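A numerical illustration of lemmas 8.23 and 8.24 (toy data; here `f_n` approximates an increasing continuous `f` from below, exactly on the dyadic grid at scale \(n\)):

```python
from math import floor

f = lambda t: t * t            # increasing and continuous on [0, 1]

def f_n(n, t):
    # increasing piecewise-constant approximations, equal to f on D_n at scale <= n
    return f(floor(t * 2 ** n) / 2 ** n)

t = 0.3                        # a non-dyadic time
vals = [f_n(n, t) for n in range(5, 25)]
assert all(v <= f(t) + 1e-12 for v in vals)   # limsup bound (lemma 8.23)
assert abs(vals[-1] - f(t)) < 1e-5            # convergence at a continuity point (lemma 8.24)
```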

Define \(M^n_t\) on \([0,T]\) using \(M^n_t=\mathbb {E}[M^n_T\vert \mathcal{F}_t]\).

Lemma 8.25

\(M^n_t\) admits a modification which is a cadlag martingale.

Proof

By theorem 8.4.

From this point onwards \(M^n_t\) will be redefined as the modification from lemma 8.25.

Lemma 8.26

There are convex weights \(\lambda ^n_n,\cdots ,\lambda ^n_{N_n}\) such that \(\mathcal{M}^n_T\stackrel{L^1}{\rightarrow }M\) for some \(M\in L^1\), where \(\mathcal{M}^n:=\lambda ^n_nM^n+\cdots +\lambda ^n_{N_n}M^{N_n}\).

Proof

By lemma 8.22 \((M^n_T)_{n\in \mathbb {N}}\) is uniformly integrable, thus by lemma 8.10 there are convex weights \(\lambda ^n_n,\cdots ,\lambda ^n_{N_n}\) such that \(\mathcal{M}^n_T\stackrel{L^1}{\rightarrow }M\) for some \(M\in L^1\), where \(\mathcal{M}^n:=\lambda ^n_nM^n+\cdots +\lambda ^n_{N_n}M^{N_n}\).

Lemma 8.27

\(\mathcal{M}^n\) is cadlag.

Proof

By construction and lemma 8.25.

Let

\begin{equation} \label{equation_DM_e6} M_t = \mathbb {E}[M\vert \mathcal{F}_t].\end{equation}

Lemma 8.28

\(M_t\) admits a martingale cadlag modification.

Proof

By construction \(M_t\) is a martingale and thus by theorem 8.4 admits a cadlag martingale modification (\(M_t\) is a version of \(\mathbb {E}[M\vert \mathcal{F}_t]\) and thus passing to modification does not pose any problem).

From this point onwards \(M_t\) will be redefined as the modification from lemma 8.28. Define

  • Extend now \(A^n\) to \([0,T]\) as the left-continuous process \(A^n_s:=\sum _{t\in \mathcal{D}^T_n}A^n_t\mathbb {1}_{]t-T2^{-n},t]}(s)\)

  • \(\mathcal{A}^n=\lambda ^n_nA^n+\cdots +\lambda ^n_{N_n}A^{N_n}\)

  • \(A_t=S_t-M_t\)

Lemma 8.29

For every \(t\in [0,T]\) we have \(\mathcal{M}^n_t\stackrel{L^1}{\rightarrow }M_t\).

Proof

We may notice that by Jensen’s inequality, the tower lemma and lemma 8.26

\begin{gather} \nonumber \mathbb {E}[|\mathcal{M}^n_t-M_t|]=\mathbb {E}[|\mathbb {E}[\mathcal{M}^n_T-M\vert \mathcal{F}_t]|]\leq \mathbb {E}[|\mathcal{M}^n_T-M|]\rightarrow 0,\\ \Rightarrow \mathcal{M}^n_t\stackrel{L^1}{\rightarrow } M_t,\quad \forall t\in [0,T].\label{equation_DM_e7} \end{gather}
Lemma 8.30

There exist a set \(E\subseteq \Omega \) with \(P(E)=0\) and a subsequence \((k_n)_{n\in \mathbb {N}}\) such that \(\lim _n\mathcal{A}^{k_n}_t(\omega )=A_t(\omega )\) for every \(t\in \mathcal{D}^T\) and every \(\omega \in \Omega \setminus E\).

Proof

By Lemma 8.29

\[ \mathcal{A}^n_t=S_t-\mathcal{M}^n_t\stackrel{L^1}{\rightarrow }S_t-M_t=A_t,\quad \forall t\in \mathcal{D}^T. \]

Since \(\mathcal{D}^T\) is countable, we can arrange its elements as \((t_m)_{m\in \mathbb {N}}\). Since \(L^1\) convergence implies almost sure convergence along a subsequence, there exists a subsequence \(k^{0}_n\) along which \(\mathcal{A}^{k^{0}_n}_{t_0}\) converges to \(A_{t_0}\) on \(\Omega \setminus E_{0}\), where \(P(E_{0})=0\). Suppose we have a subsequence \(k^m_n\) along which, for each \(j=0,\cdots ,m\), \(\mathcal{A}^{k^m_n}_{t_j}\) converges to \(A_{t_j}\) on \(\Omega \setminus E_{j}\), where \(P(E_{j})=0\). From this subsequence we may extract a further subsequence \(k^{m+1}_n\) along which \(\mathcal{A}^{k^{m+1}_n}_{t_{m+1}}\) converges to \(A_{t_{m+1}}\) on \(\Omega \setminus E_{m+1}\), where \(P(E_{m+1})=0\); by construction the convergence at \(t_0,\cdots ,t_m\) still holds along this subsequence. The diagonal subsequence \(k_n:=k^n_n\) then gives the result with \(E=\bigcup _m E_m\).

Lemma 8.31

\((A_t)_{t\in [0,T]}\) is an increasing process.

Proof

Since \(\mathcal{A}^n\) is increasing on \(\mathcal{D}^T\), by lemma 8.30 \(A\) is almost surely increasing on \(\mathcal{D}^T\). Since \(S\) and \(M\) are cadlag, \(A\) is cadlag as well (in particular right-continuous). It follows that \(A\) must be increasing on all of \([0,T]\).

Lemma 8.32

Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. We have \(\lim _n\mathbb {E}[A^n_\tau ]=\mathbb {E}[A_\tau ]\).

Proof

Let \(\sigma _n:=\inf \left\{ t\in \mathcal{D}^T_n\mid t\geq \tau \right\} \). By construction of the left-continuous extension of \(A^n\) we have \(A^n_\tau =A^n_{\sigma _n}\). Moreover \(\sigma _n\searrow \tau \). Since \(S\) is of class \(D\) and cadlag we have

\begin{align*} \mathbb {E}[A^n_\tau ]& =\mathbb {E}[A^n_{\sigma _n}]=\mathbb {E}[S_{\sigma _n}]-\mathbb {E}[M^n_{\sigma _n}]=\mathbb {E}[S_{\sigma _n}]-\mathbb {E}[M^n_0]=\\ & =\mathbb {E}[S_{\sigma _n}]-\mathbb {E}[S_0]\rightarrow \mathbb {E}[S_\tau ]-\mathbb {E}[M_0]=\mathbb {E}[S_\tau ]-\mathbb {E}[M_\tau ]=\mathbb {E}[A_\tau ]. \end{align*}
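The dyadic approximation \(\sigma _n\searrow \tau \) from the right can be checked numerically (a toy sketch; here \(\sigma _n\) is taken to be the smallest point of \(\mathcal{D}^T_n\) not below \(\tau \), and all names are illustrative):

```python
from math import ceil
from fractions import Fraction

T = Fraction(1)
tau = Fraction(3, 10)   # an arbitrary (non-dyadic) time in [0, T]

def sigma(n):
    # smallest point of the dyadic grid D_n^T that is >= tau
    return Fraction(ceil(tau * 2 ** n / T), 2 ** n) * T

s = [sigma(n) for n in range(1, 20)]
assert all(s[k + 1] <= s[k] for k in range(len(s) - 1))  # decreasing along refining grids
assert all(x >= tau for x in s)                          # always from above
assert s[-1] - tau < Fraction(1, 2 ** 19)                # within one grid step of tau
```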
Lemma 8.33

Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. Then almost surely \(\limsup _n \mathcal{A}_\tau ^n = A_\tau \).

Proof

Firstly we notice that \(\liminf _n \mathbb {E}[A_\tau ^n] \leq \limsup _n \mathbb {E} [\mathcal{A}_\tau ^n ] \leq \mathbb {E}[\limsup _n \mathcal{A}_\tau ^n ] \leq \mathbb {E}[ A_\tau ]\), where the first inequality is justified by the definitions of \(\limsup \) and \(\liminf \) together with the fact that

\[ \sup _{k\geq n}\mathbb {E}[\mathcal{A}^k_\tau ]\geq \sum _{m=k}^{N_k}\lambda ^k_m\mathbb {E}[A^m_\tau ]\geq \sum _{m=k}^{N_k}\lambda ^k_m\inf _{j\geq n}\mathbb {E}[A^j_\tau ]=\inf _{k\geq n}\mathbb {E}[A^k_\tau ] \]

and the third inequality by lemma 8.23. Let us prove the second inequality: observe that

\[ \mathcal{A}^n_\tau = A_T+\mathcal{A}^n_\tau -A_T\leq A_T+(\mathcal{A}^n_\tau -A_T)_+, \]

thus \(\mathcal{A}^n_\tau - (\mathcal{A}^n_\tau -A_T)_+\leq A_T\); since \(A_T\) is an integrable dominating function, the reverse Fatou lemma may be applied, together with \(\limsup \) properties, to show that

\begin{align*} \limsup _n\mathbb {E}[\mathcal{A}^n_\tau ]+0 & = \limsup _n\mathbb {E}[\mathcal{A}^n_\tau ]+\liminf _n-\mathbb {E}[(\mathcal{A}^n_\tau -A_T)_+] \leq \limsup _n\mathbb {E}[\mathcal{A}^n_\tau -(\mathcal{A}^n_\tau -A_T)_+]\leq \\ & \leq \mathbb {E}[\limsup _n\mathcal{A}^n_\tau -(\mathcal{A}^n_\tau -A_T)_+]\leq \mathbb {E}[\limsup _n\mathcal{A}^n_\tau ]-\mathbb {E}[\liminf _n(\mathcal{A}^n_\tau -A_T)_+]\leq \mathbb {E}[\limsup _n\mathcal{A}^n_\tau ], \end{align*}

where the first equality is justified by the fact that \(\mathcal{A}^n_\tau \leq \mathcal{A}^n_T\rightarrow A_T\) almost surely, so that \(\liminf _n(\mathcal{A}^n_\tau -A_T)_+=0\) almost surely. Due to lemmas 8.32 and 8.23 the first chain of inequalities is a chain of equalities, thus \(A_\tau - \limsup _n \mathcal{A}_\tau ^n \) is an a.s. nonnegative function with null expected value, and hence it is almost surely null.

Theorem 8.34

Let \(S = (S_t )_{0\leq t\leq T}\) be a cadlag submartingale of class \(D\). Then, \(S\) can be written in a unique way in the form \(S = M + A\) where \(M\) is a cadlag martingale and \(A\) is a predictable increasing process starting at \(0\).

Proof

By construction \(M\) is a cadlag martingale, \(A_0=0\), and by lemma 8.31 \(A\) is increasing. Uniqueness follows from theorem 7.27. It remains to show that \(A\) is predictable. \(A^n\) and \(\mathcal{A}^n\) are left continuous and adapted, and thus predictable (measurable w.r.t. the predictable sigma-algebra, the one generated by left-continuous adapted processes). It is therefore enough to show that almost surely, for every \(t\in [0,T]\), \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\).

By lemma 8.24 this holds at every continuity point of \(A\). Since \(A\) is increasing, for any \(k\in \mathbb {N}\) it can only have finitely many jumps larger than \(1/k\). Consider the family of stopping times \(\tau _{q,k}\), where \(\tau _{q,k}\) is the \(q\)-th time that the process \(A\) has a jump larger than \(1/k\); this family is countable. Given a time \(t\) and a trajectory \(\omega \) there are two possibilities: either \(A(\omega )\) is continuous at \(t\) or not. If it is continuous at \(t\) we have \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\); if it jumps, there exist \(q(\omega ),k(\omega )\) such that \(t=\tau _{q(\omega ),k(\omega )}(\omega )\). By lemma 8.33, for each \(q,k\) we have \(\limsup _n \mathcal{A}^n_{\tau _{q,k}} = A_{\tau _{q,k}}\) almost surely. Since this is a countable intersection of almost sure events, there exists \(\Omega '\) with \(P(\Omega ')=1\) such that for every \(\omega \in \Omega '\) and every \(q,k\), \(\limsup _n \mathcal{A}^n_{\tau _{q,k}}(\omega ) = A_{\tau _{q,k}}(\omega )\) (\(\omega \) does not depend upon \(q,k\)). Consequently, for every \(\omega \in \Omega '\) we have \(\limsup _n\mathcal{A}^n_t(\omega )=\limsup _n\mathcal{A}^n_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_t(\omega )\).

8.4 Local version of the Doob-Meyer decomposition

Lemma 8.35

Every local submartingale \(X\) with \(X_0 = 0\) is locally of class D.

Proof

By Lemma 7.22, it suffices to show that if \(X\) is a submartingale with \(X_0=0\) then it is locally of class D.

TODO

Theorem 8.36 Doob-Meyer decomposition

An adapted process \(X\) is a cadlag local submartingale iff \(X = M + A\) where \(M\) is a cadlag local martingale and \(A\) is a predictable, cadlag, locally integrable and increasing process starting at \(0\). The processes \(M\) and \(A\) are uniquely determined by \(X\) a.s.

Proof