8 Doob-Meyer Theorem
This chapter follows [?], which gives an elementary and short proof of the result.
8.1 Cadlag modifications of (local) martingales
For \(T{\gt}0\), let \(\mathcal{D}_n^T = \left\lbrace \frac{k}{2^n}T \mid k=0,\cdots ,2^n\right\rbrace \) be the set of dyadics at scale \(n\) and let \(\mathcal{D}^T=\bigcup _{n\in \mathbb {N}}\mathcal{D}_n^T\) be the set of all dyadics of \([0,T]\).
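For instance, the grids are nested, \(\mathcal{D}_n^T\subseteq \mathcal{D}_{n+1}^T\), since \(\frac{k}{2^n}T=\frac{2k}{2^{n+1}}T\); concretely
\[\mathcal{D}_1^T=\left\lbrace 0,\tfrac{T}{2},T\right\rbrace \subseteq \mathcal{D}_2^T=\left\lbrace 0,\tfrac{T}{4},\tfrac{T}{2},\tfrac{3T}{4},T\right\rbrace .\]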
Let \(X=(X_t)_{t\in \mathcal{D}^T}\) be a martingale indexed by the dyadics. Then almost surely, for every \(t\in [0,T)\) the limit
\[X_{t+}:=\lim _{s\searrow t,\, s\in \mathcal{D}^T}X_s\]
exists and is finite.
See 8.2.1 of Pascucci.
Let \(X=(X_t)_{t\in \mathcal{D}^T}\) be a martingale indexed by the dyadics. Then almost surely, for every \(t\in (0,T]\) the limit
\[X_{t-}:=\lim _{s\nearrow t,\, s\in \mathcal{D}^T}X_s\]
exists and is finite.
See 8.2.1 of Pascucci.
Let the filtered probability space satisfy the usual conditions. Then every martingale \(X\) admits a modification that is still a martingale with cadlag trajectories.
See 8.2.3 of Pascucci.
Let the filtered probability space satisfy the usual conditions. Then every nonnegative submartingale \(X\) admits a modification that is still a nonnegative submartingale with cadlag trajectories.
See 8.2.3 of Pascucci.
Let the filtered probability space satisfy the usual conditions. Then every local martingale \(X\) admits a modification that is still a local martingale with cadlag trajectories.
8.2 Komlós Lemma
Firstly we will need Komlós’ lemma.
Let \(H\) be a Hilbert space and \((f_n)_{n\in \mathbb {N}}\) a bounded sequence in \(H\). Then there exist functions \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(H\).
Let \(r_n = \inf (\| g\| _2:g\in convex(f_n, f_{n+1},\ldots ))\) and let \(A=\sup _{n\geq 1} r_n\). \(A\) is finite by boundedness of \((f_n)_{n\in \mathbb {N}}\), and for each \(n\) we may pick some \(g_n\in convex(f_n, f_{n+1},\ldots )\) such that \(\| g_n\| _2\leq A+1/n\), by the definitions of \(\inf \) and \(\sup \). Let \(\epsilon {\gt}0\). By construction \((r_n)_{n\in \mathbb {N}}\) is increasing, so by the properties of \(\sup \) there exists \(\bar{n}\) such that \(r_{\bar{n}}\geq A-\epsilon \) and \(\frac{1}{\bar{n}}\leq \epsilon \). Let \(m\geq k\geq \bar{n}\). Then \((g_k+g_m)/2 \in convex(f_k,f_{k+1},\ldots )\), and since \((r_n)_{n\in \mathbb {N}}\) is increasing, \(\| (g_k+g_m)/2\| _2\geq r_k\geq r_{\bar{n}}\geq A-\epsilon \). Hence, by the parallelogram identity and the ordering of \(m,k,\bar{n}\),
\[\| g_k-g_m\| _2^2=2\| g_k\| _2^2+2\| g_m\| _2^2-4\left\| \frac{g_k+g_m}{2}\right\| _2^2\leq 4(A+\epsilon )^2-4(A-\epsilon )^2=16A\epsilon .\]
By completeness, \((g_n)_{n\geq 1}\) converges in \(\| .\| _2\).
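A standard example (not from the source) showing that passing to convex combinations is genuinely necessary: if \((f_n)_{n\in \mathbb {N}}\) is an orthonormal sequence in \(H\) (e.g. \(f_n(x)=\sqrt{2}\sin (n\pi x)\) in \(L^2(0,1)\)), then no subsequence converges in norm, since \(\| f_n-f_m\| _2=\sqrt{2}\) for \(n\neq m\); nevertheless the convex combinations
\[g_n=\frac{1}{n}\sum _{m=n}^{2n-1}f_m\in convex(f_n,f_{n+1},\ldots )\]
satisfy \(\| g_n\| _2^2=1/n\rightarrow 0\), so \(g_n\rightarrow 0\) in \(H\).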
Let \(X\) be a normed vector space (over \(\mathbb {R}\)). Let \((x_n)_{n\in \mathbb {N}}\) be a sequence in \(X\) converging to \(x\) w.r.t. the topology of \(X\). Let \((N_n)_{n\in \mathbb {N}}\) be a sequence in \(\mathbb {N}\) such that \(n\leq N_n\) for every \(n\in \mathbb {N}\) (WLOG \(N_n\) may be taken increasing). Let \((a_{n,m})_{n\in \mathbb {N},m\in \left\lbrace n,\cdots ,N_n\right\rbrace }\) be a triangular array in \(\mathbb {R}\) such that \(0\leq a_{n,m}\leq 1\) and \(\sum _{m=n}^{N_n}a_{n,m}=1\). Then \((\sum _{m=n}^{N_n}a_{n,m}x_m)_{n\in \mathbb {N}}\) converges to \(x\), uniformly w.r.t. the choice of triangular array.
Let \(\epsilon {\gt}0\). By convergence of \((x_n)_{n\in \mathbb {N}}\) there exists \(\bar{n}\) such that \(\| x_n-x\| \leq \epsilon \) for all \(n\geq \bar{n}\). By the triangle inequality it follows that, for \(n\geq \bar{n}\),
\[\left\| \sum _{m=n}^{N_n}a_{n,m}x_m-x\right\| =\left\| \sum _{m=n}^{N_n}a_{n,m}(x_m-x)\right\| \leq \sum _{m=n}^{N_n}a_{n,m}\| x_m-x\| \leq \epsilon .\]
For \(i,n\in \mathbb {N}\) set \(f_{n}^{(i)}:=f_n \mathbb {1}_{(|f_n|\leq i)}\), so that \(f_{n}^{(i)}\in L^2\). Then there exists a sequence of convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the functions \( (\lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)})_{n\in \mathbb {N}}\) converge in \(L^2\) for every \(i\in \mathbb {N}\), uniformly in \(i\).
Firstly, by lemma 8.7 applied to \((f_n^{(1)})_{n\in \mathbb {N}}\), there exist convex weights \(\prescript {1}{}{\lambda }^n_n,\cdots ,\prescript {1}{}{\lambda }^n_{N^1_n}\) such that \(g^1_n=\sum _{m=n}^{N^1_n}\prescript {1}{}{\lambda }^n_mf_m^{(1)}\) converges to some \(g^1\). Secondly, apply the lemma to \((\tilde{g}^2_n=\sum _{m=n}^{N^1_n}\prescript {1}{}{\lambda }^n_mf^{(2)}_m)_{n\in \mathbb {N}}\): there exist convex weights \(\tilde{\lambda }^n_n,\cdots ,\tilde{\lambda }^n_{\tilde{N}_n}\) such that \(g^2_n=\sum _{m=n}^{\tilde{N}_n}\tilde{\lambda }^n_m\tilde{g}^2_m=\sum _{m=n}^{N^2_n}\prescript {2}{}{\lambda }^n_mf_m^{(2)}\) converges to some \(g^2\). Notice that \(\sum _{m=n}^{N^2_n}\prescript {2}{}{\lambda }^n_mf_m^{(1)}=\sum _{m=n}^{\tilde{N}_n}\tilde{\lambda }^n_m\tilde{g}^1_m\), where \(\tilde{g}^1_m=\sum _{j=m}^{N^1_m}\prescript {1}{}{\lambda }^m_jf^{(1)}_j\), and thus this sequence still converges to \(g^1\) by lemma 8.8. By iteration we may define convex weights \(\prescript {i}{}{\lambda }^n_n,\cdots ,\prescript {i}{}{\lambda }^n_{N^i_n}\) such that, when used on \((f^{(j)}_n)_{n\in \mathbb {N}}\), they make the sequence convergent for every \(1\leq j\leq i\). At this point consider \(\lambda ^n_m=\prescript {n}{}{\lambda }^n_m\). Since for all \(m\geq i\) we have \(\sum _{j=n}^{N^m_n}\prescript {m}{}{\lambda }^n_j f^{(i)}_j\rightarrow g^i\), and moreover for all \(\epsilon {\gt}0\) there exists \(\bar{n}\) such that for all \(n\geq \bar{n}\) and all \(m\geq i\), \(\| \sum _{j=n}^{N^m_n}\prescript {m}{}{\lambda }^n_j f^{(i)}_j - g^i\| _2\leq \epsilon \) (this works by the uniformity w.r.t. the triangular array in lemma 8.8), this concludes the proof.
Let \(( f_n)_{n\in \mathbb {N}}\) be a uniformly integrable sequence of functions on a probability space \((\Omega , \mathcal{F} , P)\). Then there exist functions \(g_n \in convex( f_n, f_{n+1}, \cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(L^1 (\Omega )\).
For \(i,n\in \mathbb {N}\) set \(f_{n}^{(i)}:=f_n \mathbb {1}_{(|f_n|\leq i)}\), so that \(f_{n}^{(i)}\in L^2\). Using 8.9 there exist for every \(n\) convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the functions \( \lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)}\) converge in \(L^2\) for every \(i\in \mathbb {N}\). By uniform integrability, \(\lim _{i\to \infty }\| f^{(i)}_n- f_n\| _1=0\), uniformly with respect to \(n\). Hence, once again uniformly with respect to \(n\),
\[\lim _{i\to \infty }\left\| \left(\lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)}\right)-\left(\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n}\right)\right\| _1=0.\]
Thus \((\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n})_{n\geq 1}\) is a Cauchy sequence in \(L^1\).
8.3 Doob-Meyer decomposition
For uniqueness of Doob-Meyer Decomposition we will need theorem 7.27.
We now start the construction for the existence part. Let \(T{\gt}0\) and recall that \(\mathcal{D}_n^T=\left\lbrace \frac{k}{2^n}T \mid k=0,\cdots ,2^n\right\rbrace \).
Throughout the rest of this section, \(S=(S_t)_{0\leq t\leq T}\) denotes a cadlag submartingale of class D on \([0,T]\).
\(D\) is the class of all adapted processes \((S_t)_{0\leq t\leq T}\) such that the set \(\{ S_\tau \mid \tau \text{ a stopping time with } \tau \leq T\} \) is uniformly integrable.
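A standard example (not from the source): every martingale \((M_t)_{0\leq t\leq T}\) is of class \(D\), since by optional sampling
\[M_\tau =\mathbb {E}\left[M_T\,\middle \vert \,\mathcal{F}_\tau \right]\]
for every stopping time \(\tau \leq T\), and the family of conditional expectations of a fixed integrable random variable is uniformly integrable.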
Define \(A^n_0=0\) and, for \(t\in \mathcal{D}_n^T\) positive,
\[A^n_t=\sum _{k=1}^{t2^n/T}\mathbb {E}\left[S_{kT2^{-n}}-S_{(k-1)T2^{-n}}\,\middle \vert \,\mathcal{F}_{(k-1)T2^{-n}}\right].\]
For \(t\in \mathcal{D}_n^T\), define \(M^n_t = S_t-A^n_t\) .
\((A^n_t)_{t\in \mathcal{D}_n^T}\) is a predictable process.
Trivial: by construction, each summand defining \(A^n_t\) is measurable w.r.t. \(\mathcal{F}_s\) for some \(s\leq t-2^{-n}T\), hence \(A^n_t\) is \(\mathcal{F}_{t-2^{-n}T}\)-measurable.
\((M^n_t)_{t\in \mathcal{D}_n^T}\) is a martingale.
Trivial: for \(t,t+2^{-n}T\in \mathcal{D}^T_n\),
\[\mathbb {E}\left[M^n_{t+2^{-n}T}-M^n_t\,\middle \vert \,\mathcal{F}_t\right]=\mathbb {E}\left[S_{t+2^{-n}T}-S_t\,\middle \vert \,\mathcal{F}_t\right]-\left(A^n_{t+2^{-n}T}-A^n_t\right)=0,\]
since the increment \(A^n_{t+2^{-n}T}-A^n_t\) is \(\mathcal{F}_t\)-measurable.
\((A^n_t)_{t\in \mathcal{D}_n^T}\) is an increasing process.
\(S\) is a submartingale, hence for \(t,t+2^{-n}T\in \mathcal{D}^T_n\)
\[A^n_{t+2^{-n}T}-A^n_t=\mathbb {E}\left[S_{t+2^{-n}T}-S_t\,\middle \vert \,\mathcal{F}_t\right]\geq 0.\]
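As a sanity check of the discrete construction, here is a toy instance (not part of the source): we take for \(S\) the square of a simple symmetric random walk, for which the conditional increments can be computed exactly, so that \(A_k=k\) and \(M_k=S_k-k\); the sketch below verifies the martingale property of \(M\) at every node of the binary tree of paths.

```python
import itertools

# Toy instance (not from the source) of the discrete Doob decomposition:
# S_k = X_k^2 for a simple symmetric random walk X. The predictable part
# has increments E[S_{k+1} - S_k | F_k] = 1, so A_k = k and M_k = S_k - k.

N = 5  # time horizon (number of steps)

def partial_sums(path):
    """Positions of the walk along a +-1 path, starting at 0."""
    xs = [0]
    for step in path:
        xs.append(xs[-1] + step)
    return xs

def martingale_property_holds():
    """Check E[M_{k+1} | F_k] = M_k at every node of the binary tree."""
    for path in itertools.product([-1, 1], repeat=N):
        xs = partial_sums(path)
        for k in range(N):
            m_k = xs[k] ** 2 - k
            # average M_{k+1} over the two equally likely continuations
            m_next = (((xs[k] + 1) ** 2 + (xs[k] - 1) ** 2) / 2) - (k + 1)
            if m_next != m_k:
                return False
    return True

A = list(range(N + 1))  # A_k = k: increasing, deterministic (predictable)
print(martingale_property_holds())  # True
```

The check is exact (no simulation noise) because the conditional expectation over the two continuations of a path can be computed in closed form.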
Let \(c{\gt}0\). Define the hitting time on \(\mathcal{D}^T_n\)
\[\tau _n(c)=\inf \left\lbrace t\in \mathcal{D}^T_n \mid A^n_{t+2^{-n}T}{\gt}c\right\rbrace \wedge T.\]
\(\tau _n(c)\) is a stopping time.
Since \(A^n_{t}\) is predictable, \(A^n_{t + 2^{-n}T}\) is adapted. The hitting time of an adapted process is a stopping time (we use the discrete time version of that result here, not the full Début theorem).
\(A^n_{\tau _n(c)} \le c\) and if \(\tau _n(c) {\lt} T\) then \(A^n_{\tau _n(c)+T2^{-n}} {\gt} c\).
Let \(a, b {\gt} 0\) with \(a \le b\). If \(\tau _n(b) {\lt} T\) then \(A^n_{\tau _n(b)+T2^{-n}} - A^n_{\tau _n(a)} \ge b - a\).
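The two facts above can be illustrated on a toy increasing path (hypothetical values, for illustration only): the hitting time never sees the level \(c\) crossed at its own index, and the very next grid point overshoots it.

```python
# Toy illustration (hypothetical values) of the discrete hitting time
# tau_n(c) = inf{ t in grid : A_{t+step} > c } ∧ T for an increasing
# path A on a finite grid, checking: A_{tau(c)} <= c, and if tau(c) < T
# then A at the next grid point exceeds c.

def tau(A, c):
    """Index of the first grid point t with A[t+1] > c, capped at T."""
    last = len(A) - 1  # index of the horizon T
    for t in range(last):
        if A[t + 1] > c:
            return t
    return last

def hitting_time_facts_hold(A, levels):
    for c in levels:
        t = tau(A, c)
        if A[t] > c:                          # A_{tau(c)} <= c must hold
            return False
        if t < len(A) - 1 and A[t + 1] <= c:  # overshoot after tau(c) < T
            return False
    return True

A = [0.0, 0.5, 1.2, 1.3, 2.7, 4.0]  # an increasing path with A_0 = 0
print(hitting_time_facts_hold(A, [0.4, 1.0, 2.0, 5.0]))  # True
```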
The sequence \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable (bounded in \(L^1\) norm).
WLOG \(S_T=0\) and \(S_t\leq 0\) (else consider \(S_t-\mathbb {E}\left[S_T\vert \mathcal{F}_{t}\right]\)).
We have that \(0=S_T=M^n_T+A^n_T\). Thus
\[A^n_T=-M^n_T.\]
Since \(M^n\) is a martingale, it follows by optional sampling that for any \((\mathcal{F}_t)_{t\in \mathcal{D}_n^T}\) stopping time \(\tau \)
\[\mathbb {E}\left[A^n_T\,\middle \vert \,\mathcal{F}_\tau \right]=-\mathbb {E}\left[M^n_T\,\middle \vert \,\mathcal{F}_\tau \right]=-M^n_\tau =A^n_\tau -S_\tau .\]
Let \(c{\gt}0\). By Lemma 8.18, \(\tau _n(c)\) (Definition 8.17) is a stopping time. By construction \(A^n_{\tau _n(c)}\leq c\). It follows that
\[\mathbb {E}\left[A^n_T\,\middle \vert \,\mathcal{F}_{\tau _n(c)}\right]=A^n_{\tau _n(c)}-S_{\tau _n(c)}\leq c-S_{\tau _n(c)}.\]
Since \((A^n_T{\gt}c)=(\tau _n(c){\lt}T)\) we have
\[\int _{(A^n_T{\gt}c)}A^n_T\, dP=\int _{(\tau _n(c){\lt}T)}\mathbb {E}\left[A^n_T\,\middle \vert \,\mathcal{F}_{\tau _n(c)}\right]dP\leq c\, P(\tau _n(c){\lt}T)-\int _{(\tau _n(c){\lt}T)}S_{\tau _n(c)}\, dP.\]
Now we notice that \((\tau _n(c){\lt}T)\subseteq (\tau _n(c/2){\lt}T)\) and that on \((\tau _n(c){\lt}T)\), by the lemma above applied with \(a=c/2\) and \(b=c\), \(A^n_T-A^n_{\tau _n(c/2)}\geq c/2\); thus
\[c\, P(\tau _n(c){\lt}T)\leq 2\,\mathbb {E}\left[\left(A^n_T-A^n_{\tau _n(c/2)}\right)\mathbb {1}_{(\tau _n(c/2){\lt}T)}\right]=-2\int _{(\tau _n(c/2){\lt}T)}S_{\tau _n(c/2)}\, dP.\]
It follows
\[\int _{(A^n_T{\gt}c)}A^n_T\, dP\leq -2\int _{(\tau _n(c/2){\lt}T)}S_{\tau _n(c/2)}\, dP-\int _{(\tau _n(c){\lt}T)}S_{\tau _n(c)}\, dP.\]
We may notice that
\[P(\tau _n(c/2){\lt}T)=P(A^n_T{\gt}c/2)\leq \frac{2}{c}\,\mathbb {E}\left[A^n_T\right]=-\frac{2}{c}\,\mathbb {E}\left[S_0\right],\]
which goes to \(0\) uniformly in \(n\) as \(c\) goes to infinity. Since \(S\) is of class \(D\), this implies that \(\int _{(A^n_T{\gt}c)}A^n_T\, dP\) goes to \(0\) uniformly in \(n\) as \(c\to \infty \), so \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable, and in particular bounded in \(L^1\) norm.
The sequence \((M^n_T)_{n\in \mathbb {N}}\) is uniformly integrable (bounded in \(L^1\) norm).
\(M^n_T=S_T-A^n_T\), where \(S_T\) is integrable since \(S\) is of class \(D\) and \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable.
If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), then \(\limsup _n f_n(t) \leq f (t)\) for all \(t \in [0, T]\).
Let \(t\in [0,T]\) and \(s\in \mathcal{D}^T\) such that \(t{\lt}s\) (if \(t=T\) the claim is immediate since \(T\in \mathcal{D}^T\)). We have
\[\limsup _n f_n(t)\leq \limsup _n f_n(s)=f(s).\]
Since the above is true for every such \(s\), and \(f\) is right-continuous,
\[\limsup _n f_n(t)\leq \inf _{s\in \mathcal{D}^T,\, s{\gt}t}f(s)=f(t).\]
If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), and if \(f\) is continuous at \(t\in [0,T]\), then \(\lim _n f_n(t) = f (t)\).
By lemma 8.23 it is enough to show that \(\liminf _n f_n(t)\geq f(t)\). Let \(s\in \mathcal{D}^T\) such that \(s{\lt}t\). We have
\[\liminf _n f_n(t)\geq \liminf _n f_n(s)=f(s).\]
Since the above is true for every such \(s\), and \(f\) is continuous at \(t\),
\[\liminf _n f_n(t)\geq \sup _{s\in \mathcal{D}^T,\, s{\lt}t}f(s)=f(t).\]
Define \(M^n_t\) on \([0,T]\) using \(M^n_t=\mathbb {E}[M^n_T\vert \mathcal{F}_t]\).
\(M^n_t\) admits a modification which is a cadlag martingale.
By theorem 8.4
From this point onwards \(M^n_t\) will be redefined as the modification from lemma 8.25.
There are convex weights \(\lambda ^n_n,\cdots ,\lambda ^n_{N_n}\) such that \(\mathcal{M}^n_T\stackrel{L^1}{\rightarrow }M\), where \(\mathcal{M}^n:=\lambda ^n_nM^n+\cdots +\lambda ^n_{N_n}M^{N_n}.\)
\(\mathcal{M}^n\) is cadlag.
By construction and 8.25
Let
\[M_t:=\mathbb {E}\left[M\,\middle \vert \,\mathcal{F}_t\right].\]
\(M_t\) admits a cadlag martingale modification.
By construction \(M_t\) is a martingale and thus by theorem 8.4 admits a cadlag martingale modification (\(M_t\) is a version of \(\mathbb {E}[M\vert \mathcal{F}_t]\) and thus passing to modification does not pose any problem).
From this point onwards \(M_t\) will be redefined as the modification from lemma 8.28. Define the following processes.
Extend now \(A^n\) to \([0,T]\) as a left continuous process: \(A^n_s:=\sum _{t\in \mathcal{D}^T_n}A^n_t\mathbb {1}_{]t-2^{-n}T,\, t]}(s)\), with \(A^n_0=0\).
\(\mathcal{A}^n=\lambda ^n_nA^n+\cdots +\lambda ^n_{N_n}A^{N_n}\)
\(A_t=S_t-M_t\)
For every \(t\in [0,T]\) we have \(\mathcal{M}^n_t\stackrel{L^1}{\rightarrow }M_t\).
We may notice that, by Jensen’s inequality, the tower lemma and lemma 8.26,
\[\mathbb {E}\left[|\mathcal{M}^n_t-M_t|\right]=\mathbb {E}\left[\left|\mathbb {E}\left[\mathcal{M}^n_T-M\,\middle \vert \,\mathcal{F}_t\right]\right|\right]\leq \mathbb {E}\left[|\mathcal{M}^n_T-M|\right]\rightarrow 0.\]
There exists a set \(E\subseteq \Omega \), \(P(E)=0\) and a subsequence \(k_n\) such that \(\lim _n\mathcal{A}^{k_n}_t(\omega )=A_t(\omega )\) for every \(t\in \mathcal{D}^T,\omega \in \Omega \setminus E\).
By Lemma 8.29, for every \(t\in \mathcal{D}^T\) and every \(n\) large enough that \(t\in \mathcal{D}^T_n\),
\[\mathcal{A}^n_t=S_t-\mathcal{M}^n_t\stackrel{L^1}{\longrightarrow }S_t-M_t=A_t.\]
Since \(\mathcal{D}^T\) is countable we can arrange its elements as \((t_n)_{n\in \mathbb {N}}\). Given \(t_0\in \mathcal{D}^T\), since \(L^1\) convergence implies almost sure convergence along a subsequence, there exists a subsequence \(k^{0}_n\) for which \(\mathcal{A}^{k^{0}_n}_{t_0}\) converges to \(A_{t_0}\) over the set \(\Omega \setminus E_{0}\), where \(P(E_{0})=0\). Suppose we have a subsequence \(k^m_n\) for which \(\mathcal{A}^{k^m_n}_{t_j}\) converges to \(A_{t_j}\) over the set \(\Omega \setminus E_{m}\), where \(P(E_{m})=0\), for each \(j=0,\cdots ,m\). From this subsequence we may extract a new subsequence \(k^{m+1}_n\) for which \(\mathcal{A}^{k^{m+1}_n}_{t_{m+1}}\) converges to \(A_{t_{m+1}}\) over the set \(\Omega \setminus E_{m+1}\), where \(P(E_{m+1})=0\). By construction, along this subsequence the convergence for \(t_0,\cdots ,t_m\) still holds. With a diagonal argument (taking \(k_n=k^n_n\)) we obtain the final result with \(E=\bigcup _n E_n\).
\((A_t)_{t\in [0,T]}\) is an increasing process.
Since \(\mathcal{A}^n_t\) is increasing on \(\mathcal{D}^T\) by lemma 8.30 also \(A\) is almost surely increasing on \(\mathcal{D}^T\). Since \(S,M\) are cadlag also \(A\) is cadlag (thus right-continuous). It follows that \(A\) must be increasing on \([0,T]\).
Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. We have \(\lim _n\mathbb {E}[A^n_\tau ]=\mathbb {E}[A_\tau ]\).
Let \(\sigma _n:=\inf \left\lbrace t\in \mathcal{D}^T_n\mid t\geq \tau \right\rbrace \). By construction of the left continuous extension of \(A^n\) we have \(A^n_\tau =A^n_{\sigma _n}\). Also \(\sigma _n\searrow \tau \). Since \(S\) is of class \(D\) and cadlag we have \(\mathbb {E}\left[S_{\sigma _n}\right]\rightarrow \mathbb {E}\left[S_\tau \right]\), hence by optional sampling
\[\mathbb {E}\left[A^n_\tau \right]=\mathbb {E}\left[A^n_{\sigma _n}\right]=\mathbb {E}\left[S_{\sigma _n}\right]-\mathbb {E}\left[M^n_{\sigma _n}\right]=\mathbb {E}\left[S_{\sigma _n}\right]-\mathbb {E}\left[S_0\right]\longrightarrow \mathbb {E}\left[S_\tau \right]-\mathbb {E}\left[S_0\right]=\mathbb {E}\left[A_\tau \right],\]
where the last equality uses \(A_\tau =S_\tau -M_\tau \) and \(\mathbb {E}\left[M_\tau \right]=\mathbb {E}\left[M_0\right]=\mathbb {E}\left[S_0\right]\).
Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. We have \(\limsup _n \mathcal{A}_\tau ^n = A_\tau \) almost surely.
Firstly we notice that
\[\liminf _n \mathbb {E}[A_\tau ^n] \leq \limsup _n \mathbb {E} [\mathcal{A}_\tau ^n ] \leq \mathbb {E}[\limsup _n \mathcal{A}_\tau ^n ] \leq \mathbb {E}[ A_\tau ],\]
where the first inequality is justified by the definition of limsup and liminf and the fact that
\[\min _{n\leq m\leq N_n}\mathbb {E}[A^m_\tau ]\leq \mathbb {E}[\mathcal{A}^n_\tau ]=\sum _{m=n}^{N_n}\lambda ^n_m\,\mathbb {E}[A^m_\tau ]\leq \max _{n\leq m\leq N_n}\mathbb {E}[A^m_\tau ],\]
and the third inequality by 8.23 (applied pathwise, which gives \(\limsup _n \mathcal{A}^n_\tau \leq A_\tau \) a.s.). Let us prove the second inequality: observe that
\[\mathbb {E}\left[(\mathcal{A}^n_\tau -A_T)_+\right]\leq \mathbb {E}\left[(\mathcal{A}^n_T-A_T)_+\right]\leq \mathbb {E}\left[|\mathcal{A}^n_T-A_T|\right]\rightarrow 0,\]
thus it follows that \(\mathcal{A}^n_\tau - (\mathcal{A}^n_\tau -A_T)_+\leq A_T\); since \(A_T\) is an integrable dominating function, the reverse Fatou lemma may be applied to show, together with the properties of limsup, that
\[\limsup _n\mathbb {E}[\mathcal{A}^n_\tau ]=\limsup _n\mathbb {E}\left[\mathcal{A}^n_\tau -(\mathcal{A}^n_\tau -A_T)_+\right]\leq \mathbb {E}\left[\limsup _n\left(\mathcal{A}^n_\tau -(\mathcal{A}^n_\tau -A_T)_+\right)\right]=\mathbb {E}\left[\limsup _n\mathcal{A}^n_\tau \right],\]
where the first equality is justified by the fact that \(\mathcal{A}^n_\tau \leq \mathcal{A}^n_T\rightarrow A_T\) almost surely (so that \((\mathcal{A}^n_\tau -A_T)_+\rightarrow 0\) almost surely and in \(L^1\)). Due to lemma 8.32 and 8.23 the first chain of inequalities is in fact a chain of equalities; thus we know that \(A_\tau - \limsup _n \mathcal{A}_\tau ^n \) is an a.s. nonnegative function with null expected value, and thus it must be almost everywhere null.
Let \(S = (S_t )_{0\leq t\leq T}\) be a cadlag submartingale of class \(D\). Then, \(S\) can be written in a unique way in the form \(S = M + A\) where \(M\) is a cadlag martingale and \(A\) is a predictable increasing process starting at \(0\).
By construction \(M\) is a cadlag martingale and \(A_0=0\), and by lemma 8.31 \(A\) is increasing. It suffices to show that \(A\) is predictable. \(A^n,\mathcal{A}^n\) are left continuous and adapted, and thus they are predictable (measurable w.r.t. the predictable sigma algebra, i.e. the one generated by left-continuous adapted processes). Since an a.s. pointwise limsup of predictable processes is predictable, it is enough to show that \(\omega \)-a.e., for all \(t\in [0,T]\), \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\).
By lemma 8.24 this is true at any continuity point of \(A\). Since \(A\) is increasing, it can only have finitely many jumps larger than \(1/k\) for any \(k\in \mathbb {N}\). Consider now the countable family of stopping times \(\tau _{q,k}\), where \(\tau _{q,k}\) is the \(q\)-th time that the process \(A\) has a jump larger than \(1/k\). Given a time \(t\) and a trajectory \(\omega \) there are only two possibilities: either \(A\) is continuous at time \(t\) along \(\omega \) or not. If \(A\) is continuous at time \(t\) we have \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\); if it jumps, there exist \(q(\omega ),k(\omega )\) such that \(t=\tau _{q(\omega ),k(\omega )}(\omega )\). Due to lemma 8.33 we know that \(\limsup _n \mathcal{A}^n_{\tau _{q,k}} = A_{\tau _{q,k}}\) almost surely for each \(q,k\). Thus, since a countable intersection of almost sure events is almost sure, there exists \(\Omega '\) with \(P(\Omega ')=1\) such that for all \(\omega \in \Omega '\) and each \(q,k\), \(\limsup _n \mathcal{A}^n_{\tau _{q,k}}(\omega ) = A_{\tau _{q,k}}(\omega )\) (\(\Omega '\) does not depend upon \(q,k\)). Consequently, for all \(\omega \in \Omega '\) we have \(\limsup _n\mathcal{A}^n_t(\omega )=\limsup _n\mathcal{A}^n_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_t(\omega )\).
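As a concrete illustration of the theorem (a standard example, not part of the proof): if \(W\) is a Brownian motion, then \(S_t=W_t^2\) is a cadlag submartingale of class \(D\) on \([0,T]\), and its Doob-Meyer decomposition is
\[W_t^2=\underbrace{(W_t^2-t)}_{M_t}+\underbrace{t}_{A_t},\]
since \(\mathbb {E}\left[W_t^2-t\,\middle \vert \,\mathcal{F}_s\right]=W_s^2-s\) for \(s\leq t\), and \(A_t=t\) is deterministic (hence predictable), increasing and starts at \(0\).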
8.4 Local version of the Doob-Meyer decomposition
Every local submartingale \(X\) with \(X_0 = 0\) is locally of class D.
By Lemma 7.22, it suffices to show that if \(X\) is a submartingale with \(X_0=0\) then it is locally of class D.
TODO
An adapted process \(X\) is a cadlag local submartingale iff \(X = M + A\) where \(M\) is a cadlag local martingale and \(A\) is a predictable, cadlag, locally integrable and increasing process starting at \(0\). The processes \(M\) and \(A\) are uniquely determined by \(X\) a.s.
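A standard example for the local version (not from the source): a Poisson process \(N\) with intensity \(\lambda \) and \(N_0=0\) is an increasing cadlag adapted process, hence a local submartingale, and its decomposition is
\[N_t=\underbrace{(N_t-\lambda t)}_{M_t}+\underbrace{\lambda t}_{A_t},\]
where \(N_t-\lambda t\) is a martingale (the compensated Poisson process) and the compensator \(A_t=\lambda t\) is deterministic, hence predictable, increasing and locally integrable.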