12 Doob-Meyer Theorem
This chapter starts with a short review of the properties of the Doob decomposition of an adapted process indexed by a discrete set, and then follows [?], which gives an elementary and short proof of the Doob-Meyer theorem.
12.1 Doob decomposition in discrete time
Let \(X : \mathbb {N} \to \Omega \to E\) be a process indexed by \(\mathbb {N}\), for \(E\) a Banach space. Let \((\mathcal{F}_n)_{n\in \mathbb {N}}\) be a filtration on \(\Omega \). The predictable part of \(X\) is the process \(A : \mathbb {N} \to \Omega \to E\) defined for \(n \ge 0\) by
\[ A_n = \sum _{k=0}^{n-1} \mathbb {E}\bigl[X_{k+1} - X_k \,\big\vert \, \mathcal{F}_k\bigr] \]
(with the convention that the empty sum is \(0\)).
In what follows, we fix a process \(X : \mathbb {N} \to \Omega \to E\) and a filtration \((\mathcal{F}_n)_{n \in \mathbb {N}}\), and denote by \(A\) the predictable part of \(X\).
We have \(A_0 = 0\).
For any integer \(n \ge 0\), \(A_{n+1} = A_n + \mathbb {E}[X_{n+1} - X_n \mid \mathcal{F}_n]\).
Let \(n \in \mathbb {N}\). Then
\[ A_{n+1} - A_n = \sum _{k=0}^{n} \mathbb {E}\bigl[X_{k+1} - X_k \mid \mathcal{F}_k\bigr] - \sum _{k=0}^{n-1} \mathbb {E}\bigl[X_{k+1} - X_k \mid \mathcal{F}_k\bigr] = \mathbb {E}\bigl[X_{n+1} - X_n \mid \mathcal{F}_n\bigr] , \]
which concludes the proof.
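The definitions above can be checked on a standard example (an illustration added here, not taken from the source): the square of a simple random walk.

```latex
% Example: let (xi_k) be i.i.d. signs with P(xi_k = 1) = P(xi_k = -1) = 1/2,
% S_n = xi_1 + ... + xi_n, F_n = sigma(xi_1, ..., xi_n), and X_n = S_n^2.
% Since E[X_{n+1} - X_n | F_n] = E[2 S_n xi_{n+1} + xi_{n+1}^2 | F_n] = 1,
% the Doob decomposition X = M + A reads
\[
  A_n \;=\; \sum_{k=0}^{n-1} \mathbb{E}\bigl[X_{k+1}-X_k \,\big\vert\, \mathcal{F}_k\bigr] \;=\; n,
  \qquad
  M_n \;=\; S_n^2 - n,
\]
% and one checks directly that (M_n) is a martingale and that (A_n) is
% predictable and nondecreasing, as asserted for submartingales.
```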
The predictable part \(A\) is adapted to the filtration \((\mathcal{F}_{n+1})_{n \in \mathbb {N}}\).
Let \(X : \mathbb {N} \to \Omega \to E\) be a process indexed by \(\mathbb {N}\), for \(E\) a Banach space. Let \((\mathcal{F}_n)_{n\in \mathbb {N}}\) be a filtration on \(\Omega \) and let \(A\) be the predictable part of \(X\) for that filtration. The martingale part of \(X\) is the process \(M : \mathbb {N} \to \Omega \to E\) defined by \(M_n = X_n - A_n\).
The predictable part of a process is predictable.
Suppose that the filtration is sigma-finite. Then, for an adapted process \(X\) such that \(X_n\) is integrable for all \(n\), the martingale part of \(X\) is a martingale.
The predictable part of a real-valued submartingale is an almost surely nondecreasing process.
Let \(X\) be a submartingale and let \(A\) be its predictable part. Then for all \(n \geq 0\), from Lemma 7.7 we have that almost surely
\[ A_{n+1} - A_n = \mathbb {E}\bigl[X_{n+1} - X_n \mid \mathcal{F}_n\bigr] \ge 0 . \]
The first equality comes from Lemma 12.3. As \(\mathbb {N}\) is countable, we deduce that almost surely, for all \(n \in \mathbb {N}\), \(A_{n+1} \ge A_n\). Thus, \((A_n)_{n \in \mathbb {N}}\) is almost surely nondecreasing.
12.2 Komlós lemma
First, we will need Komlós' lemma.
Let \((f_n)_{n\in \mathbb {N}}\) be a sequence in a vector space \(E\) and let \(\phi : E \to \mathbb {R}_+\) be a convex function such that \((\phi (f_n))_{n\in \mathbb {N}}\) is a bounded sequence. For \(\delta {\gt} 0\), let \(S_\delta = \{ (f, g) \mid \phi (f)/2 + \phi (g)/2 - \phi ((f+g)/2) \ge \delta \} \). Then there exist \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that for all \(\delta {\gt} 0\), for \(N\) large enough and all \(n, m \ge N\), \((g_n, g_m) \notin S_\delta \).
Let \(B\) be a bound of \((\phi (f_n))_{n\in \mathbb {N}}\). Then for all \(n\in \mathbb {N}\) and \(g\in convex(f_n,f_{n+1},\cdots )\) we have \(\phi (g)\le B\) by convexity of \(\phi \). Let \(r_n = \inf \{ \phi (g) \mid g\in convex(f_n, f_{n+1},\ldots )\} \). By construction \((r_n)_{n\in \mathbb {N}}\) is nondecreasing. Let \(A = \sup _{n \ge 1} r_n\), which is finite (as \(A \le B\)), and for each \(n \ge 1\) pick some \(g_n\in convex(f_n, f_{n+1},\ldots )\) such that \(\phi (g_n) \le A+1/n\), which is possible by the definitions of \(\inf \) and \(\sup \).
Let \(\varepsilon \in (0, \delta /4)\). By properties of \(\sup \) there exists \(\bar{n}\) such that \(r_{\bar{n}} \ge A-\varepsilon \) and such that \(\frac{1}{\bar{n}} \le \varepsilon \). Let \(m \ge k \ge \bar{n}\). We have \((g_k+g_m)/2 \in convex(f_k,f_{k+1},\ldots )\), and since \((r_n)_{n\in \mathbb {N}}\) is nondecreasing it follows that \(\phi ((g_k+g_m)/2) \ge r_k \ge r_{\bar{n}} \ge A - \varepsilon \). Hence, due to the ordering of \(m,k,\bar{n}\),
\[ \frac{\phi (g_k)}{2} + \frac{\phi (g_m)}{2} - \phi \Bigl(\frac{g_k+g_m}{2}\Bigr) \le (A+\varepsilon ) - (A-\varepsilon ) = 2\varepsilon {\lt} \delta . \]
Thus, for \(n, m \ge \bar{n}\), \((g_n, g_m) \notin S_\delta \).
Let \(H\) be a Hilbert space and \((f_n)_{n\in \mathbb {N}}\) a bounded sequence in \(H\). Then there exist functions \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(H\).
Consider \(\phi : H \to \mathbb {R}_+\) defined by \(\phi (f) = \| f\| _2^2\), which is convex. Then Lemma 12.9 applied to \((f_n)_{n\in \mathbb {N}}\) and \(\phi \) gives us functions \(g_n\in convex(f_n,f_{n+1},\cdots )\) such that for every \(\delta {\gt}0\) there exists \(N\) such that for \(n,m\geq N\), \((g_n,g_m)\notin S_\delta \). Thus for every \(\delta {\gt}0\) there exists \(N\) such that for \(n,m\geq N\),
\[ \frac{\| g_n\| _2^2}{2} + \frac{\| g_m\| _2^2}{2} - \Bigl\| \frac{g_n+g_m}{2}\Bigr\| _2^2 {\lt} \delta . \]
But the left-hand side is equal to \(\| g_n - g_m\| _2^2/4\) by the parallelogram identity, hence \((g_n)_{n\in \mathbb {N}}\) is a Cauchy sequence in \(H\) and thus converges in \(H\) by completeness.
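An example (added here for illustration) of why passing to convex combinations is essential: a bounded sequence in a Hilbert space need not have any norm-convergent subsequence.

```latex
% In H = \ell^2, the orthonormal basis (e_n) is bounded with
% ||e_n - e_m|| = sqrt(2) for n != m, so no subsequence converges in norm.
% Convex combinations of the tails do converge, e.g.
\[
  g_n \;=\; \frac{1}{n+1}\sum_{m=n}^{2n} e_m \in convex(e_n, e_{n+1}, \cdots),
  \qquad
  \lVert g_n \rVert^2 \;=\; (n+1)\cdot\frac{1}{(n+1)^2} \;=\; \frac{1}{n+1}
  \;\longrightarrow\; 0 .
\]
```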
Let \(X\) be a normed vector space (over \(\mathbb {R}\)). Let \((x_n)_{n\in \mathbb {N}}\) be a sequence in \(X\) converging to \(x\) in the topology of \(X\). Let \((N_n)_{n\in \mathbb {N}}\) be a sequence in \(\mathbb {N}\) such that \(n\leq N_n\) for every \(n\in \mathbb {N}\) (possibly one could assume \(N_n\) increasing WLOG). Let \((a_{n,m})_{n\in \mathbb {N},m\in \left\lbrace n,\cdots ,N_n\right\rbrace }\) be a triangular array in \(\mathbb {R}\) such that \(0\leq a_{n,m}\leq 1\) and \(\sum _{m=n}^{N_n}a_{n,m}=1\). Then \((\sum _{m=n}^{N_n}a_{n,m}x_m)_{n\in \mathbb {N}}\) converges to \(x\), uniformly with respect to the triangular array.
Let \(\epsilon {\gt}0\). By convergence of \((x_n)_{n\in \mathbb {N}}\) there exists \(\bar{n}\) such that \(\| x_n-x\| \leq \epsilon \) for all \(n\geq \bar{n}\). By the triangle inequality it follows that, for all \(n\geq \bar{n}\),
\[ \Bigl\| \sum _{m=n}^{N_n}a_{n,m}x_m - x \Bigr\| = \Bigl\| \sum _{m=n}^{N_n}a_{n,m}(x_m - x) \Bigr\| \le \sum _{m=n}^{N_n}a_{n,m}\| x_m - x\| \le \epsilon , \]
and the bound does not depend on the choice of the array.
For \(i,n\in \mathbb {N}\) set \(f_{n}^{(i)}:=f_n \mathbb {1}_{(|f_n|\leq i)}\), so that \(f_{n}^{(i)}\in L^2\). Then there exist convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the functions \( (\lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)})_{n\in \mathbb {N}}\) converge in \(L^2\) for every \(i\in \mathbb {N}\), uniformly in \(i\).
First, applying Lemma 12.10 to \((f_n^{(1)})_{n\in \mathbb {N}}\) gives convex weights \(\prescript {1}{}{\lambda }^n_n,\cdots ,\prescript {1}{}{\lambda }^n_{N^1_n}\) such that \(g^1_n=\sum _{m=n}^{N^1_n}\prescript {1}{}{\lambda }^n_mf_m^{(1)}\) converges to some \(g^1\). Second, apply the lemma to \((\tilde{g}^2_n)_{n\in \mathbb {N}}\), where \(\tilde{g}^2_n=\sum _{m=n}^{N^1_n}\prescript {1}{}{\lambda }^n_mf^{(2)}_m\): there exist convex weights \(\tilde{\lambda }^n_n,\cdots ,\tilde{\lambda }^n_{\tilde{N}_n}\) such that \(g^2_n=\sum _{m=n}^{\tilde{N}_n}\tilde{\lambda }^n_m\tilde{g}^2_m=\sum _{m=n}^{N^2_n}\prescript {2}{}{\lambda }^n_mf_m^{(2)}\) converges to some \(g^2\). Notice that \(\sum _{m=n}^{N^2_n}\prescript {2}{}{\lambda }^n_mf_m^{(1)}=\sum _{m=n}^{\tilde{N}_n}\tilde{\lambda }^n_mg^1_m\), so by Lemma 12.11 this sequence still converges to \(g^1\). Iterating, we obtain convex weights \(\prescript {i}{}{\lambda }^n_n,\cdots ,\prescript {i}{}{\lambda }^n_{N^i_n}\) which, applied to \((f^{(j)}_n)_{n\in \mathbb {N}}\), make the sequence convergent for every \(1\leq j\leq i\). Now consider the diagonal weights \(\lambda ^n_m=\prescript {n}{}{\lambda }^n_m\). For every \(i\) and every \(m\geq i\) we have \(\sum _{j=n}^{N^m_n}\prescript {m}{}{\lambda }^n_j f^{(i)}_j\rightarrow g^i\) as \(n\to \infty \); moreover, by the uniformity with respect to the triangular array in Lemma 12.11, for every \(\epsilon {\gt}0\) there exists \(\bar{n}\) such that \(\bigl|\sum _{j=n}^{N^m_n}\prescript {m}{}{\lambda }^n_j f^{(i)}_j - g^i\bigr|\leq \epsilon \) for all \(n\geq \bar{n}\) and all \(m\geq i\). This concludes the proof.
Let \(( f_n)_{n\in \mathbb {N}}\) be a uniformly integrable sequence of functions on a probability space \((\Omega , \mathcal{F} , P)\). Then there exist functions \(g_n \in convex( f_n, f_{n+1}, \cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges in \(L^1 (\Omega )\).
For \(i,n\in \mathbb {N}\) set \(f_{n}^{(i)}:=f_n \mathbb {1}_{(|f_n|\leq i)}\), so that \(f_{n}^{(i)}\in L^2\). Using Lemma 12.12, there exist for every \(n\) convex weights \(\lambda _n^{n}, \ldots , \lambda _{N_n}^{n}\) such that the functions \( \lambda _n^{n} f_n^{(i)} + \ldots +\lambda _{N_n}^{n} f_{N_n}^{(i)}\) converge in \(L^2\) for every \(i\in \mathbb {N}\). By uniform integrability, \(\lim _{i\to \infty }\| f^{(i)}_n- f_n\| _1=0\), uniformly with respect to \(n\). Hence, once again uniformly with respect to \(n\),
\[ \lim _{i\to \infty }\bigl\| \bigl(\lambda _n^{n} f^{(i)}_n + \ldots +\lambda _{N_n}^{n} f^{(i)}_{N_n}\bigr) - \bigl(\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n}\bigr)\bigr\| _1 = 0 . \]
Thus \((\lambda _n^{n} f_n + \ldots +\lambda _{N_n}^{n} f_{N_n})_{n\geq 1}\) is a Cauchy sequence in \(L^1\).
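The uniform integrability hypothesis cannot be dropped, as the following counterexample (added here for illustration) shows.

```latex
% On ([0,1], Lebesgue), let f_n = n * 1_{(0, 1/n)}.  Then ||f_n||_1 = 1,
% but (f_n) is not uniformly integrable.  Any g_n in convex(f_n, f_{n+1}, ...)
% satisfies
\[
  \int_0^1 g_n \, dx = 1,
  \qquad
  \operatorname{supp}(g_n) \subseteq \Bigl(0, \tfrac{1}{n}\Bigr),
\]
% so g_n -> 0 almost everywhere while ||g_n||_1 = 1: no choice of convex
% combinations of the tails can converge in L^1.
```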
Komlós lemma for nonnegative random variables
Let \((f_n)_{n\in \mathbb {N}}\) be a sequence of random variables with values in \([0, \infty ]\). Then there exist random variables \(g_n \in convex( f_n, f_{n+1}, \cdots )\) such that \((g_n)_{n\in \mathbb {N}}\) converges almost surely to a random variable \(g\).
Let \(\phi : (\Omega \to [0, \infty ]) \to [0, \infty ]\) be defined by \(\phi (X) = \mathbb {E}[e^{-X}]\). Then \(\phi \) is convex and \(\phi (f_n) \le 1\) for all \(n\). By Lemma 12.9, there exist \(g_n \in convex( f_n, f_{n+1}, \cdots )\) such that for all \(\delta {\gt}0\), for \(N\) large enough and \(n, m \ge N\), \((g_n,g_m)\notin S_\delta \), that is,
\[ \frac{\mathbb {E}[e^{-g_n}]}{2} + \frac{\mathbb {E}[e^{-g_m}]}{2} - \mathbb {E}\bigl[e^{-(g_n+g_m)/2}\bigr] {\lt} \delta . \]
For \(\varepsilon {\gt} 0\), let \(B_\varepsilon = \{ (x, y) \in [0, \infty ]^2 \mid \vert x - y \vert \ge \varepsilon \text{ and } \min \{ x, y\} \le 1/\varepsilon \} \). Then for all \(x, y\),
\[ \bigl\vert e^{-x} - e^{-y} \bigr\vert \le \varepsilon + e^{-1/\varepsilon } + \mathbb {1}_{B_\varepsilon }(x,y) . \]
Hence for any pair of random variables \((X, Y)\) with values in \([0, \infty ]\),
\[ \mathbb {E}\bigl\vert e^{-X} - e^{-Y} \bigr\vert \le \varepsilon + e^{-1/\varepsilon } + P\bigl((X,Y)\in B_\varepsilon \bigr) . \]
On the other hand, there exists \(\delta _\varepsilon {\gt} 0\) such that for all \((x, y) \in B_\varepsilon \),
\[ \frac{e^{-x}}{2} + \frac{e^{-y}}{2} - e^{-(x+y)/2} \ge \delta _\varepsilon ; \]
indeed, writing \(m = \min \{ x,y\} \) and \(d = \vert x-y\vert \), the left-hand side equals \(e^{-m}(1-e^{-d/2})^2/2\), which on \(B_\varepsilon \) is at least \(e^{-1/\varepsilon }(1-e^{-\varepsilon /2})^2/2\).
Thus, since the integrand below is nonnegative by convexity and at least \(\delta _\varepsilon \) on \(B_\varepsilon \),
\[ \delta _\varepsilon \, P\bigl((X,Y)\in B_\varepsilon \bigr) \le \frac{\mathbb {E}[e^{-X}]}{2} + \frac{\mathbb {E}[e^{-Y}]}{2} - \mathbb {E}\bigl[e^{-(X+Y)/2}\bigr] . \]
For \(n, m \ge N\) large enough so that we can apply the first inequality of this proof with \(\delta = \varepsilon \delta _\varepsilon \), we deduce that
\[ \mathbb {E}\bigl\vert e^{-g_n} - e^{-g_m} \bigr\vert \le \varepsilon + e^{-1/\varepsilon } + \frac{\varepsilon \delta _\varepsilon }{\delta _\varepsilon } = 2\varepsilon + e^{-1/\varepsilon } . \]
As \(\varepsilon \) is arbitrary, we deduce that \((e^{-g_n})_{n\in \mathbb {N}}\) is a Cauchy sequence in \(L^1\) and thus converges in \(L^1\) to some random variable \(h\). Therefore, it has a subsequence \((e^{-g_{n_k}})_{k\in \mathbb {N}}\) converging almost surely to \(h\). Finally, the corresponding subsequence \((g_{n_k})_{k\in \mathbb {N}}\), which still satisfies \(g_{n_k}\in convex(f_{n_k},f_{n_k+1},\cdots )\), converges almost surely to \(g = -\log (h)\).
12.3 Doob-Meyer decomposition
For the uniqueness part of the Doob-Meyer decomposition we will need Theorem 9.29.
We now start the construction for the existence part. Let \(T{\gt}0\) and recall that \(\mathcal{D}_n^T=\left\lbrace \frac{k}{2^n}T \mid k=0,\cdots ,2^n\right\rbrace \).
Everywhere below, \(S\) is a cadlag submartingale of class \(D\) on \([0,T]\), adapted to a filtration \((\mathcal{F}_t)_{t\in [0,T]}\).
Define \(A^n_0=0\) and, for \(t\in \mathcal{D}_n^T\) positive,
\[ A^n_t = A^n_{t-2^{-n}T} + \mathbb {E}\bigl[S_t - S_{t-2^{-n}T} \,\big\vert \, \mathcal{F}_{t-2^{-n}T}\bigr] . \]
For \(t\in \mathcal{D}_n^T\), define \(M^n_t = S_t-A^n_t\) .
\((A^n_t)_{t\in \mathcal{D}_n^T}\) is a predictable process.
Trivial
\((M^n_t)_{t\in \mathcal{D}_n^T}\) is a martingale.
Trivial
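For completeness, the martingale property can be checked in one line (a sketch added here, assuming \(A^n\) satisfies the discrete predictable-part recursion on the grid \(\mathcal{D}^T_n\)):

```latex
% For t in D^T_n positive, the increment of A^n is
% A^n_t - A^n_{t-2^{-n}T} = E[S_t - S_{t-2^{-n}T} | F_{t-2^{-n}T}],
% which is F_{t-2^{-n}T}-measurable.  Hence
\[
  \mathbb{E}\bigl[M^n_t - M^n_{t-2^{-n}T} \,\big\vert\, \mathcal{F}_{t-2^{-n}T}\bigr]
  = \mathbb{E}\bigl[S_t - S_{t-2^{-n}T} \,\big\vert\, \mathcal{F}_{t-2^{-n}T}\bigr]
    - \bigl(A^n_t - A^n_{t-2^{-n}T}\bigr)
  = 0 ,
\]
% which is the martingale property of (M^n_t) along the grid.
```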
\((A^n_t)_{t\in \mathcal{D}_n^T}\) is an increasing process.
Since \(S\) is a submartingale, for every positive \(t\in \mathcal{D}^T_n\) we have almost surely
\[ A^n_t - A^n_{t-2^{-n}T} = \mathbb {E}\bigl[S_t - S_{t-2^{-n}T} \mid \mathcal{F}_{t-2^{-n}T}\bigr] \ge 0 . \]
Let \(c{\gt}0\). Define the hitting time on \(\mathcal{D}^T_n\)
\[ \tau _n(c) = \inf \bigl\lbrace t\in \mathcal{D}^T_n \mid A^n_{t+2^{-n}T} {\gt} c \bigr\rbrace \wedge T . \]
\(\tau _n(c)\) is a stopping time.
Since \(A^n\) is predictable, the process \(t \mapsto A^n_{t + 2^{-n}T}\) is adapted. The hitting time of an adapted process is a stopping time (we use the discrete-time version of that result here, not the full Début theorem).
\(A^n_{\tau _n(c)} \le c\) and if \(\tau _n(c) {\lt} T\) then \(A^n_{\tau _n(c)+T2^{-n}} {\gt} c\).
Let \(a, b {\gt} 0\) with \(a \le b\). If \(\tau _n(b) {\lt} T\) then \(A^n_{\tau _n(b)+T2^{-n}} - A^n_{\tau _n(a)} \ge b - a\).
The sequence \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable (bounded in \(L^1\) norm).
WLOG \(S_T=0\) and \(S_t\leq 0\) (else consider \(S_t-\mathbb {E}\left[S_T\vert \mathcal{F}_{t}\right]\)).
We have that \(0=S_T=M^n_T+A^n_T\). Thus
\[ A^n_T = -M^n_T . \]
Since \(M^n\) is a martingale, it follows by optional sampling that for any \((\mathcal{F}_t)_{t\in \mathcal{D}_n^T}\) stopping time \(\tau \)
\[ S_\tau = M^n_\tau + A^n_\tau = \mathbb {E}[M^n_T \mid \mathcal{F}_\tau ] + A^n_\tau = A^n_\tau - \mathbb {E}[A^n_T \mid \mathcal{F}_\tau ] . \]
Let \(c{\gt}0\). By Lemma 12.21, \(\tau _n(c)\) (Definition 12.20) is a stopping time. By construction \(A^n_{\tau _n(c)}\leq c\). It follows that
\[ \int _{(\tau _n(c){\lt}T)} A^n_T \, dP \le c\, P(\tau _n(c){\lt}T) - \mathbb {E}\bigl[S_{\tau _n(c)}\mathbb {1}_{(\tau _n(c){\lt}T)}\bigr] . \]
Since \((A^n_T{\gt}c)=(\tau _n(c){\lt}T)\) we have
\[ \int _{(A^n_T{\gt}c)} A^n_T \, dP \le c\, P(A^n_T{\gt}c) - \mathbb {E}\bigl[S_{\tau _n(c)}\mathbb {1}_{(\tau _n(c){\lt}T)}\bigr] . \]
Now we notice that \((\tau _n(c){\lt}T)\subseteq (\tau _n(c/2){\lt}T)\); moreover, by Lemma 12.23 with \(a=c/2\) and \(b=c\), on \((\tau _n(c){\lt}T)\) we have \(A^n_T - A^n_{\tau _n(c/2)} \ge c/2\). Thus
\[ c\, P(\tau _n(c){\lt}T) \le 2\, \mathbb {E}\bigl[(A^n_T - A^n_{\tau _n(c/2)})\mathbb {1}_{(\tau _n(c/2){\lt}T)}\bigr] = -2\, \mathbb {E}\bigl[S_{\tau _n(c/2)}\mathbb {1}_{(\tau _n(c/2){\lt}T)}\bigr] . \]
It follows that
\[ \int _{(A^n_T{\gt}c)} A^n_T \, dP \le -2\, \mathbb {E}\bigl[S_{\tau _n(c/2)}\mathbb {1}_{(\tau _n(c/2){\lt}T)}\bigr] - \mathbb {E}\bigl[S_{\tau _n(c)}\mathbb {1}_{(\tau _n(c){\lt}T)}\bigr] . \]
We may notice that
\[ P(\tau _n(c){\lt}T) = P(A^n_T{\gt}c) \le \frac{\mathbb {E}[A^n_T]}{c} = \frac{-\mathbb {E}[S_0]}{c} , \]
which goes to \(0\) uniformly in \(n\) as \(c\) goes to infinity. Since \(S\) is of class \(D\), the family \(\lbrace S_\tau \mid \tau \text{ stopping time}\rbrace \) is uniformly integrable, so the bound on \(\int _{(A^n_T{\gt}c)}A^n_T\, dP\) goes to \(0\) uniformly in \(n\) as \(c\to \infty \). This shows that \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable, and in particular the \(L^1\) norm is uniformly bounded.
The sequence \((M^n_T)_{n\in \mathbb {N}}\) is uniformly integrable (bounded in \(L^1\) norm).
We have \(M^n_T=S_T-A^n_T\); moreover \(S\) is of class \(D\) and \((A^n_T)_{n\in \mathbb {N}}\) is uniformly integrable by Lemma 12.24.
If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), then \(\limsup _n f_n(t) \leq f (t)\) for all \(t \in [0, T]\).
Let \(t\in [0,T)\) and \(s\in \mathcal{D}^T\) such that \(t{\lt}s\). Since each \(f_n\) is increasing,
\[ \limsup _n f_n(t) \le \limsup _n f_n(s) = f(s) . \]
Taking the infimum over such \(s\) and using the right-continuity of \(f\),
\[ \limsup _n f_n(t) \le \inf _{s\in \mathcal{D}^T,\, s{\gt}t} f(s) = f(t) . \]
For \(t=T\) the claim is immediate since \(T\in \mathcal{D}^T\).
If \(f_n, f : [0, T] \rightarrow \mathbb {R}\) are increasing functions such that \(f\) is right continuous and \(\lim _n f_n(t) = f (t)\) for \(t \in \mathcal{D}^T\), and if \(f\) is continuous at \(t\in [0,T]\), then \(\lim _n f_n(t) = f (t)\).
By Lemma 12.26 it is enough to show that \(\liminf _n f_n(t)\geq f(t)\); for \(t=0\) the claim holds since \(0\in \mathcal{D}^T\). Let \(s\in \mathcal{D}^T\) such that \(s{\lt}t\). Since each \(f_n\) is increasing,
\[ \liminf _n f_n(t) \ge \liminf _n f_n(s) = f(s) . \]
Taking the supremum over such \(s\) and using the continuity of \(f\) at \(t\),
\[ \liminf _n f_n(t) \ge \sup _{s\in \mathcal{D}^T,\, s{\lt}t} f(s) = f(t^-) = f(t) . \]
Define \(M^n_t\) on \([0,T]\) using \(M^n_t=\mathbb {E}[M^n_T\vert \mathcal{F}_t]\).
\(M^n_t\) admits a modification which is a cadlag martingale.
By theorem 11.10
From this point onwards \(M^n_t\) will be redefined as the modification from lemma 12.28.
There are convex weights \(\lambda ^n_n,\cdots ,\lambda ^n_{N_n}\) such that \(\mathcal{M}^n_T\stackrel{L^1}{\rightarrow }M\), where \(\mathcal{M}^n:=\lambda ^n_nM^n+\cdots +\lambda ^n_{N_n}M^{N_n}.\)
\(\mathcal{M}^n\) is cadlag.
By construction and Lemma 12.28.
Let \(M_t := \mathbb {E}[M \mid \mathcal{F}_t]\) for \(t\in [0,T]\).
\(M_t\) admits a martingale cadlag modification.
By construction \((M_t)_{t\in [0,T]}\) is a martingale and thus by Theorem 11.10 admits a cadlag martingale modification (\(M_t\) is a version of \(\mathbb {E}[M\vert \mathcal{F}_t]\), and thus passing to a modification does not pose any problem).
From this point onwards \(M_t\) will be redefined as the modification from Lemma 12.31. Define
Extend now \(A^n\) to \([0,T]\) as a left continuous process: \(A^n_s:=\sum _{t\in \mathcal{D}^T_n}A^n_t\mathbb {1}_{]t-2^{-n}T,\, t]}(s)\).
\(\mathcal{A}^n=\lambda ^n_nA^n+\cdots +\lambda ^n_{N_n}A^{N_n}\)
\(A_t=S_t-M_t\)
For every \(t\in [0,T]\) we have \(\mathcal{M}^n_t\stackrel{L^1}{\rightarrow }M_t\).
We may notice that, by Jensen’s inequality, the tower lemma and Lemma 12.29,
\[ \mathbb {E}\bigl\vert \mathcal{M}^n_t - M_t \bigr\vert = \mathbb {E}\bigl\vert \mathbb {E}[\mathcal{M}^n_T - M \mid \mathcal{F}_t] \bigr\vert \le \mathbb {E}\bigl\vert \mathcal{M}^n_T - M \bigr\vert \longrightarrow 0 . \]
There exist a set \(E\subseteq \Omega \) with \(P(E)=0\) and a subsequence \((k_n)_{n\in \mathbb {N}}\) such that \(\lim _n\mathcal{A}^{k_n}_t(\omega )=A_t(\omega )\) for every \(t\in \mathcal{D}^T\) and every \(\omega \in \Omega \setminus E\).
By Lemma 12.32, for every \(t\in \mathcal{D}^T\) and \(n\) large enough that \(t\in \mathcal{D}^T_n\),
\[ \mathcal{A}^n_t = S_t - \mathcal{M}^n_t \stackrel{L^1}{\longrightarrow } S_t - M_t = A_t , \]
and for each fixed \(t\), \(L^1\) convergence provides a subsequence converging almost surely.
Since \(\mathcal{D}^T\) is countable, we can arrange its elements as \((t_n)_{n\in \mathbb {N}}\). Given \(t_0\in \mathcal{D}^T\) there exists a subsequence \((k^{0}_n)\) for which \(\mathcal{A}^{k^{0}_n}_{t_0}\) converges to \(A_{t_0}\) on the set \(\Omega \setminus E_{0}\), where \(P(E_{0})=0\). Suppose we have a subsequence \((k^m_n)\) for which \(\mathcal{A}^{k^m_n}_{t_j}\) converges to \(A_{t_j}\) on \(\Omega \setminus E_{m}\), where \(P(E_{m})=0\), for each \(j=0,\cdots ,m\). From this subsequence we may extract a new subsequence \((k^{m+1}_n)\) for which \(\mathcal{A}^{k^{m+1}_n}_{t_{m+1}}\) converges to \(A_{t_{m+1}}\) on \(\Omega \setminus E_{m+1}\), where \(P(E_{m+1})=0\); by construction, along this subsequence the convergence at \(t_0,\cdots ,t_m\) still holds. With a diagonal argument we obtain the final result with \(E=\bigcup _n E_n\).
\((A_t)_{t\in [0,T]}\) is an increasing process.
Since each \(\mathcal{A}^n\) is increasing on \(\mathcal{D}^T\), by Lemma 12.33 \(A\) is almost surely increasing on \(\mathcal{D}^T\). Since \(S\) and \(M\) are cadlag, \(A\) is cadlag as well (in particular right-continuous). It follows that \(A\) is almost surely increasing on \([0,T]\).
Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. We have \(\lim _n\mathbb {E}[A^n_\tau ]=\mathbb {E}[A_\tau ]\).
Let \(\sigma _n:=\inf \left\lbrace t\in \mathcal{D}^T_n \mid t\geq \tau \right\rbrace \). By construction of the extension of \(A^n\) we have \(A^n_\tau =A^n_{\sigma _n}\). Also \(\sigma _n\searrow \tau \). By optional sampling, \(\mathbb {E}[M^n_{\sigma _n}]=\mathbb {E}[M^n_0]=\mathbb {E}[S_0]\), and since \(S\) is of class \(D\) and cadlag we have
\[ \mathbb {E}[A^n_\tau ] = \mathbb {E}[A^n_{\sigma _n}] = \mathbb {E}[S_{\sigma _n}] - \mathbb {E}[S_0] \longrightarrow \mathbb {E}[S_\tau ] - \mathbb {E}[S_0] = \mathbb {E}[A_\tau ] , \]
where \(\mathbb {E}[S_{\sigma _n}]\to \mathbb {E}[S_\tau ]\) follows from right-continuity together with the uniform integrability of \(\lbrace S_{\sigma _n}\rbrace _{n\in \mathbb {N}}\) (class \(D\)).
Let \(\tau \) be an \((\mathcal{F}_t)_{t\in [0,T]}\) stopping time. Then almost surely \(\limsup _n \mathcal{A}_\tau ^n = A_\tau \).
Firstly we notice that
\[ \liminf _n \mathbb {E}[A_\tau ^n] \leq \limsup _n \mathbb {E}[\mathcal{A}_\tau ^n ] \leq \mathbb {E}[\limsup _n \mathcal{A}_\tau ^n ] \leq \mathbb {E}[ A_\tau ] , \]
where the first inequality is justified by the definitions of \(\limsup \) and \(\liminf \) and the fact that
\[ \mathbb {E}[\mathcal{A}^n_\tau ] = \sum _{m=n}^{N_n}\lambda ^n_m\, \mathbb {E}[A^m_\tau ] \ge \inf _{m\ge n} \mathbb {E}[A^m_\tau ] , \]
and the third inequality by Lemma 12.26. Let us prove the second inequality: observe that
\[ \mathcal{A}^n_\tau - (\mathcal{A}^n_\tau -A_T)_+ = \min \lbrace \mathcal{A}^n_\tau , A_T\rbrace \leq A_T ; \]
since \(A_T\) is an integrable dominating function, the reverse Fatou lemma may be applied to show, together with properties of \(\limsup \), that
\[ \limsup _n \mathbb {E}[\mathcal{A}^n_\tau ] = \limsup _n \mathbb {E}\bigl[\mathcal{A}^n_\tau - (\mathcal{A}^n_\tau -A_T)_+\bigr] \le \mathbb {E}\bigl[\limsup _n \bigl(\mathcal{A}^n_\tau - (\mathcal{A}^n_\tau -A_T)_+\bigr)\bigr] = \mathbb {E}\bigl[\limsup _n \mathcal{A}^n_\tau \bigr] , \]
where the first equality is justified by the fact that \(\mathcal{A}^n_\tau \leq \mathcal{A}^n_T\rightarrow A_T\) almost surely. Due to Lemmas 12.35 and 12.26, the first chain of inequalities is a chain of equalities; thus we know that \(A_\tau - \limsup _n \mathcal{A}_\tau ^n \) is an a.s. nonnegative random variable with null expected value, and thus it must be almost surely null.
Let \(S = (S_t )_{0\leq t\leq T}\) be a cadlag submartingale of class \(D\). Then, \(S\) can be written in a unique way in the form \(S = M + A\) where \(M\) is a cadlag martingale and \(A\) is a predictable increasing process starting at \(0\).
By construction \(M\) is a cadlag martingale, \(A_0=0\), and by Lemma 12.34 \(A\) is increasing. It remains to show that \(A\) is predictable. The processes \(A^n\) and \(\mathcal{A}^n\) are left continuous and adapted, and thus predictable (measurable with respect to the predictable sigma-algebra, the one generated by left-continuous adapted processes). It is therefore enough to show that for almost every \(\omega \), for all \(t\in [0,T]\), \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\).
By Lemma 12.27 this is true at any continuity point of \(A\). Since \(A\) is increasing, for every \(k\in \mathbb {N}\) it can only have finitely many jumps larger than \(1/k\). Consider the family of stopping times \(\tau _{q,k}\), where \(\tau _{q,k}\) is the \(q\)-th time that the process \(A\) has a jump larger than \(1/k\); this family is countable. Given a time \(t\) and a trajectory \(\omega \), there are only two possibilities: either \(A(\omega )\) is continuous at \(t\) or it is not. If it is continuous, then \(\limsup _n\mathcal{A}^n_t(\omega )=A_t(\omega )\); if it jumps, then there exist \(q(\omega ),k(\omega )\) such that \(t=\tau _{q(\omega ),k(\omega )}(\omega )\). Due to Lemma 12.36, for each \(q,k\) we have \(\limsup _n \mathcal{A}^n_{\tau _{q,k}} = A_{\tau _{q,k}}\) almost surely. Since a countable intersection of almost sure events is almost sure, there exists \(\Omega '\) with \(P(\Omega ')=1\) such that for every \(\omega \in \Omega '\) and every \(q,k\), \(\limsup _n \mathcal{A}^n_{\tau _{q,k}}(\omega ) = A_{\tau _{q,k}}(\omega )\) (\(\omega \) does not depend upon \(q,k\)). Consequently, for every \(\omega \in \Omega '\) we have \(\limsup _n\mathcal{A}^n_t(\omega )=\limsup _n\mathcal{A}^n_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_{\tau _{q(\omega ),k(\omega )}}(\omega )=A_t(\omega )\).
12.4 Local version of the Doob-Meyer decomposition
An adapted process \(X\) is a cadlag local submartingale iff \(X = M + A\) where \(M\) is a cadlag local martingale and \(A\) is a predictable, cadlag, locally integrable and increasing process starting at \(0\). The processes \(M\) and \(A\) are uniquely determined by \(X\) a.s.
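A standard example (added here for illustration) of the decomposition: the compensated Poisson process.

```latex
% Let (N_t) be a Poisson process with rate lambda, adapted to its natural
% filtration.  On [0,T] it is a cadlag submartingale of class D, and its
% Doob-Meyer decomposition N = M + A is
\[
  N_t \;=\; \underbrace{\bigl(N_t - \lambda t\bigr)}_{M_t}
        \;+\; \underbrace{\lambda t}_{A_t} ,
\]
% where M is a cadlag martingale and A_t = lambda * t is continuous and
% deterministic, hence predictable, increasing, and A_0 = 0.
```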