Formalization of a Brownian motion and of stochastic integrals in Lean

13 Stochastic integral

The lecture notes at this link, as well as Chapter 18 of [Kal21], are good references for this chapter. Some of the proofs are taken from [Pas24].

13.1 Total variation and Lebesgue-Stieltjes integral

TODO: in Mathlib, we can integrate with respect to the measure given by a right-continuous monotone function (StieltjesFunction.measure). This will be useful to integrate against the quadratic variation of a local martingale. However, we will also want to integrate with respect to a signed measure given by a càdlàg function with finite variation. We need to investigate what’s already in Mathlib. See Mathlib.Topology.EMetricSpace.BoundedVariation.

Definition 13.1

The (extended-real-valued) variation of a function \(f : T \to E\) on a set \(s\) inside a linear order is the supremum of \(\sum _i \mathrm{edist}(f(u_{i+1}), f(u_i))\) over all finite increasing sequences \(u : \mathbb{N} \to T\) with values in \(s\). We denote it by \(V_f(s)\).

Definition 13.2

A function \(f : T \to E\) is of bounded variation on a set \(s\) if its variation \(V_f(s)\) is finite.

Definition 13.3

A function \(f : T \to E\) is of locally bounded variation on a set \(s\) if for every \(a, b\) in \(s\), the variation \(V_f(s \cap [a, b])\) is finite.
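These three definitions already exist in Mathlib. The following sketch reproduces them (with primed names to avoid clashes with import Mathlib); up to minor notational details, these are Mathlib’s eVariationOn, BoundedVariationOn and LocallyBoundedVariationOn from the file mentioned in the TODO above.

```lean
import Mathlib

open Set
open scoped ENNReal

variable {α E : Type*} [LinearOrder α] [PseudoEMetricSpace E]

/-- Definition 13.1: supremum of `∑ i, edist (f (u (i + 1))) (f (u i))` over
monotone sequences `u : ℕ → α` with values in `s` (Mathlib: `eVariationOn`). -/
noncomputable def eVariationOn' (f : α → E) (s : Set α) : ℝ≥0∞ :=
  ⨆ p : ℕ × { u : ℕ → α // Monotone u ∧ ∀ i, u i ∈ s },
    ∑ i ∈ Finset.range p.1, edist (f (p.2.1 (i + 1))) (f (p.2.1 i))

/-- Definition 13.2 (Mathlib: `BoundedVariationOn`). -/
def BoundedVariationOn' (f : α → E) (s : Set α) : Prop :=
  eVariationOn' f s ≠ ⊤

/-- Definition 13.3 (Mathlib: `LocallyBoundedVariationOn`). -/
def LocallyBoundedVariationOn' (f : α → E) (s : Set α) : Prop :=
  ∀ a b, a ∈ s → b ∈ s → BoundedVariationOn' f (s ∩ Icc a b)
```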

Lemma 13.4

If a real-valued function has bounded variation on a set, then it is a difference of monotone functions there.

Proof
Lemma 13.5

If a real-valued right-continuous function has bounded variation on a set, then it is a difference of right-continuous monotone functions there.

Proof
Definition 13.6

Let \(f : T \to \mathbb{R}\) be a càdlàg monotone function on a conditionally complete linear order \(T\). We denote by \(df\) the measure on \(T\) defined by \(df((a, b]) = f(b) - f(a)\) for all \(a \le b\) in \(T\).

TODO: in Mathlib this is only for \(T = \mathbb {R}\) for now, but will be generalized.
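For \(T = \mathbb {R}\), Definition 13.6 is exactly what StieltjesFunction.measure provides; a minimal usage sketch:

```lean
import Mathlib

open MeasureTheory Set

-- `StieltjesFunction` bundles a monotone right-continuous `ℝ → ℝ`, and
-- `f.measure` is the associated Borel measure; `measure_Ioc` is precisely
-- the identity `df((a, b]) = f(b) - f(a)` of Definition 13.6.
example (f : StieltjesFunction) (a b : ℝ) :
    f.measure (Ioc a b) = ENNReal.ofReal (f b - f a) :=
  f.measure_Ioc a b
```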

Definition 13.7

Let \(f : T \to E\) be a càdlàg function of locally bounded variation on a conditionally complete linear order \(T\) (TODO: and probably more assumptions?). Then by Lemma 13.5, \(f\) can be written as a difference of two càdlàg monotone functions, say \(f = f_+ - f_-\). We denote by \(df\) the signed measure on \(T\) associated to \(f\), defined by \(df = df_+ - df_-\), in which \(df_+\) and \(df_-\) are given by Definition 13.6.
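A hypothetical Lean sketch of this definition for \(T = \mathbb {R}\), taking the two monotone parts of Lemma 13.5 as given Stieltjes functions with finite measures (the name signedStieltjes is ours):

```lean
import Mathlib

open MeasureTheory

/-- Given Stieltjes functions `fp`, `fm` for the monotone parts `f₊`, `f₋`
of Lemma 13.5 (assumed here to have finite measures), `df` is the difference
of the two Stieltjes measures, as a signed measure. Illustrative name only. -/
noncomputable def signedStieltjes (fp fm : StieltjesFunction)
    [IsFiniteMeasure fp.measure] [IsFiniteMeasure fm.measure] :
    SignedMeasure ℝ :=
  fp.measure.toSignedMeasure - fm.measure.toSignedMeasure
```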

Lemma 13.8

\(\vert df \vert = df_+ + df_-\) .

Proof
Lemma 13.9

\(V_f([0, t]) = \vert df \vert ([0, t])\) .

Proof

13.2 Square integrable martingales

In this section, \(E\) denotes a complete normed space.

Definition 13.10 Square integrable martingales

Let \(T\) be a linear order with bottom element 0, on which we have a filtration \(\mathcal{F}\) satisfying the usual conditions. We say that a martingale \(M : T \to \Omega \to E\) is square integrable if it is càdlàg and \(\sup _{t \in T} \Vert M_t \Vert _{L^2} {\lt} \infty \) (Lean remark: use eLpNorm (M t) 2).
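A hypothetical Lean rendering of this definition, with the càdlàg condition omitted (the structure and its name are ours, not Mathlib declarations):

```lean
import Mathlib

open MeasureTheory

variable {Ω : Type*} {m0 : MeasurableSpace Ω}
  {E : Type*} [NormedAddCommGroup E] [NormedSpace ℝ E] [CompleteSpace E]
  {T : Type*} [LinearOrder T] [OrderBot T]

/-- Sketch of Definition 13.10: a martingale with a uniform `L²` bound,
phrased with `eLpNorm` as suggested in the Lean remark above.
The càdlàg requirement is left out of this sketch. -/
structure IsSquareIntegrableMartingale (ℱ : Filtration T m0)
    (M : T → Ω → E) (P : Measure Ω) : Prop where
  martingale : Martingale M ℱ P
  eLpNorm_lt_top : ⨆ t, eLpNorm (M t) 2 P < ⊤
```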

Lemma 13.11

If \(M\) and \(N\) are square integrable martingales and \(a \in \mathbb {R}\), then \(M + N\) and \(a M\) are square integrable martingales.

Proof

Lemma 13.12

If \(M\) is a square integrable martingale, then \(\Vert M \Vert ^2\) is a submartingale.

Proof

Apply Lemma 7.15 with the convex function \(f: x \mapsto \Vert x \Vert ^2\) .

Lemma 13.13

For \(M\) a square integrable martingale, the function \(t \mapsto \Vert M_t \Vert _{L^2}\) is non-decreasing.

Proof

By Lemma 13.12, \(\Vert M_t \Vert ^2\) is a submartingale. Thus, for \(s \le t\) ,

\begin{align*} \Vert M_s \Vert _{L^2}^2 & = \mathbb {E}[\Vert M_s \Vert ^2] \\ & \le \mathbb {E}[\Vert M_t \Vert ^2] \\ & = \Vert M_t \Vert _{L^2}^2 \: . \end{align*}

Lemma 13.14

For \(M\) a square integrable martingale, we have \(M_t \to M_\infty \) almost surely and in \(L^2\) as \(t \to \infty \).

Proof

TODO: use a martingale convergence theorem. Check whether Theorem 7.80 is what we need.

Lemma 13.15

For \(M\) a square integrable martingale,

\begin{align*} \sup _{t \in T} \Vert M_t \Vert _{L^2} & = \Vert M_\infty \Vert _{L^2} \: . \end{align*}
Proof
Definition 13.16

We denote by \(\mathcal{M}^2(E)\) or simply \(\mathcal{M}^2\) the space of equivalence classes with respect to indistinguishability of square integrable martingales \(T \to \Omega \to E\) .

Lemma 13.17

The space \(\mathcal{M}^2(E)\) is a real vector space.

Proof
Definition 13.18

We define a norm on \(\mathcal{M}^2\) by

\begin{align*} \Vert M \Vert = \Vert M_\infty \Vert _{L^2} \: . \end{align*}
Lemma 13.19

For \(M \in \mathcal{M}^2(E)\), \(\Vert M \Vert = 0\) if and only if \(M = 0\).

Proof

By Lemma 13.15, \(\Vert M \Vert = 0\) if and only if \(\Vert M_t \Vert _{L^2} = 0\) for all \(t \in T\), that is, if and only if \(M_t = 0\) almost surely for each \(t\). By right-continuity of the paths, this holds if and only if \(M\) is indistinguishable from \(0\).

Definition 13.20

We define an inner product on \(\mathcal{M}^2\) by

\begin{align*} \langle M, N \rangle = \mathbb {E}[M_\infty N_\infty ] \: . \end{align*}

Lemma 13.21

The space \(\mathcal{M}^2\) is a Hilbert space.

Proof

We already know that \(\mathcal{M}^2\) is an inner product space. We need to show that it is complete.

It suffices to show that every Cauchy sequence with a distance bound converges to a limit in \(\mathcal{M}^2\). Namely, we can consider sequences \((M^n)_{n \in \mathbb {N}}\) in \(\mathcal{M}^2\) such that for \(n, m \ge N\), \(\Vert M^n - M^m \Vert {\lt} 2^{-N}\) (See Metric.complete_of_convergent_controlled_sequences, or the EMetric version).
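In Lean, that reduction can be recorded as follows; a minimal sketch specializing the bound to \(B_N = 2^{-N}\) (producing the limit is the TODO below):

```lean
import Mathlib

open Filter Topology

-- Completeness follows once every sequence with controlled distances
-- `dist (u n) (u m) < 2⁻ᴺ` (for `n, m ≥ N`) has a limit.
example {α : Type*} [PseudoMetricSpace α]
    (H : ∀ u : ℕ → α,
      (∀ N n m : ℕ, N ≤ n → N ≤ m → dist (u n) (u m) < (2 : ℝ) ^ (-(N : ℤ))) →
      ∃ x, Tendsto u atTop (𝓝 x)) :
    CompleteSpace α :=
  Metric.complete_of_convergent_controlled_sequences
    (fun N => (2 : ℝ) ^ (-(N : ℤ))) (fun _ => by positivity) H
```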

Let \((M^n)_{n \in \mathbb {N}}\) be such a Cauchy sequence.

TODO

Lemma 13.22

For \(V \in \mathcal{E}_{T, F}\) bounded by a constant \(D\), \(M \in \mathcal{M}^2(E)\) and a continuous bilinear map \(B: E \times F \to G\),

\begin{align*} \Vert (V \bullet _B M)_t \Vert _{L^2} \le 2 D \: \Vert B \Vert \: \sup _t \Vert M_t \Vert _{L^2} \end{align*}

TODO: this can be improved to \(D \: \Vert B \Vert \: \Vert M_t \Vert _{L^2}\)?

Proof

Let \(C\) be a bound on \(\Vert M_t \Vert _{L^2}\) for all \(t \in T\). Let \((s_k {\lt} t_k)_{k \in \{ 1, \ldots , n\} }\) and \(\eta _k\) be the intervals and random variables defining \(V\); the constant \(D\) bounds \(\Vert \eta _k \Vert \). Then, for all \(t\),

\begin{align*} \Vert (V \bullet _B M)_t \Vert _{L^2} & \le \sum _{k=1}^n \Vert B(M^t_{t_k} - M^t_{s_k}, \eta _k) \Vert _{L^2} \: . \end{align*}

Since at most one term of that sum is non-zero for each fixed \(t\), we can bound the sum by the maximum of its terms. It then suffices to bound each term of that sum.

TODO: here we supposed that the intervals of the simple process are disjoint. Check with our Lean def.

For each \(k\) ,

\begin{align*} \Vert B(M^t_{t_k} - M^t_{s_k}, \eta _k) \Vert _{L^2} & \le \left\Vert \Vert B \Vert \: \Vert M^t_{t_k} - M^t_{s_k} \Vert \: \Vert \eta _k \Vert \right\Vert _{L^2} \\ & \le \Vert B \Vert \: \Vert M^t_{t_k} - M^t_{s_k} \Vert _{L^2} \: D \\ & \le 2 \Vert B \Vert \: C \: D \: . \end{align*}

Lemma 13.23

For \(V \in \mathcal{E}_{T, F}\), \(M \in \mathcal{M}^2(E)\) and a continuous bilinear map \(B: E \times F \to G\), the elementary stochastic integral \(V \bullet _B M\) is in \(\mathcal{M}^2(G)\).

Proof

By Lemma 10.20, \(V \bullet _B M\) is càdlàg, and we know that it is a martingale by Lemma 10.28. It remains to show that \(\sup _{t \in T} \Vert (V \bullet _B M)_t \Vert _{L^2} {\lt} \infty \). By Lemma 13.22, this supremum is bounded by \(2 D \Vert B \Vert \sup _{t \in T} \Vert M_t \Vert _{L^2}\), which is finite since \(M \in \mathcal{M}^2(E)\) and \(V\) is bounded.

Lemma 13.24

For \(V \in \mathcal{E}_{T, \mathbb {R}}\) and \(M, N \in \mathcal{M}^2\), we have

\begin{align*} \langle V \bullet _{\mathbb {R}} M, N \rangle _{\mathcal{M}^2} & = V \bullet _{\mathbb {R}} \langle M, N \rangle _{\mathcal{M}^2} \: . \end{align*}
Proof

13.3 Local martingales

TODO: filtrations should be assumed right-continuous and complete whenever needed.

Lemma 13.25

If \(M\) is a càdlàg local martingale, then \(\Vert M \Vert ^2\) is a càdlàg local sub-martingale.

Proof
Definition 13.26 Quadratic variation

For \(M\) a local martingale, the predictable quadratic variation of \(M\) is defined as the predictable part of the Doob-Meyer decomposition of the local sub-martingale \(\Vert M \Vert ^2\) . We denote it by \(\langle M \rangle \) .

Lemma 13.27

The quadratic variation \(\langle M \rangle \) of a local martingale \(M\) is a predictable process.

Proof

Lemma 13.28

The quadratic variation \(\langle M \rangle \) of a local martingale \(M\) is càdlàg.

Proof

Lemma 13.29

The quadratic variation \(\langle M \rangle \) of a local martingale \(M\) is locally integrable.

Proof

Lemma 13.30

\(\langle M \rangle _0 = 0\).

Proof

Lemma 13.31

The quadratic variation \(\langle M \rangle \) of a local martingale \(M\) is non-decreasing.

Proof

Lemma 13.32

For \(M\) a local martingale, the process \(\Vert M_t \Vert ^2 - \langle M \rangle _t\) is a local martingale.

Proof
Definition 13.33

For \(M\) a local martingale, we denote by \(d\langle M \rangle \) the measure on \(\mathbb {R}_+\) associated to the non-decreasing càdlàg function \(\langle M \rangle \) .

Definition 13.34 Covariation

For \(M\) and \(N\) local martingales, the covariation \(\langle M, N \rangle \) is a stochastic process defined by polarization of the quadratic variation:

\begin{align*} \langle M, N \rangle _t = \frac{1}{4}\left(\langle M+N \rangle _t - \langle M-N \rangle _t \right) \: . \end{align*}
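As a consistency check, on the diagonal the polarization formula recovers the quadratic variation, using \(\langle aM \rangle = a^2 \langle M \rangle \) and \(\langle 0 \rangle = 0\) (both consequences of the Doob-Meyer characterization):

\begin{align*} \langle M, M \rangle _t = \frac{1}{4}\left(\langle M+M \rangle _t - \langle M-M \rangle _t \right) = \frac{1}{4}\left(4 \langle M \rangle _t - 0 \right) = \langle M \rangle _t \: . \end{align*}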
Lemma 13.35

The covariation \(\langle M, N \rangle \) of local martingales \(M\) and \(N\) is a predictable process.

Proof
Lemma 13.36

The covariation \(\langle M, N \rangle \) of local martingales \(M\) and \(N\) is càdlàg.

Proof
Lemma 13.37

\(\langle M, N \rangle _0 = 0\) .

Proof
Lemma 13.38

For \(M, N\) local martingales, the process \(\langle M_t, N_t \rangle _E - \langle M, N \rangle _t\) is a local martingale.

Proof
\begin{align*} & \langle M_t, N_t \rangle _E - \langle M, N \rangle _t \\ & = \frac{1}{4}\left( \left(\Vert M_t + N_t \Vert ^2 - \langle M+N \rangle _t\right) - \left(\Vert M_t - N_t \Vert ^2 - \langle M-N \rangle _t\right) \right) \: . \end{align*}

The two differences are local martingales by Lemma 13.32, so their linear combination is also a local martingale.

Lemma 13.39

Let \(M\) and \(N\) be square integrable martingales. Then

\begin{align*} \mathbb {E}\left[\langle M,N \rangle _\infty \right] = \langle M - M_0, N - N_0 \rangle _{\mathcal{M}^2} \: . \end{align*}
Proof
Lemma 13.40

Let \(B\) be a standard Brownian motion. Then the quadratic variation of \(B\) is given by \(\langle B \rangle _t = t\) .

Proof
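A sketch of the standard argument: for \(s \le t\), the increment \(B_t - B_s\) is independent of \(\mathcal{F}_s\), with mean \(0\) and variance \(t - s\), so

\begin{align*} \mathbb {E}[B_t^2 \mid \mathcal{F}_s] & = \mathbb {E}[(B_t - B_s)^2 \mid \mathcal{F}_s] + 2 B_s \, \mathbb {E}[B_t - B_s \mid \mathcal{F}_s] + B_s^2 \\ & = (t - s) + B_s^2 \: . \end{align*}

Hence \(B_t^2 - t\) is a martingale. Since \(t \mapsto t\) is deterministic (hence predictable), non-decreasing and starts at \(0\), uniqueness of the Doob-Meyer decomposition gives \(\langle B \rangle _t = t\).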
Definition 13.41 Continuous semi-martingale

A continuous semi-martingale is a process that can be written as the sum of a local martingale and a finite variation process. More formally, a process \(X : \mathbb {R}_+ \to \Omega \to E\) is a continuous semi-martingale if there exist a continuous local martingale \(M\) and a continuous adapted process \(A\) with locally finite variation and \(A_0 = 0\) such that

\begin{align*} X_t = M_t + A_t \end{align*}

for all \(t \ge 0\). The decomposition is a.s. unique.

13.4 Stochastic integral

TODO: relax continuity of the martingales, be clear about continuous quadratic variation vs general càdlàg quadratic variation.

For \(M\) a local martingale and \(X\) a stochastic process, we will use integrals of the form \(\int _0^t X_s \: d\langle M \rangle _s\). Let’s explain what those integrals mean. For all \(\omega \in \Omega \), \(\langle M \rangle (\omega )\) is a right-continuous non-decreasing function (called a Stieltjes function in Mathlib), so it defines a measure on \(\mathbb {R}_+\), denoted by \(d\langle M \rangle (\omega )\). Then, for each fixed \(\omega \in \Omega \), if the function \(s \mapsto X_s(\omega )\) is integrable with respect to the measure \(d\langle M \rangle (\omega )\), the Bochner integral \(\int _0^t X_s(\omega ) \: d\langle M \rangle _s(\omega )\) is well-defined. By \(\int _0^t X_s \: d\langle M \rangle _s\), we mean the random variable \(\omega \mapsto \int _0^t X_s(\omega ) \: d\langle M \rangle _s(\omega )\). If we also vary \(t\), we get a stochastic process.
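A hypothetical Lean sketch of this pathwise construction, assuming the quadratic variation is available as a StieltjesFunction-valued map qv (both qv and pathIntegral are illustrative names):

```lean
import Mathlib

open MeasureTheory Set

variable {Ω : Type*}
  {E : Type*} [NormedAddCommGroup E] [NormedSpace ℝ E] [CompleteSpace E]

/-- For each fixed `ω`, integrate the path `s ↦ X s ω` over `(0, t]` against
the measure of the Stieltjes function `qv ω` (the path `⟨M⟩(ω)` of the text).
Varying `t` as well yields a stochastic process. -/
noncomputable def pathIntegral (qv : Ω → StieltjesFunction)
    (X : ℝ → Ω → E) (t : ℝ) : Ω → E :=
  fun ω => ∫ s in Ioc 0 t, X s ω ∂(qv ω).measure
```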

13.4.1 Itô isometry

Lemma 13.42

The predictable \(\sigma \)-algebra on \(T \times \Omega \) is a sub-\(\sigma \)-algebra of the product \(\sigma \)-algebra \(\mathcal{B}(T) \otimes \mathcal{F}\) .

Proof
Definition 13.43 L2 space of predictable processes

Let \(\mu \) be a measure on \(T\) and \(P\) a measure on \(\Omega \). We denote by \(L^2(\mu , P)\) the space \(L^2(T \times \Omega , \mathcal{P}, \mu \times P)\), in which \(\mathcal{P}\) is the predictable \(\sigma \)-algebra, and the measure \(\mu \times P\) is the restriction of the product measure on \(\mathcal{B}(T) \otimes \mathcal{F}\) to \(\mathcal{P}\) .

TODO: the measures should be at least finite, I guess.

TODO: we will write that a stochastic process \(X : T \to \Omega \to E\) “is in \(L^2(\mu , P)\)”, but this means that the L2 equivalence class of the function \((t, \omega ) \mapsto X_t(\omega )\) is in \(L^2(\mu , P)\) . In Lean, we can use MemLp for the uncurried function.
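A hypothetical sketch of Definition 13.43 in Lean, taking the predictable σ-algebra and Lemma 13.42 (as an inequality of σ-algebras) as inputs; predictableProd and memL2 are illustrative names, while Measure.trim and MemLp are Mathlib's:

```lean
import Mathlib

open MeasureTheory

variable {T Ω : Type*} [MeasurableSpace T] [MeasurableSpace Ω]

/-- The measure `μ × P` of Definition 13.43: the product measure restricted,
via `Measure.trim`, to a sub-σ-algebra `𝒫` of the product σ-algebra
(by Lemma 13.42 this applies to the predictable σ-algebra). -/
noncomputable def predictableProd (𝒫 : MeasurableSpace (T × Ω))
    (h𝒫 : 𝒫 ≤ (inferInstance : MeasurableSpace (T × Ω)))
    (μ : Measure T) (P : Measure Ω) [SigmaFinite P] :
    @Measure (T × Ω) 𝒫 :=
  (μ.prod P).trim h𝒫

/-- “`X` is in `L²(μ, P)`”: `MemLp` of the uncurried process. -/
def memL2 (𝒫 : MeasurableSpace (T × Ω))
    (h𝒫 : 𝒫 ≤ (inferInstance : MeasurableSpace (T × Ω)))
    (μ : Measure T) (P : Measure Ω) [SigmaFinite P]
    (X : T → Ω → ℝ) : Prop :=
  MemLp (Function.uncurry X) 2 (predictableProd 𝒫 h𝒫 μ P)
```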

Lemma 13.44

For \(X, Y \in L^2(\mu , P)\), we have

\begin{align*} \langle X, Y \rangle _{L^2(\mu , P)} = P\left[ \int _0^{\infty } \langle X_t, Y_t\rangle \: d\mu \right] \: . \end{align*}

The norm is \(\Vert X \Vert _{L^2(\mu , P)}^2 = P\left[ \int _0^{\infty } \Vert X_t \Vert ^2 \: d\mu \right]\) .

Proof

The inner product in \(L^2\) spaces is defined as the integral of the pointwise inner products.

\begin{align*} \langle X, Y \rangle _{L^2(\mu , P)} & = \int _{T \times \Omega } \langle X_t(\omega ), Y_t(\omega ) \rangle \: d(\mu \times P)(t, \omega ) \\ & = P\left[ \int _0^{\infty } \langle X_t, Y_t\rangle \: d\mu \right] \: . \end{align*}

Lemma 13.45

Any simple process in \(\mathcal{E}_{T, E}\) is in \(L^2(\mu , P)\).

(Lean remark: we mean that the uncurried version of its coercion to a function satisfies MemLp)

Proof

A simple process is bounded by definition.

Lemma 13.46

Let \(V \in \mathcal{E}_{T, E}\) and let \(\mu \) be a measure on \(T\). Let \(f\) be a right-continuous non-decreasing function such that for all \(s {\lt} t\), \(\mu ((s,t]) = f(t) - f(s)\) (the CDF of \(\mu \), but that’s only defined for \(\mathbb {R}\) in Lean). Then

\begin{align*} \Vert V \Vert ^2_{L^2(\mu , P)} & = P\left[ \sum _{k=1}^n \Vert \eta _k \Vert ^2 \: (f(t_k) - f(s_k)) \right] \: . \end{align*}
Proof

TODO: this proof (and the result of the lemma) assumes that the intervals defining the simple process are disjoint.

Let \(V \in \mathcal{E}_{T, E}\), with \(V_t(\omega ) = \sum _{k=1}^n \eta _k(\omega ) \mathbf{1}_{(s_k, t_k]}(t)\).

\begin{align*} \Vert V \Vert ^2_{L^2(\mu , P)} & = P\left[ \int _0^{\infty } \Vert V_t \Vert ^2 \: d\mu \right] \\ & = P\left[ \int _0^{\infty } \left\Vert \sum _{k=1}^n \eta _k \mathbf{1}_{(s_k, t_k]}(t) \right\Vert ^2 \: d\mu \right] \\ & = P\left[ \int _0^{\infty } \sum _{k=1}^n \Vert \eta _k \Vert ^2 \mathbf{1}_{(s_k, t_k]}(t) \: d\mu \right] \\ & = P\left[ \sum _{k=1}^n \Vert \eta _k \Vert ^2 \: \mu ((s_k, t_k]) \right] \\ & = P\left[ \sum _{k=1}^n \Vert \eta _k \Vert ^2 \: (f(t_k) - f(s_k)) \right] \: . \end{align*}
Definition 13.47 L2 space with respect to a square integrable martingale

Let \(M\) be a square integrable martingale. We define

\begin{align*} L^2(M) = L^2(d\langle M \rangle , P) = L^2(T \times \Omega , \mathcal{P}, d\langle M \rangle \times P) \end{align*}

in which \(\mathcal{P}\) is the predictable \(\sigma \)-algebra and \(d\langle M \rangle \) is the measure induced by the quadratic variation of \(M\).

Lemma 13.48

For \(V \in \mathcal{E}\) and \(M \in \mathcal{M}^2\), we have \(V \bullet M \in \mathcal{M}^2\) (by Lemma 13.23) and

\begin{align*} \Vert V \bullet M \Vert _{\mathcal{M}^2}^2 & = \Vert V \Vert _{L^2(M)}^2 \: . \end{align*}
Proof

There are two steps to the proof.

First, in order to make sense of \(\Vert V \Vert _{L^2(M)}\), we define the natural linear map from \(\mathcal{E}\) to \(L^2(M)\) via SimpleProcess.toFun. (Informally, this is identifying \(\mathcal{E}\) as a subset \(\mathcal{E} \subseteq L^2(M)\).) This induces the \(L^2(M)\)-norm on \(\mathcal{E}\), using something like NormedSpace.induced.

Next, we show that integration \(V \mapsto V \bullet M\) is an isometry from \(\mathcal{E}\) with the \(L^2(M)\)-norm, to \(\mathcal{M}^2\). The proof is TODO.

Lemma 13.49

Let \(X\in L^2(M)\) be such that \(\int _0^t X_s \: d\langle M \rangle _s = 0\) for all \(t \ge 0\) a.s. Then \(X = 0\) \((\mathbb {P} \times d\langle M \rangle )\)-almost everywhere.

Proof

For \(B\) a measurable set of \(\mathbb {R}_+\) and \(\omega \in \Omega \), let \(\nu _\omega (B) = \int _B X_s(\omega ) \: d\langle M \rangle _s(\omega )\). This is a signed measure on \(\mathbb {R}_+\). If \(\int _0^t X_s(\omega ) \: d\langle M \rangle _s(\omega ) = 0\) for all \(t\), then \(\nu _\omega ([0,t]) = 0\) for all \(t\). Those intervals generate the Borel \(\sigma \)-algebra on \(\mathbb {R}_+\), so \(\nu _\omega \) is the zero measure. Thus, for almost all \(\omega \), \(\nu _\omega \) is the zero measure.

The measure \(\nu _\omega \) is absolutely continuous with respect to the measure \(d\langle M \rangle (\omega )\) , and its Radon-Nikodym derivative is \(X(\omega )\) . Since \(\nu _\omega \) is the zero measure for almost all \(\omega \), we have that \(X(\omega ) = 0\) \(d\langle M \rangle (\omega )\)-almost everywhere for almost all \(\omega \) . Equivalently, \(X = 0\) \((\mathbb {P} \times d\langle M \rangle )\)-almost everywhere.

Lemma 13.50 Injectivity of the integral

Let \(X, Y \in L^2(M)\) be such that \(\int _0^t X_s \: d\langle M \rangle _s = \int _0^t Y_s \: d\langle M \rangle _s\) for all \(t \ge 0\) a.s. Then \(X = Y\) almost everywhere.

Proof

By linearity of the integral, we have \(\int _0^t (X_s - Y_s) \: d\langle M \rangle _s = 0\) for all \(t \ge 0\) a.s. Then apply Lemma 13.49 to \(X - Y\).

Lemma 13.51

Let \(X \in L^2(M)\) be such that \(\langle X, V\rangle = 0\) for any simple process \(V\). Let \(A_t = \int _0^t X_s \: d\langle M \rangle _s\). Then \(A\) is a martingale.

Proof

By Lemma 10.25, it is enough to show that for any bounded real simple process \(V\), \(\mathbb {E}[(V \bullet A)_\infty ] = 0\) .

\begin{align*} \mathbb {E}\left[(V \bullet A)_\infty \right] & = \mathbb {E}\left[ \int _0^{\infty } V_t X_t \: d\langle M \rangle _t \right] \\ & = \langle X, V \rangle \\ & = 0 \: . \end{align*}

Lemma 13.52

Let \(X \in L^2(M)\) be such that \(\langle X, V\rangle = 0\) for any simple process \(V\). Then for all \(t\), \(\int _0^t X_s \: d\langle M \rangle _s = 0\).

Proof

\(A_t := \int _0^t X_s \: d\langle M \rangle _s\) is a finite variation process such that \(A_t\) is integrable for all \(t \ge 0\) . By Theorem 9.29, it is enough to show that \(A\) is a local martingale. We have by Lemma 13.51 that \(A\) is a martingale, and hence a local martingale (Lemma 9.26).

Lemma 13.53

The set of simple processes is dense in \(L^2(M)\).

Proof

Since \(L^2(M)\) is a Hilbert space, it is enough to show that if \(X \in L^2(M)\) is orthogonal to all simple processes, then \(X = 0\). Let \(X \in L^2(M)\) be such that for any simple process \(V\), \(\mathbb {E}\left[ \int _0^{\infty } X_t V_t \: d\langle M \rangle _t \right] = 0\). Let \(A_t = \int _0^t X_s \: d\langle M \rangle _s\). By Lemma 13.49, it suffices to show that \(A = 0\), and this is proved in Lemma 13.52.

In Lean, this is stated as: the natural linear map toFun from \(\mathcal{E}\) to \(L^2(M)\) is IsDenseInducing. The Itô isometry is then defined using IsDenseInducing.extend.
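Schematically, that construction has the following shape (toL2 and elemIntegral are placeholders for the blueprint's actual maps; IsDenseInducing.extend is Mathlib's):

```lean
import Mathlib

-- `toL2` is the dense inducing map from simple processes `𝓔` into `L²(M)`,
-- `elemIntegral` is the elementary integral `V ↦ V ∙ M` into `𝓜²`, and the
-- Itô integral of `X : L²(M)` is the extension of `elemIntegral` along `toL2`.
noncomputable def itoIntegral {𝓔 L2M M2 : Type*}
    [TopologicalSpace 𝓔] [TopologicalSpace L2M] [TopologicalSpace M2]
    {toL2 : 𝓔 → L2M} (hd : IsDenseInducing toL2)
    (elemIntegral : 𝓔 → M2) (X : L2M) : M2 :=
  hd.extend elemIntegral X
```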

Definition 13.54 Itô isometry

Let \(M \in \mathcal{M}^2\). Then the elementary stochastic integral map \(\mathcal{E} \to \mathcal{M}^2\) defined by \(V \mapsto V \bullet M\) extends to an isometry \(L^2(M) \to \mathcal{M}^2\).

Lemma 13.55

\(\langle X \cdot M, Y \cdot N \rangle = (XY) \cdot \langle M, N \rangle \).

Proof

13.4.2 Local martingales

Definition 13.56 \(L^2_{loc}(M)\)

Let \(M\) be a continuous local martingale. We define \(L^2_{loc}(M)\) as the space of predictable processes \(X\) such that for all \(t \ge 0\), \(\mathbb {E}\left[ \int _0^t X_s^2 \: d\langle M \rangle _s \right] {\lt} \infty \).

Definition 13.57 Stochastic integral for continuous local martingales

Let \(M\) be a continuous local martingale and let \(X \in L^2_{loc}(M)\). We define the local stochastic integral \(X \cdot M\) as the unique continuous local martingale with \((X \cdot M)_0 = 0\) such that for any continuous local martingale \(N\), almost surely,

\begin{align*} \langle X \cdot M, N \rangle = X \cdot \langle M, N \rangle \: . \end{align*}

13.4.3 Semi-martingales

Definition 13.58

For a continuous semi-martingale \(X = M + A\) and \(V \in L^2_{semi}(X)\) (to be defined) we define the stochastic integral as

\begin{align*} V \cdot X = V \cdot M + V \cdot A \: , \end{align*}

in which \(V \cdot M\) is the local stochastic integral defined in 13.57 and \(V \cdot A\) is the Lebesgue-Stieltjes integral with respect to the locally finite variation process \(A\).

For \(X = M + A\) and \(Y = N + B\), we define the covariation as

\begin{align*} [X, Y] = [M, N] \: . \end{align*}

13.5 Itô formula

Theorem 13.59 Integration by parts

Let \(X\) and \(Y\) be two continuous semi-martingales. Then we have almost surely

\begin{align*} X_t Y_t - X_0 Y_0 = (X \cdot Y)_t + (Y \cdot X)_t + [X,Y]_t \: . \end{align*}
Proof
Theorem 13.60 Itô’s formula

Let \(X^1, \ldots , X^d\) be continuous semi-martingales and let \(f : \mathbb {R}^d \to \mathbb {R}\) be a twice continuously differentiable function. Then, writing \(X = (X^1, \ldots , X^d)\), the process \(f(X)\) is a semi-martingale and we have

\begin{align*} f(X_t) & = f(X_0) + \sum _{i=1}^d \int _0^t \frac{\partial f}{\partial x_i}(X_s) \: dX^i_s + \frac{1}{2} \sum _{i,j=1}^d \int _0^t \frac{\partial ^2 f}{\partial x_i \partial x_j}(X_s) \: d[X^i, X^j]_s \: . \end{align*}
Proof
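As a sanity check of the formula, take \(d = 1\), \(f(x) = x^2\) and \(X = B\) a standard Brownian motion. For the continuous martingale \(B\) we have \([B, B]_t = \langle B \rangle _t = t\) by Lemma 13.40, so Itô's formula gives

\begin{align*} B_t^2 & = B_0^2 + \int _0^t 2 B_s \: dB_s + \frac{1}{2} \int _0^t 2 \: d[B, B]_s = 2 \int _0^t B_s \: dB_s + t \: , \end{align*}

consistent with \(B_t^2 - t\) being a martingale (see the proof sketch of Lemma 13.40).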