Lecture 18. Square integrable martingales and quadratic variations

It turns out that stochastic integrals may be defined with respect to stochastic processes other than Brownian motion. The key properties used in the approach above were the martingale property and the square integrability of the Brownian motion.

As above, we consider a filtered probability space (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) that satisfies the usual conditions. A martingale (M_t)_{t \ge 0} defined on this space is said to be square integrable if for every t \geq 0, \mathbb{E}\left( M_t^2 \right) < + \infty.

For instance, if (B_t)_{t \ge 0} is a Brownian motion on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) and if (u_t)_{t \ge 0} is a process which is progressively measurable with respect to the filtration (\mathcal{F}_t)_{t \ge 0} and such that for every t \ge 0, \mathbb{E} \left( \int_0^t u_s^2 ds \right)<+\infty, then the process M_t=\int_0^t u_s dB_s, \quad t \ge 0, is a square integrable martingale.
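As a quick numerical illustration (not part of the lecture), the following Python sketch approximates the stochastic integral by left-point Riemann sums for the hypothetical integrand u_s = B_s and checks that \mathbb{E}(M_t) \approx 0 and \mathbb{E}(M_t^2) \approx \mathbb{E}\left(\int_0^t u_s^2 ds\right) = t^2/2, consistent with M being a square integrable martingale started at 0.

import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps, t = 20_000, 1_000, 1.0
dt = t / n_steps

# Brownian increments and paths, shape (n_paths, n_steps)
dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.cumsum(dB, axis=1)
B_left = np.hstack([np.zeros((n_paths, 1)), B[:, :-1]])  # values B_{t_k} at the left endpoints

# M_t = int_0^t B_s dB_s approximated by left-point sums (integrand evaluated at t_k)
M = np.sum(B_left * dB, axis=1)

print("E[M_t]            ~", M.mean())                                  # close to 0
print("E[M_t^2]          ~", np.mean(M**2))                             # close to t^2/2 = 0.5
print("E[int_0^t u^2 ds] ~", np.mean(np.sum(B_left**2, axis=1) * dt))   # also close to t^2/2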

The most important theorem concerning continuous square integrable martingales is that they admit a quadratic variation. Before proving this theorem, we state a preliminary lemma.

Lemma. Let (M_t)_{0\le t \le T} be a continuous martingale with bounded variation, that is,
\sup_{\Delta[0,T]} \sum_{k=0}^{n-1} | M_{t_{k+1}} -M_{t_{k}}|<+\infty,
where the supremum runs over all subdivisions \Delta[0,T]=\{0=t_0 \le t_1 \le \cdots \le t_n=T\} of [0,T]. Then (M_t)_{0\le t \le T} is constant.

Proof.
We may assume M_0=0. For N \ge 0, let us consider the stopping time
T_N=\inf \left\{ s \in [0,T]:\ |M_s| \ge N \text{ or } \sup_{\Delta[0,s]} \sum_{k=0}^{n-1} | M_{t_{k+1}}  -M_{t_{k}}| \ge N \right\}\wedge T.
The stopped process (M_{t \wedge T_N})_{ 0 \le t \le T } is a martingale and therefore for s \le t,
\mathbb{E}((M_{t \wedge T_N} -M_{s \wedge T_N})^2)=\mathbb{E}(M_{t \wedge T_N}^2) -\mathbb{E}(M_{s \wedge T_N}^2).
Consider now a sequence of subdivisions \Delta_n[0,T] whose mesh tends to 0. Summing the above identity over the points of the subdivision (the right-hand sides telescope), we obtain
\mathbb{E}(M_{ T_N}^2)  =\mathbb{E}\left( \sum_{k=1}^{n} \left( M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N}\right)^2 \right)
\le \mathbb{E}\left( \sup_k | M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N} | \sum_{k=1}^{n} \left| M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N}\right|\right)
\le N \, \mathbb{E}\left( \sup_k | M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N} | \right).
Letting n \to +\infty, the supremum converges to 0 almost surely (by uniform continuity of M on [0,T]) and is bounded by 2N, so by dominated convergence we get \mathbb{E}(M_{ T_N}^2)=0. Since (M_{t \wedge T_N}^2)_{0\le t\le T} is a submartingale, this implies M_{t\wedge T_N}=0 for every t \le T. Letting now N \to \infty, we conclude that M is identically 0 on [0,T]. \square

Theorem. Let (M_t)_{t \geq 0} be a martingale on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) which is continuous and square integrable and such that M_0=0. There is a unique continuous and increasing process denoted (\langle M \rangle_t)_{t \geq 0} that satisfies the following properties:

  • \langle M \rangle_0=0;
  • The process (M_t^2 - \langle M \rangle_t)_{t \geq 0} is a martingale.

Actually for every t \ge 0 and for every sequence of subdivisions \Delta_n [0,t] such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0,
the following convergence takes place in probability:
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( M_{t^n_k}-M_{t^n_{k-1}}\right)^2=\langle M \rangle_t.
The process (\langle M \rangle_t)_{t \geq 0} is called the quadratic variation process of (M_t)_{t \geq 0}.
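Before turning to the proof, here is a numerical illustration (an informal sketch, not part of the argument): for a Brownian motion B, the proposition proved later in this lecture (with u \equiv 1) gives \langle B \rangle_t = t, and the sums of squared increments over finer and finer subdivisions indeed concentrate around t.

import numpy as np

rng = np.random.default_rng(1)
t = 2.0

# Sums of squared increments of a Brownian path over subdivisions of [0, t] with mesh t/n
for n in (10, 100, 1_000, 10_000):
    dB = rng.normal(0.0, np.sqrt(t / n), size=n)
    print(f"n = {n:6d}   sum of squared increments = {np.sum(dB**2):.4f}   (target <B>_t = {t})")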

Proof.
We first assume that the martingale (M_t)_{t \geq 0} is bounded and prove that if \Delta_n [0,t] is a sequence of subdivisions of the interval [0,t] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0,
then the limit
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( M_{t^n_k}-M_{t^n_{k-1}}\right)^2
exists in L^2 and thus in probability.

Toward this goal, we introduce some notations. If \Delta [0,T] is a subdivision of the time interval [0,T] and if (X_t)_{t\ge 0} is a stochastic process, then we denote
S_t^{\Delta [0,T]}(X)=\sum_{i=0}^{k-1}\left( X_{t_{i+1}} -X_{t_i} \right)^2 +(X_t-X_{t_k})^2,
where k is such that t_k \le t < t_{k+1}.

An easy computation on conditional expectations shows that if (X_t)_{t\ge 0} is a martingale, then the process X_t^2-S_t^{\Delta [0,T]}(X), \quad t \le T, is also a martingale: for t_k \le s \le t \le t_{k+1}, the martingale property gives \mathbb{E}\left( (X_t-X_{t_k})^2-(X_s-X_{t_k})^2 \mid \mathcal{F}_s \right)=\mathbb{E}\left( X_t^2-X_s^2 \mid \mathcal{F}_s \right), and the general case follows by chaining this identity over the subdivision points lying between s and t. Also, if \Delta [0,T] and \Delta' [0,T] are two subdivisions of the time interval [0,T], we will denote by \Delta \vee \Delta' [0,T] the subdivision obtained by putting together the points of \Delta [0,T] and the points of \Delta' [0,T]. Let now \Delta_n [0,T] be a sequence of subdivisions of [0,T] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,T]\mid=0.
Let us show that the sequence S_T^{\Delta_n [0,T]}(M) is a Cauchy sequence in L^2. Since the process S^{\Delta_n [0,T]}(M)-S^{\Delta_p [0,T]}(M) is a martingale (being the difference of the two martingales M^2-S^{\Delta_p [0,T]}(M) and M^2-S^{\Delta_n [0,T]}(M)), we deduce that
\mathbb{E}\left( \left(S_T^{\Delta_n [0,T]}(M)-S_T^{\Delta_p [0,T]}(M) \right)^2 \right)
= \mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_n [0,T]}(M)-S^{\Delta_p [0,T]}(M))\right)
\le  2 \left( \mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_n [0,T]}(M))\right)+\mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_p [0,T]}(M))\right) \right).
Let us denote by s_k the points of the subdivision \Delta_n \vee \Delta_p [0,T] and, for fixed s_k, let t_l be the largest point of \Delta_n [0,T] such that t_l \le s_k (so that t_l \le s_k < s_{k+1} \le t_{l+1}). We have
S_{s_{k+1}}^{\Delta_n [0,T]}(M)-S_{s_{k}}^{\Delta_n [0,T]}(M)=(M_{s_{k+1}} -M_{t_l})^2-(M_{s_{k}} - M_{t_l})^2
=(M_{s_{k+1}} -M_{s_k})(M_{s_{k+1}} +M_{s_k}-2M_{t_l}).
Therefore, from Cauchy-Schwarz inequality,
\mathbb{E}\left(S_T^{\Delta_n \vee \Delta_p [0,T]}(S^{\Delta_n [0,T]}(M))\right)\le \mathbb{E} \left( \sup_k (M_{s_{k+1}} +M_{s_k}-2M_{t_l})^4\right)^{1/2}\mathbb{E} \left( \left(S_T^{\Delta_n \vee \Delta_p [0,T]}(M) \right)^2\right)^{1/2}.
Since the martingale M is assumed to be continuous, when n,p \rightarrow +\infty,
\mathbb{E} \left( \sup_k (M_{s_{k+1}} +M_{s_k}-2M_{t_l})^4\right) \rightarrow 0.
Thus, in order to conclude, it suffices to prove that \mathbb{E} \left( \left(S_T^{\Delta_n \vee \Delta_p [0,T]}(M) \right)^2\right) is bounded, which is an easy consequence of the assumption that M is bounded. Therefore, in the L^2 sense, the following convergence holds:
\langle M \rangle_t =\lim_{n \rightarrow +\infty} \sum_{k=1}^{n}\left( M_{t^n_k} -M_{t^n_{k-1}}\right)^2.
The process (M_t^2 - \langle M \rangle_t)_{t \geq 0} is seen to be a martingale because for every n and T \ge 0, the process M_t^2-S_t^{\Delta_n [0,T]}(M), \quad t \le T, is a martingale, and the martingale property is preserved under L^2 convergence. Let us now show that the process \langle M \rangle is continuous. From Doob’s inequality, for n,p \ge 0 and \varepsilon > 0,
\mathbb{P}\left( \sup_{0 \le t \le T} \left|S_t^{\Delta_n[0,T]}(M)-S_t^{\Delta_p [0,T]}(M) \right| \ge \varepsilon  \right)\le \frac{\mathbb{E}\left( \left(S_T^{\Delta_n[0,T]}(M)-S_T^{\Delta_p[0,T]}(M) \right)^2\right)}{\varepsilon^2}.

From the Borel–Cantelli lemma, there therefore exists a subsequence n_k such that the sequence of continuous stochastic processes \left( S_t^{\Delta_{n_k} [0,T]}(M)\right)_{0 \le t \le T} converges almost surely uniformly to the process \left( \langle M \rangle_t\right)_{0 \le t \le T}. This proves the existence of a continuous version of \langle M \rangle. Finally, to prove that \langle M \rangle is increasing, it is enough to consider an increasing (refining) sequence of subdivisions whose mesh tends to 0.

Let us now prove that \langle M \rangle is the unique process such that M^2-\langle M \rangle is a martingale. Let A and A' be two continuous and increasing stochastic processes such that A_0=A'_0=0 and such that (M_t^2 -A_t)_{ t\ge 0} and (M_t^2 -A'_t)_{ t\ge 0} are martingales. The process (N_t)_{t\ge 0}=(A_t -A'_t)_{t\ge 0} is then a continuous martingale with bounded variation. From the previous lemma, this implies that (N_t)_{t\ge 0} is constant, and therefore equal to 0 since N_0=0.

We now turn to the case where (M_t)_{t \ge 0} is not necessarily bounded. Let us introduce the sequence of stopping times:
T_N=\inf \{ t \ge 0, |M_t | \ge N \}.
According to the previous arguments, for every N \ge 0, there is an increasing process A^N such that (M_{t\wedge T_N}^2-A^N_t)_{t \ge 0} is a martingale. By uniqueness of this process, it is clear that A^{N+1}_{t\wedge T_N}=A^N_t, therefore we can define a process A_t by requiring that A_t(\omega)= A^N_t(\omega) provided that T_N(\omega)\ge t. By using convergence theorems, it is then checked that (M_t^2-A_t)_{t \ge 0} is a martingale.

Finally, let \Delta_n[0,t] be a sequence of subdivisions whose mesh tends to 0. We have for every \varepsilon >0,
\mathbb{P} \left( \left|A_t - \sum_{k=1}^{n} \left( M_{t^n_k}-M_{t^n_{k-1}}\right)^2 \right|\ge \varepsilon  \right)
\le \mathbb{P} (T_N \le t)+\mathbb{P} \left( \left|A^N_t - \sum_{k=1}^{n} \left( M_{t^n_k\wedge T_N}-M_{t^n_{k-1}\wedge T_N}\right)^2 \right|\ge \varepsilon  \right).
Letting first n \to +\infty for fixed N, and then N \to +\infty (so that \mathbb{P}(T_N \le t) \rightarrow 0), this easily implies the announced convergence in probability of the sums of squared increments to A_t. \square

Exercise. Let (M_t)_{t \geq 0} be a continuous square integrable martingale on a filtered probability space (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}). Assume that M_0=0. If \Delta [0,T] is a subdivision of the time interval [0,T] and if (X_t)_{t\ge 0} is a stochastic process, we denote
S_t^{\Delta [0,T]}(X)=\sum_{i=0}^{k-1}\left( X_{t_{i+1}} -X_{t_i} \right)^2 +(X_t-X_{t_k})^2,
where k is such that t_k \le t <t_{k+1}. Let \Delta_n [0,T] be a sequence of subdivisions of [0,T] such that
\lim_{n \rightarrow +\infty}\mid\Delta_n [0,T]\mid=0.
Show that the following convergence holds in probability,
\lim_{n \rightarrow +\infty} \sup_{0\le t \le T} \left| S_t^{\Delta_n [0,T]}(M) - \langle M \rangle_t \right|=0.
Thus, in the previous theorem, the convergence is actually uniform on compact intervals.
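The following Python sketch (illustrative only) samples the statement of the exercise for M = B, for which \langle B \rangle_t = t: the largest deviation |S_t^{\Delta_n[0,T]}(B) - t| over the subdivision points shrinks as the mesh decreases.

import numpy as np

rng = np.random.default_rng(2)
T = 1.0

for n in (100, 1_000, 10_000):
    times = np.linspace(0.0, T, n + 1)                # subdivision points t_k = kT/n
    dB = rng.normal(0.0, np.sqrt(T / n), size=n)      # Brownian increments over the subdivision
    S = np.concatenate([[0.0], np.cumsum(dB**2)])     # values S_{t_k}^{Delta_n[0,T]}(B)
    print(f"n = {n:6d}   sup over subdivision points of |S_t - t| ~ {np.max(np.abs(S - times)):.4f}")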

We have already pointed out that stochastic integrals with respect to Brownian motion provide examples of square integrable martingales; they therefore admit a quadratic variation. The next proposition computes this quadratic variation explicitly.

Proposition. Let (B_t)_{t \ge 0} be a Brownian motion on a filtered probability space (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) that satisfies the usual conditions. Let (u_t)_{t \ge 0} be a progressively measurable process such that for every t \ge 0, \mathbb{E} \left( \int_0^t u_s^2 ds \right)<+\infty. For t \ge 0:
\left\langle \int_0^{\cdot} u_s dB_s \right\rangle_t=\int_0^t u_s^2ds.

Proof.
Since the process \left( \int_0^t u_s^2ds \right)_{t \ge 0} is continuous, increasing and equals 0 when t=0, we just need to prove that
\left(  \int_0^{t} u_s dB_s \right)^2 - \int_0^t u_s^2ds
is a martingale.

If u \in \mathcal{E} is a simple process, it is easily seen that for t \ge s:
\mathbb{E} \left( \left(  \int_0^{t} u_v dB_v \right)^2 \mid\mathcal{F}_s \right)
=\mathbb{E} \left( \left(  \int_0^{s} u_v dB_v +\int_s^{t} u_v dB_v \right)^2 \mid \mathcal{F}_s \right)
=\mathbb{E} \left( \left(  \int_0^{s} u_v dB_v \right)^2 \mid \mathcal{F}_s \right)+\mathbb{E} \left( \left(  \int_s^{t} u_v dB_v \right)^2 \mid \mathcal{F}_s \right)
=\left(  \int_0^{s} u_v dB_v \right)^2 +\mathbb{E} \left(   \int_s^{t} u_v^2 dv \mid \mathcal{F}_s \right),
where the cross term coming from expanding the square vanishes because \int_0^{s} u_v dB_v is \mathcal{F}_s-measurable and \mathbb{E} \left( \int_s^{t} u_v dB_v \mid \mathcal{F}_s \right)=0.
We may then conclude by using the density of \mathcal{E} in L^2 (\Omega,(\mathcal{F}_t)_{t \ge 0},\mathbb{P}). \square
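As an illustration of the proposition (a numerical sketch only, again with the hypothetical integrand u_s = B_s), the sum of squared increments of M_t = \int_0^t B_s dB_s over a fine grid is close, path by path, to \int_0^t B_s^2 ds:

import numpy as np

rng = np.random.default_rng(3)
t, n = 1.0, 200_000
dt = t / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])

u = B[:-1]                        # integrand u_s = B_s evaluated at the left endpoints
dM = u * dB                       # increments of M_t = int_0^t u_s dB_s (left-point sums)
qv_M = np.sum(dM**2)              # sum of squared increments of M over the grid
int_u2 = np.sum(u**2) * dt        # Riemann sum for int_0^t u_s^2 ds

print("sum of squared increments of M ~", qv_M)
print("int_0^t u_s^2 ds               ~", int_u2)   # the two values agree up to discretization error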

As a straightforward corollary of the existence of a quadratic variation for the square integrable martingales, we immediately obtain:

Corollary. Let (M_t)_{t \geq 0} and (N_t)_{t \ge 0} be two continuous square integrable martingales on (\Omega, (\mathcal{F}_t)_{t \ge 0},\mathcal{F},\mathbb{P}) such that M_0=N_0=0. There is a unique continuous process (\langle M ,N \rangle_t)_{t \geq 0} with bounded variation that satisfies:

  • \langle M ,N\rangle_0=0;
  • The process (M_t N_t - \langle M ,N \rangle_t)_{t \geq 0} is a martingale.

Moreover, for t \ge 0 and for every sequence \Delta_n [0,t] such that \lim_{n \rightarrow +\infty}\mid\Delta_n [0,t]\mid=0, the following convergence holds in probability:
\lim_{n \rightarrow +\infty} \sum_{k=1}^{n} \left( M_{t^n_k} -M_{t^n_{k-1}}\right)\left( N_{t^n_k} -N_{t^n_{k-1}}\right)=\langle M,N \rangle_t.
The process (\langle M ,N\rangle_t)_{t \geq 0} is called the quadratic covariation process of (M_t)_{t \geq 0} and (N_t)_{t \geq 0}.

Proof.
We may actually just use the formula
\langle M, N \rangle =\frac{1}{4} \left( \langle M+N \rangle - \langle M- N \rangle \right),
as a definition of the covariation, and then check that the above properties are indeed satisfied. \square
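As a numerical illustration of the covariation (a sketch under the hypothetical choice M = B and N_t = \int_0^t B_s dB_s; combining the proposition above with the polarization formula gives \langle M, N \rangle_t = \int_0^t B_s ds for this pair), the cross-increment sums and the polarized squared-increment sums coincide and both approximate \int_0^t B_s ds:

import numpy as np

rng = np.random.default_rng(4)
t, n = 1.0, 200_000
dt = t / n

dB = rng.normal(0.0, np.sqrt(dt), size=n)
B = np.concatenate([[0.0], np.cumsum(dB)])

dM = dB                            # increments of M = B
dN = B[:-1] * dB                   # increments of N_t = int_0^t B_s dB_s (left-point sums)

cross = np.sum(dM * dN)                                          # sum of cross increments
polar = 0.25 * (np.sum((dM + dN)**2) - np.sum((dM - dN)**2))     # same quantity via polarization
target = np.sum(B[:-1]) * dt                                     # Riemann sum for int_0^t B_s ds

print("sum of cross increments ~", cross)
print("via polarization        ~", polar)      # identical up to rounding
print("int_0^t B_s ds          ~", target)     # close to the two values above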

Exercise. Let (B_t^1)_{t\ge 0} and (B_t^2)_{t \ge 0} be two independent Brownian motions. Show that \langle B^1, B^2 \rangle_t =0.
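A quick Monte Carlo sanity check of the exercise (an illustration, not a proof): for independent Brownian motions the cross-increment sums are centered at 0, and their spread shrinks as the mesh of the subdivision goes to 0.

import numpy as np

rng = np.random.default_rng(5)
t, n_paths = 1.0, 5_000

for n in (100, 1_000, 10_000):
    dB1 = rng.normal(0.0, np.sqrt(t / n), size=(n_paths, n))
    dB2 = rng.normal(0.0, np.sqrt(t / n), size=(n_paths, n))   # independent of dB1
    cross = np.sum(dB1 * dB2, axis=1)                          # one cross-increment sum per path
    print(f"n = {n:6d}   mean ~ {cross.mean():+.5f}   standard deviation ~ {cross.std():.5f}")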
