Lecture 35. Weak differentiability for solutions of stochastic differential equations and the existence of a smooth density

As usual, we consider a filtered probability space $\left( \Omega , (\mathcal{F}_t)_{t \geq 0} , \mathcal{F},\mathbb{P} \right)$ satisfying the usual conditions, on which an $n$-dimensional Brownian motion $(B_t)_{t \ge 0}$ is defined. Our purpose here is to prove that solutions of stochastic differential equations are differentiable in the sense of Malliavin.
The following lemma is easy to prove by using the Wiener chaos expansion.

Lemma. Let $(u_s)_{0 \le s \le 1}$ be a progressively measurable $\mathbb{R}^n$-valued process such that for every $0 \le s \le 1$ and every $i=1,\dots,n$, $u^i_s \in \mathbb{D}^{1,2}$ and
$\mathbb{E} \left(\int_0^1 \|u_s\|^2 ds \right)<+\infty, \quad \mathbb{E} \left(\int_0^1 \int_0^1 \|\mathbf{D}_s u_t\|^2 ds dt\right)<+\infty.$
Then $\int_0^1 u_s dB_s \in \mathbb{D}^{1,2}$ and
$\mathbf{D}_t\left( \int_0^1 u_s dB_s\right)=u_t+\sum_{i=1}^n \int_t^1 (\mathbf{D}_t u^i_s) dB^i_s.$
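Before the proof, here is a quick sanity check of the formula with $n=1$ and $u_s = B_s$, for which $\int_0^1 B_s \, dB_s = \frac{1}{2}\left( B_1^2 - 1 \right)$:

```latex
% Direct computation, using \mathbf{D}_t B_1 = 1:
\mathbf{D}_t \left( \tfrac{1}{2} \left( B_1^2 - 1 \right) \right) = B_1 \, \mathbf{D}_t B_1 = B_1 .
% Formula of the lemma, using \mathbf{D}_t B_s = \mathbf{1}_{[0,s]}(t):
u_t + \int_t^1 \left( \mathbf{D}_t B_s \right) dB_s
  = B_t + \int_t^1 dB_s = B_t + \left( B_1 - B_t \right) = B_1 .
```

Both computations agree, as the lemma predicts.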

Proof. We give the proof in the case $n=1$, using the notation introduced in the lecture on the Wiener chaos expansion. For $f \in L^2([0,1])$, we have
$\mathbf{D}_t I_n( f^{\otimes n } )= f(t) I_{n-1} (f^{\otimes {(n-1)}}).$
But we can write,
$I_n( f^{\otimes n } )=\int_0^1f(t) \left( \int_{\Delta_{n-1}[0,t]} f^{\otimes {(n-1)}} dB_{t_1} \cdots dB_{t_{n-1}}\right) dB_t,$
and thus
$I_n( f^{\otimes n } )=\int_0^1 u_s dB_s,$
with $u_t= f(t)\int_{\Delta_{n-1}[0,t]} f^{\otimes {(n-1)}} dB_{t_1} \cdots dB_{t_{n-1}}$. Since
$f(t) I_{n-1} (f^{\otimes {(n-1)}})= f(t) \left( \int_{\Delta_{n-1}[0,t]} f^{\otimes {(n-1)}} dB_{t_1} \cdots dB_{t_{n-1}}\right) + f(t) \int_t^1 f(s)\left( \int_{\Delta_{n-2}[0,s]} f^{\otimes {(n-2)}} dB_{t_1} \cdots dB_{t_{n-2}}\right)dB_s,$
we get the result when $\int_0^1 u_s dB_s$ can be written as $I_n( f^{\otimes n } )$: the first term above is $u_t$, and the second term is $\int_t^1 (\mathbf{D}_t u_s) dB_s$. By continuity of the Malliavin derivative on the Wiener chaos of order $n$, the formula holds whenever $\int_0^1 u_s dB_s$ belongs to the chaos of order $n$. The result finally holds in full generality by the Wiener chaos expansion. $\square$

We consider two functions $b : \mathbb{R}^n \to \mathbb{R}^n$ and $\sigma:\mathbb{R}^n \to \mathbb{R}^{n \times n}$, and we assume that $b$ and $\sigma$ are $C^\infty$ with bounded derivatives of every order greater than or equal to 1.

As we know, there exists a bicontinuous process $(X_t^{x})_{t\ge 0, x \in \mathbb{R}^n}$ such that for $t \ge 0$,
$X_t^{x} =x +\int_0^t b(X_s^{x}) ds +\sum_{k=1}^n \int_0^t \sigma_k(X_s^{x}) dB^k_s.$
Moreover, for every $p \ge 1$ and $T \ge 0$,
$\mathbb{E} \left( \sup_{0 \le t \le T} \| X^x_t \|^p\right) < +\infty.$

Theorem. For every $i=1,...,n$, $0 \le t \le 1$, $X_t^{x,i} \in \mathbb{D}^{\infty}$ and for $r \le t$,
$\mathbf{D}^j_r X_t^{x,i}= \sigma_{i,j}(X_r^{x}) +\sum_{l=1}^n \int_r^t \partial_l b_i(X_s^{x})\mathbf{D}^j_r X_s^{x,l} ds +\sum_{k,l=1}^n \int_r^t \partial_l\sigma_{i,k}(X_s^{x})\mathbf{D}^j_r X_s^{x,l} dB^k_s,$
where $\mathbf{D}^j_r X^{x,i}_t$ is the $j$-th component of $\mathbf{D}_r X^{x,i}_t$. If $r > t$, then $\mathbf{D}^j_r X_t^{x,i}=0$.

Proof. We first prove that $X_1^{x,i} \in \mathbb{D}^{1,p}$ for every $p \ge 1$. We consider the Picard approximations given by $X_0(t)=x$ and
$X_{n+1}(t) =x +\int_0^t b(X_n(s)) ds +\sum_{k=1}^n \int_0^t \sigma_k(X_n(s)) dB^k_s.$
By induction, it is easy to see that $X_n(t) \in \mathbb{D}^{1,p}$ and that for every $p \ge 1$, we have
$\Psi_n(t)=\sup_{0 \le r \le t} \mathbb{E} \left( \sup_{s \in [r,t]} \| \mathbf{D}_r X_n(s) \|^p \right)< +\infty,$
and
$\Psi_{n+1}(t)\le \alpha +\beta\int_0^t \Psi_n(s)ds.$
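Since $\Psi_0 \equiv 0$ (the Malliavin derivative of the constant $X_0(t)=x$ vanishes), iterating this inequality gives a bound which is uniform in $n$:

```latex
\Psi_{n}(t) \le \alpha \sum_{k=0}^{n-1} \frac{(\beta t)^k}{k!} \le \alpha e^{\beta t} .
```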
Then, we observe that $X_n(t)$ converges to $X_t^x$ in $L^p$ and that the sequence $\| X_n(t) \|_{1,p}$ is bounded. As a consequence, $X_1^{x,i} \in \mathbb{D}^{1,p}$ for every $p \ge 1$. The equation for the Malliavin derivative is obtained by differentiating the equation satisfied by $X_t^x$. Higher order derivatives may be treated in a similar way with a little additional work. $\square$
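As an illustration (a numerical sketch, not part of the lecture), take dimension one with the hypothetical coefficients $b(x)=\mu x$ and $\sigma(x)=\theta x$ (geometric Brownian motion). The equation of the theorem then reads $\mathbf{D}_r X_t = \theta X_r + \int_r^t \mu \, \mathbf{D}_r X_s \, ds + \int_r^t \theta \, \mathbf{D}_r X_s \, dB_s$, whose solution is $\mathbf{D}_r X_t = \theta X_t$. An Euler scheme along a single Brownian path confirms this:

```python
# Sketch: solve X and its Malliavin derivative D_r X along the SAME
# Brownian path, with b(x) = mu*x and sigma(x) = theta*x, and compare
# with the closed-form identity D_r X_t = theta * X_t.
import random

random.seed(0)

mu, theta, x0 = 0.5, 0.3, 1.0
n_steps = 1000
h = 1.0 / n_steps        # time step on [0, 1]
r_index = 300            # derivative taken at time r = 0.3

X = x0
D = None                 # D_r X_t, only defined for t >= r
for k in range(n_steps):
    dW = random.gauss(0.0, h ** 0.5)
    if k == r_index:
        D = theta * X    # initial condition sigma(X_r) at time r
    if D is not None:
        # linear equation of the theorem: dD = b'(X) D dt + sigma'(X) D dB
        D += mu * D * h + theta * D * dW
    X += mu * X * h + theta * X * dW

rel_err = abs(D - theta * X) / abs(theta * X)
print(rel_err)           # tiny: both solutions share every Euler factor
```

Since $X$ and $\mathbf{D}_r X$ are multiplied by the same factor $1+\mu h+\theta \Delta W_k$ at every Euler step after time $r$, the relative error is of the order of floating-point round-off.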

Combining this theorem with the uniqueness property for solutions of linear stochastic differential equations, we obtain the following representation for the Malliavin derivative of a solution of a stochastic differential equation:

Corollary.
$\mathbf{D}^j_r X^x_t=\mathbf{J}_{0 \rightarrow t}(x) \mathbf{J}_{0 \rightarrow r}^{-1}(x) \sigma_j (X^x_r),~~j=1,...,n, ~~ 0\leq r \leq t,$

where $(\mathbf{J}_{0 \rightarrow t}(x))_{ t \geq 0}$ is the first variation process defined by
$\mathbf{J}_{0 \rightarrow t}(x)=\frac{\partial X^x_t}{\partial x}(x).$
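To see where this comes from: differentiating the equation for $X_t^x$ with respect to the initial condition shows that the first variation process solves the linear matrix equation

```latex
d\mathbf{J}_{0 \rightarrow t}(x)
  = \partial b (X^x_t) \, \mathbf{J}_{0 \rightarrow t}(x) \, dt
  + \sum_{k=1}^n \partial \sigma_k (X^x_t) \, \mathbf{J}_{0 \rightarrow t}(x) \, dB^k_t ,
  \qquad \mathbf{J}_{0 \rightarrow 0}(x) = \mathbf{Id} ,
```

where $\partial b = (\partial_l b_i)_{i,l}$ and $\partial \sigma_k = (\partial_l \sigma_{i,k})_{i,l}$ denote Jacobian matrices. For fixed $r$, the process $t \mapsto \mathbf{J}_{0 \rightarrow t}(x) \mathbf{J}_{0 \rightarrow r}^{-1}(x) \sigma_j(X^x_r)$ therefore solves, for $t \ge r$, the same linear equation as $t \mapsto \mathbf{D}^j_r X^x_t$ in the theorem, with the same value $\sigma_j(X^x_r)$ at $t=r$, and the uniqueness property identifies the two.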

We now fix $x \in \mathbb{R}^n$ as the initial condition for our equation and denote by $\Gamma_t(x)=\left( \sum_{j=1}^n \int_0^1 \mathbf{D}_r^j X_t^{x,i}\mathbf{D}_r^j X^{x,i'}_t dr\right)_{1 \le i,i' \le n}$ the Malliavin matrix of $X^x_t$. Since $\mathbf{D}_r X^x_t=0$ for $r > t$, the previous corollary yields
$\Gamma_t(x)=\mathbf{J}_{0 \rightarrow t}(x) \int_0^t \mathbf{J}_{0 \rightarrow r}^{-1}(x) \sigma (X^x_r) \sigma (X^x_r)^* \mathbf{J}_{0 \rightarrow r}^{-1}(x)^* dr \mathbf{J}_{0 \rightarrow t}(x)^*.$
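Indeed, substituting the corollary into the definition of the Malliavin matrix gives, for $0 \le r \le t$,

```latex
\sum_{j=1}^n \left( \mathbf{D}^j_r X^x_t \right) \left( \mathbf{D}^j_r X^x_t \right)^*
 = \mathbf{J}_{0 \rightarrow t}(x) \, \mathbf{J}_{0 \rightarrow r}^{-1}(x)
   \left( \sum_{j=1}^n \sigma_j (X^x_r) \, \sigma_j (X^x_r)^* \right)
   \mathbf{J}_{0 \rightarrow r}^{-1}(x)^* \, \mathbf{J}_{0 \rightarrow t}(x)^* ,
```

and since the columns of $\sigma$ satisfy $\sum_{j=1}^n \sigma_j \sigma_j^* = \sigma \sigma^*$, integrating in $r$ over $[0,t]$ yields the displayed factorization.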

We are now finally in position to state the main theorem of the section:

Theorem. Assume that there exists $\lambda > 0$ such that for every $x, y \in \mathbb{R}^n$,
$\| \sigma (x) y \|^2 \ge \lambda \| y \|^2,$
then for every $t > 0$ and $x \in \mathbb{R}^n$, the random variable $X_t^x$ has a smooth density with respect to the Lebesgue measure.

Proof.
We want to prove that $\Gamma_t(x)$ is invertible with inverse in $L^p$ for every $p \ge 1$. Since $\mathbf{J}_{0 \rightarrow t}(x)$ is invertible and its inverse solves a linear stochastic differential equation, we deduce that for every $p \ge 1$,
$\mathbb{E}\left( \| \mathbf{J}_{0 \rightarrow t}^{-1}(x) \|^p \right) < +\infty.$

It is therefore enough to prove that $C_t(x)$ is invertible with inverse in $L^p$, where
$C_t(x)= \int_0^t \mathbf{J}_{0 \rightarrow r}^{-1}(x) \sigma (X^x_r) \sigma (X^x_r)^* \mathbf{J}_{0 \rightarrow r}^{-1}(x)^* dr.$
By the uniform ellipticity assumption, we have
$C_t(x) \ge \lambda \int_0^t \mathbf{J}_{0 \rightarrow r}^{-1}(x) \mathbf{J}_{0 \rightarrow r}^{-1}(x)^* dr,$
where the inequality is understood in the sense that the difference of the two symmetric matrices is nonnegative. This implies that $C_t(x)$ is invertible. Moreover, it is an easy exercise to prove that if $(M_s)_{0 \le s \le t}$ is a continuous family of positive definite matrices, then we have
$\left(\int_0^ t M_s ds\right)^{-1} \le \frac{1}{t^2} \left(\int_0^ t M^{-1}_s ds\right).$
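For completeness, here is the exercise: for a fixed vector $y$, set $z = \left(\int_0^t M_s ds\right)^{-1} y$ and write, by the Cauchy-Schwarz inequality,

```latex
t \, \langle z, y \rangle
 = \int_0^t \langle M_s^{1/2} z , \, M_s^{-1/2} y \rangle \, ds
 \le \left( \int_0^t \langle M_s z, z \rangle \, ds \right)^{1/2}
   \left( \int_0^t \langle M_s^{-1} y, y \rangle \, ds \right)^{1/2} .
```

Since $\int_0^t \langle M_s z, z \rangle \, ds = \langle y, z \rangle$, squaring and dividing by $\langle z, y \rangle \ge 0$ gives $\left\langle \left(\int_0^t M_s ds\right)^{-1} y, y \right\rangle \le \frac{1}{t^2} \int_0^t \langle M_s^{-1} y, y \rangle \, ds$, which is the claimed matrix inequality.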
As a consequence, we obtain
$C^{-1}_t(x) \le \frac{1}{t^2 \lambda} \int_0^t \mathbf{J}_{0 \rightarrow r}(x)^* \mathbf{J}_{0 \rightarrow r}(x) dr.$
Since $\mathbf{J}_{0 \rightarrow r}(x)$ has moments of every order, we conclude that $C_t(x)$ is invertible with inverse in $L^p$ for every $p \ge 1$. As, moreover, $X^x_t \in \mathbb{D}^{\infty}$, the existence of a smooth density follows from the general criterion for vectors in $\mathbb{D}^{\infty}$ whose Malliavin matrix is invertible with inverse in every $L^p$. $\square$
