## Lecture 33. The Malliavin matrix and existence of densities

More generally, by using the same methods as in the previous Lecture, we can introduce iterated derivatives. If $F \in \mathcal{S}$, we set
$\mathbf{D}^k_{t_1,...,t_k} F = \mathbf{D}_{t_1} ...\mathbf{D}_{t_k} F$.
We may then consider $\mathbf{D}^k F$ as a square integrable random process indexed by $[0,1]^{k}$ and valued in $(\mathbb{R}^n)^{\otimes k}$. By using the integration by parts formula, it is possible to prove, as we did in the previous lecture, that for any $p \geq 1$, the operator $\mathbf{D}^k$ is closable on $\mathcal{S}$. We denote by $\mathbb{D}^{k,p}$ the domain of $\mathbf{D}^k$ in $L^p$; it is the closure of the class of cylindric random variables with respect to the norm
$\left\| F\right\| _{k,p}=\left( \mathbb{E}\left( |F|^{p}\right) +\sum_{j=1}^k \mathbb{E}\left( \left\| \mathbf{D}^j F\right\|_{\mathbf{L}^2 ([0,1]^j, (\mathbb{R}^n)^{\otimes j})}^{p}\right) \right)^{\frac{1}{p}}$,
and
$\mathbb{D}^{\infty}=\bigcap_{p \geq 1} \bigcap_{k \geq 1} \mathbb{D}^{k,p}.$
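As an illustration (written here for a one-dimensional Brownian motion, $n=1$, to keep the notation light), if $F=f(W_{t_1},\dots,W_{t_k})$ is a cylindric random variable with $f$ smooth with bounded derivatives of all orders, then iterating the formula for $\mathbf{D}$ gives
$\mathbf{D}^2_{s,t} F = \sum_{i,j=1}^{k} \frac{\partial^2 f}{\partial x_i \partial x_j} (W_{t_1},\dots,W_{t_k})\, \mathbf{1}_{[0,t_i]}(s)\, \mathbf{1}_{[0,t_j]}(t),$
and similarly for the higher order derivatives. All the norms $\| F \|_{k,p}$ are then finite, so that $F \in \mathbb{D}^{\infty}$.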
We have the following key result, which makes Malliavin calculus so useful when one wants to study the existence of densities of random variables.
Theorem (P. Malliavin). Let $F=(F_1,...,F_m)$ be an $\mathcal{F}_1$-measurable random vector such that:

• For every $i=1,...,m$, $F_i \in \mathbb{D}^{\infty}$;
• The matrix
$\Gamma= \left( \int_0^1 \langle \mathbf{D}_s F_i , \mathbf{D}_s F_j \rangle_{\mathbb{R}^n} ds \right)_{1 \leq i,j \leq m}$
is invertible almost surely.

Then the law of $F$ has a density with respect to the Lebesgue measure on $\mathbb{R}^m$. If, moreover, for every $p > 1$,
$\mathbb{E} \left( \frac{1}{| \det \Gamma |^p} \right) < \infty,$
then this density is $C^\infty$.

The matrix $\Gamma$ is often called the Malliavin matrix of the random vector $F$.
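For a Gaussian vector $F_i = \int_0^1 h_i(s)\, dW_s$ (with a one-dimensional Brownian motion and deterministic $h_i$), one has $\mathbf{D}_s F_i = h_i(s)$, so $\Gamma$ is the deterministic Gram matrix $\left( \int_0^1 h_i h_j\, ds \right)_{i,j}$, which is invertible exactly when the $h_i$ are linearly independent in $L^2([0,1])$; the theorem then recovers the classical fact that a nondegenerate Gaussian vector has a smooth density. A minimal numerical sketch of this computation (the specific integrands $h_1, h_2$ below are arbitrary choices for illustration):

```python
import numpy as np

# Discretize [0,1]; left-endpoint Riemann sums are enough for a sketch.
t = np.linspace(0.0, 1.0, 1001)
dt = t[1] - t[0]

# F_i = \int_0^1 h_i(s) dW_s is Gaussian, with D_s F_i = h_i(s).
h1 = np.ones_like(t)   # F_1 = W_1
h2 = t.copy()          # F_2 = \int_0^1 s dW_s

# Malliavin matrix = Gram matrix of (h1, h2) in L^2([0,1]).
H = np.vstack([h1, h2])
Gamma = H @ H.T * dt

print(Gamma)                  # close to [[1, 1/2], [1/2, 1/3]]
print(np.linalg.det(Gamma))   # close to 1/12 > 0: the density exists and is smooth
```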

This theorem relies on the following lemma of Fourier analysis, for which we shall use the following notation: if $\phi: \mathbb{R}^n \rightarrow \mathbb{R}$ is a smooth function, then for $\alpha =(i_1,...,i_k) \in \{1,...,n\}^k$, we denote
$\partial_\alpha \phi =\frac{\partial^k}{\partial x_{i_1} \cdots \partial x_{i_k} } \phi.$
Lemma. Let $\mu$ be a probability measure on $\mathbb{R}^n$ such that for every $k \ge 1$ and every $\alpha \in \{1,...,n\}^k$ there exists a constant $C_\alpha > 0$ such that, for every smooth and compactly supported function $\phi :\mathbb{R}^n \rightarrow \mathbb{R}$,
$\left| \int_{\mathbb{R}^n} \partial_\alpha \phi \, d\mu \right| \le C_\alpha \| \phi \|_\infty.$
Then $\mu$ is absolutely continuous with respect to the Lebesgue measure, with a smooth density.
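As a sanity check on why such a bound forces regularity, note that the Dirac mass $\delta_0$ on $\mathbb{R}$ does not satisfy it: $\int_{\mathbb{R}} \phi' \, d\delta_0 = \phi'(0)$, and one can choose $\phi$ with $\| \phi \|_\infty$ arbitrarily small but $\phi'(0)$ arbitrarily large, so no constant $C$ can work.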

Proof. The idea is to show that we may assume that $\mu$ is compactly supported and then use Fourier transform techniques. Let $x_0 \in \mathbb{R}^n$, $R > 0$ and $R' > R$. Let $\Psi$ be a smooth function on $\mathbb{R}^n$ such that $\Psi =1$ on the ball $\mathbf{B} (x_0,R)$ and $\Psi=0$ outside the ball $\mathbf{B} (x_0,R')$. Let $\nu$ be the measure on $\mathbb{R}^n$ that has density $\Psi$ with respect to $\mu$. It is easily seen, by induction and integration by parts, that for every smooth and compactly supported function $\phi :\mathbb{R}^n \rightarrow \mathbb{R}$,
$\left| \int_{\mathbb{R}^n} \partial_\alpha \phi d\nu \right| \le C'_\alpha \| \phi \|_\infty,$
where $\alpha \in \{1,...,n\}^k$, $k \ge 1$, $C'_\alpha > 0$. Now, if we can prove that under the above assumption $\nu$ has a smooth density, then we will be able to conclude that $\mu$ has a smooth density, because $x_0 \in \mathbb{R}^n$ and $R,R'$ are arbitrary. Let
$\hat{\nu}(y) =\int_{\mathbb{R}^n} e^{i \langle y,x \rangle} \nu (dx)$
be the Fourier transform of the measure $\nu$. The assumption implies that $\hat{\nu}$ is rapidly decreasing: applying the inequality to the real and imaginary parts of $\phi(x)=e^{i \langle y,x \rangle}$ (truncated outside the support of $\nu$), and using $\partial_\alpha e^{i \langle y,x \rangle} = i^k y_{i_1} \cdots y_{i_k} e^{i \langle y,x \rangle}$, we obtain $| y_{i_1} \cdots y_{i_k} | \, | \hat{\nu}(y) | \le C'_\alpha$ for every $\alpha$. We conclude that $\nu$ has a smooth density with respect to the Lebesgue measure and that this density $f$ is given by the inverse Fourier transform formula:
$f(x)=\frac{1}{(2\pi)^n} \int_{\mathbb{R}^n} e^{-i \langle y,x \rangle} \hat{\nu} (y) \, dy.$ $\square$
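The dichotomy exploited in this proof — a smooth density goes with a rapidly decaying Fourier transform, while an atomic measure has a Fourier transform that does not decay at all — can be observed numerically. A small Monte Carlo sketch (the sample size and test frequencies are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
y = np.array([1.0, 5.0, 20.0])  # test frequencies

# |E(e^{i y X})| estimated by Monte Carlo over a sample of X.
def abs_char_fn(samples, y):
    return np.abs(np.mean(np.exp(1j * np.outer(y, samples)), axis=1))

gauss = rng.standard_normal(200_000)        # law with a smooth density
coin = rng.integers(0, 2, 200_000) * 2 - 1  # atomic law (fair +/-1 coin), no density

print(abs_char_fn(gauss, y))  # decays like e^{-y^2/2}
print(abs_char_fn(coin, y))   # |cos(y)|: stays of order 1, no decay
```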

We may now turn to the proof of the Theorem.

The proof relies on the integration by parts formula for the Malliavin derivative. Let $\phi$ be a smooth and compactly supported function on $\mathbb{R}^m$. Since $F_i \in \mathbb{D}^\infty$, we easily deduce that $\phi(F) \in \mathbb{D}^\infty$ and that
$\mathbf{D} \phi (F) =\sum_{i=1}^m \partial_i \phi (F) \mathbf{D} F_i.$
Therefore we obtain
$\int_0^1 \langle \mathbf{D}_t \phi (F),\mathbf{D}_t F_j \rangle dt= \sum_{i=1}^m \partial_i \phi (F) \int_0^1 \langle \mathbf{D}_t F_i, \mathbf{D}_t F_j \rangle dt.$
Since $\Gamma$ is invertible, we conclude that
$\partial_i \phi (F)=\sum_{j=1}^m (\Gamma^{-1})_{i,j} \int_0^1 \langle \mathbf{D}_t \phi (F),\mathbf{D}_t F_j \rangle dt .$
As a consequence, we obtain
$\mathbb{E} \left(\partial_i \phi (F) \right) = \mathbb{E} \left(\sum_{j=1}^m (\Gamma^{-1})_{i,j} \int_0^1 \langle \mathbf{D}_t \phi (F),\mathbf{D}_t F_j \rangle dt \right)$
$=\sum_{j=1}^m \mathbb{E} \left( \int_0^1 \langle \mathbf{D}_t \phi (F), (\Gamma^{-1})_{i,j}\mathbf{D}_t F_j \rangle dt \right)$
$=\sum_{j=1}^m \mathbb{E} \left( \phi (F) \delta ( (\Gamma^{-1})_{i,j}\mathbf{D} F_j ) \right)$
$= \mathbb{E} \left( \phi (F) \delta \left( \sum_{j=1}^m (\Gamma^{-1})_{i,j}\mathbf{D} F_j \right) \right),$
where $\delta$ denotes the divergence operator, that is, the adjoint of $\mathbf{D}$.
By iterating this integration by parts formula, it is seen that for every $k \ge 1$ and every $\alpha \in \{1,...,m\}^k$ there exists an integrable random variable $Z_\alpha$ such that
$\mathbb{E} \left( \partial_\alpha \phi (F)\right)=\mathbb{E} \left( \phi (F) Z_\alpha \right).$
In particular, $\left| \mathbb{E} \left( \partial_\alpha \phi (F)\right) \right| \le \mathbb{E}\left( | Z_\alpha | \right) \| \phi \|_\infty$, so the Lemma applies to the law of $F$, which therefore has a smooth density. $\square$
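In the simplest one-dimensional case $F = W_1$ (so that $\mathbf{D}_t F = 1$, $\Gamma = 1$ and $\delta(\Gamma^{-1}\mathbf{D} F) = W_1$), the integration by parts formula above reduces to the classical Gaussian identity $\mathbb{E}(\phi'(F)) = \mathbb{E}(\phi(F) W_1)$, which can be checked by Monte Carlo (the test function $\phi$ is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal(1_000_000)  # F = W_1: D_t F = 1, Gamma = 1, delta(DF) = W_1

phi = np.sin   # a smooth test function
dphi = np.cos  # its derivative

lhs = np.mean(dphi(F))     # E(phi'(F))
rhs = np.mean(phi(F) * F)  # E(phi(F) Z_alpha) with Z_alpha = delta(Gamma^{-1} D F) = W_1

print(lhs, rhs)  # both close to e^{-1/2} = 0.6065...
```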

This entry was posted in Stochastic Calculus lectures.