## Lecture 28. The Feynman-Kac formula

It is now time to give some applications of the theory of stochastic differential equations to second order parabolic partial differential equations. In particular, we are going to prove that solutions of such equations can be represented by using solutions of stochastic differential equations. This representation formula is called the Feynman-Kac formula.
As usual, we consider a filtered probability space $\left( \Omega , (\mathcal{F}_t)_{t \geq 0} , \mathcal{F},\mathbb{P} \right)$ which satisfies the usual conditions and on which an $n$-dimensional Brownian motion $(B_t)_{t \ge 0}$ is defined. Again, we consider two functions $b : \mathbb{R}^n \to \mathbb{R}^n$ and $\sigma: \mathbb{R}^n \to \mathbb{R}^{n \times n}$, and we assume that there exists $C > 0$ such that
$\| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le C \| x-y \|, \quad x,y \in \mathbb{R}^n.$
Let $L$ be the second order differential operator
$L=\sum_{i=1}^n b_i(x) \frac{\partial}{\partial x_i} +\frac{1}{2}\sum_{i,j=1}^n a_{ij}(x)\frac{\partial^2}{\partial x_i \partial x_j} ,$
where $a_{ij}(x)=(\sigma(x)\sigma^*(x))_{ij}$.

As we know, there exists a bicontinuous process $(X_t^{x})_{t\ge 0, x \in \mathbb{R}^n}$ such that for $t \ge 0$,
$X_t^{x} =x +\int_0^t b(X_s^{x}) ds + \int_0^t \sigma(X_s^{x}) dB_s.$
Moreover, as it has been stressed before, for every $p \ge 1$, and $T \ge 0$
$\mathbb{E} \left( \sup_{0 \le t \le T} \| X^x_t \|^p\right) < +\infty.$
As a consequence, if $f:\mathbb{R}^n \to \mathbb{R}$ is a Borel function with polynomial growth, we can consider the function
$P_t f (x) =\mathbb{E}(f(X_t^x)).$
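As a concrete illustration (my own addition, not part of the lecture), $P_t f(x)=\mathbb{E}(f(X_t^x))$ can be approximated by a Monte Carlo average over Euler-Maruyama paths. The sketch below uses an illustrative one-dimensional Ornstein-Uhlenbeck example, $b(x)=-x$, $\sigma=1$, $f(x)=x^2$, for which the closed form $P_t f(x)=x^2e^{-2t}+(1-e^{-2t})/2$ is available for comparison; all names and parameter choices are mine.

```python
import numpy as np

def euler_maruyama(x0, b, sigma, t, n_steps, n_paths, rng):
    """Simulate n_paths of dX = b(X)dt + sigma(X)dB up to time t (Euler scheme)."""
    dt = t / n_steps
    X = np.full(n_paths, float(x0))
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        X = X + b(X) * dt + sigma(X) * dB
    return X

def P_t(f, x0, b, sigma, t, n_steps=500, n_paths=100_000, seed=0):
    """Monte Carlo estimate of P_t f(x0) = E[f(X_t^x0)]."""
    rng = np.random.default_rng(seed)
    X_t = euler_maruyama(x0, b, sigma, t, n_steps, n_paths, rng)
    return f(X_t).mean()

# Ornstein-Uhlenbeck example: b(x) = -x, sigma = 1, f(x) = x^2.
# Closed form: P_t f(x) = x^2 e^{-2t} + (1 - e^{-2t})/2.
t, x0 = 1.0, 1.5
est = P_t(lambda x: x**2, x0, lambda x: -x, lambda x: 1.0, t)
exact = x0**2 * np.exp(-2*t) + (1 - np.exp(-2*t)) / 2
```

The estimate carries both a Monte Carlo error of order $n_{\text{paths}}^{-1/2}$ and an Euler discretization bias of order $\Delta t$, so agreement with the closed form is only up to a small tolerance.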
Theorem. For every $x \in \mathbb{R}^n$, $(X_t^{x})_{t\ge 0}$ is a Markov process with semigroup $(P_t)_{t \ge 0}$. More precisely, for every Borel function $f:\mathbb{R}^n \to \mathbb{R}$ with polynomial growth and every $t \ge s$,
$\mathbb{E}(f(X_t^x) \mid \mathcal{F}_s)=(P_{t-s}f )(X_s^x).$

Proof. The key point here is to observe that solutions are actually adapted to the natural filtration of the Brownian motion $(B_t)_{t \ge 0}$. More precisely, there exists on the space of continuous functions $[0,+\infty) \rightarrow \mathbb{R}^n$ a predictable functional $F$ such that for $t \ge 0$:
$X_t^{x_0}=F(x_0, (B_u)_{0 \le u \le t}).$
Indeed, let us first work on $[0,T]$ where $T$ is small enough. In that case, as seen previously, the process $(X^{x_0}_t)_{0 \le t \le T}$ is the unique fixed point of the map $\Phi$ defined by
$\Phi(X)_t =x_0+\int_0^t b(X_s) ds + \int_0^t \sigma(X_s) dB_s.$

Alternatively, one can interpret this by observing that $(X^{x_0}_t)_{0 \le t \le T}$ is the limit of the sequence of processes $(X^{n}_t)_{0 \le t \le T}$ inductively defined by
$X^{n+1}=\Phi (X^n), \quad X^0=x_0.$
It is easily checked that for each $X^n$ there is a predictable functional $F_n$ such that
$X_t^{n}=F_n(x_0, (B_u)_{0 \le u \le t}),$
which proves the above claim when $T$ is small enough. To get the existence of $F$ for any $T$, we can proceed inductively, pasting together the functionals obtained on successive intervals of small length.
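The contraction behind this fixed point argument can also be observed numerically: along one fixed discretized Brownian path, successive iterates of a discretized version of $\Phi$ converge rapidly. The sketch below (my own, not from the lecture) uses the illustrative choices $b(x)=-x$ and $\sigma=1$, with stochastic integrals replaced by left-endpoint Itô sums.

```python
import numpy as np

rng = np.random.default_rng(2)
n, t = 1000, 1.0
dt = t / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)   # one fixed discretized Brownian path

b = lambda x: -x        # illustrative drift (Ornstein-Uhlenbeck)
sigma = lambda x: 1.0   # illustrative constant diffusion coefficient

def Phi(X, x0):
    """Discretized Picard map: Phi(X)_t = x0 + int_0^t b(X_s)ds + int_0^t sigma(X_s)dB_s."""
    increments = b(X[:-1]) * dt + sigma(X[:-1]) * dB
    return np.concatenate(([x0], x0 + np.cumsum(increments)))

X = np.full(n + 1, 1.0)   # X^0 identically equal to x0 = 1
errs = []
for _ in range(8):
    X_new = Phi(X, 1.0)
    errs.append(np.max(np.abs(X_new - X)))  # sup-norm distance between iterates
    X = X_new
# errs shrinks roughly like t^n/n!, mirroring the fixed point estimate
```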

With this in hand, we can now prove the Markov property. Let $s \ge 0$. For $t \ge 0$, we have
$X_{s+t}^{x_0} =X_s^{x_0} +\int_s^{s+t} b(X_u^{x_0}) du + \int_s^{s+t} \sigma(X_u^{x_0}) dB_u$
$=X_s^{x_0} +\int_0^{t} b(X_{u+s}^{x_0}) du + \int_0^{t} \sigma(X_{u+s}^{x_0}) d(B_{u+s}-B_s).$
Consequently, from uniqueness of solutions,
$X_{s+t}^{x_0}=F(X^{x_0}_s, (B_{u+s}-B_s)_{0 \le u \le t}).$
We deduce that for a Borel function $f:\mathbb{R}^n \rightarrow \mathbb{R}$ with polynomial growth,
$\mathbb{E} \left( f(X_{s+t}^{x_0}) \mid \mathcal{F}_s \right)=\mathbb{E} \left( f(F(X^{x_0}_s, (B_{u+s}-B_s)_{0 \le u \le t}))\mid \mathcal{F}_s \right)=P_t f (X^{x_0}_s),$
because $(B_{u+s}-B_s)_{0 \le u \le t}$ is a Brownian motion independent of $\mathcal{F}_s$. $\square$

Theorem. Let $f:\mathbb{R}^n \to \mathbb{R}$ be a Borel function with polynomial growth and assume that the function
$u(t,x)=(P_tf)(x)$
is $C^{1,2}$, that is, once continuously differentiable with respect to $t$ and twice continuously differentiable with respect to $x$. Then $u$ solves the Cauchy problem
$\frac{\partial u}{\partial t} (t,x)=Lu(t,x)$
in $[0,+\infty) \times \mathbb{R}^n$, with the initial condition $u(0,x)=f(x)$.

Proof. Let $T > 0$ and consider the function $v(t,x)=u(T-t,x)$. According to the previous theorem, we have
$\mathbb{E}(f(X_T^x) \mid \mathcal{F}_t)=v(t,X_t^x).$
As a consequence, the process $(v(t,X_t^x))_{0 \le t \le T}$ is a martingale. But from Itô's formula, the bounded variation part of $v(t,X_t^x)$ is $\int_0^t \left( \frac{\partial v}{\partial t}(s, X_s^x) + L v(s, X_s^x) \right)ds$, which is therefore 0. We conclude
$\frac{\partial v}{\partial t}(0, x) + L v(0, x)=\lim_{t \to 0} \frac{1}{t} \int_0^t \left( \frac{\partial v}{\partial t}(s, X_s^x) + L v(s, X_s^x) \right)ds=0.$
Since $\frac{\partial v}{\partial t}(0,x)=-\frac{\partial u}{\partial t}(T,x)$ and $v(0,x)=u(T,x)$, this gives $\frac{\partial u}{\partial t}(T,x)=Lu(T,x)$, where $T > 0$ was arbitrary. $\square$
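As a sanity check (my own addition, not in the lecture), the Cauchy problem can be verified numerically on the Ornstein-Uhlenbeck example $L=-x\frac{\partial}{\partial x}+\frac{1}{2}\frac{\partial^2}{\partial x^2}$ with $f(x)=x^2$, for which the semigroup is explicit: $u(t,x)=P_tf(x)=x^2e^{-2t}+(1-e^{-2t})/2$. Central finite differences then confirm $\frac{\partial u}{\partial t}=Lu$ up to discretization error:

```python
import numpy as np

# Closed-form u(t,x) = P_t f(x) for the OU generator L = -x d/dx + (1/2) d^2/dx^2
# with f(x) = x^2 (an illustrative example with an explicit semigroup):
def u(t, x):
    return x**2 * np.exp(-2*t) + (1 - np.exp(-2*t)) / 2

# Central finite differences to check du/dt = Lu on a grid of points.
h = 1e-4
t, xs = 0.7, np.linspace(-2.0, 2.0, 9)
du_dt = (u(t + h, xs) - u(t - h, xs)) / (2*h)
du_dx = (u(t, xs + h) - u(t, xs - h)) / (2*h)
d2u_dx2 = (u(t, xs + h) - 2*u(t, xs) + u(t, xs - h)) / h**2
Lu = -xs * du_dx + 0.5 * d2u_dx2
residual = np.max(np.abs(du_dt - Lu))   # ~0 up to finite-difference error
```

Here one can also check by hand: $\frac{\partial u}{\partial t}=-2x^2e^{-2t}+e^{-2t}$, while $Lu=-x\cdot 2xe^{-2t}+\frac{1}{2}\cdot 2e^{-2t}$, and the two agree.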

Exercise. Show that if $f$ is a $C^2$ function such that $\nabla f$ and $\nabla^2 f$ have polynomial growth, then the function $P_tf(x)$ is $C^{1,2}$. Here, we denote by $\nabla^2 f$ the Hessian matrix of $f$.

Theorem. Let $f:\mathbb{R}^n \to \mathbb{R}$ be a Borel function with polynomial growth. Let $u:[0,+\infty)\times \mathbb{R}^n \to \mathbb{R}$ be a solution of the Cauchy problem
$\frac{\partial u}{\partial t} (t,x)=Lu(t,x)$
with the initial condition $u(0,x)=f(x)$.
If there exists a locally integrable function $C$ and $p \ge 0$, such that for every $t \ge 0$ and $x \in \mathbb{R}^n$,
$\| \nabla u (t,x) \| \le C(t) (1+\|x \|^p),$
then $u(t,x)= P_t f(x)$.

Proof. Let $T > 0$ and, as before, consider the function $v(t,x)=u(T-t,x)$. As a consequence of Itô's formula, we have
$v(t,X_t^x)=u(T,x)+M_t,$
where $M_t$ is a local martingale with quadratic variation $\sum_{i,j=1}^n \int_0^t a_{ij}(X_s^x) \frac{\partial v}{\partial x_i}(s,X_s^x) \frac{\partial v}{\partial x_j} (s,X_s^x) ds$. The conditions on $\sigma$ and $u$ imply that this quadratic variation is integrable. As a consequence, $v(t,X_t^x)$ is a martingale, and taking expectations at $t=T$ gives $\mathbb{E} (f(X_T^x))=u(T,x)$, that is, $u(T,x)=P_T f(x)$. $\square$

The previous results may be extended to study parabolic equations with potential as well. More precisely, let $V:\mathbb{R}^n \to \mathbb{R}$ be a bounded function. If $f:\mathbb{R}^n \to \mathbb{R}$ is a Borel function with polynomial growth, we define
$P^V_t f (x) =\mathbb{E}\left(e^{\int_0^t V(X_s^x) ds }f(X_t^x)\right)$.
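To illustrate the weighted semigroup numerically (again my own sketch, not part of the lecture), one can estimate $P^V_t f(x)$ by accumulating a Riemann sum of $V$ along each simulated path. Below, $X$ is a Brownian motion ($b=0$, $\sigma=1$), $f\equiv 1$, and $V(x)=-x^2/2$; this $V$ is not bounded, but it is bounded above, and the classical Cameron-Martin formula $\mathbb{E}\left(e^{-\frac{1}{2}\int_0^t B_s^2\,ds}\right)=(\cosh t)^{-1/2}$ gives an exact value to compare against.

```python
import numpy as np

def feynman_kac(f, V, x0, t, n_steps=400, n_paths=200_000, seed=1):
    """Monte Carlo estimate of P_t^V f(x0) = E[exp(int_0^t V(X_s)ds) f(X_t)]
    for X a Brownian motion started at x0 (b = 0, sigma = 1)."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    X = np.full(n_paths, float(x0))
    I = np.zeros(n_paths)                  # running Riemann sum of V along each path
    for _ in range(n_steps):
        I += V(X) * dt
        X += rng.normal(0.0, np.sqrt(dt), size=n_paths)
    return np.mean(np.exp(I) * f(X))

# V(x) = -x^2/2, f = 1, x0 = 0: Cameron-Martin gives (cosh t)^{-1/2}.
t = 1.0
est = feynman_kac(lambda x: 1.0, lambda x: -x**2 / 2, 0.0, t)
exact = 1.0 / np.sqrt(np.cosh(t))
```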
The same proofs as before will give the following theorems.

Theorem. For every $x \in \mathbb{R}^n$, every Borel function $f:\mathbb{R}^n \to \mathbb{R}$ with polynomial growth, and every $t \ge s$,
$\mathbb{E}\left( e^{\int_0^t V(X_u^x) du } f(X_t^x) \mid \mathcal{F}_s\right)=e^{\int_0^s V(X_u^x) du }(P^V_{t-s}f )(X_s^x).$

Theorem. Let $f:\mathbb{R}^n \to \mathbb{R}$ be a Borel function with polynomial growth and assume that the function
$u(t,x)=(P^V_tf)(x)$
is $C^{1,2}$, that is, once continuously differentiable with respect to $t$ and twice continuously differentiable with respect to $x$. Then $u$ solves the Cauchy problem
$\frac{\partial u}{\partial t} (t,x)=Lu(t,x)+V(x)u(t,x)$
in $[0,+\infty) \times \mathbb{R}^n$, with the initial condition
$u(0,x)=f(x)$.

Theorem. Let $f:\mathbb{R}^n \to \mathbb{R}$ be a Borel function with polynomial growth. Let $u:[0,+\infty)\times \mathbb{R}^n \to \mathbb{R}$ be a solution of the Cauchy problem
$\frac{\partial u}{\partial t} (t,x)=Lu(t,x)+V(x)u(t,x)$
with the initial condition $u(0,x)=f(x)$. If there exists a locally integrable function $C$ and $p \ge 0$, such that for every $t \ge 0$ and $x \in \mathbb{R}^n$,
$\| \nabla u (t,x) \| \le C(t) (1+\|x \|^p)$,
then $u(t,x)= P^V_t f(x)$.

This entry was posted in Stochastic Calculus lectures.