Lecture 27. Stochastic differential equations. Regularity of the flow

In this lecture, we study the regularity of the solution of a stochastic differential equation with respect to its initial condition. The key tool is a multidimensional-parameter extension of the Kolmogorov continuity theorem, whose proof is almost identical to that of the one-dimensional case.

Theorem. Let (\Theta_x)_{x \in [0,1]^d} be an n-dimensional stochastic process for which there exist positive constants \gamma, C, \varepsilon such that for every x,y \in [0,1]^d,
\mathbb{E} \left( \| \Theta_x -\Theta_y \|^\gamma \right)\le C \| x -y \|^{d +\varepsilon}.
Then there exists a modification (\tilde{\Theta}_x)_{x \in [0,1]^d} of the process (\Theta_x)_{x \in [0,1]^d} such that for every \alpha \in [0, \varepsilon/\gamma) there exists a finite random variable K_\alpha such that for every x,y \in [0,1]^d,
\| \tilde{\Theta}_x - \tilde{\Theta}_y \| \le K_\alpha  \| x-y \|^\alpha.
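For instance, for a one-dimensional Brownian motion (d = 1, n = 1, \Theta_x = B_x), Gaussian scaling gives for every integer m \ge 1,
\mathbb{E} \left( | B_x - B_y |^{2m} \right) = c_m |x-y|^m = c_m |x-y|^{1 + (m-1)},
so the theorem applies with \gamma = 2m and \varepsilon = m-1, yielding Hölder exponents \alpha < \frac{m-1}{2m}. Letting m \to \infty recovers the fact that Brownian paths are \alpha-Hölder continuous for every \alpha < 1/2.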

As above, we consider two functions b : \mathbb{R}^n \to \mathbb{R}^n and \sigma: \mathbb{R}^n \to \mathbb{R}^{n \times n}, and we assume that there exists C > 0 such that
\| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le C \| x-y \|, x,y \in \mathbb{R}^n.
As we already know, for every x  \in \mathbb{R}^n, there exists a continuous and adapted process (X_t^{x})_{t\ge 0} such that for t \ge 0,
X_t^{x} =x +\int_0^t b(X_s^{x}) ds + \int_0^t \sigma(X_s^{x}) dB_s.
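Although the lecture is purely theoretical, such a solution can be approximated numerically by the classical Euler–Maruyama scheme. The sketch below (function and variable names are ours, not part of the lecture) simulates one path for the Ornstein–Uhlenbeck choice b(x) = -x, \sigma(x) = \mathbf{Id}, both of which are globally Lipschitz:

```python
import numpy as np

def euler_maruyama(b, sigma, x0, T=1.0, n_steps=1000, seed=0):
    """One Euler-Maruyama path of dX_t = b(X_t) dt + sigma(X_t) dB_t."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.array(x0, dtype=float)
    path = np.empty((n_steps + 1, x.size))
    path[0] = x
    for k in range(n_steps):
        # Brownian increment over [k dt, (k+1) dt]: N(0, dt) in each coordinate
        dB = rng.normal(0.0, np.sqrt(dt), size=x.size)
        x = x + b(x) * dt + sigma(x) @ dB
        path[k + 1] = x
    return path

# Ornstein-Uhlenbeck: Lipschitz drift and constant diffusion matrix
b = lambda x: -x
sigma = lambda x: np.eye(x.size)
path = euler_maruyama(b, sigma, x0=[1.0, -1.0], T=1.0, n_steps=500)
```

Under the Lipschitz assumptions above, this scheme converges strongly to the solution as the step size goes to 0.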

Proposition. Let T > 0. For every p \ge 2, there exists a constant C_{p,T} > 0 such that for every 0 \le s \le t \le T and x,y  \in \mathbb{R}^n,
\mathbb{E} \left( \| X^x_t-X^y _s \|^p \right)\le C_{p,T} \left( \| x-y \|^p +|t-s|^{p/2} \right).
As a consequence, there exists a modification (\tilde{X}_t^{x})_{t\ge 0, x\in \mathbb{R}^n} of the process (X_t^{x})_{t\ge 0, x\in \mathbb{R}^n} such that for t \ge 0, x \in \mathbb{R}^n,
\tilde{X}_t^{x} =x +\int_0^t b(\tilde{X}_s^{x}) ds + \int_0^t \sigma(\tilde{X}_s^{x}) dB_s,
and such that (t,x) \to \tilde{X}^x_t (\omega) is continuous for almost every \omega.
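Numerically, the Lipschitz dependence on the initial condition can be observed by driving two initial points with the same Brownian increments (a synchronous coupling). For the Ornstein–Uhlenbeck dynamics below, the noise cancels in the difference and the terminal gap contracts to e^{-T} \| x - y \|. This is an illustration only, not part of the proof; the names are ours:

```python
import numpy as np

def terminal_values_same_noise(b, sigma, x0s, T=1.0, n_steps=2000, seed=0):
    """Euler scheme driving several initial conditions with ONE Brownian path."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    xs = [np.array(x0, dtype=float) for x0 in x0s]
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt), size=xs[0].size)  # shared increment
        xs = [x + b(x) * dt + sigma(x) @ dB for x in xs]
    return xs

b = lambda x: -x                    # Lipschitz drift
sigma = lambda x: np.eye(x.size)    # constant (hence Lipschitz) diffusion
xT, yT = terminal_values_same_noise(b, sigma, [[1.0], [1.01]])
gap = float(abs(xT[0] - yT[0]))     # close to e^{-1} * 0.01 for this linear SDE
```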

Proof. As before, we can find K > 0 such that
\| b(x)-b(y) \| + \| \sigma (x) - \sigma (y) \| \le K \| x-y \|, x,y \in \mathbb{R}^n;
and \| b(x) \| + \| \sigma (x)  \| \le K (1 +\| x \|), x \in \mathbb{R}^n.

We fix x,y \in \mathbb{R}^n and p \ge 2. Let
h(t)=\mathbb{E} \left(  \|X_t^x-X_t^y\|^p \right).
By using the inequality \| a +b+c \|^p \le 3^{p-1} ( \| a \|^p + \| b \|^p +\| c\|^p ), we obtain
\|X_t^x-X_t^y\|^p \le 3^{p-1} \left(  \| x-y \|^p +\left(\int_0^t \| b(X_s^x)-b(X_s^y) \|  ds \right)^p  + \left\| \int_0^t ( \sigma(X_s^x) -\sigma(X_s^y))dB_s   \right\|^p  \right).
By Hölder's inequality, we have
\left(\int_0^t \| b(X_s^x)-b(X_s^y) \|  ds \right)^p\le t^{p-1} \int_0^t \| b(X_s^x)-b(X_s^y) \|^p  ds\le K^p t^{p-1} \int_0^t \| X_s^x-X_s^y \|^p  ds,
and from the Burkholder–Davis–Gundy inequality,
\mathbb{E} \left( \left\| \int_0^t ( \sigma(X_s^x) -\sigma(X_s^y))dB_s   \right\|^p  \right) \le C_p \mathbb{E} \left( \left( \int_0^t \| \sigma(X_s^x) -\sigma(X_s^y) \|^2 ds   \right)^{p/2}  \right)
\le C_p K^p  \mathbb{E} \left( \left( \int_0^t \| X_s^x -X_s^y \|^2 ds \right)^{p/2}  \right)
\le C_p K^p t^{p/2 -1} \mathbb{E} \left( \int_0^t \| X_s^x -X_s^y \|^p  ds  \right),
where Hölder's inequality was applied once more in the last step.
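Recall the form of the Burkholder–Davis–Gundy inequality used here: for a continuous local martingale (M_t)_{t \ge 0} vanishing at 0 and p \ge 2,
\mathbb{E}\left( \sup_{0 \le s \le t} \| M_s \|^p \right) \le C_p \mathbb{E}\left( \langle M \rangle_t^{p/2} \right),
applied to the martingale M_t = \int_0^t ( \sigma(X_s^x) -\sigma(X_s^y)) dB_s, whose quadratic variation is controlled by \int_0^t \| \sigma(X_s^x) -\sigma(X_s^y) \|^2 ds.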
As a conclusion we obtain
h(t) \le 3^{p-1} \left(  \| x-y \|^p +(K^p t^{p-1}+C_p K^p t^{p/2 -1})  \int_0^t h(s) ds  \right).
Gronwall's inequality then yields
h(t)\le \phi(t)  \| x-y \|^p,
where \phi is a continuous function depending only on p, T and K.
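Indeed, recall Gronwall's lemma: if h is nonnegative and bounded on [0,T] and satisfies h(t) \le a + b \int_0^t h(s) ds for constants a, b \ge 0, then h(t) \le a e^{bt}. It applies here with a = 3^{p-1} \| x-y \|^p and b the coefficient of the integral term, which is bounded on [0,T].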

We have for 0\le s \le t \le T,
\| X_t^x -X_s^x \|^p   \le 2^{p-1}\left( \left\| \int_s^t b(X_u^{x}) du\right\|^p  + \left\|  \int_s^t \sigma(X_u^{x}) dB_u  \right\|^p\right),
and
\left\| \int_s^t b(X_u^{x}) du\right\|^p \le K^p  (t-s)^p \left( 1+ \sup_{0 \le u \le T} \| X_u^x \|\right)^p,
\mathbb{E} \left( \left\|  \int_s^t \sigma(X_u^{x}) dB_u  \right\|^p\right)   \le C_p \mathbb{E} \left( \left(  \int_s^t \|\sigma(X_u^{x})\|^2 du  \right)^{p/2} \right)
\le C_pK^p  (t-s)^{p/2} \mathbb{E}  \left( \left(  1+ \sup_{0 \le u \le T} \| X_u^x \|  \right)^{p} \right).
The conclusion then easily follows by combining these two estimates with the bound on h, since \mathbb{E} \left( \sup_{0 \le u \le T} \| X_u^x \|^p \right) < +\infty. \square

In the sequel, of course, we shall always work with this bicontinuous version of the solution.

Definition. The family of continuous maps \Psi_t: x \to X_t^x, t \ge 0, is called the stochastic flow associated with the equation.

If the maps b and \sigma are moreover C^1, then the stochastic flow is itself differentiable, and the equation satisfied by the derivative can be obtained by formally differentiating the equation with respect to the initial condition. We will admit this result without proof:

Theorem. Let us assume that b and \sigma are C^1 Lipschitz functions. Then for every t \ge 0, the flow \Psi_t associated with the equation is a flow of differentiable maps. Moreover, the first variation process (J_t)_{t \ge 0}, defined as the Jacobian matrix \frac{\partial \Psi_t}{\partial x} (x), is the unique solution of the matrix stochastic differential equation:
J_t=\mathbf{Id}+\int_0^t \frac{\partial b}{\partial x} (X_s^x)J_s ds+\sum_{i=1}^n \int_0^t  \frac{\partial \sigma_i }{\partial x} (X_s^x) J_s dB^i_s,
where \sigma_i denotes the i-th column of \sigma.
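As an illustration (not part of the admitted result), the pair (X_t, J_t) can be simulated jointly with the Euler scheme. For the scalar linear equation dX_t = \mu X_t dt + \sigma X_t dB_t, the flow is x \mapsto x e^{(\mu - \sigma^2/2)t + \sigma B_t}, so J_t = X_t / x; the discrete scheme reproduces this identity up to rounding. All names below are ours:

```python
import numpy as np

def flow_and_jacobian(mu=0.05, sig=0.2, x0=1.0, T=1.0, n_steps=1000, seed=0):
    """Euler scheme for dX = mu*X dt + sig*X dB together with its first
    variation dJ = mu*J dt + sig*J dB, J_0 = 1 (scalar case, n = 1)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x, j = x0, 1.0
    for _ in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))
        # X and J are driven by the same Brownian increment
        x, j = x + mu * x * dt + sig * x * dB, j + mu * j * dt + sig * j * dB
    return x, j

x, j = flow_and_jacobian(x0=2.0)
# the variational equation gives J_T = X_T / x0 for this linear SDE
```

For a general n-dimensional system one would carry an n x n matrix J and multiply by the Jacobians of b and of the columns of \sigma at each step.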

This entry was posted in Stochastic Calculus lectures.
