Lecture 22. Davie’s estimate (1)

In this lecture, we prove one of the fundamental estimates of rough paths theory, which is due to Davie. It bounds the $p$-variation of the solution of the differential equation
$y(t)=y(0)+\sum_{i=1}^d \int_0^t V_i(y(s)) dx^i(s)$
in terms of the $p$-variation of the lift of $x$ to the free Carnot group of step $[p]$.

We first introduce the somewhat minimal regularity requirement on the vector fields $V_i$ needed to study rough differential equations.

Definition. A vector field $V$ on $\mathbb{R}^n$ is called $\gamma$-Lipschitz if it is $[\gamma]$ times continuously differentiable and there exists a constant $M \ge 0$ such that the supremum norms of its $k$th derivatives, $k=0, \cdots, [\gamma]$, and the $(\gamma-[\gamma])$-Hölder norm of its $[\gamma]$th derivative are all bounded by $M$. The smallest such $M$ is called the $\gamma$-Lipschitz norm of $V$ and is denoted $\| V \|_{\text{Lip}^\gamma}$.
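
To make the definition concrete, here is a small numerical sketch (my own illustration, not part of the lecture): for $1 < \gamma < 2$, the $\gamma$-Lipschitz norm of a scalar vector field is the maximum of $\sup |V|$, $\sup |V'|$ and the $(\gamma-1)$-Hölder constant of $V'$. The grid-based estimator below approximates this for $V = \sin$ and $\gamma = 3/2$.

```python
import numpy as np

def lip_gamma_norm(V, dV, gamma, ys):
    """Grid estimate of the gamma-Lipschitz norm of a scalar vector
    field V for 1 < gamma < 2: the max of sup|V|, sup|V'| and the
    (gamma - 1)-Holder constant of V' over all pairs of grid points."""
    sup_V = np.max(np.abs(V(ys)))
    dVs = dV(ys)
    sup_dV = np.max(np.abs(dVs))
    # (gamma - 1)-Holder constant of V' over all distinct grid pairs
    diffs = np.abs(dVs[:, None] - dVs[None, :])
    dists = np.abs(ys[:, None] - ys[None, :])
    mask = dists > 0
    holder = np.max(diffs[mask] / dists[mask] ** (gamma - 1))
    return max(sup_V, sup_dV, holder)

ys = np.linspace(-np.pi, np.pi, 400)
M = lip_gamma_norm(np.sin, np.cos, 1.5, ys)  # estimate of ||sin||_{Lip^{3/2}}
```

On this grid the maximum is attained by the Hölder term, so the $\gamma$-Lipschitz norm can strictly exceed the supremum bounds on $V$ and $V'$.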

The fundamental estimate of Davie is the following:

Theorem: Let $\gamma > p \ge 1$. Assume that $V_1, \cdots, V_d$ are $(\gamma-1)$-Lipschitz vector fields in $\mathbb{R}^n$. Let $x \in C^{1-var}([0,T], \mathbb{R}^d)$. Let $y$ be the solution of the equation
$y(t)=y(0)+\sum_{i=1}^d \int_0^t V_i(y(s)) dx^i(s), \quad 0 \le t \le T.$
There exists a constant $C$ depending only on $p$ and $\gamma$ such that for every $0 \le s < t \le T$,
$\| y \|_{p-var, [s,t]} \le C \left(\| V \|_{\text{Lip}^{\gamma-1}} \| S_{[p]} (x) \|_{p-var,[s,t]} +\| V \|^p_{\text{Lip}^{\gamma-1}} \| S_{[p]} (x) \|^p_{p-var,[s,t]} \right),$
where $S_{[p]} (x)$ is the lift of $x$ in $\mathbb{G}_{[p]}(\mathbb{R}^d)$.
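
Before turning to the proof, a numerical sanity check in the simplest case $p = 1$ (my own illustration, not part of the lecture): there $S_{[1]}(x) = x$ and the $1$-variation is the usual total variation, so the $1$-variation of $y$ should be dominated by $\sup |V|$ times that of $x$.

```python
import numpy as np

# Sanity check for p = 1 with d = n = 1, V(y) = sin(y) and the
# smooth driver x(t) = sin(2*pi*t) on [0, 1].
ts = np.linspace(0.0, 1.0, 20001)
xs = np.sin(2 * np.pi * ts)

# Euler scheme for y'(t) = V(y(t)) x'(t)
y = np.empty_like(ts)
y[0] = 1.0
for k in range(len(ts) - 1):
    y[k + 1] = y[k] + np.sin(y[k]) * (xs[k + 1] - xs[k])

tv_x = np.sum(np.abs(np.diff(xs)))  # 1-variation of x, equal to 4 here
tv_y = np.sum(np.abs(np.diff(y)))   # 1-variation of the solution y
V_sup = 1.0                         # sup |sin|
# consistent with the estimate: ||y||_{1-var} <= sup|V| * ||x||_{1-var}
```

For $p = 1$ the bound even holds without the second, $p$-th power term, since $\|y\|_{1-var} = \int |V(y)| \, \|dx\|$; the theorem is of course only interesting for rougher drivers, where the lift $S_{[p]}(x)$ genuinely matters.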

We start with two preliminary lemmas, the first of which is of independent interest.

Lemma: Let $\gamma > 1$. Assume that $V_1, \cdots, V_d$ are $(\gamma-1)$-Lipschitz vector fields in $\mathbb{R}^n$. Let $x \in C^{1-var}([s,t], \mathbb{R}^d)$. Let $y$ be the solution of the equation
$y(v)=y(s)+\sum_{i=1}^d \int_s^v V_i(y(u)) dx^i(u), \quad s \le v \le t.$
There exists a constant $C$, depending only on $\gamma$, such that
$\left\| y(t)-y(s)-\sum_{k=1}^{[\gamma]} \sum_{i_1,\cdots,i_k \in \{1,\cdots,d\}} V_{i_1}\cdots V_{i_k} \mathbf{I} (y(s)) \int_{\Delta^k[s,t]} dx^{i_1,\cdots,i_k} \right\|$
$\le C \left(\| V \|_{\text{Lip}^{\gamma-1}} \int_s^t \| dx_r\| \right)^\gamma,$
where $\mathbf{I}$ is the identity map.
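
A quick numerical check of the lemma (my illustration, not from the lecture): for $d=1$, $V(y)=y$ and $x(t)=t$, one has $V_{i_1}\cdots V_{i_k} \mathbf{I}(y) = y$ and $\int_{\Delta^k[s,t]} dx^{i_1,\cdots,i_k} = (t-s)^k/k!$, so with $h = t-s$ the left-hand side equals $|y(s)| \, |e^h - \sum_{k=0}^{n} h^k/k!|$, which is $O(h^{n+1})$, consistent with the right-hand side for $\gamma = n+1$.

```python
import math

def remainder(h, n, y0=1.0):
    """Left-hand side of the lemma for V(y) = y, x(t) = t: the exact
    solution y0 * exp(h) minus its step-n Taylor expansion at the
    left endpoint."""
    taylor = y0 * sum(h ** k / math.factorial(k) for k in range(n + 1))
    return abs(y0 * math.exp(h) - taylor)

# The remainder scales like h^(n+1): halving h should divide it
# by roughly 2^(n+1).
n = 3
r1, r2 = remainder(0.1, n), remainder(0.05, n)
ratio = r1 / r2  # close to 2**(n+1) = 16
```
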

Proof: For notational simplicity, we denote $N=[\gamma]$ (we reserve $n$ for the dimension of the state space). An iterative use of the change of variable formula leads to
$y(t)-y(s)-\sum_{k=1}^{N} \sum_{i_1,\cdots,i_k \in \{1,\cdots,d\}} V_{i_1}\cdots V_{i_k} \mathbf{I} (y(s)) \int_{\Delta^k[s,t]} dx^{i_1,\cdots,i_k}$
$=\int_{s < r_1 < \cdots < r_N < t} \sum_{i_1,\cdots,i_N \in \{1,\cdots,d\}} ( V_{i_1}\cdots V_{i_N} \mathbf{I} (y(r_1)) -V_{i_1}\cdots V_{i_N} \mathbf{I} (y(s)))dx^{i_1}_{r_1} \cdots dx^{i_N}_{r_N}.$
Since $V_1, \cdots, V_d$ are $(\gamma-1)$-Lipschitz, we deduce that
$\| V_{i_1}\cdots V_{i_N} \mathbf{I} (y(r_1)) -V_{i_1}\cdots V_{i_N} \mathbf{I} (y(s))\| \le \| V \|_{\text{Lip}^{\gamma-1}}^N \|y(r_1)-y(s)\|^{\gamma-N}.$
Since
$\|y(r_1)-y(s)\|\le \| V \|_{\text{Lip}^{\gamma-1}} \int_s^{r_1} \| dx_r\|,$
we deduce that
$\| V_{i_1}\cdots V_{i_N} \mathbf{I} (y(r_1)) -V_{i_1}\cdots V_{i_N} \mathbf{I} (y(s))\| \le \| V \|_{\text{Lip}^{\gamma-1}}^\gamma \left( \int_s^{t} \| dx_r\| \right)^{\gamma-N}.$
The result then follows by plugging this estimate into the integral
$\int_{s < r_1 < \cdots < r_N < t} ( V_{i_1}\cdots V_{i_N} \mathbf{I} (y(r_1)) -V_{i_1}\cdots V_{i_N} \mathbf{I} (y(s)))dx^{i_1}_{r_1} \cdots dx^{i_N}_{r_N},$
whose simplex integration contributes a factor bounded by $\frac{1}{N!}\left( \int_s^t \| dx_r\| \right)^{N}$, and summing over the $d^N$ choices of indices. $\square$

The second lemma is an analogue of a result already used in previous lectures (Young-Loeve estimate, estimates on iterated integrals).

Lemma: Let $\Gamma: \{ 0 \le s \le t \le T \} \to \mathbb{R}^n$. Let us assume that:

• There exists a control $\tilde{\omega}$ such that
$\lim_{r \to 0} \sup_{\{(s,t):\, \tilde{\omega}(s,t) \le r\} } \frac{\| \Gamma_{s,t} \|}{r}=0;$
• There exist a control $\omega$ and constants $\theta > 1, \xi > 0, K \ge 0, \alpha > 0$ such that for $0 \le s \le t \le u\le T$,
$\| \Gamma_{s,u} \| \le \left( \| \Gamma_{s,t} \|+ \| \Gamma_{t,u} \| +\xi \omega(s,u)^\theta\right)\exp( K \omega(s,t)^\alpha).$

Then, for all $0 \le s < t \le T$,
$\| \Gamma_{s,t} \| \le \frac{\xi}{1-2^{1-\theta}} \omega(s,t)^\theta \exp\left( \frac{2K}{1-2^{-\alpha}} \omega(s,t)^\alpha\right).$

Proof:
For $\varepsilon > 0$, consider the control
$\omega_\varepsilon (s,t)= \omega(s,t) +\varepsilon \tilde{\omega}(s,t).$
Define now
$\Psi(r)= \sup_{s,u, \omega_\varepsilon (s,u)\le r} \| \Gamma_{s,u}\|.$
If $(s,u)$ is such that $\omega_\varepsilon (s,u) \le r$, we can find a $t$ such that $\omega_\varepsilon(s,t) \le \frac{1}{2} \omega_\varepsilon(s,u)$ and $\omega_\varepsilon(t,u) \le \frac{1}{2} \omega_\varepsilon(s,u)$. Indeed, the continuity of $\omega_\varepsilon$ forces the existence of a $t$ such that $\omega_\varepsilon(s,t)=\omega_\varepsilon(t,u)$, and the superadditivity of the control gives $\omega_\varepsilon(s,t)+\omega_\varepsilon(t,u) \le \omega_\varepsilon(s,u)$. We obtain therefore
$\| \Gamma_{s,u}\|\le \left( 2 \Psi(r/2) + \xi r^\theta \right) \exp (K r^\alpha) ,$
which implies, by taking the supremum over all such pairs $(s,u)$,
$\Psi(r)\le \left( 2 \Psi(r/2) + \xi r^\theta \right) \exp (K r^\alpha).$
We have $\lim_{r \to 0} \frac{\Psi (r)}{r} =0$ thanks to the first assumption, and an iteration then gives
$\Psi (r) \le \frac{\xi}{1-2^{1-\theta}}r^\theta \exp\left( \frac{2K}{1-2^{-\alpha}} r^\alpha\right).$
We deduce
$\| \Gamma_{s,t} \| \le \frac{\xi}{1-2^{1-\theta}} \omega_\varepsilon (s,t)^\theta \exp\left( \frac{2K}{1-2^{-\alpha}} \omega_\varepsilon (s,t)^\alpha\right),$
and the result follows by letting $\varepsilon \to 0$. $\square$
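
The iteration in the proof can be checked numerically in the simplified case $K=0$ (my own illustration, under that simplifying assumption): unrolling $\Psi(r)\le 2\Psi(r/2)+\xi r^\theta$ $n$ times gives $\Psi(r) \le 2^n \Psi(r/2^n) + \xi r^\theta \sum_{k<n} (2^{1-\theta})^k$; since $\Psi(r)=o(r)$, the first term vanishes as $n \to \infty$ and the geometric series yields the constant $\frac{\xi}{1-2^{1-\theta}}$.

```python
# Case K = 0 of the iteration: the bound obtained after n unrollings
# of Psi(r) <= 2 Psi(r/2) + xi * r**theta is
#   2**n * Psi(r / 2**n) + xi * r**theta * sum_{k < n} (2**(1-theta))**k,
# and since Psi(r) = o(r) the first term tends to 0, leaving the
# geometric limit xi * r**theta / (1 - 2**(1 - theta)).
theta, xi, r = 1.5, 1.0, 1.0
n = 60
partial = xi * r ** theta * sum((2 ** (1 - theta)) ** k for k in range(n))
limit = xi * r ** theta / (1 - 2 ** (1 - theta))  # here 2 + sqrt(2)
```

The same unrolling with $K > 0$ picks up the product of exponential factors $\exp(K (r/2^k)^\alpha)$, whose exponents sum to the geometric series behind the constant $\frac{2K}{1-2^{-\alpha}}$ in the statement.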

This entry was posted in Rough paths theory.