## Lecture 6. Rough paths. Fall 2017

In the previous lecture we defined the Young integral $\int y dx$ when $x \in C^{p-var} ([0,T], \mathbb{R}^d)$ and $y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d})$ with $\frac{1}{p}+\frac{1}{q} > 1$. The integral path $\int_0^t ydx$ then has bounded $p$-variation. Now, if $V: \mathbb{R}^d \to \mathbb{R}^{d \times d}$ is a Lipschitz map, then $V(x)$ only has bounded $p$-variation, so the integral $\int V(x) dx$ is only defined when $\frac{1}{p}+\frac{1}{p} > 1$, that is, for $p < 2$. With this in mind, it is apparent that Young integration should be useful to solve differential equations driven by continuous paths with bounded $p$-variation for $p < 2$. If $p \ge 2$, then the Young integral is of no help, and the rough paths theory explained later in the course is the correct framework.

The basic existence and uniqueness result is the following. Throughout this lecture, we assume that $p < 2$.

Theorem: Let $x\in C^{p-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e \times d}$ be a Lipschitz continuous map, that is, there exists a constant $K > 0$ such that for every $x,y \in \mathbb{R}^e$,
$\| V(x)-V(y) \| \le K \| x-y \|.$
For every $y_0 \in \mathbb{R}^e$, there is a unique solution to the differential equation:
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.$
Moreover $y \in C^{p-var} ([0,T], \mathbb{R}^e)$.

Proof: The proof is of course based, again, on the fixed point theorem. Let $0 < \tau \le T$ and consider the map $\Phi$ going from the space $C^{p-var} ([0,\tau], \mathbb{R}^e)$ into itself, which is defined by
$\Phi(y)_t =y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.$
By using the basic estimates on Young integrals from the previous lecture, we deduce that
$\| \Phi(y^1)-\Phi(y^2) \|_{ p-var, [0,\tau]}$
$\le C \| x \|_{p-var,[0,\tau]} ( \| V(y^1)-V(y^2) \|_{ p-var, [0,\tau]} +\| V(y^1)(0)-V(y^2)(0)\|)$
$\le CK \| x \|_{p-var,[0,\tau]}( \| y^1-y^2 \|_{ p-var, [0,\tau]}+\| y^1(0)-y^2(0)\|).$
If $\tau$ is small enough, then $CK \| x \|_{p-var,[0,\tau]} < 1$, which means that $\Phi$ is a contraction of the Banach space $C^{p-var} ([0,\tau], \mathbb{R}^e)$ endowed with the norm $\| y \|_{p-var,[0,\tau]} +\| y(0)\|$.

The fixed point of $\Phi$, let us say $y$, is the unique solution to the differential equation:
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.$
By considering then a subdivision
$\{ \tau=\tau_1 < \tau_2 <\cdots <\tau_n=T \}$
such that $C K \| x \|_{p-var,[\tau_k,\tau_{k+1}]} < 1$, we obtain a unique solution to the differential equation:
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T$ $\square$
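To make the fixed point argument concrete, here is a minimal numerical sketch of the Picard iteration $y \mapsto \Phi(y)$, with the Young integral discretized by left-point Riemann sums on the sample grid of the driver. This is an illustration added to the notes: the vector field `V`, the driver `x` and the iteration count are arbitrary choices, not part of the lecture.

```python
import numpy as np

# Hedged sketch: Picard iteration for Phi(y)_t = y0 + int_0^t V(y(s)) dx(s),
# with the integral replaced by left-point Riemann sums on the grid of x.
def picard_solve(V, x, y0, n_iter=50):
    """x: (n+1, d) samples of the driving path; V maps R^e to R^{e x d}."""
    n = len(x) - 1
    y0 = np.asarray(y0, dtype=float)
    y = np.tile(y0, (n + 1, 1))          # initial guess: the constant path y0
    for _ in range(n_iter):              # y <- Phi(y)
        incr = np.array([V(y[k]) @ (x[k + 1] - x[k]) for k in range(n)])
        y = np.vstack([y0, y0 + np.cumsum(incr, axis=0)])
    return y

# Scalar sanity check: dy = y dx with x(t) = t, so y(t) = y0 * e^t.
t = np.linspace(0.0, 1.0, 1001)
x = t.reshape(-1, 1)
V = lambda y: y.reshape(1, 1)
y = picard_solve(V, x, y0=[1.0])
print(y[-1, 0], np.exp(1.0))             # agree up to discretization error
```

On each subinterval where the driver has small enough $p$-variation the iteration is a contraction, which is exactly the mechanism of the proof above.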

As in the bounded variation case, the solution of a Young differential equation is a $C^1$ function of the initial condition.

Proposition: Let $x\in C^{p-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e \times d}$ be a $C^1$ Lipschitz continuous map. Let $\pi(t,y_0)$ be the flow of the equation
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.$
Then for every $0\le t \le T$, the map $y_0 \to \pi (t,y_0)$ is $C^1$ and the Jacobian $J_t=\frac{\partial \pi(t,y_0)}{\partial y_0}$ is the unique solution of the matrix linear equation
$J_t=Id+ \sum_{i=1}^d \int_0^t DV_i(\pi(s,y_0))J_s dx^i(s).$
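For example (a sanity check added here, not in the original notes), in the scalar case $e=d=1$ with $V(y)=y$ and $x(0)=0$, the flow is $\pi(t,y_0)=y_0 e^{x(t)}$, so that $J_t=e^{x(t)}$, which indeed solves
$J_t=1+\int_0^t J_s dx(s).$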

As we already mentioned before, solutions of Young differential equations are continuous with respect to the driving path in the $p$-variation topology.

Theorem: Let $x^n \in C^{p-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e\times d}$ be a Lipschitz and bounded continuous map such that for every $x,y \in \mathbb{R}^e$,
$\| V(x)-V(y) \| \le K \| x-y \|.$
Let $y^n$ be the solution of the differential equation:
$y^n(t)=y(0)+\int_0^t V(y^n(s)) dx^n(s), \quad 0\le t \le T.$
If $x^n$ converges to $x$ in $p$-variation, then $y^n$ converges in $p$-variation to the solution of the differential equation:
$y(t)=y(0)+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.$

Proof: Let $0\le s \le t \le T$. We have
$\| y-y^n \|_{p-var,[s,t]}$
$= \left\| \int_0^\cdot V(y(u)) dx(u) -\int_0^\cdot V(y^n(u)) dx^n(u) \right\|_{p-var,[s,t]}$
$= \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u) + \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}$
$\le \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u) \right\|_{p-var,[s,t]}+\left\| \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}$
$\le CK \| x\|_{p-var,[s,t]} \| y-y^n \|_{p-var,[s,t]}+C\| x-x^n \|_{p-var,[s,t]}(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]}).$
Thus, if $s,t$ are such that $CK \| x\|_{p-var,[s,t]} < 1$, we obtain
$\| y-y^n \|_{p-var,[s,t]} \le \frac{C(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[s,t]} } \| x-x^n \|_{p-var,[s,t]}.$
In the very same way, provided $CK \| x^n\|_{p-var,[s,t]} < 1$, we get
$\| y^n \|_{p-var,[s,t]} \le \frac{C\| V\|_{\infty, [0,T]}}{ 1-CK\| x^n\|_{p-var,[s,t]} }.$

Let us fix $0 < \varepsilon < 1$ and pick a subdivision $0= \tau_1 < \cdots < \tau_m=T$ such that for every $i$, $CK \| x\|_{p-var,[\tau_i,\tau_{i+1}]}+\varepsilon < 1$. Since $\| x^n\|_{p-var,[\tau_i,\tau_{i+1}]} \to \| x\|_{p-var,[\tau_i,\tau_{i+1}]}$, for $n \ge N_1$ with $N_1$ big enough, we have
$CK \| x^n\|_{p-var,[\tau_i,\tau_{i+1}]}+\frac{\varepsilon}{2} < 1.$
We deduce that for $n \ge N_1$,
$\| y^n \|_{p-var,[\tau_i,\tau_{i+1}]} \le \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}$
and
$\| y-y^n \|_{p-var,[\tau_i,\tau_{i+1}]}$
$\le \frac{C(K \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[\tau_i,\tau_{i+1}] }} \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}$
$\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}$
$\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[0,T]}.$
For $n \ge N_2$ with $N_2 \ge N_1$ and big enough, we have
$\| x-x^n \|_{p-var,[0,T]} \le \frac{\varepsilon^3}{m},$
which implies, since $\| y-y^n \|_{p-var,[0,T]} \le \sum_{i=1}^{m-1} \| y-y^n \|_{p-var,[\tau_i,\tau_{i+1}]}$,
$\| y-y^n \|_{p-var,[0,T]} \le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \varepsilon^3,$
and the right-hand side goes to 0 with $\varepsilon$.
$\square$

## HW3. MA3160 Fall 2017

Exercise 1. Two dice are simultaneously rolled. For each pair of events defined below, determine whether or not they are independent.

(a) A1 = {the sum is 7}, B1 = {the first die lands a 3}.

(b) A2 = {the sum is 9}, B2 = {the second die lands a 3}.

(c) A3 = {the sum is 9}, B3 = {the first die lands even}.
(d) A4 = {the sum is 9}, B4 = {the first die is less than the second}.

(e) A5 = {two dice are equal}, B5 = {the sum is 8}.
(f) A6 = {two dice are equal}, B6 = {the first die lands even}.

(g) A7 = {two dice are not equal}, B7 = {the first die is less than the second}.

Exercise 2. Are the events A1, B1 and B3 from Exercise 1 independent?

Exercise 3. Suppose you toss a fair coin repeatedly and independently. If it comes up heads, you win a dollar, and if it comes up tails, you lose a dollar. Suppose you start with $20. What is the probability you will get to $150 before you go broke?

## Lecture 5. Rough paths. Fall 2017

In this lecture we define the Young integral $\int y dx$ when $x \in C^{p-var} ([0,T], \mathbb{R}^d)$ and $y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d})$ with $\frac{1}{p}+\frac{1}{q} >1$. The cornerstone is the following Young-Loeve estimate.

Theorem: Let $x \in C^{1-var} ([0,T], \mathbb{R}^d)$ and $y \in C^{1-var} ([0,T], \mathbb{R}^{e \times d})$. Consider now $p,q \ge 1$ with $\theta=\frac{1}{p}+\frac{1}{q} > 1$. The following estimate holds: for $0 \le s \le t \le T$,
$\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s)) \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.$

Proof: For $0 \le s \le t \le T$, let us define
$\Gamma_{s,t} =\int_s^t y(u)dx(u) -y(s)(x(t)-x(s)) .$
We have for $s < t < u$,
$\Gamma_{s,u}-\Gamma_{s,t}-\Gamma_{t,u} =-y(s)(x(u)-x(s))+y(s)(x(t)-x(s))+y(t)(x(u)-x(t))= (y(s)-y(t))(x(t)-x(u)).$
As a consequence, we get
$\| \Gamma_{s,u}\|\le \| \Gamma_{s,t} \|+\| \Gamma_{t,u}\| +\| x \|_{p-var; [t,u]} \| y \|_{q-var; [s,t]}.$
Now let $\omega(s,t)=\| x \|^{1/\theta}_{p-var; [s,t]} \| y \|^{1/\theta}_{q-var; [s,t]}$. We claim that $\omega$ is a control. The continuity and the vanishing on the diagonal are obvious to check, so we just need to justify the superadditivity. Let $s < t < u$; we have from Hölder's inequality,
$\omega(s,t)+\omega(t,u)$
$=\| x \|^{1/\theta}_{p-var; [s,t]} \| y \|^{1/\theta}_{q-var; [s,t]}+\| x \|^{1/\theta}_{p-var; [t,u]} \| y \|^{1/\theta}_{q-var; [t,u]}$
$\le (\| x \|^{p}_{p-var; [s,t]} + \| x \|^{p}_{p-var; [t,u]})^{\frac{1}{p\theta}}(\| y \|^{q}_{q-var; [s,t]} + \| y \|^{q}_{q-var; [t,u]})^{\frac{1}{q\theta}}$
$\le \| x \|^{1/\theta}_{p-var; [s,u]} \| y \|^{1/\theta}_{q-var; [s,u]}=\omega(s,u).$
We have then
$\| \Gamma_{s,u}\|\le \| \Gamma_{s,t} \|+\| \Gamma_{t,u}\| +\omega(s,u)^\theta.$
For $\varepsilon > 0$, consider then the control
$\omega_\varepsilon (s,t)= \omega(s,t) +\varepsilon ( \| x \|_{1-var; [s,t]} + \| y \|_{1-var; [s,t]}).$
Define now
$\Psi(r)= \sup_{s,u, \omega_\varepsilon (s,u)\le r} \| \Gamma_{s,u}\|.$
If $s,u$ are such that $\omega_\varepsilon (s,u) \le r$, we can find a $t$ such that $\omega_\varepsilon(s,t) \le \frac{1}{2} \omega_\varepsilon(s,u)$ and $\omega_\varepsilon(t,u) \le \frac{1}{2} \omega_\varepsilon(s,u)$. Indeed, the continuity of $\omega_\varepsilon$ forces the existence of a $t$ such that $\omega_\varepsilon(s,t)=\omega_\varepsilon(t,u)$, and the superadditivity of $\omega_\varepsilon$ then bounds both terms by $\frac{1}{2}\omega_\varepsilon(s,u)$. We therefore obtain
$\| \Gamma_{s,u}\|\le 2 \Psi(r/2) + r^\theta,$
and taking the supremum over all such pairs $s,u$ yields
$\Psi(r)\le 2 \Psi(r/2) + r^\theta.$
By iterating $n$ times this inequality, we obtain
$\Psi(r)$
$\le 2^n \Psi\left(\frac{r}{2^n} \right) +\sum_{k=0}^{n-1} 2^{k(1-\theta)} r^\theta$
$\le 2^n \Psi\left(\frac{r}{2^n} \right) + \frac{1}{1-2^{1-\theta}} r^\theta.$
Moreover, since $\Gamma_{s,t}=\int_s^t (y(u)-y(s))dx(u)$, we have
$\| \Gamma_{s,t} \|$
$\le \| x \|_{1-var; [s,t]} \| y-y(s) \|_{\infty; [s,t]}$
$\le ( \| x \|_{1-var; [s,t]} + \| y \|_{1-var; [s,t]})^2$
$\le \frac{1}{\varepsilon^2} \omega_\varepsilon (s,t)^2,$
so that $\Psi(r) \le \frac{r^2}{\varepsilon^2}$ and therefore
$\lim_{n \to \infty} 2^n \Psi\left(\frac{r}{2^n} \right) =0.$
We conclude
$\Psi(r) \le \frac{1}{1-2^{1-\theta}} r^\theta$
and thus
$\| \Gamma_{s,u}\| \le \frac{1}{1-2^{1-\theta}} \omega_\varepsilon(s,u)^\theta.$
Sending $\varepsilon \to 0$ finishes the proof $\square$

It is remarkable that the Young-Loeve estimate only involves $\| x \|_{p-var; [s,t]}$ and $\| y \|_{q-var; [s,t]}$. As a consequence, we obtain the following result, whose proof is left to the reader:

Proposition: Let $x \in C^{p-var} ([0,T], \mathbb{R}^d)$ and $y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d})$ with $\theta=\frac{1}{p}+\frac{1}{q} >1$. Let us assume that there exist a sequence $x^n \in C^{1-var} ([0,T], \mathbb{R}^d)$ such that $x^n \to x$ in $C^{p-var} ([0,T], \mathbb{R}^d)$ and a sequence $y^n \in C^{1-var} ([0,T], \mathbb{R}^{e \times d})$ such that $y^n \to y$ in $C^{q-var} ([0,T], \mathbb{R}^{e \times d})$. Then for every $s < t$, $\int_s^t y^n(u)dx^n(u)$ converges to a limit that we call the Young integral of $y$ against $x$ on the interval $[s,t]$ and denote $\int_s^t y(u)dx(u)$.
The integral $\int_s^t y(u)dx(u)$ does not depend on the sequences $x^n$ and $y^n$, and the following estimate holds: for $0 \le s \le t \le T$,
$\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s)) \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.$

The closure of $C^{1-var} ([0,T], \mathbb{R}^d)$ in $C^{p-var} ([0,T], \mathbb{R}^d)$ is $C^{0, p-var} ([0,T], \mathbb{R}^d)$, and we know that $C^{p-var} ([0,T], \mathbb{R}^d) \subset C^{0, (p+\varepsilon)-var} ([0,T], \mathbb{R}^d)$. Since slightly increasing $p$ and $q$ preserves the condition $\theta=\frac{1}{p}+\frac{1}{q} >1$, it is therefore straightforward to extend the Young integral to every $x \in C^{p-var} ([0,T], \mathbb{R}^d)$ and $y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d})$ with $\theta=\frac{1}{p}+\frac{1}{q} >1$, and the Young-Loeve estimate still holds:
$\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s)) \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.$
From this estimate, we easily see that for $x \in C^{p-var} ([0,T], \mathbb{R}^d)$ and $y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d})$ with $\frac{1}{p}+\frac{1}{q} > 1$, the sequence of Riemann sums
$\sum_{k=0}^{n-1} y(t_k)( x(t_{k+1})-x(t_k))$
will converge to $\int_s^t y(u)dx(u)$ when the mesh of the subdivision of $[s,t]$ goes to 0. We record for later use the following estimate on the Young integral, which is also an easy consequence of the Young-Loeve estimate (see Theorem 6.8 in the book for further details).
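As an illustration (a sketch added to the notes, with arbitrarily chosen smooth paths), one can watch the left-point Riemann sums converge:

```python
import numpy as np

# Hedged sketch: left-point Riemann sums sum_k y(t_k)(x(t_{k+1}) - x(t_k))
# over uniform subdivisions of [0, 1]; the paths below are illustrative.
def riemann_sum(y, x, n):
    t = np.linspace(0.0, 1.0, n + 1)
    return sum(y(t[k]) * (x(t[k + 1]) - x(t[k])) for k in range(n))

x = lambda t: np.cos(2 * np.pi * t)      # finite p-variation for every p >= 1
y = lambda t: np.sin(2 * np.pi * t)
for n in [10, 100, 1000, 10000]:
    print(n, riemann_sum(y, x, n))
# exact value: int_0^1 sin(2 pi t) d(cos(2 pi t)) = -pi = -3.14159...
```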

Proposition: Let $x \in C^{p-var} ([0,T], \mathbb{R}^d)$ and $y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d})$ with $\frac{1}{p}+\frac{1}{q} > 1$. The integral path $t \to \int_0^t y(u)dx(u)$ is continuous with a finite $p$-variation and we have
$\left\|\int_0^\cdot y(u) dx(u) \right\|_{p-var, [s,t] }$
$\le C \| x \|_{p-var; [s,t]} \left( \| y \|_{q-var; [s,t]} + \| y \|_{\infty; [s,t]} \right)$
$\le 2C \| x \|_{p-var; [s,t]} \left( \| y \|_{q-var; [s,t]} + \| y(0)\| \right).$


## MA3160. Fall 2017. HW2

Exercise 1. Suppose that A and B are disjoint events for which P(A) = 0.3 and P(B) = 0.5.

1.   What is the probability that B occurs but A does not?
2.   What is the probability that neither A nor B occurs?

Exercise 2. Forty percent of the students at a certain college are members neither of an academic club nor of a Greek organization. Fifty percent are members of an academic club and thirty percent are members of a Greek organization. What is the probability that a randomly chosen student is

1.  a member of an academic club or of a Greek organization?
2.  a member of both an academic club and a Greek organization?

Exercise 3. In a seminar attended by 12 students, what is the probability that at least two of them have their birthday in the same month?

## Lecture 4. Rough paths. Fall 2017

Our next goal in this course is to define an integral that can be used to integrate paths rougher than bounded variation paths. As we are going to see, Young's integration theory allows one to define $\int y dx$ as soon as $y$ has finite $q$-variation and $x$ has finite $p$-variation with $1/p+1/q>1$. This integral is simply a limit of Riemann sums, as for the Riemann-Stieltjes integral. In this lecture we present some basic properties of the space of continuous paths with finite $p$-variation. We present these results for $\mathbb{R}^d$-valued paths, but most of the results extend without difficulty to paths valued in metric spaces (see chapter 5 in the book by Friz-Victoir).

Definition. A path $x:[s,t] \to \mathbb{R}^d$ is said to be of finite $p$-variation, $p > 0$ if the $p$-variation of $x$ on $[s,t]$, which is defined as
$\| x \|_{p-var; [s,t]} :=\left( \sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \right)^{1/p},$
is finite, where $\Delta[s,t]$ denotes the set of subdivisions $\{ s=t_0 < t_1 < \cdots < t_n=t \}$ of $[s,t]$. The space of continuous paths $x : [s,t] \to \mathbb{R}^d$ with a finite $p$-variation will be denoted by $C^{p-var} ([s,t], \mathbb{R}^d)$.
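For a path observed at finitely many sample points, the supremum over subdivisions of the sample grid can be computed exactly by dynamic programming. The following sketch is an illustration added to the notes (the function `p_variation` and the circle path are my own choices):

```python
import numpy as np

# Hedged sketch: p-variation of a discretely sampled path, taking the sup
# over all subdivisions of the sample points with O(n^2) dynamic programming.
def p_variation(x, p):
    """x: array of shape (n+1, d); returns ||x||_{p-var} over the samples."""
    n = len(x) - 1
    best = np.zeros(n + 1)   # best[j] = sup over subdivisions of {t_0, ..., t_j}
    for j in range(1, n + 1):
        incr = np.linalg.norm(x[:j] - x[j], axis=1) ** p
        best[j] = np.max(best[:j] + incr)
    return best[n] ** (1.0 / p)

t = np.linspace(0.0, 1.0, 501)
x = np.column_stack([np.cos(4 * np.pi * t), np.sin(4 * np.pi * t)])
print(p_variation(x, p=1.0))   # ~ 4*pi, the length of two turns around the circle
print(p_variation(x, p=2.0))   # smaller, consistent with the monotonicity in p below
```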

The notion of $p$-variation is only interesting when $p \ge 1$.

Proposition: Let $x:[s,t] \to \mathbb{R}^d$ be a continuous path of finite $p$-variation with $p < 1$. Then, $x$ is constant.

Proof: Let $s \le u \le t$. For any subdivision $\{ s=t_0 < t_1 < \cdots < t_n=u \} \in \Delta[s,u]$, the triangle inequality gives
$\| x(u)-x(s)\|$
$\le ( \max \| x(t_{k+1}) -x(t_k) \|^{1-p} ) \left( \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \right)$
$\le ( \max \| x(t_{k+1}) -x(t_k) \|^{1-p} ) \| x \|^p_{p-var; [s,t]}.$

Since $x$ is continuous, it is also uniformly continuous on $[s,t]$. By taking a sequence of subdivisions whose mesh tends to 0, we then deduce that
$\| x(u)-x(s)\|=0,$
so that $x$ is constant $\square$
The following proposition is immediate:

Proposition: Let $x:[s,t] \to \mathbb{R}^d$, be a continuous path. If $p \le p'$ then
$\| x \|_{p'-var; [s,t]} \le \| x \|_{p-var; [s,t]}.$
As a consequence $C^{p-var} ([s,t], \mathbb{R}^d) \subset C^{p'-var} ([s,t], \mathbb{R}^d).$
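Indeed, this is just the monotonicity of $\ell^p$ norms (a remark added here for completeness): for nonnegative numbers $a_k$ and $p \le p'$,
$\left( \sum_k a_k^{p'} \right)^{1/p'} \le \left( \sum_k a_k^{p} \right)^{1/p},$
applied to $a_k=\| x(t_{k+1}) -x(t_k) \|$ for each subdivision.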

We recall that a continuous map $\omega: \{ 0 \le s \le t \le T \} \to [0,\infty)$ that vanishes on the diagonal is called a control if for all $s \le t \le u$,
$\omega(s,t)+\omega(t,u) \le \omega (s,u).$

Proposition: Let $x \in C^{p-var} ([0,T], \mathbb{R}^d)$. Then $\omega(s,t)= \| x \|^p_{p-var; [s,t]}$ is a control such that for every $s \le t$,
$\| x(s) -x(t) \| \le \omega(s,t)^{1/p}.$

Proof: It is immediate that
$\| x(s) -x(t) \| \le \omega(s,t)^{1/p},$
so we focus on the proof that $\omega$ is a control. If $\Pi_1 \in \Delta [s,t]$ and $\Pi_2 \in \Delta [t,u]$, then $\Pi_1 \cup \Pi_2 \in \Delta [s,u]$. As a consequence, we obtain
$\sup_{ \Pi_1 \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p +\sup_{ \Pi_2 \in \Delta[t,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \le \sup_{ \Pi \in \Delta[s,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p,$
thus
$\| x \|^p_{p-var, [s,t]}+ \| x \|^p_{p-var, [t,u]} \le \| x \|^p_{p-var, [s,u]}.$
The proof of the continuity is left to the reader (see also Proposition 5.8 in the book by Friz-Victoir) $\square$

In the following sense, $\| x \|^p_{p-var; [s,t]}$ is the minimal control of a path $x$.

Proposition: Let $x \in C^{p-var} ([0,T], \mathbb{R}^d)$ and let $\omega: \{ 0 \le s \le t \le T \} \to [0,\infty)$ be a control such that for $0 \le s \le t \le T$,
$\| x(s)-x(t) \| \le C \omega (s,t)^{1/p}.$
Then
$\| x \|_{p-var; [s,t]} \le C \omega(s,t)^{1/p}.$

Proof: We have
$\| x \|_{p-var; [s,t]}$
$= \left( \sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p \right)^{1/p}$
$\le \left( \sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} C^p \omega(t_{k}, t_{k+1}) \right)^{1/p}$
$\le C \omega(s,t)^{1/p}$
$\square$
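A useful special case (a standard example added here for illustration): $\omega(s,t)=t-s$ is a control, so if $x$ is $\frac{1}{p}$-Hölder continuous, that is,
$\| x(s)-x(t) \| \le C |t-s|^{1/p},$
then $x$ has finite $p$-variation with $\| x \|_{p-var; [s,t]} \le C (t-s)^{1/p}$.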

The next result shows that the set of continuous paths with bounded $p$-variation is a Banach space.

Theorem: Let $p \ge 1$. The space $C^{p-var} ([0,T], \mathbb{R}^d)$ endowed with the norm $\| x(0) \|+ \| x \|_{p-var, [0,T]}$ is a Banach space.

Proof: The proof is identical to the case $p=1$, so we leave it to the careful reader to check the details $\square$

Again, the set of smooth paths is not dense in $C^{p-var} ([0,T], \mathbb{R}^d)$ for the $p$-variation convergence topology. The closure of the set of smooth paths in the $p$-variation norm shall be denoted by $C^{0,p-var} ([0,T], \mathbb{R}^d)$. We have the following characterization of paths in $C^{0,p-var} ([0,T], \mathbb{R}^d)$.

Proposition: Let $p \ge 1$. Then $x \in C^{0,p-var} ([0,T], \mathbb{R}^d)$ if and only if
$\lim_{\delta \to 0} \sup_{ \Pi \in \Delta[0,T], | \Pi | \le \delta } \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p=0.$

Proof: See Theorem 5.31 in the book by Friz-Victoir $\square$

The following corollary shall often be used in the sequel:

Corollary: If $1 \le p< q$, then $C^{p-var} ([0,T], \mathbb{R}^d) \subset C^{0,q-var} ([0,T], \mathbb{R}^d)$.

Proof: Let $\Pi \in \Delta[0,T]$ whose mesh is less than $\delta > 0$. We have
$\sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^q$
$\le \left( \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p\right) \max_k \| x(t_{k+1}) -x(t_k) \|^{q-p}$
$\le \| x \|^p_{p-var; [0,T]} \max_k \| x(t_{k+1}) -x(t_k) \|^{q-p}.$
Since $x$ is uniformly continuous, $\max_k \| x(t_{k+1}) -x(t_k) \| \to 0$ as $\delta \to 0$. As a consequence, we obtain
$\lim_{\delta \to 0} \sup_{ \Pi \in \Delta[0,T], | \Pi | \le \delta } \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^q=0$ $\square$


## Lecture 3. Rough paths. Fall 2017

Let $x\in C^{1-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e\times d}$ be a Lipschitz continuous map. In order to analyse the solution of the differential equation,
$y(t)=y_0+\int_0^t V(y(s)) dx(s),$
and make the geometry enter the scene, it is convenient to see $V$ as a collection of vector fields $V=(V_1, \cdots, V_d)$, where the $V_i$’s are the columns of the matrix $V$. The differential equation then of course reads
$y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s).$

Generally speaking, a vector field $V$ on $\mathbb{R}^{e}$ is a map
$\begin{array}{llll} V: & \mathbb{R}^{e}& \rightarrow & \mathbb{R}^{e} \\ & x & \rightarrow & (v_{1}(x),...,v_{e}(x)). \end{array}$
A vector field $V$ can be seen as a differential operator acting on differentiable functions $f: \mathbb{R}^{e} \rightarrow \mathbb{R}$ as follows:
$Vf(x)=\langle V(x), \nabla f (x) \rangle= \sum_{i=1}^e v_i (x) \frac{\partial f}{\partial x_i}.$
We note that $V$ is a derivation, that is for $f,g \in \mathcal{C}^{1} (\mathbb{R}^e , \mathbb{R} )$,
$V(fg)=(Vf)g +f (Vg).$
For this reason we often use the differential notation for vector fields and write:
$V=\sum_{i=1}^e v_i(x) \frac{\partial }{\partial x_i}.$
Using this action of vector fields on functions, the change of variable formula for solutions of differential equations takes a particularly concise form:

Proposition: Let $y$ be a solution of a differential equation of the form
$y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s),$
then for any $C^1$ function $f: \mathbb{R}^{e} \rightarrow \mathbb{R}$,
$f(y(t))=f(y_0)+\sum_{i=1}^d \int_0^t V_i f (y(s)) dx^i(s).$

Let $V$ be a Lipschitz vector field on $\mathbb{R}^e$. For any $y_0 \in \mathbb{R}^e$, the differential equation
$y(t)=y_0+\int_0^t V(y(s)) ds$
has a unique solution $y: \mathbb{R} \to \mathbb{R}^e$. By time homogeneity of the equation, the flow of this equation satisfies
$\pi ( t_1 , \pi( t_2 ,y_0 ) )=\pi (t_1 +t_2,y_0),$
and therefore $\{ \pi( t, \cdot), t \in \mathbb{R}\}$ is a one parameter group of diffeomorphisms $\mathbb{R}^e \to \mathbb{R}^e$. This group is generated by $V$ in the sense that for every $y_0 \in \mathbb{R}^e$,
$\lim_{t\to 0} \frac{\pi(t,y_0) -y_0}{t}=V(y_0).$
For these reasons, we write $\pi(t,y_0)=e^{tV}(y_0)$. Let us now assume that $V$ is a $C^1$ Lipschitz vector field on $\mathbb{R}^e$. If $\phi :\mathbb{R}^e \to \mathbb{R}^e$ is a diffeomorphism, the pull-back $\phi^{\ast}V$ of the vector field $V$ by the map $\phi$ is the vector field defined by the chain rule,
$\phi^{\ast}V (x)=(d \phi^{-1} )_{\phi (x) } \left( V (\phi(x)) \right)$. In particular, if $V'$ is another $C^1$ Lipschitz vector field on $\mathbb{R}^e$, then for every $t \in \mathbb{R}$, we have a vector field $(e^{tV})^{\ast} V'$. The Lie bracket $[V,V']$ between $V$ and $V'$ is then defined as
$[V,V']=\left( \frac{d}{dt} \right)_{t=0} (e^{tV})^{\ast}V'.$
One computes that
$[ V, V' ](x)=\sum_{i=1}^e \left( \sum_{j=1}^e v_j (x) \frac{\partial v'_i}{\partial x_j}(x)- v'_j (x) \frac{\partial v_i}{\partial x_j}(x)\right)\frac{\partial}{\partial x_i}.$
Observe that the Lie bracket obviously satisfies $[V,V']=-[V',V]$ and the so-called Jacobi identity:
$[V,[V',V'']]+[V',[V'',V]]+[V'',[V,V']]=0.$
What the Lie bracket $[V,V']$ really quantifies is the lack of commutativity of the respective flows generated by $V$ and $V'$.
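The coordinate formula above is easy to implement symbolically. Here is a minimal sketch added to the notes (using sympy; the two fields are illustrative): it computes the bracket of $V=\frac{\partial}{\partial x_1}$ and $V'=x_1 \frac{\partial}{\partial x_2}$, which do not commute.

```python
import sympy as sp

# Hedged sketch: [V, V']_i = sum_j (v_j dv'_i/dx_j - v'_j dv_i/dx_j),
# computed symbolically for two illustrative vector fields on R^2.
x1, x2 = sp.symbols('x1 x2')
coords = [x1, x2]

def lie_bracket(v, w, coords):
    return [sp.simplify(sum(v[j] * sp.diff(w[i], coords[j])
                            - w[j] * sp.diff(v[i], coords[j])
                            for j in range(len(coords))))
            for i in range(len(coords))]

V = [sp.Integer(1), sp.Integer(0)]   # V  = d/dx1
W = [sp.Integer(0), x1]              # V' = x1 d/dx2
print(lie_bracket(V, W, coords))     # [0, 1], i.e. [V, V'] = d/dx2
```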

Lemma: Let $V,V'$ be two $C^1$ Lipschitz vector fields on $\mathbb{R}^e$. Then, $[V,V']=0$ if and only if for every $s,t \in \mathbb{R}$,
$e^{sV} e^{t V'}=e^{sV+tV'}=e^{t V'} e^{sV}.$

Proof: This is a classical result in differential geometry, so we only give one part of the proof. From the very definition of the Lie bracket and the multiplicativity of the flow, we see that $[V,V']=0$ if and only if for every $s \in \mathbb{R}$, $(e^{sV})^{\ast}V'=V'$. Now, suppose that $[V,V']=0$. Let $y$ be the solution of the equation
$y(t)=y_0+\int_0^t V'(y(s)) ds.$
Since $(e^{sV})^{\ast}V'=V'$, we obtain that $e^{sV} (y(t))$ is also a solution of the equation. By uniqueness of solutions, we obtain that
$e^{sV}(y(t))=e^{tV'} ( e^{sV}(y_0)).$
As a conclusion,
$e^{sV} e^{t V'}=e^{t V'} e^{sV}$
$\square$

If we consider a differential equation
$y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s),$
as we will see throughout this class, the Lie brackets $[V_i,V_j]$ play an important role in understanding the geometry of the set of solutions. The easiest result in that direction is the following:

Proposition: Let $x\in C^{1-var} ([0,T], \mathbb{R}^d)$ and let $V_1,\cdots, V_d$ be $C^1$ Lipschitz vector fields on $\mathbb{R}^e$. Assume that for every $1 \le i,j \le d$, $[V_i,V_j]=0$, then the solution of the differential equation
$y(t)=y_0+\sum_{i=1}^d \int_0^t V_i (y(s)) dx^i(s), \quad 0 \le t \le T,$
can be represented as
$y(t)= \exp \left( \sum_{i=1}^d x^i(t) V_i \right) (y_0).$

Proof: Let
$F(x_1,\cdots,x_d)= \exp \left( \sum_{i=1}^d x_i V_i \right) (y_0).$
Since the flows generated by the $V_i$’s are commuting, we get that
$\frac{\partial F}{\partial x_i}(x)=V_i (F(x)).$
The change of variable formula for bounded variation paths then implies that $F(x^1(t),\cdots,x^d(t))$ is a solution, and we conclude by uniqueness $\square$
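As a simple sanity check (an example added here, not from the notes), suppose the $V_i$ are constant vector fields and $x(0)=0$. Constant fields commute, $e^{tV}(y_0)=y_0+tV$, and the proposition gives
$y(t)= \exp \left( \sum_{i=1}^d x^i(t) V_i \right) (y_0)=y_0+\sum_{i=1}^d x^i(t) V_i,$
which is indeed the solution of the equation in this case.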

## Rough paths theory Fall 2017. Lecture 2

In this lecture we establish the basic existence and uniqueness results concerning differential equations driven by bounded variation paths and prove the continuity in the 1-variation topology of the solution of an equation with respect to the driving signal.

Theorem: Let $x\in C^{1-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e\times d}$ be a Lipschitz continuous map, that is, there exists a constant $K > 0$ such that for every $x,y \in \mathbb{R}^e$,
$\| V(x)-V(y) \| \le K \| x-y \|.$
For every $y_0 \in \mathbb{R}^e$, there is a unique solution to the differential equation:
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.$
Moreover $y \in C^{1-var} ([0,T], \mathbb{R}^e)$.

Proof: The proof is a classical application of the fixed point theorem. Let $0 < \tau \le T$ and consider the map $\Phi$ going from the space of continuous functions $[0,\tau] \to \mathbb{R}^e$ into itself, which is defined by
$\Phi(y)_t =y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.$
By using estimates on Riemann-Stieltjes integrals, we deduce that
$\| \Phi(y^1)-\Phi(y^2) \|_{ \infty, [0,\tau]}$
$\le \| V(y^1)-V(y^2) \|_{ \infty, [0,\tau]} \| x \|_{1-var,[0,\tau]}$
$\le K \| y^1-y^2 \|_{ \infty, [0,\tau]} \| x \|_{1-var,[0,\tau]}.$
If $\tau$ is small enough, then $K \| x \|_{1-var,[0,\tau]} < 1$, which means that $\Phi$ is a contraction that admits a unique fixed point $y$. This $y$ is the unique solution to the differential equation:
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.$
By considering then a subdivision
$\{ \tau=\tau_1 < \tau_2 < \cdots < \tau_n=T \}$
such that $K \| x \|_{1-var,[\tau_k,\tau_{k+1}]} < 1$, we obtain a unique solution to the differential equation:
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T$
$\square$

The solution of a differential equation is a continuous function of the initial condition, more precisely we have the following estimate:

Proposition: Let $x\in C^{1-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e\times d}$ be a Lipschitz continuous map such that for every $x,y \in \mathbb{R}^e$,
$\| V(x)-V(y) \| \le K \| x-y \|.$
If $y^1$ and $y^2$ are the solutions of the differential equations:
$y^1(t)=y^1(0)+\int_0^t V(y^1(s)) dx(s), \quad 0\le t \le T,$
and
$y^2(t)=y^2(0)+\int_0^t V(y^2(s)) dx(s), \quad 0\le t \le T,$
then the following estimate holds:
$\| y^1 -y^2 \|_{\infty,[0,T]} \le \| y^1(0) -y^2(0) \| \exp \left( K \| x \|_{1-var,[0,T]} \right).$

Proof: We have
$\| y^1-y^2 \|_{\infty,[0,t]} \le \| y^1(0) -y^2(0) \| +K \int_0^t \| y^1-y^2 \|_{\infty,[0,s]} \| dx(s) \|,$
and we conclude by Gronwall’s lemma $\square$
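For completeness, the version of Gronwall's lemma used here (as I read the argument, stated with the Stieltjes measure $\| dx(s) \|$) is: if $u$ is nonnegative, bounded and satisfies
$u(t) \le a + K\int_0^t u(s) \| dx(s) \|, \quad 0\le t \le T,$
then
$u(t) \le a \exp \left( K \| x \|_{1-var,[0,t]} \right),$
applied above with $u(t)=\| y^1-y^2 \|_{\infty,[0,t]}$ and $a=\| y^1(0)-y^2(0) \|$.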

This continuity can be understood in terms of flows. Let $x\in C^{1-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e\times d}$ be a Lipschitz map. Denote by $\pi (t,y_0)$, $0 \le t \le T$, $y_0 \in \mathbb{R}^e$, the unique solution of the equation
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.$
The previous proposition shows that for a fixed $0 \le t \le T$, the map $y_0 \to \pi (t,y_0)$ is Lipschitz continuous. The set $\{ \pi (t, \cdot), 0 \le t \le T \}$ is called the flow of the equation.
Under more regularity assumptions on $V$, the map $y_0 \to \pi (t,y_0)$ is even $C^1$ and the Jacobian map solves a linear equation.

Proposition: Let $x\in C^{1-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e\times d}$ be a $C^1$ Lipschitz continuous map. Let $\pi(t,y_0)$ be the flow of the equation
$y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.$
Then for every $0\le t \le T$, the map $y_0 \to \pi (t,y_0)$ is $C^1$ and the Jacobian $J_t=\frac{\partial \pi(t,y_0)}{\partial y_0}$ is the unique solution of the matrix linear equation
$J_t=Id+ \sum_{i=1}^d\int_0^t DV_i(\pi(s,y_0))J_s dx^i(s)$,
where the $V_i$’s denote the columns of the matrix $V$.

We finally turn to the important estimate showing that solutions of differential equations are continuous with respect to the driving path in the 1-variation topology.

Theorem: Let $x^1,x^2 \in C^{1-var} ([0,T], \mathbb{R}^d)$ and let $V : \mathbb{R}^e \to \mathbb{R}^{e\times d}$ be a Lipschitz and bounded continuous map such that for every $x,y \in \mathbb{R}^e$,
$\| V(x)-V(y) \| \le K \| x-y \|.$
If $y^1$ and $y^2$ are the solutions of the differential equations:
$y^1(t)=y(0)+\int_0^t V(y^1(s)) dx^1(s), \quad 0\le t \le T,$
and
$y^2(t)=y(0)+\int_0^t V(y^2(s)) dx^2(s), \quad 0\le t \le T,$
then the following estimate holds:
$\| y^1 -y^2 \|_{1-var,[0,T]} \le \| V \|_\infty \left( 1+ K\| x^1 \|_{1-var,[0,T]} \exp \left( K \| x^1 \|_{1-var,[0,T]} \right) \right) \| x^1 -x^2 \|_{1-var,[0,T]} .$

Proof: We first give an estimate in the supremum topology. It is easily seen that the assumptions imply
$\| y^1 -y^2 \|_{\infty ,[0,t]} \le K \int_0^t \| y^1 -y^2 \|_{\infty ,[0,s]} \| dx^1(s) \| +\| V \|_\infty \| x^1 -x^2 \|_{1-var,[0,T]}.$
From Gronwall’s lemma, we deduce that
$\| y^1 -y^2 \|_{\infty ,[0,T]} \le \| V \|_\infty \exp \left( K \| x^1 \|_{1-var,[0,T]} \right) \| x^1 -x^2 \|_{1-var,[0,T]} .$
Now, we also have for any $0\le s \le t \le T$,
$\| y^1(t)-y^2(t)-(y^1(s)-y^2(s))\|\le K \| y^1 -y^2 \|_{\infty ,[0,T]} \| x^1 \|_{1-var,[s,t]} +\| V\|_\infty \| x^1 -x^2 \|_{1-var,[s,t]} .$
This implies
$\| y^1 -y^2 \|_{1-var,[0,T]} \le K \| y^1 -y^2 \|_{\infty ,[0,T]} \| x^1 \|_{1-var,[0,T]} +\| V\|_\infty \| x^1 -x^2 \|_{1-var,[0,T]},$
which yields the conclusion $\square$