MA3160. Fall 2017. Midterm 1 sample

Practice midterm 1

 

We will go over the corrections in class on 09/28.

Posted in Uncategorized | Leave a comment

HW4. MA3160 Fall 2017

Exercise 1. Two dice are rolled. Consider the events A = {sum of two dice equals 3}, B = {sum of two dice equals 7}, and C = {at least one of the dice shows a 1}.

(a) What is P(A | C)?

(b) What is P(B | C)?

(c) Are A and C independent? What about B and C?

 

Exercise 2. Suppose you roll two standard, fair, 6-sided dice. What is the probability that the sum is at least 9 given that you rolled at least one 6?
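Both dice exercises can be sanity-checked by brute-force enumeration of the 36 equally likely outcomes. A small Python sketch (the helper names are mine, purely for illustration):

```python
from fractions import Fraction
from itertools import product

# The 36 equally likely outcomes of rolling two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    """P(event), where event is a predicate on an outcome (die1, die2)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

def cond_prob(event, given):
    """P(event | given), computed by restricting the sample space."""
    restricted = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in restricted if event(o)), len(restricted))

A = lambda o: sum(o) == 3   # sum of the two dice equals 3
B = lambda o: sum(o) == 7   # sum of the two dice equals 7
C = lambda o: 1 in o        # at least one die shows a 1

p_a = cond_prob(A, C)                                       # P(A | C)
p_b = cond_prob(B, C)                                       # P(B | C)
p_ex2 = cond_prob(lambda o: sum(o) >= 9, lambda o: 6 in o)  # Exercise 2
```

Part (c) of Exercise 1 can then be decided by comparing prob(lambda o: A(o) and C(o)) with prob(A) * prob(C).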

Exercise 3.  Color blindness is a sex-linked condition, and 5% of men and 0.25% of women are color blind. The population of the United States is 51% female. What is the probability that a color-blind American is a man?
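Exercise 3 is a direct application of Bayes' rule combined with the law of total probability. A quick numerical sketch (variable names are mine):

```python
# Bayes' rule for Exercise 3; the numbers are those given in the statement.
p_man, p_woman = 0.49, 0.51            # the US population is 51% female
p_cb_man, p_cb_woman = 0.05, 0.0025    # P(color blind | man), P(color blind | woman)

# Law of total probability, then Bayes' rule.
p_cb = p_man * p_cb_man + p_woman * p_cb_woman
p_man_given_cb = p_man * p_cb_man / p_cb
```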

 

Posted in MA3160 | Leave a comment

Lecture 7. Rough paths. Fall 2017

In the previous lecture we introduced the signature of a bounded variation path x as the formal series
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]} dx^{\otimes k}.
If now x \in C^{p-var}([0,T],\mathbb{R}^d), p \ge 1, the iterated integrals \int_{\Delta^k [s,t]} dx^{\otimes k} can only be defined as Young integrals when p < 2. In this lecture, we are going to derive estimates that allow us to define the signature of some (not all) paths of finite p-variation when p \ge 2. These estimates are due to Terry Lyons in his seminal paper, and this is where rough paths theory really begins.

For P \in \mathbb{R} [[X_1,...,X_d]] that can be written as
P=P_0+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k}a_{i_1,...,i_k} X_{i_1}...X_{i_k},
we define
\| P \| =|P_0|+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k}|a_{i_1,...,i_k}| \in [0,\infty].
It is quite easy to check that for P,Q \in \mathbb{R} [[X_1,...,X_d]]
\| PQ \| \le \| P \| \| Q\|.
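This submultiplicativity is easy to check numerically on truncated series. A Python sketch, where the representation of a truncated element of \mathbb{R}[[X_1,...,X_d]] as a dictionary keyed by words is my own illustration:

```python
import random

# A truncated element of R[[X_1,...,X_d]] represented as a dictionary mapping
# words (tuples of letters in {1,...,d}) to coefficients; () is the constant term.
def multiply(P, Q):
    """Noncommutative product: words multiply by concatenation."""
    R = {}
    for u, a in P.items():
        for v, b in Q.items():
            R[u + v] = R.get(u + v, 0.0) + a * b
    return R

def norm(P):
    """|P_0| plus the sum of the absolute values of all other coefficients."""
    return sum(abs(a) for a in P.values())

# Two random truncated series with d = 3 letters and words of length <= 2.
random.seed(0)
d = 3
words = [()] + [(i,) for i in range(1, d + 1)] \
             + [(i, j) for i in range(1, d + 1) for j in range(1, d + 1)]
P = {w: random.uniform(-1, 1) for w in words}
Q = {w: random.uniform(-1, 1) for w in words}

# Submultiplicativity ||PQ|| <= ||P|| ||Q||: the triangle inequality applied
# coefficient by coefficient.
assert norm(multiply(P, Q)) <= norm(P) * norm(Q) + 1e-12
```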
Let x \in C^{1-var}([0,T],\mathbb{R}^d). For p \ge 1, we denote
\left\| \int dx^{\otimes k}\right\|_{p-var, [s,t]}=\left( \sup_{ \Pi \in \mathcal{D}[s,t]} \sum_{i=0}^{n-1} \left\| \int_{\Delta^k [t_i,t_{i+1}]} dx^{\otimes k} \right\|^p \right)^{1/p},
where \mathcal{D}[s,t] is the set of subdivisions \Pi=\{s=t_0 < t_1 < \cdots < t_n=t\} of the interval [s,t]. Observe that for k \ge 2, in general
\int_{\Delta^k [s,t]} dx^{\otimes k}+ \int_{\Delta^k [t,u]} dx^{\otimes k} \neq \int_{\Delta^k [s,u]} dx^{\otimes k}.
Actually, from Chen's relations we have
\int_{\Delta^n [s,u]} dx^{\otimes n}= \int_{\Delta^n [s,t]} dx^{\otimes n}+ \int_{\Delta^n [t,u]} dx^{\otimes n} +\sum_{k=1}^{n-1} \int_{\Delta^k [s,t]} dx^{\otimes k }\int_{\Delta^{n-k} [t,u]} dx^{\otimes (n-k) }.
It follows that \left\| \int dx^{\otimes k}\right\|_{p-var, [s,t]} need not be the p-variation of t \mapsto \int_{\Delta^k [s,t]} dx^{\otimes k}.
The first major result of rough paths theory is the following estimate:

Proposition: Let p \ge 1. There exists a constant C \ge 0, depending only on p, such that for every x \in C^{1-var}([0,T],\mathbb{R}^d) and k \ge 0,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^k, \quad 0 \le s \le t \le T.

By \left( \frac{k}{p}\right)!, we of course mean \Gamma \left( \frac{k}{p}+1\right). Some remarks are in order before we prove the result. If p=1, then the estimate becomes
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{k!} \| x \|_{1-var, [s,t]}^k,
which is immediately checked because
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\|
\le \sum_{I \in \{1,...,d\}^k} \left\| \int_{\Delta^{k}[s,t]}dx^{I} \right\|
\le \sum_{I \in \{1,...,d\}^k} \int_{s \le t_1 \le t_2 \le \cdots \le t_k \le t} \| dx^{i_1}(t_1) \| \cdots \| dx^{i_k}(t_k)\|
\le \frac{1}{k!} \left( \sum_{j=1}^ d \| x^j \|_{1-var, [s,t]} \right)^k.

We can also observe that for k \le p, the estimate is easy to obtain because
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \left\| \int dx^{\otimes k}\right\|_{\frac{p}{k}-var, [s,t]}.
So the real work is to prove the estimate when k > p. The proof is split into two lemmas. The first one is a binomial inequality, which is actually quite difficult to prove:

Lemma: For x,y >0, n \in \mathbb{N}, n \ge 0, and p \ge 1,
\sum_{j=0}^n \frac{x^{j/p}}{\left( \frac{j}{p}\right)!} \frac{y^{(n-j)/p}}{\left( \frac{n-j}{p}\right)!} \le p \frac{(x+y)^{n/p}}{ {\left( \frac{n}{p}\right)!}}.

Proof: See Lemma 2.2.2 in the article by Lyons, or the proof giving the sharp constant \square
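Although the proof is delicate, the inequality itself is easy to test numerically, which makes a reasonable sanity check. A sketch, using \Gamma(a+1) for (a)! as in the convention above:

```python
from math import gamma

def fact(a):
    """(a)! in the lecture's convention, i.e. Gamma(a + 1)."""
    return gamma(a + 1.0)

def lhs(x, y, n, p):
    return sum(x ** (j / p) / fact(j / p) * y ** ((n - j) / p) / fact((n - j) / p)
               for j in range(n + 1))

def rhs(x, y, n, p):
    return p * (x + y) ** (n / p) / fact(n / p)

# Spot-check the binomial inequality on a small grid of parameters; for p = 1
# it reduces to the usual binomial formula, with equality.
for p in (1.0, 1.5, 2.0, 3.7):
    for n in range(8):
        for x, y in ((1.0, 1.0), (0.3, 2.1), (5.0, 0.01)):
            assert lhs(x, y, n, p) <= rhs(x, y, n, p) + 1e-9
```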

The second one is a lemma that was already essentially proved in the lecture on Young's integral, but was not explicitly stated there.

Lemma: Let \Gamma: \{ 0 \le s \le t \le T \} \to \mathbb{R}^N. Let us assume that:

  • There exists a control \tilde{\omega} such that
    \lim_{r \to 0} \sup_{(s,t), \tilde{\omega}(s,t) \le r } \frac{\| \Gamma_{s,t} \|}{r}=0;
  • There exists a control \omega and \theta >1, \xi >0 such that for 0 \le s \le t \le u \le T,
    \| \Gamma_{s,u} \| \le \| \Gamma_{s,t} \|+ \| \Gamma_{t,u} \| +\xi \omega(s,u)^\theta.

Then, for all 0 \le s \le t \le T,
\| \Gamma_{s,t} \| \le \frac{\xi}{1-2^{1-\theta}} \omega(s,t)^\theta.

Proof:
See the proof of the Young-Loeve estimate or Lemma 6.2 in the book by Friz-Victoir \square

We can now turn to the proof of the main result.

Proof:
Let us denote
\omega(s,t)=\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p.
We claim that \omega is a control. Indeed, for 0 \le s \le t \le u \le T, we have from Hölder's inequality
\omega(s,t)+\omega(t,u)
= \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p+\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [t,u]} \right)^p
\le \left( \sum_{j=1}^{[p]}\left( \left\| \int dx^{\otimes j}\right\|^{p/j}_{\frac{p}{j}-var, [s,t]} + \left\| \int dx^{\otimes j}\right\|^{p/j}_{\frac{p}{j}-var, [t,u]}\right)^{1/p} \right)^p
\le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,u]} \right)^p =\omega(s,u).

It is clear that for some sufficiently small constant \beta > 0, we have for k \le p,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{1}{\beta \left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}.

Let us now consider
\Gamma_{s,t}= \int_{\Delta^{[p]+1} [s,t]} dx^{\otimes ([p]+1)}.
From Chen's relations, for 0 \le s \le t \le u \le T,
\Gamma_{s,u}= \Gamma_{s,t}+ \Gamma_{t,u}+\sum_{j=1}^{[p]} \int_{\Delta^j [s,t]} dx^{\otimes j }\int_{\Delta^{[p]+1-j} [t,u]} dx^{\otimes ([p]+1-j) }.
Therefore,
\| \Gamma_{s,u}\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\sum_{j=1}^{[p]} \left\| \int_{\Delta^j [s,t]} dx^{\otimes j }\right\| \left\| \int_{\Delta^{[p]+1-j} [t,u]} dx^{\otimes ([p]+1-j) }\right\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} \sum_{j=1}^{[p]} \frac{1}{ \left( \frac{j}{p}\right)!} \omega(s,t)^{j/p}\frac{1}{ \left( \frac{[p]+1-j}{p}\right)!} \omega(t,u)^{([p]+1-j)/p}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} \sum_{j=0}^{[p]+1} \frac{1}{ \left( \frac{j}{p}\right)!} \omega(s,t)^{j/p}\frac{1}{ \left( \frac{[p]+1-j}{p}\right)!} \omega(t,u)^{([p]+1-j)/p}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} p \frac{(\omega(s,t)+\omega(t,u))^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2} p \frac{\omega(s,u)^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}}.
On the other hand, for some constant A > 0 we have
\| \Gamma_{s,t} \| \le A \| x \|_{1-var,[s,t]}^{[p]+1},
so the first assumption of the previous lemma is satisfied with \tilde{\omega}(s,t)= \| x \|_{1-var,[s,t]}.
We deduce from the previous lemma that
\| \Gamma_{s,t} \| \le \frac{1}{\beta^2} \frac{p}{1-2^{1-\theta}} \frac{\omega(s,t)^{([p]+1)/p}}{ {\left( \frac{[p]+1}{p}\right)!}},
with \theta=\frac{[p]+1}{p}. The general case k > p is handled by induction. The details are left to the reader \square

 

Let x \in C^{1-var}([0,T],\mathbb{R}^d). Since
\omega(s,t)=\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p
is a control, the estimate
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^k, \quad 0 \le s \le t \le T.
easily implies that for k > p,
\left\| \int dx^{\otimes k} \right\|_{1-var, [s,t]} \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}.
We stress that it does not imply a bound on the 1-variation of the path t \mapsto \int_{\Delta^k [0,t]} dx^{\otimes k}. What we can get for this path are bounds in p-variation:

Proposition: Let p \ge 1. There exists a constant C \ge 0, depending only on p, such that for every x \in C^{1-var}([0,T],\mathbb{R}^d) and k \ge 0,
\left\| \int_{\Delta^k [0,\cdot]} dx^{\otimes k} \right\|_{p-var, [s,t]} \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{1/p} \omega(0,T)^{\frac{k-1}{p}}
where
\omega(s,t)= \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p, \quad 0 \le s \le t \le T.

Proof: This is an easy consequence of Chen's relations. Indeed,

\left\| \int_{\Delta^k [0,t]} dx^{\otimes k} - \int_{\Delta^k [0,s]} dx^{\otimes k} \right\|
=\left\| \sum_{j=1}^k \int_{\Delta^{k-j} [0,s]} dx^{\otimes (k-j)} \int_{\Delta^{j} [s,t]} dx^{\otimes j} \right\|
\le \sum_{j=1}^k \left\| \int_{\Delta^j [s,t]} dx^{\otimes j} \right\| \left\| \int_{\Delta^{k-j} [0,s]} dx^{\otimes (k-j)} \right\|
\le C^k \sum_{j=1}^k \frac{1}{\left( \frac{j}{p}\right)!} \omega(s,t)^{j/p} \frac{1}{\left( \frac{k-j}{p}\right)!} \omega(0,s)^{(k-j)/p}
\le C^k \omega(s,t)^{1/p} \sum_{j=1}^k \frac{1}{\left( \frac{j}{p}\right)!} \omega(0,T)^{(j-1)/p} \frac{1}{\left( \frac{k-j}{p}\right)!} \omega(0,T)^{(k-j)/p}
\le C^k \omega(s,t)^{1/p} \omega(0,T)^{(k-1)/p}\sum_{j=1}^k \frac{1}{\left( \frac{j}{p}\right)!} \frac{1}{\left( \frac{k-j}{p}\right)!},
and we conclude with the binomial inequality \square

We are now ready for a second major estimate, which is the key to defining iterated integrals of a path of bounded p-variation when p \ge 2.

Theorem: Let p \ge 1, K > 0 and x,y \in C^{1-var}([0,T],\mathbb{R}^d) such that
\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le 1,
and
\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p \le K.
Then there exists a constant C \ge 0 depending only on p and K such that for 0\le s \le t \le T and k \ge 1
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}- \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p} ,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}\right\| +\left\| \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}
where \omega is the control
\omega(s,t)= \frac{ \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^p } { \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p }
+\left( \frac{\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j} - \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} }{\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j} - \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} } \right)^p.

Proof: We prove by induction on k that for some constants C,\beta,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}- \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\beta \left( \frac{k}{p}\right)!} \omega(s,t)^{k/p},
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}\right\| +\left\| \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \frac{C^k}{\beta \left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}

For k \le p, we trivially have
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}- \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^k \omega(s,t)^{k/p}
\le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \omega(s,t)^{k/p}.
and
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k}\right\| +\left\| \int_{\Delta^k [s,t]} dy^{\otimes k} \right\| \le K^{k/p} \omega(s,t)^{k/p}.
Now let us assume that the result is true for 0 \le j \le k, k \ge [p]. Let
\Gamma_{s,t}=\int_{\Delta^{k+1} [s,t]} dx^{\otimes (k+1)}- \int_{\Delta^{k+1} [s,t]} dy^{\otimes (k+1)}.
From Chen's relations, for 0 \le s \le t \le u \le T,
\Gamma_{s,u}= \Gamma_{s,t}+ \Gamma_{t,u}
+\sum_{j=1}^{k} \int_{\Delta^j [s,t]} dx^{\otimes j }\int_{\Delta^{k+1-j} [t,u]} dx^{\otimes (k+1-j) }-\sum_{j=1}^{k} \int_{\Delta^j [s,t]} dy^{\otimes j }\int_{\Delta^{k+1-j} [t,u]} dy^{\otimes (k+1-j) }.
Therefore, from the binomial inequality
\| \Gamma_{s,u}\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\sum_{j=1}^{k} \left\| \int_{\Delta^j [s,t]} dx^{\otimes j }- \int_{\Delta^j [s,t]} dy^{\otimes j } \right\| \left\| \int_{\Delta^{k+1-j} [t,u]} dx^{\otimes (k+1-j) }\right\|
+\sum_{j=1}^{k} \left\| \int_{\Delta^{j} [s,t]} dy^{\otimes j }\right\| \left\| \int_{\Delta^{k+1-j} [t,u]} dx^{\otimes (k+1-j) }- \int_{\Delta^{k+1-j} [t,u]} dy^{\otimes (k+1-j) } \right\|
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{1}{\beta^2}\tilde{\omega}(0,T) \sum_{j=1}^{k} \frac{C^j}{\left( \frac{j}{p}\right)!} \omega(s,t)^{j/p} \frac{C^{k+1-j}}{\left( \frac{k+1-j}{p}\right)!} \omega(t,u)^{(k+1-j)/p}
+\frac{1}{\beta^2}\tilde{\omega}(0,T) \sum_{j=1}^{k} \frac{C^j}{\left( \frac{j}{p}\right)!} \omega(s,t)^{j/p} \frac{C^{k+1-j}}{\left( \frac{k+1-j}{p}\right)!} \omega(t,u)^{(k+1-j)/p}
\le \| \Gamma_{s,t} \| + \| \Gamma_{t,u} \| +\frac{2p}{\beta^2} \tilde{\omega}(0,T) C^{k+1} \frac{ \omega(s,u)^{(k+1)/p}}{\left( \frac{k+1}{p}\right)! }
where
\tilde{\omega}(0,T)=\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .
We deduce
\| \Gamma_{s,t} \| \le \frac{2p}{\beta^2(1-2^{1-\theta})} \tilde{\omega}(0,T) C^{k+1} \frac{ \omega(s,t)^{(k+1)/p}}{\left( \frac{k+1}{p}\right)! }
with \theta= \frac{k+1}{p}. A suitable choice of \beta finishes the induction argument \square

Posted in Rough paths theory | 3 Comments

Lecture 6. Rough paths. Fall 2017

In this lecture we introduce the central notion of the signature of a path x \in C^{1-var}([0,T],\mathbb{R}^d), which is a convenient way to encode all the algebraic information on the path x that is relevant to the study of differential equations driven by x. The motivation for the definition of the signature comes from formal manipulations of Taylor series.

Let us consider a differential equation
y(t)=y(s)+\sum_{i=1}^d \int_s^t V_i (y(u) )dx^i(u),
where the V_i's are smooth vector fields on \mathbb{R}^n.

If f: \mathbb{R}^{n} \rightarrow \mathbb{R} is a C^{\infty} function, by the change of variable formula,
f(y(t))=f(y(s))+\sum^{d}_{i=1}\int^{t}_{s}V_{i}f(y(u))dx^{i}(u).

Now, a new application of the change of variable formula to V_{i}f(y(s)) leads to
f(y(t))=f(y(s))+\sum^{d}_{i=1}V_{i}f(y(s))\int^{t}_{s}dx^{i}(u)+\sum^{d}_{i,j=1}\int^{t}_{s}\int^{u}_{s} V_{j}V_{i}f(y(v))dx^{j}(v)dx^{i}(u).

We can continue this procedure to get after N steps
f(y(t))=f(y(s))+\sum^{N}_{k=1}\sum_{I=(i_1,\cdots,i_k)}(V_{i_1}\cdots V_{i_k}f)(y(s))\int_{\Delta^{k}[s,t]}dx^{I}+R_{N}(s,t)
for some remainder term R_{N}(s,t), where we used the notations:

  • \Delta^{k}[s,t]=\{(t_1,\cdots,t_k)\in[s,t]^{k}, s\leq t_1\leq t_2\cdots\leq t_k\leq t\}
  • If I=\left(i_1,\cdots,i_k\right)\in\{1,\cdots,d\}^k is a word with length k, \int_{\Delta^{k}[s,t]}dx^{I}=\displaystyle \int_{s \le t_1 \le t_2 \le \cdots \le t_k \le t}dx^{i_1}(t_1)\cdots dx^{i_k}(t_k).

If we let N\rightarrow +\infty, assuming R_{N}(s,t) \to 0 (which is, by the way, true for t-s small enough if the V_i's are analytic), we are led to the formal expansion formula:
f(y(t))=f(y(s))+\sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)}(V_{i_1}\cdots V_{i_k}f)(y(s))\int_{\Delta^{k}[s,t]}dx^{I}.
This shows, at least at the formal level, that all the information given by x on y is contained in the iterated integrals \int_{\Delta^{k}[s,t]}dx^{I}.

Let \mathbb{R} [[X_1,...,X_d]] be the noncommutative algebra over \mathbb{R} of formal series in d indeterminates, that is, the set of series
Y=y_0+\sum_{k = 1}^{+\infty} \sum_{I \in \{1,...,d\}^k} a_{i_1,...,i_k} X_{i_1}...X_{i_k}.

Definition: Let x \in C^{1-var}([0,T],\mathbb{R}^d). The signature of x (or Chen’s series) is the formal series:
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \sum_{I \in \{1,...,d\}^k} \left( \int_{\Delta^{k}[s,t]}dx^{I} \right) X_{i_1} \cdots X_{i_k}, \quad 0 \le s \le t \le T.

As we are going to see in the next few lectures, the signature is a fascinating algebraic object. At the source of the numerous properties of the signature lie the following so-called Chen's relations:

Lemma: Let x \in C^{1-var}([0,T],\mathbb{R}^d). For any word (i_1,...,i_n) \in \{ 1, ... , d \}^n and any 0 \le s \le t \le u \le T ,
\int_{\Delta^n [s,u]} dx^{(i_1,...,i_n)}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]} dx^{(i_1,...,i_k)}\int_{\Delta^{n-k} [t,u]} dx^{(i_{k+1},...,i_n)},
where we used the convention that if I is a word with length 0, then \int_{\Delta^{0} [0,t]} dx^I =1.

Proof: It follows readily by induction on n by noticing that
\int_{\Delta^n [s,u]} dx^{(i_1,...,i_n)}=\int_s^u \left( \int_{\Delta^{n-1} [s,t_n]} dx^{(i_1,...,i_{n-1})} \right) dx^{i_n}(t_n) \square

To avoid heavy notations, it will be convenient to denote
\int_{\Delta^k [s,t]} dx^{\otimes k} =\sum_{I \in \{1,...,d\}^k} \left( \int_{\Delta^{k}[s,t]}dx^{I} \right) X_{i_1} \cdots X_{i_k}.

This notation actually reflects a natural algebra isomorphism between \mathbb{R} [[X_1,...,X_d]] and \prod_{k=0}^{+\infty} (\mathbb{R}^d)^{\otimes k}, with the convention (\mathbb{R}^d)^{\otimes 0}=\mathbb{R}. With this notation, observe that the signature then writes
\mathfrak{S} (x)_{s,t} =1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]} dx^{\otimes k},
and that Chen's relations become
\int_{\Delta^n [s,u]} dx^{\otimes n}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]} dx^{\otimes k }\int_{\Delta^{n-k} [t,u]} dx^{\otimes (n-k) }.
Chen's relations imply the following flow property for the signature:

Proposition: Let x \in C^{1-var}([0,T],\mathbb{R}^d). For any 0 \le s \le t \le u \le T ,
\mathfrak{S} (x)_{s,u} =\mathfrak{S} (x)_{s,t}\mathfrak{S} (x)_{t,u}.

Proof: Indeed,
\mathfrak{S} (x)_{s,u}
=1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,u]} dx^{\otimes k}
=1 + \sum_{k=1}^{+\infty}\sum_{j=0}^{k} \int_{\Delta^j [s,t]} dx^{\otimes j }\int_{\Delta^{k-j} [t,u]} dx^{\otimes (k-j) }
=\mathfrak{S} (x)_{s,t}\mathfrak{S} (x)_{t,u}
\square
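For a concrete smooth path, Chen's relations (and hence the flow property) can be verified numerically at the first two levels by approximating the iterated integrals with Riemann sums. A sketch, where the path x(t) = (\sin t, t^2) is an arbitrary choice for illustration:

```python
import math

# A smooth driving path in R^2 and its derivative, for quadrature.
def x(t):  return (math.sin(t), t * t)
def dx(t): return (math.cos(t), 2 * t)

N = 2000  # left-point Riemann steps per integral

def level1(s, u):
    """First level: the increment x(u) - x(s)."""
    return [x(u)[i] - x(s)[i] for i in range(2)]

def level2(s, u):
    """Second level: the 2x2 matrix of iterated integrals over [s,u]."""
    S = [[0.0, 0.0], [0.0, 0.0]]
    h = (u - s) / N
    for k in range(N):
        t = s + k * h
        inc = level1(s, t)               # x(t) - x(s)
        for i in range(2):
            for j in range(2):
                S[i][j] += inc[i] * dx(t)[j] * h
    return S

s, t, u = 0.0, 0.4, 1.0
left = level2(s, u)
a, b = level1(s, t), level1(t, u)
L2st, L2tu = level2(s, t), level2(t, u)
# Chen at level 2: S^2(s,u) = S^2(s,t) + S^2(t,u) + S^1(s,t) (x) S^1(t,u).
right = [[L2st[i][j] + L2tu[i][j] + a[i] * b[j] for j in range(2)]
         for i in range(2)]
for i in range(2):
    for j in range(2):
        assert abs(left[i][j] - right[i][j]) < 1e-2
```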

Posted in Rough paths theory | 2 Comments

Lecture 6. Rough paths Fall 2017

In the previous lecture we defined Young's integral \int y dx when x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} > 1. The integral path \int_0^t ydx then has bounded p-variation. Now, if V: \mathbb{R}^d \to \mathbb{R}^{d \times d} is a Lipschitz map, the integral \int V(x) dx is only defined when \frac{1}{p}+\frac{1}{p} > 1, that is, for p < 2. With this in mind, it is apparent that Young's integration should be useful for solving differential equations driven by continuous paths of bounded p-variation for p < 2. If p \ge 2, then Young's integral is of no help, and the rough paths theory explained later is the correct framework.

The basic existence and uniqueness result is the following. Throughout this lecture, we assume that p < 2.

Theorem: Let x\in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e \times d} be a Lipschitz continuous map, that is, there exists a constant K > 0 such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
For every y_0 \in \mathbb{R}^e, there is a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Moreover y \in C^{p-var} ([0,T], \mathbb{R}^e).

Proof: The proof is of course based again on the fixed point theorem. Let 0 < \tau \le T and consider the map \Phi from the space C^{p-var} ([0,\tau], \mathbb{R}^e) into itself defined by
\Phi(y)_t =y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
By using basic estimates on the Young’s integrals, we deduce that
\| \Phi(y^1)-\Phi(y^2) \|_{ p-var, [0,\tau]}
\le C \| x \|_{p-var,[0,\tau]} ( \| V(y^1)-V(y^2) \|_{ p-var, [0,\tau]} +\| V(y^1)(0)-V(y^2)(0)\|)
\le CK \| x \|_{p-var,[0,\tau]}( \| y^1-y^2 \|_{ p-var, [0,\tau]}+\| y^1(0)-y^2(0)\|).
If \tau is small enough, then CK \| x \|_{p-var,[0,\tau]} < 1, which means that \Phi is a contraction of the Banach space C^{p-var} ([0,\tau], \mathbb{R}^e) endowed with the norm \| y \|_{p-var,[0,\tau]} +\| y(0)\|.

The fixed point of \Phi, let us say y, is the unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le \tau.
By then considering a subdivision
\{ 0=\tau_1 < \tau_2 <\cdots <\tau_n=T \}
such that C K \| x \|_{p-var,[\tau_k,\tau_{k+1}]} < 1 for every k, we obtain a unique solution to the differential equation:
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T \square
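For a smooth driving path the fixed-point solution coincides with the classical ODE solution, which makes the construction easy to test numerically with an Euler scheme. A minimal sketch, assuming the scalar linear vector field V(y) = y and the driver x(t) = \sin t, both chosen purely for illustration:

```python
import math

# Euler scheme for dy = V(y) dx along a smooth driver x; for smooth x this is
# a classical ODE, so we can compare against the exact solution.
def euler(V, x, y0, T, n):
    y, h = y0, T / n
    for k in range(n):
        t = k * h
        y += V(y) * (x(t + h) - x(t))   # y_{k+1} = y_k + V(y_k)(x(t_{k+1}) - x(t_k))
    return y

V = lambda y: y          # scalar linear vector field (e = d = 1)
x = math.sin             # smooth driving path
y0, T = 1.0, 1.0

approx = euler(V, x, y0, T, 100_000)
exact = y0 * math.exp(math.sin(T) - math.sin(0.0))   # dy = y dx gives y0 e^{x(t)-x(0)}
assert abs(approx - exact) < 1e-3
```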

As in the bounded variation case, the solution of a Young differential equation is a C^1 function of the initial condition:

Proposition: Let x\in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e \times d} be a C^1 Lipschitz continuous map. Let \pi(t,y_0) be the flow of the equation
y(t)=y_0+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.
Then for every 0\le t \le T, the map y_0 \to \pi (t,y_0) is C^1 and the Jacobian J_t=\frac{\partial \pi(t,y_0)}{\partial y_0} is the unique solution of the matrix linear equation
J_t=Id+ \sum_{i=1}^d \int_0^t DV_i(\pi(s,y_0))J_s dx^i(s).

As we already mentioned, solutions of Young differential equations are continuous with respect to the driving path in the p-variation topology:

Theorem: Let x^n \in C^{p-var} ([0,T], \mathbb{R}^d) and let V : \mathbb{R}^e \to \mathbb{R}^{e\times d} be a Lipschitz and bounded continuous map such that for every x,y \in \mathbb{R}^e,
\| V(x)-V(y) \| \le K \| x-y \|.
Let y^n be the solution of the differential equation:
y^n(t)=y(0)+\int_0^t V(y^n(s)) dx^n(s), \quad 0\le t \le T.
If x^n converges to x in p-variation, then y^n converges in p-variation to the solution of the differential equation:
y(t)=y(0)+\int_0^t V(y(s)) dx(s), \quad 0\le t \le T.

Proof: Let 0\le s \le t \le T. We have
\| y-y^n \|_{p-var,[s,t]}
= \left\| \int_0^\cdot V(y(u)) dx(u) -\int_0^\cdot V(y^n(u)) dx^n(u) \right\|_{p-var,[s,t]}
= \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u) + \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}
\le \left\| \int_0^\cdot (V(y(u))-V(y^n(u))) dx(u) \right\|_{p-var,[s,t]}+\left\| \int_0^\cdot V(y^n(u)) d( x(u)-x^n(u)) \right\|_{p-var,[s,t]}
\le CK \| x\|_{p-var,[s,t]} \| y-y^n \|_{p-var,[s,t]}+C\| x-x^n \|_{p-var,[s,t]}(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]}).
Thus, if s,t are such that CK \| x\|_{p-var,[s,t]} < 1, we obtain
\| y-y^n \|_{p-var,[s,t]} \le \frac{C(K \| y^n \|_{p-var,[s,t]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[s,t]} } \| x-x^n \|_{p-var,[s,t]}.
In the very same way, provided CK \| x^n\|_{p-var,[s,t]} < 1, we get
\| y^n \|_{p-var,[s,t]} \le \frac{C\| V\|_{\infty, [0,T]}}{ 1-CK\| x^n\|_{p-var,[s,t]} }.

Let us fix 0 < \varepsilon < 1 and pick a subdivision 0= \tau_1 < \cdots < \tau_m=T such that CK \| x\|_{p-var,[\tau_i,\tau_{i+1}]}+\varepsilon < 1 for every i. Since \| x^n\|_{p-var,[\tau_i,\tau_{i+1}]} \to \| x\|_{p-var,[\tau_i,\tau_{i+1}]}, for n \ge N_1 with N_1 large enough, we have
CK \| x^n\|_{p-var,[\tau_i,\tau_{i+1}]}+\frac{\varepsilon}{2} < 1.
We deduce that for n \ge N_1,
\| y^n \|_{p-var,[\tau_i,\tau_{i+1}]} \le \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}
and
\| y-y^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C(K \frac{2}{\varepsilon} C \| V\|_{\infty, [0,T]}+\| V\|_{\infty, [0,T]})}{ 1-CK\| x\|_{p-var,[\tau_i,\tau_{i+1}] }} \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[\tau_i,\tau_{i+1}]}
\le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \| x-x^n \|_{p-var,[0,T]}.
For n \ge N_2 with N_2 \ge N_1 large enough, we have
\| x-x^n \|_{p-var,[0,T]} \le \frac{\varepsilon^3}{m},
which implies
\| y-y^n \|_{p-var,[0,T]} \le \frac{C}{\varepsilon} \| V\|_{\infty, [0,T]} \left( \frac{2KC}{\varepsilon}+1 \right) \varepsilon^3.
\square

Posted in Uncategorized | Leave a comment

HW3. MA3160 Fall 2017

Exercise 1. Two dice are rolled simultaneously. For each pair of events defined below, determine whether or not they are independent.

(a) A1 = {the sum is 7}, B1 = {the first die lands a 3}.

(b) A2 = {the sum is 9}, B2 = {the second die lands a 3}.

(c) A3 = {the sum is 9}, B3 = {the first die lands even}.
(d) A4 = {the sum is 9}, B4 = {the first die is less than the second}.

(e) A5 = {two dice are equal}, B5 = {the sum is 8}.
(f) A6 = {two dice are equal}, B6 = {the first die lands even}.

(g) A7 = {two dice are not equal}, B7 = {the first die is less than the second}.

Exercise 2. Are the events A1, B1 and B3 from Exercise 1 independent?
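Independence in these exercises can be checked mechanically by enumerating the 36 outcomes. A Python sketch (the helper names are mine, for illustration):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

def P(event):
    return Fraction(sum(1 for o in outcomes if event(o)), 36)

def independent(A, B):
    """A and B are independent iff P(A and B) = P(A) P(B)."""
    return P(lambda o: A(o) and B(o)) == P(A) * P(B)

A1 = lambda o: sum(o) == 7        # the sum is 7
B1 = lambda o: o[0] == 3          # the first die lands a 3
B3 = lambda o: o[0] % 2 == 0      # the first die lands even
```

For Exercise 2, mutual independence additionally requires P(A1 ∩ B1 ∩ B3) = P(A1)P(B1)P(B3), which the same helpers decide.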

Exercise 3. Suppose you toss a fair coin repeatedly and independently. If it comes up heads, you win a dollar, and if it comes up tails, you lose a dollar. Suppose you start with $20. What is the probability you will get to $150 before you go broke?
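Exercise 3 is the classical gambler's ruin problem with a fair coin: the walk is a martingale, so optional stopping gives P(reach N before 0, starting from a) = a/N. A sketch that checks this against a simulation (parameters chosen to match the exercise):

```python
import random

# Fair-coin gambler's ruin: start at a dollars, stop at 0 (broke) or N (goal).
# Optional stopping for the martingale walk gives P(hit N before 0) = a / N.
def ruin_probability(a, N):
    return a / N

def simulate(a, N, trials, seed=0):
    """Monte Carlo estimate of the same probability."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        x = a
        while 0 < x < N:
            x += 1 if rng.random() < 0.5 else -1
        wins += (x == N)
    return wins / trials

p_exact = ruin_probability(20, 150)   # the exercise's numbers: 20/150 = 2/15
p_mc = simulate(20, 150, 400)
assert abs(p_mc - p_exact) < 0.07     # within ~4 standard errors of the estimate
```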

Posted in MA3160 | Leave a comment

Lecture 5. Rough paths. Fall 2017

In this lecture we define Young's integral \int y dx when x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} >1. The cornerstone is the following Young-Loeve estimate.

Theorem: Let x \in C^{1-var} ([0,T], \mathbb{R}^d) and y \in C^{1-var} ([0,T], \mathbb{R}^{e \times d}). Consider now p,q \ge 1 with \theta=\frac{1}{p}+\frac{1}{q} > 1. The following estimate holds: for 0 \le s \le t \le T,
\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s)) \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.

Proof: For 0 \le s \le t \le T, let us define
\Gamma_{s,t} =\int_s^t y(u)dx(u) -y(s)(x(t)-x(s)) .
We have for s < t < u,
\Gamma_{s,u}-\Gamma_{s,t}-\Gamma_{t,u} =-y(s)(x(u)-x(s))+y(s)(x(t)-x(s))+y(t)(x(u)-x(t))= (y(s)-y(t))(x(t)-x(u)).
As a consequence, we get
\| \Gamma_{s,u}\|\le \| \Gamma_{s,t} \|+\| \Gamma_{t,u}\| +\| y \|_{q-var; [s,t]} \| x \|_{p-var; [t,u]}.
Let now \omega(s,t)=\| x \|^{1/\theta}_{p-var; [s,t]} \| y \|^{1/\theta}_{q-var; [s,t]}. We claim that \omega is a control. The continuity and the vanishing on the diagonal are easy to check, so we just need to justify the superadditivity. For s < t < u, we have from Hölder's inequality,
\omega(s,t)+\omega(t,u)
=\| x \|^{1/\theta}_{p-var; [s,t]} \| y \|^{1/\theta}_{q-var; [s,t]}+\| x \|^{1/\theta}_{p-var; [t,u]} \| y \|^{1/\theta}_{q-var; [t,u]}
\le (\| x \|^{p}_{p-var; [s,t]} + \| x \|^{p}_{p-var; [t,u]})^{\frac{1}{p\theta}}(\| y \|^{q}_{q-var; [s,t]} + \| y \|^{q}_{q-var; [t,u]})^{\frac{1}{q\theta}}
\le \| x \|^{1/\theta}_{p-var; [s,u]} \| y \|^{1/\theta}_{q-var; [s,u]}=\omega(s,u).
We have then
\| \Gamma_{s,u}\|\le \| \Gamma_{s,t} \|+\| \Gamma_{t,u}\| +\omega(s,u)^\theta.
For \varepsilon > 0, consider then the control
\omega_\varepsilon (s,t)= \omega(s,t) +\varepsilon ( \| x \|_{1-var; [s,t]} + \| y \|_{1-var; [s,t]}).
Define now
\Psi(r)= \sup_{s,u, \omega_\varepsilon (s,u)\le r} \| \Gamma_{s,u}\|.
If s,u are such that \omega_\varepsilon (s,u) \le r, we can find t such that \omega_\varepsilon(s,t) \le \frac{1}{2} \omega_\varepsilon(s,u) and \omega_\varepsilon(t,u) \le \frac{1}{2} \omega_\varepsilon(s,u). Indeed, the continuity of \omega_\varepsilon forces the existence of a t such that \omega_\varepsilon(s,t)=\omega_\varepsilon(t,u), and superadditivity then bounds both by \frac{1}{2}\omega_\varepsilon(s,u). We obtain therefore
\| \Gamma_{s,u}\|\le 2 \Psi(r/2) + r^\theta,
which implies by maximization,
\Psi(r)\le 2 \Psi(r/2) + r^\theta.
By iterating n times this inequality, we obtain
\Psi(r)
\le 2^n \Psi\left(\frac{r}{2^n} \right) +\sum_{k=0}^{n-1} 2^{k(1-\theta)} r^\theta
\le 2^n \Psi\left(\frac{r}{2^n} \right) + \frac{1}{1-2^{1-\theta}} r^\theta.
On the other hand,
\| \Gamma_{s,t} \|
\le \left\|\int_s^t (y(u)-y(s))dx(u) \right\|
\le \| x \|_{1-var; [s,t]} \| y-y(s) \|_{\infty; [s,t]}
\le ( \| x \|_{1-var; [s,t]} + \| y \|_{1-var; [s,t]})^2
\le \frac{1}{\varepsilon^2} \omega_\varepsilon (s,t)^2,
so that \Psi(r) \le \frac{r^2}{\varepsilon^2} and therefore
\lim_{n \to \infty} 2^n \Psi\left(\frac{r}{2^n} \right) =0.
We conclude
\Psi(r) \le \frac{1}{1-2^{1-\theta}} r^\theta
and thus
\| \Gamma_{s,u}\| \le \frac{1}{1-2^{1-\theta}} \omega_\varepsilon(s,u)^\theta.
Sending \varepsilon \to 0 finishes the proof \square

It is remarkable that the Young-Loeve estimate only involves \| x \|_{p-var; [s,t]} and \| y \|_{q-var; [s,t]}. As a consequence, we obtain the following result, whose proof is left to the reader:

Proposition: Let x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \theta=\frac{1}{p}+\frac{1}{q} >1. Let us assume that there exist a sequence x^n \in C^{1-var} ([0,T], \mathbb{R}^d) such that x^n \to x in C^{p-var} ([0,T], \mathbb{R}^d) and a sequence y^n \in C^{1-var} ([0,T], \mathbb{R}^{e \times d}) such that y^n \to y in C^{q-var} ([0,T], \mathbb{R}^{e \times d}). Then, for every s < t, \int_s^t y^n(u)dx^n(u) converges to a limit that we call the Young integral of y against x on the interval [s,t] and denote \int_s^t y(u)dx(u).
The integral \int_s^t y(u)dx(u) does not depend on the sequences x^n and y^n, and the following estimate holds: for 0 \le s \le t \le T,
\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s)) \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.

The closure of C^{1-var} ([0,T], \mathbb{R}^d) in C^{p-var} ([0,T], \mathbb{R}^d) is C^{0, p-var} ([0,T], \mathbb{R}^d), and we know that C^{p-var} ([0,T], \mathbb{R}^d) \subset C^{0, (p+\varepsilon)-var} ([0,T], \mathbb{R}^d) for every \varepsilon > 0. It is therefore straightforward to extend Young's integral to every x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \theta=\frac{1}{p}+\frac{1}{q} >1, and the Young-Loeve estimate still holds:
\left\| \int_s^t y(u)dx(u)-y(s)(x(t)-x(s)) \right\| \le \frac{1}{1-2^{1-\theta} }\| x \|_{p-var; [s,t]} \| y \|_{q-var; [s,t]}.
From this estimate, we easily see that for x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} > 1, the sequence of Riemann sums
\sum_{i=0}^{n-1} y(t_i)( x(t_{i+1})-x(t_i))
converges to \int_s^t y(u)dx(u) when the mesh of the subdivision goes to 0. We record for later use the following estimate on the Young integral, which is also an easy consequence of the Young-Loeve estimate (see Theorem 6.8 in the book for further details).
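For smooth (hence bounded-variation) paths, the convergence of these Riemann sums is classical and easy to check numerically. A sketch with the arbitrary choices y = \cos and x = \sin:

```python
import math

# Left-point Riemann sums for \int_0^T y dx with smooth y and x.
def riemann_young(y, x, T, n):
    h = T / n
    return sum(y(k * h) * (x((k + 1) * h) - x(k * h)) for k in range(n))

T = 1.0
approx = riemann_young(math.cos, math.sin, T, 200_000)
exact = T / 2 + math.sin(2 * T) / 4   # \int_0^1 cos(t) d(sin t) = \int_0^1 cos(t)^2 dt
assert abs(approx - exact) < 1e-4
```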

Proposition: Let x \in C^{p-var} ([0,T], \mathbb{R}^d) and y \in C^{q-var} ([0,T], \mathbb{R}^{e \times d}) with \frac{1}{p}+\frac{1}{q} > 1. The integral path t \to \int_0^t y(u)dx(u) is continuous with a finite p-variation and we have
\left\|\int_0^\cdot y(u) dx(u) \right\|_{p-var, [s,t] }
\le C \| x \|_{p-var; [s,t]} \left( \| y \|_{q-var; [s,t]} + \| y \|_{\infty; [s,t]} \right)
\le 2C \| x \|_{p-var; [s,t]} \left( \| y \|_{q-var; [s,t]} + \| y(0)\| \right).

Posted in Rough paths theory | 1 Comment