HW9. MA3160 Fall 2017

Exercise. Let X, Y have joint density f(x,y)=c e^{-x-2y} if x,y \ge 0 and 0 otherwise.

(a) Find the value of c that makes f a joint pdf.

(b) Find P (X < Y ).
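Not part of the assignment, but a numerical sanity check of this kind of computation can be done with scipy's double-quadrature routine (a sketch; the variable names are arbitrary):

```python
import numpy as np
from scipy import integrate

# Total mass of e^{-x-2y} over the first quadrant; c is its reciprocal.
# dblquad integrates func(y, x) with x over [a, b] and y over [gfun(x), hfun(x)].
mass, _ = integrate.dblquad(lambda y, x: np.exp(-x - 2 * y),
                            0, np.inf, lambda x: 0, lambda x: np.inf)
c = 1 / mass

# P(X < Y): integrate the normalized density over the region {(x, y) : y > x}.
p, _ = integrate.dblquad(lambda y, x: c * np.exp(-x - 2 * y),
                         0, np.inf, lambda x: x, lambda x: np.inf)
```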

 

Posted in MA3160

HW8. MA3160 Fall 2017

Exercise 1. 

About 10% of the population is left-handed. Use the normal distribution to

approximate the probability that in a class of 150 students,

(a) at least 25 of them are left-handed.

(b) between 15 and 20 are left-handed.
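One way to carry out this normal approximation (with the usual continuity correction) is with scipy; a sketch, assuming scipy is available:

```python
import math
from scipy import stats

n, p = 150, 0.10
mu = n * p                           # binomial mean
sigma = math.sqrt(n * p * (1 - p))   # binomial standard deviation

# (a) P(X >= 25), continuity-corrected to P(Z > 24.5)
p_a = 1 - stats.norm.cdf((24.5 - mu) / sigma)

# (b) P(15 <= X <= 20), continuity-corrected to P(14.5 < Z < 20.5)
p_b = stats.norm.cdf((20.5 - mu) / sigma) - stats.norm.cdf((14.5 - mu) / sigma)
```

Comparing against the exact binomial probabilities shows how good the approximation is for n = 150.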

 

Exercise 2.

Suppose the life of an Uphone has an exponential distribution with a mean life of 4 years. Let X denote the life of an Uphone (or time until it dies). Given that the Uphone has lasted 3 years, what is the probability that it will last 5 more years?
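The key fact behind this exercise is the memoryless property of the exponential distribution; here is a short scipy check (scipy's `expon` is parametrized by `scale`, which equals the mean):

```python
from scipy import stats

X = stats.expon(scale=4.0)   # exponential lifetime with mean 4 years

# P(X > 3 + 5 | X > 3), computed directly from survival functions
p_cond = X.sf(3 + 5) / X.sf(3)

# Memorylessness predicts this equals the unconditional P(X > 5)
p_uncond = X.sf(5)
```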

 

Posted in MA3160

HW7. MA3160 Fall 2017

Let X be a random variable with probability density function:

f(x) = cx(5−x) for 0 ≤ x ≤ 5, and 0 otherwise.

(a) What is the value of c?
(b) What is the cumulative distribution function of X ? That is, find F (x) = P (X ≤ x).

(c) Use your answer in part (b) to find P (2 ≤ X ≤ 3).

(d) What is E[X]?

(e) What is Var(X)?
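All five parts can be checked symbolically with sympy (a sketch, not part of the assignment):

```python
import sympy as sp

x, c, t = sp.symbols('x c t', positive=True)
f = c * x * (5 - x)

# (a) choose c so the density integrates to 1 over [0, 5]
c_val = sp.solve(sp.integrate(f, (x, 0, 5)) - 1, c)[0]
f = f.subs(c, c_val)

# (b) CDF F(x) = P(X <= x) for 0 <= x <= 5
F = sp.integrate(f.subs(x, t), (t, 0, x))

# (c) P(2 <= X <= 3) = F(3) - F(2)
p_23 = F.subs(x, 3) - F.subs(x, 2)

# (d), (e) mean and variance
EX = sp.integrate(x * f, (x, 0, 5))
VarX = sp.simplify(sp.integrate(x**2 * f, (x, 0, 5)) - EX**2)
```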

Posted in MA3160

Lecture 8. Rough paths Fall 2017

In this lecture, it is now time to harvest the fruits of the two previous lectures. This will allow us to finally define the notion of a p-rough path and to construct the signature of such a path.

A first result, which is a consequence of the theorem proved in the previous lecture, is the following continuity of the iterated integrals with respect to a convenient topology. The proof uses arguments very similar to those of the previous two lectures, so we leave it as an exercise to the student.

Theorem: Let p \ge 1, K > 0 and x,y \in C^{1-var}([0,T],\mathbb{R}^d) such that
\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le 1,
and
\left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p+ \left( \sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)^p \le K.
Then there exists a constant C \ge 0 depending only on p and K such that for k \ge 1
\left\| \int_{\Delta^k [0,\cdot]} dx^{\otimes k}- \int_{\Delta^k [0,\cdot]} dy^{\otimes k} \right\|_{p-var, [0,T]} \le \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!}.

This continuity result naturally leads to the following definition.

Definition: Let p \ge 1 and x \in C^{p-var}([0,T],\mathbb{R}^d). We say that x is a p-rough path if there exists a sequence x_n \in C^{1-var}([0,T],\mathbb{R}^d) such that x_n\to x in p-variation and such that for every \varepsilon > 0, there exists N \ge 0 such that for m,n \ge N,
\sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le \varepsilon.
The space of p-rough paths will be denoted \mathbf{\Omega}^p([0,T],\mathbb{R}^d).

From the very definition, \mathbf{\Omega}^p([0,T],\mathbb{R}^d) is the closure of C^{1-var}([0,T],\mathbb{R}^d) inside C^{p-var}([0,T],\mathbb{R}^d) for the distance
d_{\mathbf{\Omega}^p([0,T],\mathbb{R}^d)}(x,y)= \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .

If x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d) and x_n \in C^{1-var}([0,T],\mathbb{R}^d) is such that x_n\to x in p-variation and such that for every \varepsilon > 0, there exists N \ge 0 such that for m,n \ge N,
\sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \le \varepsilon,
then we define \int_{\Delta^k [s,t]} dx^{\otimes k} for k \le p as the limit of the iterated integrals \int_{\Delta^k [s,t]} dx_n^{\otimes k}. However it is important to observe that \int_{\Delta^k [s,t]} dx^{\otimes k} may then depend on the choice of the approximating sequence x_n. Once the integrals \int_{\Delta^k [s,t]} dx^{\otimes k} are defined for k \le p, we can then use the previous theorem to construct all the iterated integrals \int_{\Delta^k [s,t]} dx^{\otimes k} for k > p. It is then obvious that if x,y \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d), then
1 + \sum_{k=1}^{[p]} \int_{\Delta^k [s,t]} dx^{\otimes k}=1 + \sum_{k=1}^{[p]} \int_{\Delta^k [s,t]} dy^{\otimes k}
implies that
1 + \sum_{k=1}^{+\infty } \int_{\Delta^k [s,t]} dx^{\otimes k}=1 + \sum_{k=1}^{+\infty} \int_{\Delta^k [s,t]} dy^{\otimes k}.
In other words, the signature of a p-rough path is completely determined by its truncated signature at order [p]:
\mathfrak{S}_{[p]} (x)_{s,t} =1 + \sum_{k=1}^{[p]} \int_{\Delta^k [s,t]} dx^{\otimes k}.
For this reason, it is natural to represent a p-rough path by its truncated signature at order [p], in order to stress that the choice of the approximating sequence used to construct the iterated integrals up to order [p] has been made. This will be explained in much more detail when we introduce the notion of geometric rough path over a rough path.

The following results are straightforward to obtain from the previous lectures by a limiting argument.

Lemma: Let x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d), p \ge 1. For 0 \le s \le t \le u \le T , and n \ge 1,
\int_{\Delta^n [s,u]} dx^{\otimes n}=\sum_{k=0}^{n} \int_{\Delta^k [s,t]} dx^{\otimes k }\int_{\Delta^{n-k} [t,u]} dx^{\otimes (n-k) }.
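For a one-dimensional smooth path, the k-fold iterated integral over an interval is just (x(t)-x(s))^k/k!, and the lemma reduces to the binomial theorem. A small numerical illustration of this special case (the increment values below are arbitrary):

```python
import math

def iterated(increment, k):
    # d = 1: the k-fold iterated integral over an interval equals increment^k / k!
    return increment**k / math.factorial(k)

a, b = 0.7, -0.3   # x(t) - x(s) and x(u) - x(t), arbitrary choices
n = 6
lhs = iterated(a + b, n)                                        # integral over [s, u]
rhs = sum(iterated(a, k) * iterated(b, n - k) for k in range(n + 1))
```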

Theorem: Let p \ge 1. There exists a constant C \ge 0, depending only on p, such that for every x \in\mathbf{\Omega}^p([0,T],\mathbb{R}^d) and k \ge 1,
\left\| \int_{\Delta^k [s,t]} dx^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}\right\|^{1/j}_{\frac{p}{j}-var, [s,t]} \right)^k, \quad 0 \le s \le t \le T.

If p \ge 2, the space \mathbf{\Omega}^p([0,T],\mathbb{R}^d) is not a priori a Banach space (it is not a linear space) but it is a complete metric space for the distance
d_{\mathbf{\Omega}^p([0,T],\mathbb{R}^d)}(x,y)= \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dy^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .
The structure of \mathbf{\Omega}^p([0,T],\mathbb{R}^d) will be better understood in the next lectures, but let us recall that if 1 \le p < 2, then \mathbf{\Omega}^p([0,T],\mathbb{R}^d) is the closure of C^{1-var}([0,T],\mathbb{R}^d) inside C^{p-var}([0,T],\mathbb{R}^d) for the variation distance; it is therefore the space we denoted C^{0,p-var}([0,T],\mathbb{R}^d). As a corollary we deduce:

Proposition: Let 1 \le p < 2. Then x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d) if and only if
\lim_{\delta \to 0} \sup_{ \Pi \in \mathcal{D}[0,T], | \Pi | \le \delta } \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|^p=0,
where \mathcal{D}[0,T] is the set of subdivisions \Pi=\{ 0=t_0 < t_1 < \cdots < t_n =T \} of [0,T]. In particular, for p < q < 2,
C^{q-var}([0,T],\mathbb{R}^d) \subset \mathbf{\Omega}^p([0,T],\mathbb{R}^d).

 

We are now ready to define solutions of linear differential equations driven by p-rough paths, p \ge 1, and to present Lyons' continuity theorem in this setting. Let x \in \mathbf{\Omega}^p([0,T],\mathbb{R}^d) be a p-rough path with truncated signature \sum_{k=0}^{[p]} \int_{\Delta^k [s,t]} dx^{\otimes k}, and let x_n \in C^{1-var}([0,T],\mathbb{R}^d) be an approximating sequence such that
\sum_{j=1}^{[p]} \left\| \int dx^{\otimes j}- \int dx_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \to 0.

Let us consider matrices M_1,\cdots,M_d \in \mathbb{R}^{n \times n}. We have the following theorem:

Theorem: Let y_n:[0,T] \to \mathbb{R}^n be the solution of the differential equation
y_n(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y_n(s)d x^i_n(s).
Then, as n \to \infty, y_n converges in the p-variation distance to some y \in C^{p-var}([0,T],\mathbb{R}^n). The limit y is called the solution of the rough differential equation
y(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y(s)d x^i(s).

Proof: It is a classical result that the solution of the equation
y_n(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y_n(s)d x^i_n(s),
can be expanded as the convergent Volterra series:
y_n(t)=y_n(s)+ \sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[s,t]}dx_n^{I} \right) y_n(s).
Therefore, in particular, for n,m \ge 0,
y_n(t)-y_m(t)=\sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[0,t]}dx_n^{I}- \int_{\Delta^{k}[0,t]}dx_m^{I} \right) y(0),
which implies that
\| y_n(t)-y_m(t) \| \le \sum^{+\infty}_{k=1}M^k \left\| \int_{\Delta^{k}[0,t]}dx_n^{\otimes k}- \int_{\Delta^{k}[0,t]}dx_m^{\otimes k} \right\| \| y(0) \|
with M=\max \{ \| M_1 \| , \cdots , \| M_d \| \}. From the theorems of the previous lectures, there exists a constant C \ge 0 depending only on p and
\sup_n \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]}
such that for k \ge 1 and n,m big enough:
\left\| \int_{\Delta^k [0,\cdot]} dx_n^{\otimes k}- \int_{\Delta^k [0,\cdot]} dx_m^{\otimes k} \right\|_{p-var, [0,T]} \le \left( \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!}.
As a consequence, there exists a constant \tilde{C} such that for n,m big enough:
\| y_n(t)-y_m(t) \| \le \tilde{C} \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} .
This already proves that y_n converges in the supremum topology to some y. We now have
(y_n(t)-y_n(s))-(y_m(t)-y_m(s))
=\sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[s,t]}dx_n^{I}y_n(s) -\int_{\Delta^{k}[s,t]}dx_m^{I} y_m(s)\right),
and we can bound
\left\| \int_{\Delta^{k}[s,t]}dx_n^{I}y_n(s) -\int_{\Delta^{k}[s,t]}dx_m^{I} y_m(s) \right\|
\le \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} \right\| \| y_n(s)-y_m(s) \|+\| y_m(s) \| \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} - \int_{\Delta^{k}[s,t]}dx_m^{I}\right\|
\le \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} \right\| \| y_n-y_m \|_{\infty, [0,T]} +\| y_m \|_{\infty, [0,T]} \left\| \int_{\Delta^{k}[s,t]}dx_n^{I} - \int_{\Delta^{k}[s,t]}dx_m^{I}\right\|.
Again, from the theorems of the previous lectures, there exists a constant C \ge 0, depending only on p and
\sup_n \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]}
such that for k \ge 1 and n,m big enough
\left\| \int_{\Delta^k [s,t]} dx_n^{\otimes k} \right\| \le \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p}, \quad 0 \le s \le t \le T.
\left\| \int_{\Delta^k [s,t]} dx_n^{\otimes k}- \int_{\Delta^k [s,t]} dx_m^{\otimes k} \right\| \le \left( \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \frac{C^k}{\left( \frac{k}{p}\right)!} \omega(s,t)^{k/p} ,
where \omega is a control such that \omega(0,T)=1. Consequently, there is a constant \tilde{C}, such that
\| (y_n(t)-y_n(s))-(y_m(t)-y_m(s)) \|
\le \tilde{C} \left( \| y_n-y_m \|_{\infty, [0,T]} + \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right) \omega(s,t)^{1/p}.
This implies the estimate
\| y_n -y_m \|_{p-var,[0,T]} \le \tilde{C} \left( \| y_n-y_m \|_{\infty, [0,T]} + \sum_{j=1}^{[p]} \left\| \int dx_n^{\otimes j}- \int dx_m^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \right)
and thus gives the conclusion. \square

With just a little more work, it is possible to prove the following stronger result, whose proof is left to the reader.
Theorem: Let y_n:[0,T] \to \mathbb{R}^n be the solution of the differential equation
y_n(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y_n(s)d x^i_n(s).
and y be the solution of the rough differential equation:
y(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y(s)d x^i(s).
Then, y \in \mathbf{\Omega}^p([0,T],\mathbb{R}^n) and, as n \to \infty,
\sum_{j=1}^{[p]} \left\| \int dy^{\otimes j}- \int dy_n^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,T]} \to 0.

We can get useful estimates for solutions of rough differential equations. For that, we need the following elementary proposition:

Proposition: For x \ge 0 and p \ge 1,
\sum_{k=0}^{+\infty} \frac{x^k}{\left( \frac{k}{p} \right)!} \le p e^{x^p}.

Proof: For \alpha > 0, we denote
E_\alpha(x)=\sum_{k=0}^{+\infty} \frac{x^k}{\left( k \alpha \right)!}.
This is a special function called the Mittag-Leffler function. From the binomial inequality
E_\alpha(x)^2
=\sum_{k=0}^{+\infty} \left( \sum_{j=0}^k \frac{1}{\left( j \alpha \right)!\left( (k-j) \alpha \right)!}\right)x^k
\le \frac{1}{\alpha}\sum_{k=0}^{+\infty} 2^{\alpha k} \frac{x^k}{\left( k \alpha \right)!}=\frac{1}{\alpha}E_\alpha(2^\alpha x).
Thus we proved
E_\alpha(x)\le\frac{1}{\alpha^{1/2}}E_\alpha(2^\alpha x)^{1/2}.
Iterating this inequality k times, we obtain
E_\alpha(x)\le \frac{1}{\alpha^{\sum_{j=1}^k \frac{1}{2^j}}} E_\alpha(2^{\alpha k}x)^{1/2^k}.
It is known (and not difficult to prove) that
E_\alpha(x) \sim_{x \to \infty} \frac{1}{\alpha} e^{x^{1/\alpha}}.
By letting k \to \infty we conclude
E_\alpha(x) \le \frac{1}{\alpha} e^{x^{1/\alpha}}.
\square
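As a quick numerical spot-check of the proposition, one can truncate the series and evaluate (k/p)! as \Gamma(k/p+1) with scipy (a sketch; the grid of test values is arbitrary):

```python
import math
from scipy.special import gamma

def ml_sum(x, p, terms=300):
    # Truncation of sum_{k >= 0} x^k / (k/p)!, with (k/p)! = Gamma(k/p + 1)
    return sum(x**k / gamma(k / p + 1) for k in range(terms))

# Values at which to spot-check the bound  sum <= p * exp(x^p)
checks = [(x, p) for x in (0.5, 1.0, 2.0) for p in (2.0, 3.0)]
```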

This estimate provides the following result:

Proposition: Let y be the solution of the rough differential equation:
y(t)=y(0)+\sum_{i=1}^d \int_0^t M_i y(s)d x^i(s).
Then, there exists a constant C depending only on p such that for 0 \le t \le T,
\| y(t) \| \le p \| y(0)\| e^{ C M^p \left( \sum_{j=1}^{[p]} \left\| \int dx^{\otimes j} \right\|^{1/j}_{\frac{p}{j}-var, [0,t]} \right)^p},
where M=\max \{ \| M_1 \|, \cdots, \|M_d\| \}.

Proof: We have
y(t)=y(0)+ \sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M_{i_1}\cdots M_{i_k} \left( \int_{\Delta^{k}[0,t]}dx^{I} \right) y(0).
Thus we obtain
\| y(t) \| \le \left( 1+ \sum^{+\infty}_{k=1}\sum_{I=(i_1,\cdots,i_k)} M^k \left\| \int_{\Delta^{k}[0,t]}dx^{I} \right\| \right) \| y(0) \|,
and we conclude by using the estimates on iterated integrals of rough paths together with the previous proposition. \square

Posted in Rough paths theory

HW6. MA3160 Fall 2017

 

Exercise 1. Patricia receives an average of two texts every 2 hours. If we assume that the number of texts is Poisson distributed, what is the probability that she receives five or more texts in a 9-hour period?
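A scipy sketch of this computation (the two-texts-per-2-hours rate scales to a Poisson mean of 9 over a 9-hour window, assuming independent increments):

```python
from scipy import stats

rate_per_hour = 2 / 2        # two texts every 2 hours = 1 per hour
lam = rate_per_hour * 9      # Poisson mean over a 9-hour window

# P(X >= 5) = 1 - P(X <= 4)
p_five_or_more = 1 - stats.poisson.cdf(4, lam)
```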

Exercise 2.  A UConn student claims that she can distinguish Dairy Bar ice cream from Friendly’s ice cream. As a test, she is given ten samples of ice cream (each sample is either from the Dairy Bar or Friendly’s) and asked to identify each one. She is right eight times. What is the probability that she would be right exactly eight times if she guessed randomly for each sample?
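If each guess is an independent fair coin flip (success probability 1/2 per sample, which is the natural guessing model here), the number of correct answers is Binomial(10, 1/2); a one-line scipy check:

```python
from scipy import stats

# P(exactly 8 correct out of 10 random guesses) = C(10, 8) / 2^10
p_eight = stats.binom.pmf(8, 10, 0.5)
```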

Posted in MA3160

HW5. MA3160 Fall 2017

Exercise 1. Three balls are randomly chosen with replacement from an urn containing 5 blue, 4 red, and 2 yellow balls. Let X denote the number of red balls chosen.

(a) What are the possible values of X?
(b) What are the probabilities associated to each value?
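Since the draws are with replacement, the number of red balls is Binomial(3, 4/11); a short scipy sketch listing the pmf, with exact fractions for comparison:

```python
from fractions import Fraction
from scipy import stats

# Each of the 3 independent draws is red with probability 4/11
pmf = {k: stats.binom.pmf(k, 3, 4 / 11) for k in range(4)}

# Exact values: P(X = k) = C(3, k) (4/11)^k (7/11)^(3-k)
exact = {k: [1, 3, 3, 1][k] * Fraction(4, 11)**k * Fraction(7, 11)**(3 - k)
         for k in range(4)}
```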

 

Exercise 2. Suppose X is a random variable such that E[X] = 50 and Var(X) = 12. Calculate the following quantities.

(a) E[X^2]
(b) E [3X + 2]

(c) E [(X+2)^2]

(d) Var[−X]
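All four parts follow from E[X^2] = Var(X) + (E[X])^2 together with the linearity and scaling rules for expectation and variance; a plain-arithmetic check (no distribution needed):

```python
EX, VarX = 50, 12

# (a) E[X^2] = Var(X) + (E[X])^2
EX2 = VarX + EX**2
# (b) E[3X + 2] = 3 E[X] + 2
E_3X_plus_2 = 3 * EX + 2
# (c) E[(X+2)^2] = E[X^2] + 4 E[X] + 4
E_X_plus_2_sq = EX2 + 4 * EX + 4
# (d) Var(-X) = (-1)^2 Var(X) = Var(X)
Var_negX = VarX
```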

Posted in Uncategorized

Annales de la Faculté des Sciences de Toulouse

Annales de la Faculté des Sciences de Toulouse is a peer-reviewed international journal with a long tradition of excellence (going back to 1887 and Thomas Stieltjes). The journal periodically publishes surveys by the recipients of the Fermat Prize. The Editorial Board encourages high-level submissions.

Submissions in all areas of mathematics are accepted and decisions are usually made within 3 months. The electronic version is free and accessible without subscription.

Posted in Mathematicians