HW1. MA3160 Fall 2017

  1. Suppose a license plate must consist of a combination of 8 numbers or letters. How many license plates are there if:
    1. there can only be letters?
    2. the first three places are numbers and the last five are letters?
    3. the first four places are numbers and the last four are letters, but there cannot be any repetitions in the same license plate?
  2. A school of 60 students has awards for the top math, English, history, and science student in the school.
    1. How many ways can these awards be given if each student can only win one award?
    2. How many ways can these awards be given if students can win multiple awards?
  3. An iPhone password can be made up of any 6-digit combination.
    1. How many different passwords are possible?
    2. How many are possible if all the digits are odd?
  4. Suppose you are organizing your textbooks on a bookshelf. You have three chemistry books, five math books, five history books, and five English books.
    1. How many ways can you order the textbooks if you must have math books first, English books second, chemistry third, and history fourth?
    2. How many ways can you order the books if each subject must be ordered together?
Posted in MA3160 | Leave a comment

Rough paths theory Fall 2017. Lecture 1

The first few lectures are essentially reminders of undergraduate real analysis material. We will cover some aspects of the theory of differential equations driven by continuous paths with bounded variation. The point is to fix some notation that will be used throughout the course and to stress the importance of the topology of convergence in 1-variation if we are interested in stability results for solutions with respect to the driving signal.

If s \le t, we will denote by \Delta [s,t] the set of subdivisions of the interval [s,t]; that is, \Pi \in \Delta [s,t] can be written
\Pi=\left\{ s= t_0 < t_1 < \cdots < t_n =t \right\}.

Definition: A continuous path x : [s,t] \to \mathbb{R}^d is said to have bounded variation on [s,t] if the 1-variation of x on [s,t], which is defined as
\| x \|_{1-var; [s,t]} :=\sup_{ \Pi \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|,
is finite. The space of continuous bounded variation paths x : [s,t] \to \mathbb{R}^d, will be denoted by C^{1-var} ([s,t], \mathbb{R}^d).

\| \cdot \|_{1-var; [s,t]} is not a norm, because constant functions have zero 1-variation, but it is obviously a semi-norm. If x is continuously differentiable on [s,t], it is easily seen (Exercise!) that
\| x \|_{1-var, [s,t]}=\int_s^t \| x'(u) \| du.
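As a quick sanity check (not part of the lecture), the 1-variation of a smooth path can be approximated numerically: for a continuously differentiable path, the sums over fine uniform subdivisions already converge to the supremum. The helper name var1 and the example path below are my own ad hoc choices.

```python
import math

def var1(path, a, b, n=100000):
    """Approximate the 1-variation of `path` on [a, b] with a fine
    uniform subdivision (sufficient for continuously differentiable paths)."""
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(abs(path(ts[k + 1]) - path(ts[k])) for k in range(n))

# For x(t) = cos(t) on [0, 2*pi], the formula above gives
# int_0^{2 pi} |x'(u)| du = int_0^{2 pi} |sin(u)| du = 4.
approx = var1(math.cos, 0.0, 2 * math.pi)
print(approx)  # close to 4
```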

Proposition: Let x \in C^{1-var} ([0,T], \mathbb{R}^d). The function (s,t)\to \| x \|_{1-var, [s,t]} is additive, i.e., for 0 \le s \le t \le u \le T,
\| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]}= \| x \|_{1-var, [s,u]},
and controls x in the sense that for 0 \le s \le t \le T,
\| x(s)-x(t) \| \le \| x \|_{1-var, [s,t]}.
The function s \to \| x \|_{1-var, [0,s]} is moreover continuous and non-decreasing.

Proof: If \Pi_1 \in \Delta [s,t] and \Pi_2 \in \Delta [t,u], then \Pi_1 \cup \Pi_2 \in \Delta [s,u]. As a consequence, we obtain
\sup_{ \Pi_1 \in \Delta[s,t]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \| +\sup_{ \Pi_2 \in \Delta[t,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \| \le \sup_{ \Pi \in \Delta[s,u]} \sum_{k=0}^{n-1} \| x(t_{k+1}) -x(t_k) \|,
thus
\| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]} \le \| x \|_{1-var, [s,u]}.
Let now \Pi \in \Delta[s,u]:
\Pi=\left\{ s= t_0 < t_1 < \cdots < t_n =u \right\}.
Let k=\max \{ j : t_j \le t\}. Inserting the point t between t_k and t_{k+1} and using the triangle inequality \| x(t_{k+1}) -x(t_k) \| \le \| x(t) -x(t_k) \|+\| x(t_{k+1}) -x(t) \|, we have
\sum_{j=0}^{n-1} \| x(t_{j+1}) -x(t_j) \|
\le \left( \sum_{j=0}^{k-1} \| x(t_{j+1}) -x(t_j) \| + \| x(t) -x(t_k) \| \right)+\left( \| x(t_{k+1}) -x(t) \| + \sum_{j=k+1}^{n-1} \| x(t_{j+1}) -x(t_j) \| \right)
\le \| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]}.
Taking the supremum over \Pi \in \Delta[s,u] yields
\| x \|_{1-var, [s,t]}+ \| x \|_{1-var, [t,u]} \ge \| x \|_{1-var, [s,u]},
which completes the proof. The proof of the continuity and monotonicity of s \to \| x \|_{1-var, [0,s]} is left to the reader \square
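The additivity just proved can also be checked numerically on a smooth example. The sketch below (my own, not part of the lecture) approximates each 1-variation by a fine uniform subdivision.

```python
import math

def var1(path, a, b, n=50000):
    # 1-variation via a fine uniform subdivision (enough for smooth paths)
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(abs(path(ts[k + 1]) - path(ts[k])) for k in range(n))

# Additivity: ||x||_{1-var,[s,t]} + ||x||_{1-var,[t,u]} = ||x||_{1-var,[s,u]}
s, t, u = 0.0, 1.0, 3.0
gap = abs(var1(math.sin, s, t) + var1(math.sin, t, u) - var1(math.sin, s, u))
print(gap)  # close to 0
```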

This control of the path by its 1-variation is an illustration of the notion of a control, which is very useful in rough paths theory.

Definition: A map \omega: \{ 0 \le s \le t \le T \} \to [0,\infty) is called superadditive if for all s \le t \le u,
\omega(s,t)+\omega(t,u) \le \omega (s,u).
If, in addition, \omega is continuous and \omega(t,t)=0, we call \omega a control. We say that a path x:[0,T] \to \mathbb{R}^d is controlled by a control \omega if there exists a constant C > 0 such that for every 0 \le s \le t \le T,
\| x(t) -x(s) \| \le C \omega(s,t).

Obviously, Lipschitz functions have bounded variation. The converse is of course not true: t\to \sqrt{t} has bounded variation on [0,1] but is not Lipschitz. However, any continuous path with bounded variation is the reparametrization of a Lipschitz path in the following sense.

Proposition: Let x \in C^{1-var} ([0,T], \mathbb{R}^d). There exist a Lipschitz function y:[0,1] \to \mathbb{R}^d, and a continuous and non-decreasing function \phi:[0,T]\to [0,1] such that x=y\circ \phi.

Proof: We may assume \| x \|_{1-var, [0,T]} \neq 0, since otherwise x is constant and the result is trivial. Consider
\phi(t)=\frac{ \| x \|_{1-var, [0,t]} }{ \| x \|_{1-var, [0,T]} }.
It is continuous and non-decreasing. There exists a function y such that x=y\circ \phi because \phi(t_1)=\phi(t_2) implies x(t_1)=x(t_2). We then have, for s \le t,
\| y( \phi(t)) -y ( \phi(s)) \|=\| x(t) -x (s) \| \le \| x \|_{1-var, [s,t]} =\| x \|_{1-var, [0,T]} (\phi(t)-\phi(s) ) \square
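The proof is constructive, so it can be illustrated numerically: build \phi from the running 1-variation and check the Lipschitz bound |x(t)-x(s)| \le \| x \|_{1-var,[0,T]} (\phi(t)-\phi(s)). The helper var1, the example path, and the test points are my own choices.

```python
import math

def var1(path, a, b, n=4000):
    # 1-variation via a fine uniform subdivision (enough for smooth paths)
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(abs(path(ts[k + 1]) - path(ts[k])) for k in range(n))

T = 2 * math.pi
x = math.cos
total = var1(x, 0.0, T)                  # ~4 for cos on [0, 2*pi]
phi = lambda t: var1(x, 0.0, t) / total  # the reparametrization phi

# y = x o phi^{-1} is Lipschitz with constant `total`:
# |x(t) - x(s)| <= total * (phi(t) - phi(s)) for s <= t.
pts = [T * k / 10 for k in range(11)]
ok = all(
    abs(x(t) - x(s)) <= total * (phi(t) - phi(s)) + 1e-5
    for s, t in zip(pts, pts[1:])
)
print(ok)  # True
```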

The next result shows that the space of continuous paths with bounded variation, endowed with a natural norm, is a Banach space.

Theorem: The space C^{1-var} ([0,T], \mathbb{R}^d) endowed with the norm \| x(0) \|+ \| x \|_{1-var, [0,T]} is a Banach space.

Proof: Let x^n \in C^{1-var} ([0,T], \mathbb{R}^d) be a Cauchy sequence. It is clear that
\| x^n -x^m \|_\infty \le \| x^n(0)-x^m(0) \|+ \| x^n-x^m \|_{1-var, [0,T]}.
Thus, x^n converges uniformly to a continuous path x :[0,T] \to \mathbb{R}^d. We need to prove that x has bounded variation. Let
\Pi=\{ 0=t_0 <t_1 < \cdots <t_n=T \}
be a subdivision of [0,T]. There is m \ge 0 such that \| x - x^m \|_\infty \le \frac{1}{2n}, thus
\sum_{k=0}^{n-1} \|x(t_{k+1})-x(t_k) \|
\le \sum_{k=0}^{n-1} \|x(t_{k+1})-x^m(t_{k+1}) \| +\sum_{k=0}^{n-1} \|x^m(t_{k})-x(t_k) \| +\| x^m \|_{1-var,[0,T]}
\le 1+\sup_{n} \| x^n \|_{1-var,[0,T]}.
Thus, we have
\| x \|_{1-var,[0,T]} \le 1+\sup_{n} \| x^n \|_{1-var,[0,T]} < \infty,
the supremum being finite because Cauchy sequences are bounded. A similar argument, applied to x-x^n, shows that \| x-x^n \|_{1-var,[0,T]} \to 0 \square

For approximation purposes, it is important to observe that the set of smooth paths is not dense in C^{1-var} ([0,T], \mathbb{R}^d) for the 1-variation convergence topology. The closure of the set of smooth paths in the 1-variation norm, which shall be denoted by C^{0,1-var} ([0,T], \mathbb{R}^d), is the set of absolutely continuous paths.

Proposition: Let x \in C^{1-var} ([0,T], \mathbb{R}^d). Then, x \in C^{0,1-var} ([0,T], \mathbb{R}^d) if and only if there exists y \in L^1([0,T]) such that,
x(t)=x(0)+\int_0^t y(s) ds.

Proof: First, let us assume that
x(t)=x(0)+\int_0^t y(s) ds,
for some y \in L^1([0,T]). Since smooth paths are dense in L^1([0,T]), we can find a sequence of smooth paths y^n such that \| y-y^n \|_1 \to 0. Define then,
x^n(t)=x(0)+\int_0^t y^n(s) ds.
We have
\| x-x^n \|_{1-var,[0,T]}=\| y-y^n \|_1.
This implies that x \in C^{0,1-var} ([0,T], \mathbb{R}^d). Conversely, if x \in C^{0,1-var} ([0,T], \mathbb{R}^d), there exists a sequence of smooth paths x^n that converges in the 1-variation topology to x. Each x^n can be written as,
x^n(t)=x^n(0)+\int_0^t y^n(s) ds.
We still have
\| x^m-x^n \|_{1-var,[0,T]}=\| y^m-y^n \|_1,
so that y^n converges to some y in L^1. It is then clear that
x(t)=x(0)+\int_0^t y(s) ds
\square

Exercise: Let x \in C^{1-var} ([0,T], \mathbb{R}^d). Show that x is the limit in 1-variation of piecewise linear interpolations if and only if x \in C^{0,1-var} ([0,T], \mathbb{R}^d).

 

Let y:[0,T] \to \mathbb{R}^{e \times d} be a piecewise continuous path and x \in C^{1-var} ([0,T], \mathbb{R}^d). It is well-known that we can integrate y against x by using the Riemann-Stieltjes integral, which is a natural extension of the Riemann integral. The idea is to use the Riemann sums
\sum_{k=0}^{n-1} y(t_k) (x(t_{k+1})-x(t_k)),
where \Pi=\{ 0 =t_0 < t_1 < \cdots < t_n =T \}. It is easy to prove that, when the mesh of the subdivision \Pi goes to 0, the Riemann sums converge to a limit which is independent of the sequence of subdivisions chosen. The limit is then denoted \int_0^T y(t) dx(t) and called the Riemann-Stieltjes integral of y against x. Since x has bounded variation, it is easy to see that, more generally,
\sum_{k=0}^{n-1} y(\xi_k) (x(t_{k+1})-x(t_k)),
with t_k \le \xi_k \le t_{k+1} would also converge to \int_0^T y(t) dx(t). If
x(t)=x(0)+\int_0^t g(s) ds
is an absolutely continuous path, then it is not difficult to prove that we have
\int_0^T y(t) dx(t) =\int_0^T y(t) g(t) dt,
where the integral on the right hand side is understood in Riemann’s sense.
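This reduction to an ordinary integral is easy to check numerically: the Riemann sums for \int y \, dx and the Riemann integral \int y g \, dt should agree. The helper rs_integral and the choice of y and x below are my own illustrative conventions, not from the lecture.

```python
import math

def rs_integral(y, x, a, b, n=20000):
    """Riemann-Stieltjes sums  sum y(t_k)(x(t_{k+1}) - x(t_k))  over a
    uniform subdivision; they converge for continuous y and x of
    bounded variation."""
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(y(ts[k]) * (x(ts[k + 1]) - x(ts[k])) for k in range(n))

# x(t) = sin(t) is absolutely continuous with g(t) = cos(t), so
# int_0^pi y dx should equal int_0^pi y(t) cos(t) dt.
y = lambda t: t
approx = rs_integral(y, math.sin, 0.0, math.pi)
# int_0^pi t cos(t) dt = [t sin(t) + cos(t)]_0^pi = -2
print(approx)  # close to -2
```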

We have
\left\| \sum_{k=0}^{n-1} y(t_k) (x(t_{k+1})-x(t_k))\right\|
\le \sum_{k=0}^{n-1} \| y(t_k)\| \| x(t_{k+1})-x(t_k)\|
\le \sum_{k=0}^{n-1} \| y(t_k)\| \| x \|_{1-var,[t_k,t_{k+1}]}.
Thus, by taking the limit when the mesh of the subdivision goes to 0, we obtain the estimate
\left\| \int_0^T y(t) dx(t) \right\| \le \int_0^T \| y(t) \| \| dx(t) \| \le \| y \|_{\infty, [0,T]} \| x \|_{1-var,[0,T]},
where \int_0^T \| y(t) \| \| dx(t) \| is the notation for the Riemann-Stieltjes integral of \| y \| against the bounded variation path l(t)= \| x \|_{1-var,[0,t]}. We can also estimate the Riemann-Stieltjes integral in the 1-variation distance. We collect the following estimate for later use:

Proposition: Let y,y':[0,T] \to \mathbb{R}^{e \times d} be piecewise continuous paths and x,x' \in C^{1-var} ([0,T], \mathbb{R}^d). We have
\left\| \int_0^{\cdot} y'(t) dx'(t)-\int_0^{\cdot} y(t) dx(t) \right\|_{1-var,[0,T]} \le \| x \|_{1-var,[0,T]} \| y-y' \|_{\infty, [0,T]} + \| y' \|_{\infty, [0,T]} \| x -x'\|_{1-var,[0,T]}.

The Riemann-Stieltjes integral satisfies the usual rules of calculus; for instance, the integration by parts formula takes the following form.
Proposition: Let y \in C^{1-var} ([0,T], \mathbb{R}^{e \times d} ) and x\in C^{1-var} ([0,T], \mathbb{R}^d). Then
\int_0^T y(t) dx(t)+\int_0^T dy(t) x(t)=y(T)x(T) -y(0)x(0).

We also have the following change of variable formula:

Proposition: Let x\in C^{1-var} ([0,T], \mathbb{R}^d) and let \Phi: \mathbb{R}^d \to \mathbb{R}^e be a C^1 map. We have
\Phi (x(T)) =\Phi (x(0)) + \int_0^T \Phi'(x(t)) dx(t).

Proof: For any subdivision \{ 0 = t_0 < t_1 < \cdots < t_n = T \}, the mean value theorem gives
\Phi (x(T)) -\Phi (x(0))=\sum_{k=0}^{n-1} (\Phi (x(t_{k+1})) -\Phi (x(t_k)))=\sum_{k=0}^{n-1}\Phi'(x(\xi_k)) (x(t_{k+1}) -x(t_k)),
with t_k \le \xi_k \le t_{k+1}. The result is then obtained by taking the limit as the mesh of the subdivision goes to 0 \square
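The change of variable formula can be sanity-checked numerically with a concrete \Phi; the sketch below (my own, with the ad hoc helper rs_integral and the choice \Phi(u)=u^2) compares the two sides.

```python
import math

def rs_integral(y, x, a, b, n=20000):
    # Riemann-Stieltjes sums over a uniform subdivision
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    return sum(y(ts[k]) * (x(ts[k + 1]) - x(ts[k])) for k in range(n))

# Phi(u) = u^2, so Phi'(u) = 2u, and the formula reads
#   x(T)^2 - x(0)^2 = int_0^T 2 x(t) dx(t).
x = math.sin
T = 2.0
lhs = x(T) ** 2 - x(0.0) ** 2
rhs = rs_integral(lambda t: 2 * x(t), x, 0.0, T)
print(abs(lhs - rhs))  # close to 0
```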

We finally state a classical analysis lemma, Gronwall’s lemma, which provides a wonderful tool to estimate solutions of differential equations.

Proposition: Let x \in C^{1-var} ([0,T], \mathbb{R}^d) and let \Phi: [0,T] \to [0,\infty) be a bounded measurable function. If,
\Phi(t) \le A+B\int_0^t \Phi(s) \| d x(s)\|, \quad 0 \le t \le T,
for some A,B \ge 0, then
\Phi(t) \le A \exp (B \| x \|_{1-var,[0,t]} ), \quad 0 \le t \le T.

Proof: Iterating the inequality
\Phi(t) \le A+B\int_0^t \Phi(s) \| d x(s)\|
n times, we get
\Phi(t) \le A+\sum_{k=1} ^n AB^{k} \int_0^ t \int_0^{t_1} \cdots \int_0^{t_{k-1}} \| d x(t_k)\| \cdots \| dx(t_1) \| +R_n(t),
where R_n(t) is a remainder term that goes to 0 when n \to \infty. Observing that
\int_0^ t \int_0^{t_1} \cdots \int_0^{t_{k-1}} \| d x(t_k)\| \cdots \| dx(t_1) \|=\frac{ \| x \|^k_{1-var,[0,t]} }{k!}
and sending n to \infty finishes the proof \square
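Gronwall's bound is sharp: with the driving path x(t)=t (so that \| x \|_{1-var,[0,t]}=t), the function \Phi(t)=A e^{Bt} satisfies the hypothesis with equality and attains the bound. The numerical sketch below (my own, with arbitrary values of A, B, t) checks this.

```python
import math

# With x(t) = t, Phi(t) = A * exp(B t) satisfies
#   Phi(t) = A + B * int_0^t Phi(s) ds
# with equality, so Gronwall's bound A * exp(B t) is attained.
A, B, t = 2.0, 0.5, 1.5
phi = lambda s: A * math.exp(B * s)

n = 20000
ds = t / n
integral = sum(phi((k + 0.5) * ds) * ds for k in range(n))  # midpoint rule
lhs = phi(t)
rhs = A + B * integral
print(abs(lhs - rhs))  # close to 0

bound = A * math.exp(B * t)  # Gronwall bound, since ||x||_{1-var,[0,t]} = t
print(phi(t) <= bound + 1e-9)  # True
```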

Posted in Rough paths theory | 3 Comments

MA3160 Probability. Syllabus Fall 2017

The main educational resource for MA3160 is the following webpage: UConn Undergraduate Probability OER.

No book is required and the course will mostly be based on the lecture notes posted here.

There will be two midterm exams (in class) and a final exam whose dates will be communicated later.

There will be weekly homework assignments (not graded). Each Thursday, at the end of the class, there will be a 15-minute quiz consisting of one or two of the homework problems, picked at random.

The final grade will be computed as 20% first midterm, 20% second midterm, 20% quizzes, and 40% final exam.

The following topics will be covered.

  1. Introduction: What is probability theory and why do we care?
  2. Sets
  3. Combinatorics
  4. The probability set-up
  5. Independence
  6. Conditional probability
  7. Random variables
  8. Some discrete distributions
  9. Continuous distributions
  10. Normal distribution
  11. Normal approximation
  12. Some continuous distributions
  13. Multivariate distributions
  14. Expectations
  15. Moment generating functions
  16. Limit laws
Posted in MA3160 | Leave a comment

Rough paths theory. Fall 2017

Rough path

During the Fall 2017 semester, I will be teaching rough paths theory at the University of Connecticut. The course will be mainly based on these notes and the lectures already posted on this blog in 2013 (when I first taught the class at Purdue University).

Since I first taught the class, the theory of rough paths has found many further applications. A natural and far-reaching development is the theory of regularity structures, for which Martin Hairer was awarded the Fields Medal in 2014 (see my post). I will therefore update the lectures to reflect those developments. A good introduction to the theory of regularity structures is the book by Peter Friz and Martin Hairer, which I will be using as complementary reading.

Posted in Rough paths theory | Leave a comment

MA5311. Take home exam

Exercise 1. Solve Exercise 44 in Chapter 1 of the book.

Exercise 2.  Solve Exercise 3 in Chapter 1 of the book.

Exercise 3.  Solve Exercise 39 in Chapter 1 of the book.

Exercise 4. The heat kernel on \mathbb{S}^1 is given by p(t,y) =\frac{1}{2\pi}\sum_{m \in \mathbb{Z}} e^{-m^2 t} e^{im y} =\frac{1}{\sqrt{4\pi t}} \sum_{k \in \mathbb{Z}} e^{-\frac{(y -2k\pi)^2}{4t} }.

  • By using the subordination identity e^{-\tau | \alpha | } =\frac{\tau}{2\sqrt{\pi}} \int_0^{+\infty} \frac{e^{-\frac{\tau^2}{4t}-t \alpha^2}}{t^{3/2}} dt, \quad \tau \neq 0, \alpha \in \mathbb{R}, show that for \tau > 0, \frac{1+e^{-2\pi \tau}}{1-e^{-2\pi \tau}} =\frac{1}{2\pi} \sum_{n \in \mathbb{Z}} \frac{2\tau}{\tau^2+n^2}.
  • The Bernoulli numbers B_k are defined via the series expansion \frac{x}{e^x -1}=\sum_{k=0}^{+\infty} B_k \frac{x^k}{k!}. By using the previous identity show that for k \in \mathbb{N}, k \neq 0, \sum_{n=1}^{+\infty} \frac{1}{n^{2k}} =(-1)^{k-1} \frac{(2\pi)^{2k} B_{2k} }{2(2k)!}.
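The equality of the two series representations of the heat kernel stated in Exercise 4 (an instance of Poisson summation) can be checked numerically; this sketch is not part of the exercise, and the truncation levels and evaluation point are arbitrary.

```python
import math, cmath

def p_fourier(t, y, m_max=50):
    # p(t, y) = (1/(2 pi)) * sum_m e^{-m^2 t} e^{i m y}
    s = sum(math.exp(-m * m * t) * cmath.exp(1j * m * y)
            for m in range(-m_max, m_max + 1))
    return s.real / (2 * math.pi)

def p_gauss(t, y, k_max=50):
    # p(t, y) = (1/sqrt(4 pi t)) * sum_k e^{-(y - 2 k pi)^2 / (4 t)}
    s = sum(math.exp(-(y - 2 * k * math.pi) ** 2 / (4 * t))
            for k in range(-k_max, k_max + 1))
    return s / math.sqrt(4 * math.pi * t)

t, y = 0.3, 1.0
print(abs(p_fourier(t, y) - p_gauss(t, y)))  # close to 0
```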

 

Exercise 5. Show that the heat kernel on the torus \mathbb{T}^n=\mathbb{R}^n / (2 \pi \mathbb{Z})^n is given by p(t,y) = \frac{1}{(4\pi t)^{n/2}} \sum_{k \in \mathbb{Z}^n} e^{-\frac{\|y+2k\pi\|^2}{4t} }=\frac{1}{(2\pi)^n} \sum_{l\in \mathbb{Z}^n} e^{i l \cdot y -\| l \|^2 t}.

Posted in Uncategorized | Leave a comment

MA5161. Take home exam

Exercise 1. The Hermite polynomial of order n is defined as
H_n (x)=(-1)^n e^{\frac{x^2}{2}} \frac{d^n}{dx^n} e^{-\frac{x^2}{2}}.

  • Compute H_0, H_1,H_2,H_3.
  • Show that if (B_t)_{t \ge 0} is a Brownian motion, then the process \left(t^{n/2}H_n (\frac{B_t}{\sqrt{t}})\right)_{t \ge 0} is a martingale.
  • Show that
    t^{n/2}H_n \left(\frac{B_t}{\sqrt{t}}\right)=n! \int_0^t \int_0^{t_1} \cdots \int_0^{t_{n-1}} dB_{t_n}\cdots dB_{t_1}.
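One way to sanity-check the martingale property in the second bullet (without solving the exercise) is to note that, by Itô's formula, it is equivalent to u(t,x) = t^{n/2}H_n(x/\sqrt{t}) solving the backward heat equation \partial_t u + \frac{1}{2}\partial_x^2 u = 0. The sketch below verifies this by finite differences, computing H_n through the standard three-term recurrence for the probabilists' Hermite polynomials; the evaluation point and step size are arbitrary.

```python
import math

def hermite(n, x):
    """Probabilists' Hermite polynomial H_n(x) via the standard
    recurrence H_{n+1}(x) = x H_n(x) - n H_{n-1}(x)."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def u(n, t, x):
    return t ** (n / 2) * hermite(n, x / math.sqrt(t))

# Martingale property <=> u solves the backward heat equation
#   du/dt + (1/2) d^2 u/dx^2 = 0; check by central finite differences.
n, t, x, h = 3, 1.3, 0.7, 1e-4
dt = (u(n, t + h, x) - u(n, t - h, x)) / (2 * h)
dxx = (u(n, t, x + h) - 2 * u(n, t, x) + u(n, t, x - h)) / h ** 2
print(abs(dt + 0.5 * dxx))  # close to 0
```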

 

Exercise 2. (Probabilistic proof of Liouville theorem) By using martingale methods, prove that if f:\mathbb{R}^n \to \mathbb{R} is a bounded harmonic function, then f is constant.

Exercise 3. Show that if (M_t)_{t \ge 0} is a local martingale with respect to a Brownian filtration (\mathcal{F}_t)_{t \ge 0}, then there is a unique progressively measurable process (u_t)_{t \ge 0} such that for every t \ge 0, \mathbb{P} \left(\int_0^t u_s^2 ds < +\infty \right)=1 and M_t=\mathbb{E} (M_0)+ \int_0^{t} u_s dB_s.
Exercise 4. (Skew-product decomposition)
Let (B_t)_{t \ge 0} be a complex Brownian motion started at z \neq 0.

  1. Show that for t \ge 0,
    B_t=z \exp\left( \int_0^t \frac{dB_s}{B_s} \right).
  2. Show that there exists a complex Brownian motion (\beta_t)_{t \ge 0} such that
    B_t=z \exp{\left( \beta_{\int_0^t \frac{ds}{\rho_s^2} }\right)},
    where \rho_t =| B_t |.
  3. Show that the process (\rho_t)_{t \ge 0} is independent of the Brownian motion (\gamma_t)_{t \ge 0}=(\mathbf{Im} ( \beta_t))_{t \ge 0}.
  4. We denote \theta_t=\mathbf{Im}\left( \int_0^t \frac{dB_s}{B_s} \right), which can be interpreted as the winding number around 0 of the complex Brownian motion path. For r>| z|, we consider the stopping time
    T_r =\inf \{ t \ge 0, | B_t | = r \}.
  5. Compute for every r>| z|, the distribution of the random variable
    \frac{1}{\ln (r/|z|)}\theta_{T_r}.
  6. Prove Spitzer's theorem: in distribution, we have the convergence
    \frac{ 2 \theta_t}{\ln t} \to_{t \to +\infty} C,
    where C is a Cauchy random variable with parameter 1, that is, a random variable with density \frac{1}{\pi (1+ x^2)}.

 

Posted in Uncategorized | Leave a comment

MA5311. HW due April 7

Solve Exercises 10, 14, 15, 16, 18, 19 in Chapter 1 of the book “The Laplacian on a Riemannian manifold”.

Posted in Differential Topology | Leave a comment