Lecture 10: Characterization of Infinitely Divisible Distributions

1 Characterization of Infinitely Divisible Distributions

We have shown that a distribution $\mu$ is infinitely divisible if and only if it is the weak limit of $S_n := X_{n,1} + \cdots + X_{n,n}$ for a uniformly infinitesimal triangular array of independent random variables $(X_{n,i})_{1 \le i \le n}$. Examples include the delta measure supported at a constant, the Gaussian distribution, the compound Poisson distribution, and their convolutions. We will show that every infinitely divisible distribution is essentially the convolution of a delta measure, a Gaussian distribution, and a compound Poisson distribution.

Our starting point is the result (Accompanying Laws) that every infinitely divisible $\mu$ is necessarily the weak limit of a uniformly infinitesimal triangular array $(X_{n,i})_{1 \le i \le n}$, where each $X_{n,i} - a_{n,i}$ is compound Poisson distributed for some $a_{n,i} \in \mathbb{R}$. Let $\varphi_n$ and $\varphi_{n,i}$ denote respectively the characteristic functions of $S_n$ and $X_{n,i} - a_{n,i}$. Since $X_{n,i} - a_{n,i}$ is a compound Poisson random variable, we have
$$\varphi_{n,i}(t) = \exp\Big\{ c_{n,i} \int \big( e^{itx} - 1 \big)\, \mu_{n,i}(dx) \Big\} =: \exp\Big\{ \int \big( e^{itx} - 1 \big)\, M_{n,i}(dx) \Big\}$$
for some $c_{n,i} \ge 0$ and some probability measure $\mu_{n,i}$ on $\mathbb{R}$, where we have set $M_{n,i}(dx) := c_{n,i}\,\mu_{n,i}(dx)$. Let $A_n := \sum_{i=1}^n a_{n,i}$ and $M_n(dx) := \sum_{i=1}^n M_{n,i}(dx)$. Then
$$\varphi_n(t) = \prod_{i=1}^n e^{i a_{n,i} t}\, \varphi_{n,i}(t) = \exp\Big\{ itA_n + \int \big( e^{itx} - 1 \big)\, M_n(dx) \Big\} \xrightarrow[n\to\infty]{} \varphi(t).$$
Identifying the limit $\varphi$ is thus reduced to identifying the possible limits of the exponent $itA_n + \int ( e^{itx} - 1 )\, M_n(dx)$. If $A_n$ converges to some $A \in \mathbb{R}$ and $M_n$ converges weakly to some finite measure $M$ on $\mathbb{R}$, then clearly
$$\varphi_n(t) \to \varphi(t) = e^{itA + \int (e^{itx} - 1)\, M(dx)},$$
which is the characteristic function of a compound Poisson random variable shifted by $A$. However, it is in general not necessary that $M_n$ converges weakly to a finite measure. Recall that a compound Poisson random variable with characteristic function $e^{\int (e^{itx}-1)\, M_n(dx)}$ can be written as $\sum_{i=1}^{N_n} \xi_{n,i}$, where $N_n$ is a Poisson random variable with mean $M_n(\mathbb{R})$, while $(\xi_{n,i})_{i \in \mathbb{N}}$ are i.i.d.
random variables with distribution $M_n(dx)/M_n(\mathbb{R})$. It may happen that $M_n(\mathbb{R}) \to \infty$, so that the number of summands $N_n$ tends to infinity in probability; however, this is compensated by $M_n(dx)/M_n(\mathbb{R})$, the distribution of $\xi_{n,i}$, converging weakly to $\delta_0$ as $n \to \infty$. In other words, $M_n$ may have more and more of its mass near the origin. To make sense of such convergence, we write
$$\Psi_n(t) := itA_n + \int \big( e^{itx} - 1 \big)\, M_n(dx) = itB_n + \int \big( e^{itx} - 1 - it\theta(x) \big)\, M_n(dx) = itB_n + \int \frac{e^{itx} - 1 - it\theta(x)}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M_n(dx), \tag{1.1}$$
where $\theta : \mathbb{R} \to \mathbb{R}$ is any bounded continuous function with $\theta(x) = x + O(|x|^3)$ near $0$, and $B_n := A_n + \int \theta(x)\, M_n(dx)$.
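As a quick sanity check of the compound Poisson characteristic function used above, here is a minimal pure-Python sketch. The jump law $\mu = \delta_1$ is an illustrative choice (not from the text): with it, the compound Poisson sum collapses to a single Poisson random variable, so the exponential formula can be compared against a direct pmf summation.

```python
import cmath
import math

def poisson_cf_direct(c, t, nmax=60):
    """Characteristic function of Poisson(c), computed by summing its pmf."""
    return sum(math.exp(-c) * c**k / math.factorial(k) * cmath.exp(1j * t * k)
               for k in range(nmax))

def compound_poisson_cf(c, t):
    """exp{ c * integral (e^{itx} - 1) mu(dx) } with the toy jump law mu = delta_1,
    for which the integral is simply e^{it} - 1."""
    return cmath.exp(c * (cmath.exp(1j * t) - 1))

c, t = 3.0, 0.7
assert abs(poisson_cf_direct(c, t) - compound_poisson_cf(c, t)) < 1e-10
```

The agreement of the two computations is exactly the identity $\varphi(t) = \exp\{c \int (e^{itx}-1)\mu(dx)\}$ specialized to a one-point jump distribution.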

The standard choices for $\theta$ are $\theta(x) = \frac{x}{1+x^2}$, or $\theta(x) = \sin x$. Note that $\frac{e^{itx} - 1 - it\theta(x)}{x^2}(1+x^2)$ is a bounded continuous function on $\mathbb{R}$ when set equal to $-t^2/2$ at $x = 0$, and hence $\Psi_n$ in (1.1) converges to a well-defined limit
$$\Psi(t) := itB - \frac{\sigma^2 t^2}{2} + \int \frac{e^{itx} - 1 - it\theta(x)}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M(dx) \tag{1.2}$$
if $B_n \to B \in \mathbb{R}$, and $\frac{x^2}{1+x^2}\, M_n(dx)$ converges weakly to a finite measure $\nu$ with $\sigma^2 := \nu(\{0\})$ and $M(dx) := \frac{1+x^2}{x^2}\,\nu(dx)$ on $\mathbb{R}\backslash\{0\}$. We remark that even when $B_n$ converges, $A_n$ and $\int \theta(x)\, M_n(dx)$ may both diverge as $n \to \infty$.

We will make the above heuristics rigorous and show that every infinitely divisible distribution is uniquely determined by a triple $(B, \sigma^2, M)$ as in (1.2). First, we show that $e^{\Psi(t)}$ is indeed a characteristic function. For the following results, we assume that $\theta(\cdot)$ is bounded and continuous, with $\theta(x) = x + O(|x|^3)$ as $x \to 0$; this assumption will be relaxed at the end of our discussion.

Theorem 1.1 [Existence of i.d.d. with exponent $\Psi$] Let $\theta(\cdot)$ be a bounded and continuous function with $\theta(x) = x + O(|x|^3)$ as $x \to 0$. Let $B \in \mathbb{R}$, $\sigma^2 \ge 0$, and let $M(dx)$ be a measure on $\mathbb{R}$ satisfying
$$M(\{0\}) = 0 \quad \text{and} \quad \int \frac{x^2}{1+x^2}\, M(dx) < \infty. \tag{1.3}$$
Then $e^{\Psi(t)}$, defined as in (1.2), is the characteristic function of an infinitely divisible distribution (i.d.d.) $\mu$. We call $(B, \sigma^2, M)$ the Lévy triple for $\mu$ w.r.t. $\theta$, and $M$ the Lévy measure.

Remark. Note that in the Lévy triple, $B$ depends on $\theta(\cdot)$, while $\sigma^2$ and $M$ do not.

Proof. Let $M_n(dx) := 1_{\{|x| \ge 1/n\}}\, M(dx)$, and let
$$\Psi_n(t) := itB - \frac{\sigma^2 t^2}{2} + \int \frac{e^{itx} - 1 - it\theta(x)}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M_n(dx).$$
Since $M_n$ is a finite measure, $e^{\Psi_n(t)}$ is the characteristic function of a probability distribution, which is the convolution of a delta measure, a Gaussian distribution, and a compound Poisson distribution. Note that $\frac{e^{itx} - 1 - it\theta(x)}{x^2}(1+x^2)$ is a bounded continuous function on $\mathbb{R}$ by the choice of $\theta$, while $\frac{x^2}{1+x^2}\, M_n(dx)$ is a sequence of measures increasing to the finite measure $\frac{x^2}{1+x^2}\, M(dx)$. Therefore by the bounded convergence theorem, $\Psi_n(t) \to \Psi(t)$, and hence $e^{\Psi_n(t)} \to e^{\Psi(t)}$ for each $t \in \mathbb{R}$. Lastly we note that $\Psi(0) = 0$.
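The truncation device in the proof, cutting $M$ off at distance $1/n$ from the origin and letting the cutoff shrink, can be watched numerically. Below is a minimal sketch for the hypothetical Lévy measure $M(dx) = x^{-3/2}\,dx$ on $(0,1]$ (infinite total mass near the origin, but satisfying (1.3)), with the compensator $\theta(x) = x$ on this range; the truncated exponents settle down as the cutoff shrinks because the compensated integrand is $O(x^2)\cdot x^{-3/2}$ near $0$.

```python
import cmath

def psi_trunc(t, eps, n_steps=20000):
    """Riemann-sum approximation of the truncated exponent
    Psi_eps(t) = int_eps^1 (e^{itx} - 1 - itx) x^{-3/2} dx
    for the hypothetical Levy measure M(dx) = x^{-3/2} dx on (0,1]."""
    h = (1.0 - eps) / n_steps
    total = 0j
    for k in range(n_steps):
        x = eps + (k + 0.5) * h            # midpoint rule
        total += (cmath.exp(1j * t * x) - 1 - 1j * t * x) * x**-1.5 * h
    return total

# Shrinking the cutoff changes the exponent less and less: a Cauchy-type
# stabilization, even though M itself has infinite mass near 0.
vals = [psi_trunc(1.0, eps) for eps in (1e-2, 1e-3, 1e-4)]
assert abs(vals[2] - vals[1]) < abs(vals[1] - vals[0]) < 1e-2
```

Without the compensating term $-itx$, the same integrals would diverge as the cutoff shrinks, which is exactly why the recentering by $\theta$ is built into (1.1) and (1.2).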
To show the continuity of $\Psi$ in $t$, note that in
$$\Psi(t) = itB - \frac{\sigma^2 t^2}{2} + \int \frac{e^{itx} - 1 - it\theta(x)}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M(dx),$$
the integrand $\frac{e^{itx} - 1 - it\theta(x)}{x^2}(1+x^2)$ is continuous in $t$ for each $x \in \mathbb{R}\backslash\{0\}$, and is uniformly bounded in $x \in \mathbb{R}\backslash\{0\}$ and $t \in [-T, T]$ for any $T > 0$, which can be seen by Taylor expanding $e^{itx}$ and $\theta(x)$ in a small neighborhood of $0$. We can then apply the bounded convergence theorem to conclude that $\Psi(t)$ is continuous in $t$, and hence so is $e^{\Psi(t)}$. By Lévy's continuity theorem, $e^{\Psi(t)}$ must be the characteristic function of some distribution $\mu$. The same argument shows that for any $a \ge 0$, $e^{a\Psi(t)}$ is also a characteristic function, and hence $\mu$ is infinitely divisible.

Next we study the convergence for a sequence of distributions with Lévy triples $(B_n, \sigma_n^2, M_n)$ w.r.t. $\theta(\cdot)$, which will be used to show that every infinitely divisible $\mu$ admits a Lévy triple.
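The Taylor-expansion step invoked above can be written out explicitly, using only $\theta(x) = x + O(|x|^3)$ and the expansion of $e^{itx}$ at $x = 0$:

```latex
e^{itx} - 1 - it\theta(x)
  = \Big(1 + itx - \tfrac{t^2x^2}{2} + O(|tx|^3)\Big) - 1 - it\big(x + O(|x|^3)\big)
  = -\tfrac{t^2x^2}{2} + O\big((|t|^3 + |t|)\,|x|^3\big),
```

so dividing by $x^2$ and multiplying by $1+x^2$ yields a quantity tending to $-t^2/2$ as $x \to 0$, uniformly over $t \in [-T,T]$; for $|x|$ bounded away from $0$, the quotient is bounded because the numerator is bounded (recall $\theta$ is bounded) while $(1+x^2)/x^2$ is bounded on $\{|x| \ge \delta\}$.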

Theorem 1.2 [Convergence] Let $(\mu_n)_{n \in \mathbb{N}}$ be a sequence of probability distributions with Lévy triples $(B_n, \sigma_n^2, M_n)_{n \in \mathbb{N}}$ w.r.t. a bounded continuous $\theta$ with $\theta(x) = x + O(|x|^3)$ as $x \to 0$. Then $\mu_n$ converges weakly to a limit $\mu$ if and only if the following conditions are satisfied:

(i) $B_n \to B \in \mathbb{R}$ as $n \to \infty$;

(ii) $\nu_n(dx) := \sigma_n^2\, \delta_0(dx) + \frac{x^2}{1+x^2}\, M_n(dx)$ converges weakly to some $\nu(dx) := \sigma^2\, \delta_0(dx) + \frac{x^2}{1+x^2}\, M(dx)$ with $M(\{0\}) = 0$, i.e., $\int f\, \nu_n(dx) \to \int f\, \nu(dx)$ as $n \to \infty$ for all bounded continuous $f : \mathbb{R} \to \mathbb{R}$;

in which case $(B, \sigma^2, M)$ is the Lévy triple for $\mu$ w.r.t. $\theta$.

Proof. Note that for each $t \in \mathbb{R}$, $g_t(x) := \frac{e^{itx} - 1 - it\theta(x)}{x^2}(1+x^2)$ defines a bounded continuous function on $\mathbb{R}$ with $g_t(0) := -t^2/2$. If conditions (i) and (ii) are satisfied, then for each $t \in \mathbb{R}$,
$$\Psi_n(t) := itB_n - \frac{\sigma_n^2 t^2}{2} + \int \frac{e^{itx} - 1 - it\theta(x)}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M_n(dx) = itB_n + \int g_t(x)\, \nu_n(dx) \longrightarrow itB + \int g_t(x)\, \nu(dx) = itB - \frac{\sigma^2 t^2}{2} + \int \frac{e^{itx} - 1 - it\theta(x)}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M(dx) =: \Psi(t).$$
The assumption $\nu_n \Rightarrow \nu$ implicitly implies that $\nu(\mathbb{R}) < \infty$, since $\int f\, \nu(dx)$ must be well-defined for all bounded continuous $f$, and hence $M$ satisfies (1.3) and is a Lévy measure. Therefore by Theorem 1.1, $(B, \sigma^2, M)$ is the Lévy triple for a probability measure $\mu$. Since $e^{\Psi_n(t)} \to e^{\Psi(t)}$ for each $t \in \mathbb{R}$, it follows that $\mu_n \Rightarrow \mu$.

Conversely, let us assume that $\mu_n \Rightarrow \mu$. Our strategy is to show first that the sequence of measures $(\nu_n)_{n \in \mathbb{N}}$ is tight, and then show that for every subsequence along which $\nu_n$ converges, the limits in (i) and (ii) exist and are unique. Let $\varphi_n$ and $\varphi$ be respectively the characteristic functions of $\mu_n$ and $\mu$. Then $\varphi_n(t) = e^{\Psi_n(t)}$ converges uniformly to $\varphi(t)$ on bounded intervals. In particular, on $[-T, T]$ for $T$ small enough such that $\varphi(t)$ does not vanish there, $\log |\varphi_n(t)|$ converges uniformly to $\log |\varphi(t)|$, which is continuous and vanishes at $t = 0$. For each $\epsilon > 0$, we can first choose $n_0 \in \mathbb{N}$ sufficiently large and then $\delta > 0$ sufficiently small to obtain
$$\sup_{n \ge n_0}\, \sup_{t \in [-\delta,\delta]} \big|\log |\varphi_n(t)|\big| \le \sup_{n \ge n_0}\, \sup_{t \in [-T,T]} \big|\log |\varphi_n(t)| - \log |\varphi(t)|\big| + \sup_{t \in [-\delta,\delta]} \big|\log |\varphi(t)|\big| \le \epsilon.$$
By choosing $\delta$ even smaller, we can extend the uniformity to all $n \in \mathbb{N}$, i.e., for every $\epsilon > 0$ there exists $\delta > 0$ such that
$$\sup_{t \in [-\delta,\delta]}\, \sup_{n \in \mathbb{N}} \big|\log |\varphi_n(t)|\big| \le \epsilon. \tag{1.4}$$
Note that
$$-\log |\varphi_n(t)| = -\mathrm{Re}(\Psi_n(t)) = \frac{\sigma_n^2 t^2}{2} + \int \frac{1 - \cos tx}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M_n(dx) = \int h_t(x)\, \nu_n(dx),$$
where $h_t(x) := \frac{1 - \cos tx}{x^2}(1+x^2) \ge 0$ for $x \ne 0$ and $h_t(0) := t^2/2$. We can then restate (1.4) as
$$\lim_{\delta \to 0}\, \sup_{t \in [-\delta,\delta]}\, \sup_{n \in \mathbb{N}} \int h_t(x)\, \nu_n(dx) = 0. \tag{1.5}$$

For each $l > 0$, we can choose $\delta > 0$ sufficiently small such that $\inf_{x : |x| \le l} h_\delta(x) = C(\delta, l) > 0$, which can be verified by Taylor expansion. Using (1.5) with $t = \delta$, we obtain
$$\sup_{n \in \mathbb{N}} \nu_n([-l, l]) \le \sup_{n \in \mathbb{N}} \int_{[-l,l]} \frac{h_\delta(x)}{C(\delta, l)}\, \nu_n(dx) \le C(\delta, l)^{-1} \sup_{n \in \mathbb{N}} \int h_\delta(x)\, \nu_n(dx) < \infty. \tag{1.6}$$
On the other hand, (1.5) also implies that
$$\lim_{\delta \to 0}\, \sup_{t \in [-\delta,\delta]}\, \sup_{n \in \mathbb{N}} \int_{|x| \ge 1} (1 - \cos tx)\, \nu_n(dx) = 0.$$
We have seen time and again the close link between $\int (1 - \cos tx)\, \nu_n(dx)$ for $t$ close to $0$, and $\nu_n\{x : |x| > l\}$ for $l$ large. More precisely, if we integrate the above equation over $t \in [-\delta, \delta]$ and divide by $2\delta$, we then obtain
$$\lim_{\delta \to 0}\, \sup_{n \in \mathbb{N}} \int_{|x| \ge 1} \Big( 1 - \frac{\sin \delta x}{\delta x} \Big)\, \nu_n(dx) = 0.$$
Since $\inf_{|y| \ge 1} \big( 1 - \frac{\sin y}{y} \big) = C > 0$, if we let $\delta = 1/l$ and restrict the integral to $|x| \ge l$, we then obtain
$$\lim_{l \to \infty} C\, \sup_{n \in \mathbb{N}} \nu_n\{x : |x| \ge l\} = 0. \tag{1.7}$$
Together with (1.6), this implies that $\sup_{n \in \mathbb{N}} \nu_n(\mathbb{R}) < \infty$, and the family of finite measures $\{\nu_n\}_{n \in \mathbb{N}}$ on $\mathbb{R}$ is tight. Therefore $\{\nu_n\}_{n \in \mathbb{N}}$ is relatively compact. Let $\nu$ be the weak limit along a subsequence $(\nu_{n_i})_{i \in \mathbb{N}}$. By what we have shown in the first part of the proof, the sequence of probability measures with Lévy triples $(0, \sigma_{n_i}^2, M_{n_i})$, where we have replaced $B_{n_i}$ by $0$, converges weakly to a limit. Since $\mu_{n_i}$ with Lévy triple $(B_{n_i}, \sigma_{n_i}^2, M_{n_i})$ also converges weakly, the following exercise shows that $B_{n_i}$ must converge to some $B \in \mathbb{R}$.

Exercise 1.3 For each $n \in \mathbb{N}$, let $B_n \in \mathbb{R}$ and let $\mu_n$ be a probability measure on $\mathbb{R}$. If $\mu_n$ converges weakly to a limit, and so does $\mu_n * \delta_{B_n}$, then $B_n \to B$ for some $B \in \mathbb{R}$.

We have thus verified conditions (i) and (ii) along the subsequence $(\mu_{n_i})_{i \in \mathbb{N}}$, and the limit $\mu$ admits a Lévy triple $(B, \sigma^2, M)$, with $\sigma^2 := \nu(\{0\})$ and $M(dx) := 1_{\{x \ne 0\}}\, \frac{1+x^2}{x^2}\, \nu(dx)$. To extend to the full sequence $(\mu_n)_{n \in \mathbb{N}}$, it only remains to show that $(\nu_n)_{n \in \mathbb{N}}$ has a unique subsequential weak limit (this also implies the uniqueness of the limit $B$ by Exercise 1.3), which is equivalent to showing that $\mu$ admits a unique Lévy triple. This is the content of Theorem 1.4 below.
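The positive constant $C = \inf_{|y| \ge 1} \big(1 - \frac{\sin y}{y}\big)$ used in the tightness bound above is easy to check numerically. Since the function is even, increasing on $[1, \pi]$, and at least $1 - 1/\pi$ for $y > \pi$, the infimum is attained at $y = 1$; a grid sketch confirms this:

```python
import math

# Check inf_{|y| >= 1} (1 - sin(y)/y) = C > 0 on a grid covering [1, 101];
# the function is even and tends to 1 as |y| -> infinity, so this range suffices
# for a sketch.
ys = [1 + 0.001 * k for k in range(100000)]
c_min = min(1 - math.sin(y) / y for y in ys)
assert abs(c_min - (1 - math.sin(1.0))) < 1e-9   # infimum attained at y = 1
assert 0.15 < c_min < 0.16                        # C is roughly 0.1585
```

Any positive lower bound of this kind is all the tightness argument needs; the exact value of $C$ is irrelevant.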
Theorem 1.4 [Uniqueness] Each probability distribution $\mu$ is associated with at most one Lévy triple $(B, \sigma^2, M)$ for a given $\theta(\cdot)$.

Proof. Suppose that $\mu$ has characteristic function $\varphi$, with $\varphi(t) = e^{\Psi_i(t)}$ and
$$\Psi_i(t) = itB_i - \frac{\sigma_i^2 t^2}{2} + \int \frac{e^{itx} - 1 - it\theta(x)}{x^2}\,(1+x^2)\, \frac{x^2}{1+x^2}\, M_i(dx)$$
for two Lévy triples $(B_i, \sigma_i^2, M_i)$, $i = 1, 2$. Since $\varphi$ is continuous and never zero, while $\Psi_1(0) = \Psi_2(0) = 0$ and $\Psi_1, \Psi_2$ are both continuous as shown in the proof of Theorem 1.1, we can take the logarithm of $\varphi$ to conclude that $\Psi_1 = \Psi_2$. We leave it as an exercise to show that

Exercise 1.5 If $M$ is a Lévy measure, then $\lim_{t \to \infty} \frac{1}{t^2} \int \big( e^{itx} - 1 - it\theta(x) \big)\, M(dx) = 0$.

Therefore
$$\lim_{t \to \infty} \frac{\Psi_1(t)}{t^2} = -\frac{\sigma_1^2}{2} \quad \text{and} \quad \lim_{t \to \infty} \frac{\Psi_2(t)}{t^2} = -\frac{\sigma_2^2}{2},$$
namely $\sigma_1^2 = \sigma_2^2$, which gives
$$\psi(t) := itB_1 + \int \big( e^{itx} - 1 - it\theta(x) \big)\, M_1(dx) = itB_2 + \int \big( e^{itx} - 1 - it\theta(x) \big)\, M_2(dx).$$
Let us compute the following discrete analogue of the second derivative of $\psi$, i.e., for any $s > 0$,
$$\tilde\psi_s(t) := \psi(t) - \frac{\psi(t+s) + \psi(t-s)}{2} = \int e^{itx} (1 - \cos sx)\, M_1(dx) = \int e^{itx} (1 - \cos sx)\, M_2(dx).$$
Since $0 \le 1 - \cos sx \le \frac{s^2 x^2}{2} \wedge 2$, $(1 - \cos sx)\, M_1(dx)$ and $(1 - \cos sx)\, M_2(dx)$ are finite measures with the same Fourier transform, and hence they are equal. Therefore $M_1 = M_2$ on $\{x \in \mathbb{R} : 1 - \cos sx \ne 0\}$. Note that $\{x : 1 - \cos sx \ne 0 \text{ for some } s > 0\} = \mathbb{R}\backslash\{0\}$, therefore $M_1 = M_2$ on $\mathbb{R}\backslash\{0\}$, which together with the assumption $M_1(\{0\}) = M_2(\{0\}) = 0$ implies that $M_1 = M_2$. Given $\sigma_1^2 = \sigma_2^2$ and $M_1 = M_2$, it is then immediate that $B_1 = B_2$.

Theorems 1.2 and 1.4 can be used to show that every infinitely divisible distribution $\mu$ admits a unique Lévy triple with respect to $\theta$, which is bounded and continuous with $\theta(x) = x + O(|x|^3)$ as $x \to 0$. This is known as the Lévy-Khintchine representation of $\mu$. We formulate below a version which relaxes the assumption on $\theta$ so as to accommodate the standard choice of $\theta(x) = x\, 1_{\{|x| < 1\}}$. We emphasize however that in Theorem 1.2, we cannot let $\theta$ be discontinuous, due to the weak convergence statement.

Corollary 1.6 [Lévy-Khintchine representation] Given $\theta : \mathbb{R} \to \mathbb{R}$ which is bounded with $\theta(x) = x + O(x^2)$ as $x \to 0$, every infinitely divisible distribution $\mu$ admits a unique Lévy triple $(B, \sigma^2, M)$ with respect to $\theta(\cdot)$.

Proof. By the theorem on Accompanying Laws, every infinitely divisible $\mu$ necessarily arises as the weak limit of a uniformly infinitesimal triangular array of independent shifted compound Poisson random variables, which clearly admit representations in terms of Lévy triples. Therefore Theorem 1.2 implies that $\mu$ also admits a Lévy triple $(B, \sigma^2, M)$ with respect to $\theta(x) = \sin x$, which is furthermore unique by Theorem 1.4.
For more general $\theta$, we can write
$$itB - \frac{\sigma^2 t^2}{2} + \int \big( e^{itx} - 1 - it \sin x \big)\, M(dx) = it(B - C) - \frac{\sigma^2 t^2}{2} + \int \big( e^{itx} - 1 - it\theta(x) \big)\, M(dx),$$
so that $(B, \sigma^2, M)$ is a Lévy triple for $\mu$ w.r.t. $\sin x$ if and only if $(B - C, \sigma^2, M)$ is a Lévy triple for $\mu$ with respect to $\theta(\cdot)$, where $C := \int (\sin x - \theta(x))\, M(dx)$ is finite by our assumption on $\theta$. The uniqueness of the Lévy triple w.r.t. $\theta(\cdot)$ then follows from that for $\sin x$.

Remark 1.7 [Computing the Lévy triple] Theorem 1.2 can be used to identify the Lévy triple for an infinitely divisible distribution $\mu$, assuming that $\mu_n^{*n} \Rightarrow \mu$ for a given sequence of probability measures $\mu_n$. We can use $\mu_n$ to construct an accompanying triangular array of shifted compound Poisson random variables which also converges to $\mu$. The Lévy triples for the shifted compound Poisson random variables can be easily identified in terms of $\mu_n$. We can then apply Theorem 1.2 to identify the Lévy triple for $\mu$.
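The change-of-$\theta$ bookkeeping above can be sanity-checked numerically. Below is a minimal sketch with a hypothetical two-atom Lévy measure (chosen purely for illustration), switching from $\theta(x) = \sin x$ to $\theta(x) = x\, 1_{\{|x| < 1\}}$ while shifting $B$ by $C = \int (\sin x - \theta(x))\, M(dx)$:

```python
import cmath
import math

def exponent(t, B, jumps, theta):
    """psi(t) = itB + sum_x (e^{itx} - 1 - it*theta(x)) M({x}) for a toy
    discrete Levy measure M given as {jump size: mass} (illustrative);
    the Gaussian part is omitted since it is unaffected by the change of theta."""
    s = 1j * t * B
    for x, m in jumps.items():
        s += (cmath.exp(1j * t * x) - 1 - 1j * t * theta(x)) * m
    return s

jumps = {1.0: 0.5, -2.0: 0.25}                   # hypothetical two-atom measure
theta_sin = math.sin
theta_cut = lambda x: x if abs(x) < 1 else 0.0   # theta(x) = x 1_{|x|<1}
# Switching truncation functions shifts B by C = int (sin x - theta(x)) M(dx):
C = sum((math.sin(x) - theta_cut(x)) * m for x, m in jumps.items())
t = 1.3
assert abs(exponent(t, 0.7, jumps, theta_sin)
           - exponent(t, 0.7 - C, jumps, theta_cut)) < 1e-9
```

The assertion holds for every $t$, since the two sides differ exactly by $it\big(C - \int (\sin x - \theta(x))\, M(dx)\big) = 0$.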

Exercise 1.8 Show that the mean $1$ exponential distribution is infinitely divisible, and compute its Lévy triple w.r.t. $\theta(x) = x\, 1_{\{|x| < 1\}}$.

Remark 1.9 [Lévy Processes] A stochastic process $X := (X_t)_{t \ge 0}$ is called a Lévy process if it satisfies:

(i) For any $k \in \mathbb{N}$ and $0 = t_0 < t_1 < \cdots < t_k$, $(X_{t_i} - X_{t_{i-1}})_{1 \le i \le k}$ is a collection of independent random variables;

(ii) For any $0 \le s \le t$, $X_t - X_s$ is equally distributed with $X_{t-s}$.

These properties imply that for any $t > 0$, the distribution $\mu_t$ of $X_t$ is infinitely divisible, and hence is uniquely characterized by a Lévy triple $(B_t, \sigma_t^2, M_t)$ (let us take $\theta(x) = x\, 1_{\{|x| < 1\}}$ to fix $B_t$). The characteristic function of $X_t$ is of the form $e^{\Psi_t(u)}$, where
$$\Psi_t(u) := iuB_t - \frac{\sigma_t^2 u^2}{2} + \int \big( e^{iux} - 1 - iu\theta(x) \big)\, M_t(dx)$$
is known as the characteristic exponent for $\mu_t$. Properties (i) and (ii) also imply that for any $s_1, \ldots, s_k > 0$,
$$\Psi_{s_1 + \cdots + s_k} = \sum_{i=1}^k \Psi_{s_i}.$$
This implies that in fact $\Psi_t = t\Psi_1$. The law of the Lévy process $(X_t)_{t \ge 0}$ is thus uniquely determined by the Lévy exponent $\Psi := \Psi_1$, or equivalently, determined by its Lévy triple $(B, \sigma^2, M)$ w.r.t. $\theta(x) = x\, 1_{\{|x| < 1\}}$. If $(B, \sigma^2, M) = (0, 1, 0)$, $X$ is the standard Brownian motion. If $(B, \sigma^2, M) = (0, 0, \delta_1)$, $X$ is the Poisson process. Generally, $B$ encodes a part of the constant drift for the process (the other part being encoded in $\theta(\cdot)$), $\sigma^2$ encodes the Gaussian (or Brownian motion) component of the process, while $M$ encodes the jumps of the process, analogous to a compound Poisson process. Note that $M$ assigns finite measure to the complement of any ball centered at the origin, which implies that $X$ can only make finitely many jumps of a given size or larger during any finite time interval. However, $M$ may assign infinite measure to every neighborhood of the origin, in which case $X$ makes infinitely many small jumps in any finite time interval.
In such a case, either $\int_{|x| \le 1} |x|\, M(dx) < \infty$, so that the jumps are absolutely summable over any finite time interval; or $\int_{|x| \le 1} |x|\, M(dx) = \infty$ and the jumps are not absolutely summable, and it is necessary to introduce an infinite drift (represented by the term $-iu \int \theta(x)\, M(dx)$ in $\Psi(u)$, which formally is undefined because $\int_{|x| \le 1} |x|\, M(dx) = \infty$), so that after such a centering, the infinitely many small positive and negative jumps balance each other out in such a way that $X_t$ is well-defined for each $t \ge 0$. In particular, for the Lévy process with $B = \sigma^2 = 0$ and $M(dx) = 1_{\{x > 0\}}\, x^{-2}\, dx$, even though the Lévy measure is supported on $(0, \infty)$ so that only positive jumps are possible, the process $X$ can become negative due to the infinite negative drift represented by the term $-iu \int \theta(x)\, M(dx)$ in $\Psi(u)$.
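Finally, the scaling relation $\Psi_t = t\Psi_1$ can be checked on the Poisson process, whose marginal law is explicit: $X_t \sim \mathrm{Poisson}(\lambda t)$ for a rate parameter $\lambda$ (an illustrative choice below), so its characteristic function can be computed both directly from the pmf and through the Lévy exponent.

```python
import cmath
import math

def cf_poisson(mean, u, nmax=80):
    """E[e^{iuN}] for N ~ Poisson(mean), by summing the pmf (term recurrence
    avoids large factorials)."""
    term, total = math.exp(-mean), 0j
    for k in range(nmax):
        total += term * cmath.exp(1j * u * k)
        term *= mean / (k + 1)
    return total

lam, t, u = 2.0, 3.0, 0.6                   # lam is an illustrative rate
psi1 = lam * (cmath.exp(1j * u) - 1)        # Levy exponent of X_1
# Marginal X_t ~ Poisson(lam * t), and its characteristic exponent is t * psi1:
assert abs(cf_poisson(lam * t, u) - cmath.exp(t * psi1)) < 1e-10
```

The same check works for any $t > 0$: doubling the time interval squares the characteristic function, which is exactly the semigroup property behind $\Psi_{s_1 + \cdots + s_k} = \sum_i \Psi_{s_i}$.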


More information

Beyond the color of the noise: what is memory in random phenomena?

Beyond the color of the noise: what is memory in random phenomena? Beyond the color of the noise: what is memory in random phenomena? Gennady Samorodnitsky Cornell University September 19, 2014 Randomness means lack of pattern or predictability in events according to

More information

Jump Processes. Richard F. Bass

Jump Processes. Richard F. Bass Jump Processes Richard F. Bass ii c Copyright 214 Richard F. Bass Contents 1 Poisson processes 1 1.1 Definitions............................. 1 1.2 Stopping times.......................... 3 1.3 Markov

More information

Lecture 8: The Field B dr

Lecture 8: The Field B dr Lecture 8: The Field B dr October 29, 2018 Throughout this lecture, we fix a perfectoid field C of characteristic p, with valuation ring O C. Fix an element π C with 0 < π C < 1, and let B denote the completion

More information

Estimation of the characteristics of a Lévy process observed at arbitrary frequency

Estimation of the characteristics of a Lévy process observed at arbitrary frequency Estimation of the characteristics of a Lévy process observed at arbitrary frequency Johanna Kappus Institute of Mathematics Humboldt-Universität zu Berlin kappus@math.hu-berlin.de Markus Reiß Institute

More information

1. Theorem. (Archimedean Property) Let x be any real number. There exists a positive integer n greater than x.

1. Theorem. (Archimedean Property) Let x be any real number. There exists a positive integer n greater than x. Advanced Calculus I, Dr. Block, Chapter 2 notes. Theorem. (Archimedean Property) Let x be any real number. There exists a positive integer n greater than x. 2. Definition. A sequence is a real-valued function

More information

Kernel families of probability measures. Saskatoon, October 21, 2011

Kernel families of probability measures. Saskatoon, October 21, 2011 Kernel families of probability measures Saskatoon, October 21, 2011 Abstract The talk will compare two families of probability measures: exponential, and Cauchy-Stjelties families. The exponential families

More information

Risk Bounds for Lévy Processes in the PAC-Learning Framework

Risk Bounds for Lévy Processes in the PAC-Learning Framework Risk Bounds for Lévy Processes in the PAC-Learning Framework Chao Zhang School of Computer Engineering anyang Technological University Dacheng Tao School of Computer Engineering anyang Technological University

More information

MATH 6605: SUMMARY LECTURE NOTES

MATH 6605: SUMMARY LECTURE NOTES MATH 6605: SUMMARY LECTURE NOTES These notes summarize the lectures on weak convergence of stochastic processes. If you see any typos, please let me know. 1. Construction of Stochastic rocesses A stochastic

More information

Notes on uniform convergence

Notes on uniform convergence Notes on uniform convergence Erik Wahlén erik.wahlen@math.lu.se January 17, 2012 1 Numerical sequences We begin by recalling some properties of numerical sequences. By a numerical sequence we simply mean

More information

u xx + u yy = 0. (5.1)

u xx + u yy = 0. (5.1) Chapter 5 Laplace Equation The following equation is called Laplace equation in two independent variables x, y: The non-homogeneous problem u xx + u yy =. (5.1) u xx + u yy = F, (5.) where F is a function

More information

IEOR 3106: Introduction to Operations Research: Stochastic Models. Fall 2011, Professor Whitt. Class Lecture Notes: Thursday, September 15.

IEOR 3106: Introduction to Operations Research: Stochastic Models. Fall 2011, Professor Whitt. Class Lecture Notes: Thursday, September 15. IEOR 3106: Introduction to Operations Research: Stochastic Models Fall 2011, Professor Whitt Class Lecture Notes: Thursday, September 15. Random Variables, Conditional Expectation and Transforms 1. Random

More information

Lévy Processes in Cones of Banach Spaces

Lévy Processes in Cones of Banach Spaces Lévy Processes in Cones of Banach Spaces Víctor Pérez-Abreu Centro de Investigación en Matemáticas CIMAT, Guanajuato, Mexico Alfonso Rocha-Arteaga Escuela de Ciencias Físico-Matemáticas Universidad Autónoma

More information

Math 259: Introduction to Analytic Number Theory More about the Gamma function

Math 259: Introduction to Analytic Number Theory More about the Gamma function Math 59: Introduction to Analytic Number Theory More about the Gamma function We collect some more facts about Γs as a function of a complex variable that will figure in our treatment of ζs and Ls, χ.

More information

LECTURE 10: REVIEW OF POWER SERIES. 1. Motivation

LECTURE 10: REVIEW OF POWER SERIES. 1. Motivation LECTURE 10: REVIEW OF POWER SERIES By definition, a power series centered at x 0 is a series of the form where a 0, a 1,... and x 0 are constants. For convenience, we shall mostly be concerned with the

More information

e (x y)2 /4kt φ(y) dy, for t > 0. (4)

e (x y)2 /4kt φ(y) dy, for t > 0. (4) Math 24A October 26, 2 Viktor Grigoryan Heat equation: interpretation of the solution Last time we considered the IVP for the heat equation on the whole line { ut ku xx = ( < x

More information

Problems for Chapter 3.

Problems for Chapter 3. Problems for Chapter 3. Let A denote a nonempty set of reals. The complement of A, denoted by A, or A C is the set of all points not in A. We say that belongs to the interior of A, Int A, if there eists

More information

. Get closed expressions for the following subsequences and decide if they converge. (1) a n+1 = (2) a 2n = (3) a 2n+1 = (4) a n 2 = (5) b n+1 =

. Get closed expressions for the following subsequences and decide if they converge. (1) a n+1 = (2) a 2n = (3) a 2n+1 = (4) a n 2 = (5) b n+1 = Math 316, Intro to Analysis subsequences. Recall one of our arguments about why a n = ( 1) n diverges. Consider the subsequences a n = ( 1) n = +1. It converges to 1. On the other hand, the subsequences

More information

L p Spaces and Convexity

L p Spaces and Convexity L p Spaces and Convexity These notes largely follow the treatments in Royden, Real Analysis, and Rudin, Real & Complex Analysis. 1. Convex functions Let I R be an interval. For I open, we say a function

More information

Convergence of price and sensitivities in Carr s randomization approximation globally and near barrier

Convergence of price and sensitivities in Carr s randomization approximation globally and near barrier Convergence of price and sensitivities in Carr s randomization approximation globally and near barrier Sergei Levendorskĭi University of Leicester Toronto, June 23, 2010 Levendorskĭi () Convergence of

More information

Lecture 6: Ideal gas ensembles

Lecture 6: Ideal gas ensembles Introduction Lecture 6: Ideal gas ensembles A simple, instructive and practical application of the equilibrium ensemble formalisms of the previous lecture concerns an ideal gas. Such a physical system

More information

On rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro

On rational approximation of algebraic functions. Julius Borcea. Rikard Bøgvad & Boris Shapiro On rational approximation of algebraic functions http://arxiv.org/abs/math.ca/0409353 Julius Borcea joint work with Rikard Bøgvad & Boris Shapiro 1. Padé approximation: short overview 2. A scheme of rational

More information

u( x) = g( y) ds y ( 1 ) U solves u = 0 in U; u = 0 on U. ( 3)

u( x) = g( y) ds y ( 1 ) U solves u = 0 in U; u = 0 on U. ( 3) M ath 5 2 7 Fall 2 0 0 9 L ecture 4 ( S ep. 6, 2 0 0 9 ) Properties and Estimates of Laplace s and Poisson s Equations In our last lecture we derived the formulas for the solutions of Poisson s equation

More information

The Diffusion Equation with Piecewise Smooth Initial Conditions ABSTRACT INTRODUCTION

The Diffusion Equation with Piecewise Smooth Initial Conditions ABSTRACT INTRODUCTION Malaysian Journal of Mathematical Sciences 5(1): 101-110 (011) The Diffusion Equation with Piecewise Smooth Initial Conditions Ravshan Ashurov and Almaz Butaev Institute of Advanced Technology (ITMA),Universiti

More information

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc

CIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes

More information

221A Lecture Notes Steepest Descent Method

221A Lecture Notes Steepest Descent Method Gamma Function A Lecture Notes Steepest Descent Method The best way to introduce the steepest descent method is to see an example. The Stirling s formula for the behavior of the factorial n! for large

More information

An introduction to Lévy processes

An introduction to Lévy processes with financial modelling in mind Department of Statistics, University of Oxford 27 May 2008 1 Motivation 2 3 General modelling with Lévy processes Modelling financial price processes Quadratic variation

More information

Supermodular ordering of Poisson arrays

Supermodular ordering of Poisson arrays Supermodular ordering of Poisson arrays Bünyamin Kızıldemir Nicolas Privault Division of Mathematical Sciences School of Physical and Mathematical Sciences Nanyang Technological University 637371 Singapore

More information

Examples of the Fourier Theorem (Sect. 10.3). The Fourier Theorem: Continuous case.

Examples of the Fourier Theorem (Sect. 10.3). The Fourier Theorem: Continuous case. s of the Fourier Theorem (Sect. 1.3. The Fourier Theorem: Continuous case. : Using the Fourier Theorem. The Fourier Theorem: Piecewise continuous case. : Using the Fourier Theorem. The Fourier Theorem:

More information

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2)

X n D X lim n F n (x) = F (x) for all x C F. lim n F n(u) = F (u) for all u C F. (2) 14:17 11/16/2 TOPIC. Convergence in distribution and related notions. This section studies the notion of the so-called convergence in distribution of real random variables. This is the kind of convergence

More information

Asymptotic Expansions

Asymptotic Expansions Asymptotic Expansions V 1.5.4 217/1/3 Suppose we have a function f(x) of single real parameter x and we are interested in an approximation to f(x) for x close to x. The approximation might depend on whether

More information

n E(X t T n = lim X s Tn = X s

n E(X t T n = lim X s Tn = X s Stochastic Calculus Example sheet - Lent 15 Michael Tehranchi Problem 1. Let X be a local martingale. Prove that X is a uniformly integrable martingale if and only X is of class D. Solution 1. If If direction:

More information

Some Fun with Divergent Series

Some Fun with Divergent Series Some Fun with Divergent Series 1. Preliminary Results We begin by examining the (divergent) infinite series S 1 = 1 + 2 + 3 + 4 + 5 + 6 + = k=1 k S 2 = 1 2 + 2 2 + 3 2 + 4 2 + 5 2 + 6 2 + = k=1 k 2 (i)

More information

Self-similar Markov processes

Self-similar Markov processes Self-similar Markov processes Andreas E. Kyprianou 1 1 Unversity of Bath Which are our favourite stochastic processes? Markov chains Diffusions Brownian motion Cts-time Markov processes with jumps Lévy

More information

1 Fourier Integrals of finite measures.

1 Fourier Integrals of finite measures. 18.103 Fall 2013 1 Fourier Integrals of finite measures. Denote the space of finite, positive, measures on by M + () = {µ : µ is a positive measure on ; µ() < } Proposition 1 For µ M + (), we define the

More information

Assignment 4. u n+1 n(n + 1) i(i + 1) = n n (n + 1)(n + 2) n(n + 2) + 1 = (n + 1)(n + 2) 2 n + 1. u n (n + 1)(n + 2) n(n + 1) = n

Assignment 4. u n+1 n(n + 1) i(i + 1) = n n (n + 1)(n + 2) n(n + 2) + 1 = (n + 1)(n + 2) 2 n + 1. u n (n + 1)(n + 2) n(n + 1) = n Assignment 4 Arfken 5..2 We have the sum Note that the first 4 partial sums are n n(n + ) s 2, s 2 2 3, s 3 3 4, s 4 4 5 so we guess that s n n/(n + ). Proving this by induction, we see it is true for

More information

Maxwell s equations for electrostatics

Maxwell s equations for electrostatics Maxwell s equations for electrostatics October 6, 5 The differential form of Gauss s law Starting from the integral form of Gauss s law, we treat the charge as a continuous distribution, ρ x. Then, letting

More information

1 Lyapunov theory of stability

1 Lyapunov theory of stability M.Kawski, APM 581 Diff Equns Intro to Lyapunov theory. November 15, 29 1 1 Lyapunov theory of stability Introduction. Lyapunov s second (or direct) method provides tools for studying (asymptotic) stability

More information

13. Examples of measure-preserving tranformations: rotations of a torus, the doubling map

13. Examples of measure-preserving tranformations: rotations of a torus, the doubling map 3. Examples of measure-preserving tranformations: rotations of a torus, the doubling map 3. Rotations of a torus, the doubling map In this lecture we give two methods by which one can show that a given

More information

Statistical test for some multistable processes

Statistical test for some multistable processes Statistical test for some multistable processes Ronan Le Guével Joint work in progress with A. Philippe Journées MAS 2014 1 Multistable processes First definition : Ferguson-Klass-LePage series Properties

More information

Stability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan

Stability and Sensitivity of the Capacity in Continuous Channels. Malcolm Egan Stability and Sensitivity of the Capacity in Continuous Channels Malcolm Egan Univ. Lyon, INSA Lyon, INRIA 2019 European School of Information Theory April 18, 2019 1 / 40 Capacity of Additive Noise Models

More information

Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias

Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias José E. Figueroa-López 1 1 Department of Statistics Purdue University Seoul National University & Ajou University

More information

Lecture 4: Introduction to stochastic processes and stochastic calculus

Lecture 4: Introduction to stochastic processes and stochastic calculus Lecture 4: Introduction to stochastic processes and stochastic calculus Cédric Archambeau Centre for Computational Statistics and Machine Learning Department of Computer Science University College London

More information

Introduction to Rare Event Simulation

Introduction to Rare Event Simulation Introduction to Rare Event Simulation Brown University: Summer School on Rare Event Simulation Jose Blanchet Columbia University. Department of Statistics, Department of IEOR. Blanchet (Columbia) 1 / 31

More information

B553 Lecture 1: Calculus Review

B553 Lecture 1: Calculus Review B553 Lecture 1: Calculus Review Kris Hauser January 10, 2012 This course requires a familiarity with basic calculus, some multivariate calculus, linear algebra, and some basic notions of metric topology.

More information

In this chapter we study elliptical PDEs. That is, PDEs of the form. 2 u = lots,

In this chapter we study elliptical PDEs. That is, PDEs of the form. 2 u = lots, Chapter 8 Elliptic PDEs In this chapter we study elliptical PDEs. That is, PDEs of the form 2 u = lots, where lots means lower-order terms (u x, u y,..., u, f). Here are some ways to think about the physical

More information

Translation Invariant Experiments with Independent Increments

Translation Invariant Experiments with Independent Increments Translation Invariant Statistical Experiments with Independent Increments (joint work with Nino Kordzakhia and Alex Novikov Steklov Mathematical Institute St.Petersburg, June 10, 2013 Outline 1 Introduction

More information

This is a Gaussian probability centered around m = 0 (the most probable and mean position is the origin) and the mean square displacement m 2 = n,or

This is a Gaussian probability centered around m = 0 (the most probable and mean position is the origin) and the mean square displacement m 2 = n,or Physics 7b: Statistical Mechanics Brownian Motion Brownian motion is the motion of a particle due to the buffeting by the molecules in a gas or liquid. The particle must be small enough that the effects

More information