Self-similar Markov processes
Andreas E. Kyprianou, University of Bath
Which are our favourite stochastic processes? Markov chains, diffusions, Brownian motion, continuous-time Markov processes with jumps, Lévy processes ... and self-similar Markov processes.
Lévy processes (sticking to one dimension). A Lévy process is an R-valued random trajectory {X_t : t ≥ 0}, issued from the origin, with paths that are right-continuous with left limits, and which has stationary and independent increments. More formally, stationary and independent increments means: for 0 ≤ s ≤ t < ∞, X_t − X_s is independent of {X_u : u ≤ s}, and X_t − X_s is equal in distribution to X_{t−s}. It can be shown that this means the entire process is characterised by its position at time 1: E[e^{iθX_t}] = e^{−Ψ(θ)t} for some appropriate function Ψ (the characteristic exponent).
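As a quick numerical illustration of the characterisation E[e^{iθX_t}] = e^{−Ψ(θ)t}, one can compare the empirical characteristic function of a simulated Lévy increment with its exponent. Here is a minimal sketch for Brownian motion plus a compound Poisson process with N(0,1) jumps; all parameter values are illustrative choices, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, lam, n = 1.0, 2.0, 200_000   # diffusion coefficient, jump rate, samples

# X_1 for X = sigma * (Brownian motion) + compound Poisson with N(0,1) jumps.
# Given k jumps by time 1, their sum is N(0, k), so it can be sampled directly.
k = rng.poisson(lam, size=n)
X1 = sigma * rng.normal(size=n) + rng.normal(scale=np.sqrt(k))

# Characteristic exponent of this process (symmetric jumps, so Psi is real):
# Psi(theta) = sigma^2 theta^2 / 2 + lam * (1 - exp(-theta^2 / 2)).
def psi(theta):
    return 0.5 * sigma**2 * theta**2 + lam * (1.0 - np.exp(-0.5 * theta**2))

theta = 0.7
emp = np.exp(1j * theta * X1).mean()      # empirical E[e^{i theta X_1}]
err = abs(emp - np.exp(-psi(theta)))      # should be of Monte Carlo size
```

The error is of order 1/√n, confirming that the law of X_1 pins down Ψ.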
[Simulated sample paths shown on slides: Brownian motion; a compound Poisson process; Brownian motion + compound Poisson process; paths of unbounded variation; paths of bounded variation.]
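The path slides above can be reproduced numerically; a minimal simulation sketch on a time grid (grid size and jump rate are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
T, nsteps, lam = 1.0, 1000, 5.0
dt = T / nsteps
grid = np.linspace(0.0, T, nsteps + 1)

# Brownian path: cumulative sum of independent N(0, dt) increments.
bm = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), nsteps))])

# Compound Poisson path: Poisson(lam * T) many jumps, uniform jump times,
# N(0,1) jump sizes; the path is piecewise constant between jumps.
njumps = rng.poisson(lam * T)
jump_times = np.sort(rng.uniform(0.0, T, njumps))
jump_sizes = rng.normal(size=njumps)
cpp = np.array([jump_sizes[jump_times <= t].sum() for t in grid])

# Superposition: a Lévy path with both a Gaussian part and jumps.
path = bm + cpp
```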
Space exploration: some successes and dissatisfaction. Fundamentally we want to understand how Lévy processes explore space. Twenty-five years of research has been very successful in giving a (relatively) complete theoretical description... with the caveat that the database of tractable examples for the aforesaid theory is uncomfortably small (relative to Markov chains and diffusions).
Space exploration: some successes and dissatisfaction. Example 1 (first passage over a level):
P(process first exceeds level x by an amount > y) = ∫_{[0,x)} U(dz) ν̄(x − z + y),
where Ψ(θ) = κ⁺(−iθ) κ⁻(iθ), θ ∈ R,
κ⁺(λ) = q + δλ + ∫_{(0,∞)} (1 − e^{−λx}) ν(dx), λ ≥ 0,
ν̄(x) = ν(x, ∞), and ∫_{[0,∞)} e^{−λx} U(dx) = 1/κ⁺(λ).
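The first-passage identity becomes fully explicit in the textbook case of a subordinator with Exp(1) jumps: by lack of memory, the overshoot over any level is again Exp(1). A Monte Carlo sketch of that special case (the level and sample size are arbitrary choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(3)
x, n = 5.0, 20_000

# Subordinator = compound Poisson with Exp(1) jumps. Only the jump
# skeleton matters for the overshoot, so walk running sums of Exp(1)
# jumps until they first exceed the level x.
overshoots = np.empty(n)
for i in range(n):
    s = 0.0
    while s <= x:
        s += rng.exponential(1.0)
    overshoots[i] = s - x

# Memorylessness of the exponential jump law makes the overshoot
# exactly Exp(1), whatever the level x; its mean should be close to 1.
mean_overshoot = overshoots.mean()
```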
Space exploration: some successes and dissatisfaction. Example 2 (hitting points): under appropriate assumptions,
P(process ever hits a point x) = u(x)/u(0), x ∈ R, where ∫_R e^{iθx} u(x) dx = 1/Ψ(θ), θ ∈ R.
Self-similar Markov processes on R. An α-ssMp is an R-valued Markov process, equipped with initial measures P_x, x ∈ R\{0}, with 0 an absorbing state, satisfying the scaling property: for all x ≠ 0 and c > 0, the law of (c X_{c^{−α} t})_{t ≥ 0} under P_x is P_{cx}.
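The scaling identity is easy to test numerically in the simplest case: Brownian motion (started at the origin, so strictly outside the P_x, x ≠ 0 setup, but obeying the same identity) is 2-self-similar. A minimal check that c B_{c^{−2} t} and B_t agree in law, here via their variances:

```python
import numpy as np

rng = np.random.default_rng(4)
alpha, c, t, n = 2.0, 3.0, 1.0, 200_000

# Scaling property for Brownian motion (alpha = 2):
# the law of c * B_{c^{-alpha} t} equals the law of B_t.
scaled = c * rng.normal(0.0, np.sqrt(c**-alpha * t), size=n)
direct = rng.normal(0.0, np.sqrt(t), size=n)
var_ratio = scaled.var() / direct.var()   # should be close to 1
```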
Space-time changes and modulation. It turns out that every R-valued ssMp can be characterised using polar coordinates (think r e^{iθ}) as follows:
X_t = x exp{ ξ_{φ(|x|^{−α} t)} + iπ (J_{φ(|x|^{−α} t)} + 1) }, t ≥ 0, x ≠ 0,
where (ξ, J) is a so-called Markov modulated Lévy process and
φ(t) = inf{ s > 0 : ∫_0^s e^{α ξ_u} du > t }.
About (ξ, J): J = {J_t : t ≥ 0} is a Markov chain on {1, 2} with intensity matrix Q. When J_t = i, ξ moves as a Lévy process of type i: dξ_t = dξ_t^{(i)}. When J makes a jump at time t, say from 1 to 2, then ξ experiences an additional jump Δξ_t which is an i.i.d. copy of some pre-specified random variable U_{1,2}.
Space-time changes and modulation. Markov modulated Lévy processes can also be characterised by a (matrix) characteristic exponent:
E_i[e^{iθ ξ_t}; J_t = j] = (exp{−Ψ(θ) t})_{i,j},
where
Ψ(θ) = ( Ψ_1(θ) 0 ; 0 Ψ_2(θ) ) − Q ∘ ( 1  E(e^{iθU_{1,2}}) ; E(e^{iθU_{2,1}})  1 ),
with ∘ denoting entrywise multiplication.
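One structural consequence of the matrix exponent diag(Ψ_i) − Q ∘ G is that Ψ(0) = −Q, so exp(−Ψ(0)t) = exp(Qt) is a stochastic matrix. A sketch with toy parameters (Brownian phases, zero jump at switch times; all values hypothetical):

```python
import numpy as np
from scipy.linalg import expm

# Toy Markov-modulated setup (all parameters hypothetical): two
# Brownian phases Psi_i(theta) = s_i^2 theta^2 / 2, switching intensity
# matrix Q, and no extra jump at switch times, so every entry of G is
# E[e^{i theta * 0}] = 1.
Q = np.array([[-1.0, 1.0],
              [2.0, -2.0]])
s = np.array([1.0, 0.5])

def Psi(theta):
    diag = np.diag(0.5 * s**2 * theta**2)
    G = np.ones((2, 2))
    return diag - Q * G          # matrix exponent: diag(Psi_i) - Q o G

# At theta = 0 the Lévy exponents vanish, so Psi(0) = -Q and
# exp(-Psi(0) t) = exp(Q t) is a stochastic matrix (rows sum to 1).
P = expm(-Psi(0.0) * 1.0)
row_sums = P.sum(axis=1)
```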
Space-time changes and modulation. If the Markov chain has an absorbing state, then the ssMp is in effect a positive self-similar Markov process (pssMp), where ξ is a Lévy process:
X_t = x exp{ ξ_{φ(x^{−α} t)} }, t ≥ 0, x > 0.
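The Lamperti-type time change above can be implemented directly on a discretised path of ξ; in this sketch ξ is taken to be a Brownian motion with drift purely as a stand-in (the slides do not fix a particular ξ), and φ is computed by binary search on the cumulative exponential clock:

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, x0 = 1.5, 2.0
du, nsteps = 1e-4, 200_000

# A discretised path of the underlying Lévy process xi; Brownian motion
# with a small negative drift is used purely as an illustrative choice.
incs = -0.1 * du + np.sqrt(du) * rng.normal(size=nsteps)
xi = np.concatenate([[0.0], np.cumsum(incs)])

# Exponential clock A(s) = int_0^s e^{alpha xi_u} du and its right
# inverse phi(t) = inf{s : A(s) > t}, found here by binary search.
A = np.concatenate([[0.0], np.cumsum(np.exp(alpha * xi[:-1]) * du)])

def X(t):
    # Lamperti representation: X_t = x0 * exp(xi_{phi(x0^{-alpha} t)}).
    idx = np.searchsorted(A, x0 ** -alpha * t)
    return x0 * np.exp(xi[min(idx, nsteps)])
```

By construction X(0) = x0 and X stays strictly positive until the clock runs out.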
Positive feedback. There is one class of Lévy processes which has always been considered to be "the next best thing after Brownian motion": the (α, ρ)-stable process,
Ψ(θ) = |θ|^α ( e^{πiα(1/2 − ρ)} 1_{θ>0} + e^{−πiα(1/2 − ρ)} 1_{θ<0} ), θ ∈ R,
where we are only allowed to take α ∈ (0, 2], ρ ∈ [0, 1]. In fact stable processes are also self-similar Markov processes: E[e^{iθX_t}] = e^{−Ψ(θ)t} and E[e^{iθ c X_{c^{−α}t}}] = e^{−Ψ(θ)t} for all c > 0. So α is the index of self-similarity, while ρ ∈ [0, 1] is a symmetry (positivity) parameter. When ρ = 1/2, Ψ(θ) = |θ|^α, so X =^d −X. When ρ ∈ (0, 1) (keep away from complete asymmetry!) and α ∈ (0, 2), the underlying Markov modulated Lévy process has exponent
Ψ(θ) = ( Γ(α−iθ)Γ(1+iθ)/(Γ(αρ̂−iθ)Γ(1−αρ̂+iθ))   −Γ(α−iθ)Γ(1+iθ)/(Γ(αρ̂)Γ(1−αρ̂)) ;
        −Γ(α−iθ)Γ(1+iθ)/(Γ(αρ)Γ(1−αρ))   Γ(α−iθ)Γ(1+iθ)/(Γ(αρ−iθ)Γ(1−αρ+iθ)) ),
where ρ̂ = 1 − ρ (the off-diagonal signs are reconstructed so that the rows of Ψ(0) sum to zero).
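In the symmetric case ρ = 1/2, the stable law can be sampled by the Chambers–Mallows–Stuck method and the identity E[e^{iθX_1}] = e^{−|θ|^α} checked empirically; α and θ below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(6)
alpha, n = 1.5, 400_000           # alpha is an arbitrary choice in (0, 2)

# Chambers-Mallows-Stuck sampler for the symmetric case rho = 1/2,
# normalised so that E[e^{i theta X}] = exp(-|theta|^alpha).
U = rng.uniform(-np.pi / 2, np.pi / 2, n)
E = rng.exponential(1.0, n)
X = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
     * (np.cos((1 - alpha) * U) / E) ** ((1 - alpha) / alpha))

theta = 1.0
emp_cf = np.exp(1j * theta * X).mean()
err = abs(emp_cf - np.exp(-abs(theta) ** alpha))   # Monte Carlo sized
```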
Positive feedback. Take the trajectory of an (α, ρ)-stable process, α ∈ (0, 1), ρ ∈ (0, 1), and imagine cutting out all the negative parts of its trajectory and then shunting together the remaining bits of path; the resulting object is a positive self-similar Markov process. The underlying Lévy process, ξ, has exponent
Ψ(θ) = [Γ(αρ − iθ)/Γ(−iθ)] · [Γ(1 − αρ + iθ)/Γ(1 − α + iθ)], θ ∈ R.
Space exploration: some successes and dissatisfaction (recap). P(process first exceeds level x by an amount > y) = ∫_{[0,x)} U(dz) ν̄(x − z + y), where Ψ(θ) = κ⁺(−iθ)κ⁻(iθ), θ ∈ R, κ⁺(λ) = q + δλ + ∫_{(0,∞)}(1 − e^{−λx}) ν(dx), λ ≥ 0, ν̄(x) = ν(x, ∞), and ∫_{[0,∞)} e^{−λx} U(dx) = 1/κ⁺(λ). Likewise, P(process ever hits a point x) = u(x)/u(0), x ∈ R, where ∫_R e^{iθx} u(x) dx = 1/Ψ(θ), θ ∈ R.
Positive feedback. For α ∈ (0, 1):
P(stable process starting from x > 1 first enters [0, 1] in dy) = P(ξ first enters (−∞, 0] in d(log y))
= (sin(παρ̂)/π) x^{αρ} y^{−αρ} (x − 1)^{αρ̂} (1 − y)^{−αρ̂} (x − y)^{−1} dy.
For α ∈ (1, 2):
P(stable process hits 1 before 0 when starting from x > 0) = P(ξ ever hits 0 when starting from log x)
= [ sin(πρα) − |x − 1|^{α−1} ( 1_{(x>1)} sin(πρ̂α) + 1_{(0<x<1)} sin(πρα) ) + x^{α−1} sin(πρ̂α) ] / ( sin(πρα) + sin(πρ̂α) ).
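The α ∈ (1, 2) hitting probability can be sanity-checked at its boundaries: it should tend to 0 as x → 0, to 1 as x → 1, and to sin(πρα)/(sin(πρα) + sin(πρ̂α)) as x → ∞. A sketch with hypothetical parameters (the sign placement inside the numerator is inferred from exactly this boundary behaviour, as the source formula is extraction-garbled):

```python
import numpy as np

alpha, rho = 1.5, 0.4            # hypothetical admissible parameters
rhohat = 1.0 - rho
s = np.sin(np.pi * rho * alpha)
shat = np.sin(np.pi * rhohat * alpha)

def h(x):
    # P_x(stable process hits 1 before 0), alpha in (1, 2); signs in the
    # numerator reconstructed from the limits checked below.
    ind = shat if x > 1 else s
    num = s - abs(x - 1.0) ** (alpha - 1.0) * ind + x ** (alpha - 1.0) * shat
    return num / (s + shat)

limits = (h(1e-9), h(1 - 1e-9), h(1 + 1e-9), h(1e9))
```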
A bigger picture. It is possible to extend the notion of both Lévy processes and ssMp to higher dimensions. For example, a d-dimensional isotropic stable Lévy process is also a ssMp:
E[e^{iθ·X_t}] = exp{−|θ|^α t}, t ≥ 0, θ ∈ R^d, necessarily with α ∈ (0, 2].
The radial distance of such a process from the origin, |X_t|, t ≥ 0, is a pssMp. Its underlying Lévy process has characteristic exponent
Ψ(θ) = [Γ((−iθ + α)/2)/Γ(−iθ/2)] · [Γ((iθ + d)/2)/Γ((iθ + d − α)/2)], θ ∈ R.
Bogdan–Żak transform. The Kelvin transform is an inversion of R^d through the unit sphere: Kx = x/|x|², x ∈ R^d \ {0}.
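The Kelvin transform is an involution with |Kx| = 1/|x|; both properties follow immediately from the definition, and a minimal check is:

```python
import numpy as np

def kelvin(x):
    # Kelvin transform: inversion through the unit sphere, Kx = x / |x|^2.
    x = np.asarray(x, dtype=float)
    return x / np.dot(x, x)

x = np.array([0.3, -1.2, 2.0])
kkx = kelvin(kelvin(x))                      # involution: K(Kx) = x
radius = np.linalg.norm(kelvin(x))           # |Kx| = 1 / |x|
```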
Bogdan–Żak transform. Suppose that X is a d-dimensional isotropic stable process with d ≥ 2. Define η(t) = inf{ s > 0 : ∫_0^s |X_u|^{−2α} du > t }, t ≥ 0. Then, for all x ∈ R^d \ {0}, {K X_{η(t)} : t ≥ 0} under P_x is equal in law to (X, P^h_{Kx}), where
dP^h_x/dP_x |_{σ(X_s : s ≤ t)} = |X_t|^{α−d} / |x|^{α−d}, t ≥ 0.
Bogdan–Żak transform. The Kelvin transform maps the ball {x ∈ R^d : |x − x₀| ≤ 1/2}, where x₀ = (1/2, 0, ..., 0), to a half-space.
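That mapping can be verified pointwise: if |x − x₀| = 1/2 with x₀ = (1/2, 0, ..., 0), then |x|² = x₁, so the first coordinate of Kx is x₁/|x|² = 1, i.e. the boundary sphere lands on the hyperplane {y : y₁ = 1}. A numerical check (d = 3 chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(7)
d = 3                                  # dimension chosen arbitrarily
x0 = np.zeros(d)
x0[0] = 0.5                            # ball of radius 1/2 through the origin

def kelvin(x):
    return x / np.dot(x, x)

# Points on the boundary sphere |x - x0| = 1/2 satisfy |x|^2 = x_1, so
# their Kelvin images have first coordinate x_1 / |x|^2 = 1: the sphere
# is mapped onto the hyperplane {y : y_1 = 1}.
first_coords = []
for _ in range(100):
    v = rng.normal(size=d)
    x = x0 + 0.5 * v / np.linalg.norm(v)
    if np.dot(x, x) > 1e-12:           # avoid the origin, where K blows up
        first_coords.append(kelvin(x)[0])
```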
Bogdan–Żak transform. How does an isotropic stable process first enter/exit a ball? Via the transform, this becomes: how does an isotropic stable process cross a hyperplane? Then use rotational symmetry and the orthogonal projection. Where does an isotropic stable process first hit the surface of a sphere? This becomes: where does an isotropic stable process hit a hyperplane? Again, use rotational symmetry and the orthogonal projection.
More informationSYMMETRIC STABLE PROCESSES IN PARABOLA SHAPED REGIONS
SYMMETRIC STABLE PROCESSES IN PARABOLA SHAPED REGIONS RODRIGO BAÑUELOS AND KRZYSZTOF BOGDAN Abstract We identify the critical exponent of integrability of the first exit time of rotation invariant stable
More informationSelected Exercises on Expectations and Some Probability Inequalities
Selected Exercises on Expectations and Some Probability Inequalities # If E(X 2 ) = and E X a > 0, then P( X λa) ( λ) 2 a 2 for 0 < λ
More informationPoisson random measure: motivation
: motivation The Lévy measure provides the expected number of jumps by time unit, i.e. in a time interval of the form: [t, t + 1], and of a certain size Example: ν([1, )) is the expected number of jumps
More informationGARCH processes continuous counterparts (Part 2)
GARCH processes continuous counterparts (Part 2) Alexander Lindner Centre of Mathematical Sciences Technical University of Munich D 85747 Garching Germany lindner@ma.tum.de http://www-m1.ma.tum.de/m4/pers/lindner/
More informationStochastic Processes
Stochastic Processes 8.445 MIT, fall 20 Mid Term Exam Solutions October 27, 20 Your Name: Alberto De Sole Exercise Max Grade Grade 5 5 2 5 5 3 5 5 4 5 5 5 5 5 6 5 5 Total 30 30 Problem :. True / False
More informationSTOCHASTIC PROCESSES Basic notions
J. Virtamo 38.3143 Queueing Theory / Stochastic processes 1 STOCHASTIC PROCESSES Basic notions Often the systems we consider evolve in time and we are interested in their dynamic behaviour, usually involving
More informationThe exact asymptotics for hitting probability of a remote orthant by a multivariate Lévy process: the Cramér case
The exact asymptotics for hitting probability of a remote orthant by a multivariate Lévy process: the Cramér case Konstantin Borovkov and Zbigniew Palmowski Abstract For a multivariate Lévy process satisfying
More informationHarmonic Functions and Brownian Motion in Several Dimensions
Harmonic Functions and Brownian Motion in Several Dimensions Steven P. Lalley October 11, 2016 1 d -Dimensional Brownian Motion Definition 1. A standard d dimensional Brownian motion is an R d valued continuous-time
More informationProblem Points S C O R E Total: 120
PSTAT 160 A Final Exam December 10, 2015 Name Student ID # Problem Points S C O R E 1 10 2 10 3 10 4 10 5 10 6 10 7 10 8 10 9 10 10 10 11 10 12 10 Total: 120 1. (10 points) Take a Markov chain with the
More informationUniversity Of Calgary Department of Mathematics and Statistics
University Of Calgary Department of Mathematics and Statistics Hawkes Seminar May 23, 2018 1 / 46 Some Problems in Insurance and Reinsurance Mohamed Badaoui Department of Electrical Engineering National
More informationIEOR 6711, HMWK 5, Professor Sigman
IEOR 6711, HMWK 5, Professor Sigman 1. Semi-Markov processes: Consider an irreducible positive recurrent discrete-time Markov chain {X n } with transition matrix P (P i,j ), i, j S, and finite state space.
More informationUpper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1
Upper Bound for Intermediate Singular Values of Random Sub-Gaussian Matrices 1 Feng Wei 2 University of Michigan July 29, 2016 1 This presentation is based a project under the supervision of M. Rudelson.
More informationSmall-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias
Small-time asymptotics of stopped Lévy bridges and simulation schemes with controlled bias José E. Figueroa-López 1 1 Department of Statistics Purdue University Seoul National University & Ajou University
More informationImprecise Filtering for Spacecraft Navigation
Imprecise Filtering for Spacecraft Navigation Tathagata Basu Cristian Greco Thomas Krak Durham University Strathclyde University Ghent University Filtering for Spacecraft Navigation The General Problem
More informationSymmetrization of Lévy processes and applications 1
Syetrization of Lévy processes and applications 1 Pedro J. Méndez Hernández Universidad de Costa Rica July 2010, resden 1 Joint work with R. Bañuelos, Purdue University Pedro J. Méndez (UCR Syetrization
More informationKnudsen Diffusion and Random Billiards
Knudsen Diffusion and Random Billiards R. Feres http://www.math.wustl.edu/ feres/publications.html November 11, 2004 Typeset by FoilTEX Gas flow in the Knudsen regime Random Billiards [1] Large Knudsen
More informationIn Memory of Wenbo V Li s Contributions
In Memory of Wenbo V Li s Contributions Qi-Man Shao The Chinese University of Hong Kong qmshao@cuhk.edu.hk The research is partially supported by Hong Kong RGC GRF 403513 Outline Lower tail probabilities
More informationSection 9.2 introduces the description of Markov processes in terms of their transition probabilities and proves the existence of such processes.
Chapter 9 Markov Processes This lecture begins our study of Markov processes. Section 9.1 is mainly ideological : it formally defines the Markov property for one-parameter processes, and explains why it
More informationGradient Estimation for Attractor Networks
Gradient Estimation for Attractor Networks Thomas Flynn Department of Computer Science Graduate Center of CUNY July 2017 1 Outline Motivations Deterministic attractor networks Stochastic attractor networks
More informationOrnstein-Uhlenbeck processes for geophysical data analysis
Ornstein-Uhlenbeck processes for geophysical data analysis Semere Habtemicael Department of Mathematics North Dakota State University May 19, 2014 Outline 1 Introduction 2 Model 3 Characteristic function
More informationThe 1d Kalman Filter. 1 Understanding the forward model. Richard Turner
The d Kalman Filter Richard Turner This is a Jekyll and Hyde of a document and should really be split up. We start with Jekyll which contains a very short derivation for the d Kalman filter, the purpose
More informationPath Decomposition of Markov Processes. Götz Kersting. University of Frankfurt/Main
Path Decomposition of Markov Processes Götz Kersting University of Frankfurt/Main joint work with Kaya Memisoglu, Jim Pitman 1 A Brownian path with positive drift 50 40 30 20 10 0 0 200 400 600 800 1000-10
More informationBayesian Regularization
Bayesian Regularization Aad van der Vaart Vrije Universiteit Amsterdam International Congress of Mathematicians Hyderabad, August 2010 Contents Introduction Abstract result Gaussian process priors Co-authors
More informationLower Tail Probabilities and Related Problems
Lower Tail Probabilities and Related Problems Qi-Man Shao National University of Singapore and University of Oregon qmshao@darkwing.uoregon.edu . Lower Tail Probabilities Let {X t, t T } be a real valued
More informationFrom a Mesoscopic to a Macroscopic Description of Fluid-Particle Interaction
From a Mesoscopic to a Macroscopic Description of Fluid-Particle Interaction Carnegie Mellon University Center for Nonlinear Analysis Working Group, October 2016 Outline 1 Physical Framework 2 3 Free Energy
More informationThe parabolic Anderson model on Z d with time-dependent potential: Frank s works
Weierstrass Institute for Applied Analysis and Stochastics The parabolic Anderson model on Z d with time-dependent potential: Frank s works Based on Frank s works 2006 2016 jointly with Dirk Erhard (Warwick),
More informationPath integrals and the classical approximation 1 D. E. Soper 2 University of Oregon 14 November 2011
Path integrals and the classical approximation D. E. Soper University of Oregon 4 November 0 I offer here some background for Sections.5 and.6 of J. J. Sakurai, Modern Quantum Mechanics. Introduction There
More informationLecture 10: Semi-Markov Type Processes
Lecture 1: Semi-Markov Type Processes 1. Semi-Markov processes (SMP) 1.1 Definition of SMP 1.2 Transition probabilities for SMP 1.3 Hitting times and semi-markov renewal equations 2. Processes with semi-markov
More informationMFM Practitioner Module: Quantitative Risk Management. John Dodson. September 23, 2015
MFM Practitioner Module: Quantitative Risk Management September 23, 2015 Mixtures Mixtures Mixtures Definitions For our purposes, A random variable is a quantity whose value is not known to us right now
More informationIntegral representations in models with long memory
Integral representations in models with long memory Georgiy Shevchenko, Yuliya Mishura, Esko Valkeila, Lauri Viitasaari, Taras Shalaiko Taras Shevchenko National University of Kyiv 29 September 215, Ulm
More informationCIMPA SCHOOL, 2007 Jump Processes and Applications to Finance Monique Jeanblanc
CIMPA SCHOOL, 27 Jump Processes and Applications to Finance Monique Jeanblanc 1 Jump Processes I. Poisson Processes II. Lévy Processes III. Jump-Diffusion Processes IV. Point Processes 2 I. Poisson Processes
More informationExpectation, variance and moments
Expectation, variance and moments John Appleby Contents Expectation and variance Examples 3 Moments and the moment generating function 4 4 Examples of moment generating functions 5 5 Concluding remarks
More informationSTAT STOCHASTIC PROCESSES. Contents
STAT 3911 - STOCHASTIC PROCESSES ANDREW TULLOCH Contents 1. Stochastic Processes 2 2. Classification of states 2 3. Limit theorems for Markov chains 4 4. First step analysis 5 5. Branching processes 5
More informationNormalized kernel-weighted random measures
Normalized kernel-weighted random measures Jim Griffin University of Kent 1 August 27 Outline 1 Introduction 2 Ornstein-Uhlenbeck DP 3 Generalisations Bayesian Density Regression We observe data (x 1,
More informationStatistical inference on Lévy processes
Alberto Coca Cabrero University of Cambridge - CCA Supervisors: Dr. Richard Nickl and Professor L.C.G.Rogers Funded by Fundación Mutua Madrileña and EPSRC MASDOC/CCA student workshop 2013 26th March Outline
More informationELEMENTS OF PROBABILITY THEORY
ELEMENTS OF PROBABILITY THEORY Elements of Probability Theory A collection of subsets of a set Ω is called a σ algebra if it contains Ω and is closed under the operations of taking complements and countable
More information6. Brownian Motion. Q(A) = P [ ω : x(, ω) A )
6. Brownian Motion. stochastic process can be thought of in one of many equivalent ways. We can begin with an underlying probability space (Ω, Σ, P) and a real valued stochastic process can be defined
More informationLOCATION OF THE PATH SUPREMUM FOR SELF-SIMILAR PROCESSES WITH STATIONARY INCREMENTS. Yi Shen
LOCATION OF THE PATH SUPREMUM FOR SELF-SIMILAR PROCESSES WITH STATIONARY INCREMENTS Yi Shen Department of Statistics and Actuarial Science, University of Waterloo. Waterloo, ON N2L 3G1, Canada. Abstract.
More informationFunctional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals
Functional Limit theorems for the quadratic variation of a continuous time random walk and for certain stochastic integrals Noèlia Viles Cuadros BCAM- Basque Center of Applied Mathematics with Prof. Enrico
More informationUniformly Uniformly-ergodic Markov chains and BSDEs
Uniformly Uniformly-ergodic Markov chains and BSDEs Samuel N. Cohen Mathematical Institute, University of Oxford (Based on joint work with Ying Hu, Robert Elliott, Lukas Szpruch) Centre Henri Lebesgue,
More informationSolutions For Stochastic Process Final Exam
Solutions For Stochastic Process Final Exam (a) λ BMW = 20 0% = 2 X BMW Poisson(2) Let N t be the number of BMWs which have passes during [0, t] Then the probability in question is P (N ) = P (N = 0) =
More informationComplex Systems Methods 2. Conditional mutual information, entropy rate and algorithmic complexity
Complex Systems Methods 2. Conditional mutual information, entropy rate and algorithmic complexity Eckehard Olbrich MPI MiS Leipzig Potsdam WS 2007/08 Olbrich (Leipzig) 26.10.2007 1 / 18 Overview 1 Summary
More information