MOMENTS AND CUMULANTS OF THE SUBORDINATED LÉVY PROCESSES

1 MOMENTS AND CUMULANTS OF THE SUBORDINATED LÉVY PROCESSES. PENKA MAYSTER, ISET de Rades, UNIVERSITY OF TUNIS.

2 The formula of Faà di Bruno represents the n-th derivative of the composition of two functions f(g(t)), involving the Bell polynomials. Various identities relating moments and cumulants of random variables provide applications of Bell polynomials. As the Laplace exponent of the subordinated Lévy process is exactly the composition of two Laplace exponents,

3 we utilize the Faà di Bruno formula to describe the moments and cumulants of the subordinated Lévy processes.

4 Subordinated Lévy processes. A Lévy process is an additive process X = (X(t), t ≥ 0) on a probability space (Ω_X, B_X, P_X), right continuous with left limits and X(0) = 0. Additive means: independent and stationary increments. If (F_t, t ≥ 0) denotes the natural filtration generated by X, then the increment X_{t+s} − X_t is independent of F_t and has the same law as X_s for every s, t ≥ 0. The distribution of X is determined by its transition probability measure p_X(t, dx) and thus by its characteristic function. Let
E(exp(iλX(t))) = exp(t f_X(λ)),  λ ∈ ℝ, t ≥ 0.

5 We have the Lévy-Khinchine formula
f_X(λ) = iaλ − (1/2)σ²λ² + ∫_ℝ (e^{iλx} − 1 − iλx 1_{|x|<1}) Π_X(dx).
Here a and σ are constants, i = √−1 and ∫ (1 ∧ x²) Π_X(dx) < ∞. The Lévy measure Π_X(dx) represents the mean number of jumps of size x per unit time.

6 One says that the Lévy process is defined by its triplet (a, σ², Π_X(dx)). It is well known that a Lévy process is of unbounded variation if σ² > 0 or ∫_{|x|<1} |x| Π_X(dx) = ∞.

7 Let T = (T(t), t ≥ 0) be a subordinator, i.e. a Lévy process with non-decreasing sample paths. Equivalently, this means that the Gaussian coefficient σ² is equal to zero and the Lévy measure Π_T does not charge the interval (−∞, 0] and fulfills ∫_{0+}^∞ (1 ∧ x) Π_T(dx) < ∞.

8 All subordinators are of bounded variation, with non-negative drift and jump measure concentrated on the interval (0, ∞). The triplet of the subordinator is (b, 0, Π_T).

9 To describe subordinators it is more convenient to use the Laplace transform rather than the characteristic function. We denote the transition probability of T by p_T(t, dx), x ≥ 0, t ≥ 0, and its Laplace transform by
E(exp(−λT(t))) = exp(−t ψ_T(λ)),  λ > 0, t > 0,
where ψ_T : [0, +∞) → [0, +∞) denotes the so-called Bernstein function
ψ_T(λ) = bλ + ∫_{0+}^∞ (1 − e^{−λx}) Π_T(dx),
and b ≥ 0 is the constant drift.

10 Natural examples of subordinators are: the Gamma process, the one-sided stable processes, the quadratic variation of any Lévy process, and so on. The compound Poisson process includes many explicitly known Lévy processes as special cases, especially the integer-valued Lévy processes.

11 Let X and T be independent Lévy processes, where T is a subordinator. The subordinated process Y = (Y(t), t ≥ 0) is defined by Y(t) = X(T(t)) on the probability space (Ω_Y, B_Y, P_Y), where Ω_Y is the Cartesian product Ω_X × Ω_T, and B_Y = B_X ⊗ B_T, P_Y = P_X ⊗ P_T,

12 and the transition probability is given by
p_Y(t, dy) = ∫_0^∞ p_X(s, dy) p_T(t, ds).
The subordinated process Y has stationary independent increments and characteristic exponent given by
f_Y(λ) = −ψ_T(−f_X(λ)),  λ ∈ ℝ.
The Lévy measure of the subordinated process is
Π_Y(dy) = d_T Π_X(dy) + ∫_0^∞ p_X(s, dy) Π_T(ds),
where d_T is the constant drift of the subordinator T.

13 The drift of the subordinated process is
d_Y = d_X d_T + ∫_0^∞ ( ∫_{|x|<1} x p_X(s, dx) ) Π_T(ds).
The continuous part of the subordinated process has the coefficient σ²_Y = d_T σ². The triplet of the subordinated process is (d_Y, d_T σ², Π_Y(dx)).

14 Moments and cumulants. In general, the n-th moments and cumulants can be calculated as derivatives of the characteristic function. The cumulants are the derivatives of the Bernstein function at zero, that is, the moments of the Lévy measure. The relations between them can be expressed via the Faà di Bruno formula and the Bell polynomials B_n and B_{n,k}; see Charalambides, Ch.A.: Enumerative Combinatorics, Chapman & Hall (2002); Roman, S.M.: The formula of Faà di Bruno, Amer. Math. Monthly 87 (1980).

15 Denote by a_n, b_n, c_n the cumulants of X, T, Y respectively. Let x_n(s), t_n(t), y_n(t) be the moments of the processes X(s), T(t), Y(t) respectively, and let x_n = x_n(1), t_n = t_n(1), y_n = y_n(1) be the moments of the representative random variables X(1), T(1), Y(1).

16 For convenience, we shall denote the sequences by
a• = (a_1, a_2, ...),  b• = (b_1, b_2, ...)
and so on. In particular, for a scalar a, the sequence of powers and the constant sequence:
a• = (a, a², ...),  1• = (1, 1, ...).

17 Also • = (1, 2, 3, ...) and (• − 1) = (0, 1, 2, ...). Reference: J. Pitman, Combinatorial Stochastic Processes, Lecture Notes in Mathematics, vol. 1875 (Saint-Flour, 2002).

18 The (n, k)-th partial Bell polynomial is defined as a sum of products as follows:
B_{n,k}(a•) = (n!/k!) Σ_{(j_1,...,j_k)} Π_{i=1}^k a_{j_i}/j_i!,
the sum being over all compositions of n into k positive parts, or equivalently
B_{n,k}(a•) = Σ_{(k_1,k_2,...,k_n)} n! a_1^{k_1} ··· a_n^{k_n} / (k_1!(1!)^{k_1} ··· k_n!(n!)^{k_n}),
where the sum is over all partitions of n into k parts, that is, over all nonnegative integer solutions (k_1, k_2, ..., k_n) of the equations
k_1 + 2k_2 + ··· + n k_n = n,  k_1 + k_2 + ··· + k_n = k.
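A minimal Python (SymPy) sketch of this definition; the helper name partial_bell and the use of the standard recursion B_{n,k} = Σ_{i=1}^{n−k+1} C(n−1, i−1) a_i B_{n−i,k−1} (instead of the sum over partitions) are conventions of the sketch, not notation from the slides.

```python
# Sketch: partial Bell polynomials B_{n,k}(a_1, a_2, ...) via the standard recursion
#   B_{n,k} = sum_{i=1}^{n-k+1} C(n-1, i-1) * a_i * B_{n-i, k-1},
# with B_{0,0} = 1 and B_{n,0} = B_{0,k} = 0 otherwise.
from math import comb

import sympy as sp


def partial_bell(n, k, a):
    """B_{n,k}(a); `a` is a sequence with a[0] = a_1, a[1] = a_2, ..."""
    if n == 0 and k == 0:
        return sp.Integer(1)
    if n == 0 or k == 0:
        return sp.Integer(0)
    return sp.expand(sum(comb(n - 1, i - 1) * a[i - 1] * partial_bell(n - i, k - 1, a)
                         for i in range(1, n - k + 2)))


if __name__ == "__main__":
    a = sp.symbols("a1:7")
    print(partial_bell(6, 3, a))        # 15*a1**2*a4 + 60*a1*a2*a3 + 15*a2**3 (up to term order)
    print(partial_bell(6, 3, [1] * 6))  # 90 = S_{6,3}, Stirling number of the second kind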

19 The n-th partition polynomial of the two sequences b•, a• is defined as a sum of the partial Bell polynomials:
B_n(b•, a•) = Σ_{k=1}^n b_k B_{n,k}(a•),
or equivalently
B_n(b•, a•) = Σ_{(k_1,k_2,...,k_n)} n! b_k a_1^{k_1} ··· a_n^{k_n} / (k_1!(1!)^{k_1} ··· k_n!(n!)^{k_n}),  k = k_1 + ··· + k_n,
where the sum is over all nonnegative integer solutions (k_1, k_2, ..., k_n) of the equation k_1 + 2k_2 + ··· + n k_n = n.
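The same object in a few lines of code, here leaning on SymPy's built-in incomplete Bell polynomials bell(n, k, sequence) — an assumption about the installed SymPy; the recursion sketched above works as a drop-in replacement.

```python
# Sketch: B_n(b, a) = sum_{k=1}^n b_k * B_{n,k}(a), using sympy.bell for B_{n,k}.
import sympy as sp
from sympy import bell

def partition_poly(n, b_seq, a_seq):
    return sp.expand(sum(b_seq[k - 1] * bell(n, k, a_seq) for k in range(1, n + 1)))

a = sp.symbols("a1:4")
b = sp.symbols("b1:4")
print(partition_poly(3, b, a))   # a3*b1 + 3*a1*a2*b2 + a1**3*b3 (up to term order)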

20 For the sequence (a• b x•) = (a b x_1, a² b x_2, ..., a^n b x_n, ...) the definitions imply the following relation:
B_{n,k}(a• b x•) = a^n b^k B_{n,k}(x•).

21 The Faà di Bruno formula is manifested by the following:
x_n(t) = B_n(t•, a•),  c_n = B_n(b•, a•).
If X is a Poisson process, then a• = a 1• = (a, a, ...), and x_n(s) is expressed by the Stirling numbers of the second kind.
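A hedged check of the Poisson case: with all cumulants equal to a, the moments of X(t) should reduce to Σ_k (at)^k S_{n,k}; the sketch below compares this with moments taken directly from the moment generating function exp(at(e^λ − 1)) (SymPy's stirling is assumed to return second-kind numbers by default).

```python
# Sketch: Poisson process with intensity a, cumulant sequence (a, a, ...).
# Check x_n(t) = B_n(t, a.) = sum_k (a t)^k S_{n,k} against the mgf.
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

a, t, lam = sp.symbols("a t lam", positive=True)
mgf = sp.exp(a * t * (sp.exp(lam) - 1))

for n in range(1, 6):
    from_mgf = sp.diff(mgf, lam, n).subs(lam, 0)
    from_bell = sum((a * t) ** k * stirling(n, k) for k in range(1, n + 1))
    assert sp.simplify(from_mgf - from_bell) == 0
print("Poisson moments match sum_k (a*t)^k S_{n,k} for n = 1..5")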

22 If X is a Gamma process, then a• = (a^• (• − 1)!) = (a·0!, a²·1!, a³·2!, ...). We shall obtain, as an auxiliary result,
B_n(1•, (• − 1)!) = n!.

23 The relation between moments and cumulants is given by the following combination (with x_0 = 1):
x_n = Σ_{k=0}^{n−1} C(n−1, k) x_k a_{n−k},  a_n = Σ_{k=1}^n (−1)^{k−1} (k−1)! B_{n,k}(x•).
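Both conversions in code, as a sketch (again assuming SymPy's bell(n, k, sequence) for the partial Bell polynomials); the round trip recovers the cumulants, and for n = 4 it reproduces the x_4(s) formula of the next slide with s = 1.

```python
# Sketch: x_n = sum_{k=0}^{n-1} C(n-1,k) x_k a_{n-k}  (x_0 = 1), and the inverse
#         a_n = sum_{k=1}^{n} (-1)^{k-1} (k-1)! B_{n,k}(x_1, ..., x_{n-k+1}).
import sympy as sp
from sympy import bell, binomial, factorial

def moments_from_cumulants(a_seq):
    x = [sp.Integer(1)]                       # x_0 = 1
    for n in range(1, len(a_seq) + 1):
        x.append(sp.expand(sum(binomial(n - 1, k) * x[k] * a_seq[n - k - 1]
                               for k in range(n))))
    return x[1:]

def cumulants_from_moments(x_seq):
    return [sp.expand(sum((-1) ** (k - 1) * factorial(k - 1) * bell(n, k, x_seq)
                          for k in range(1, n + 1)))
            for n in range(1, len(x_seq) + 1)]

a = sp.symbols("a1:6")
x = moments_from_cumulants(a)
print(x[3])   # a4 + 4*a1*a3 + 3*a2**2 + 6*a1**2*a2 + a1**4
assert all(sp.simplify(u - v) == 0 for u, v in zip(cumulants_from_moments(x), a))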

24 x_1(s) = a_1 s
x_2(s) = a_2 s + (a_1 s)²
x_3(s) = a_3 s + 3a_1 a_2 s² + (a_1 s)³
x_4(s) = a_4 s + (4a_1 a_3 + 3a_2²) s² + 6a_1² a_2 s³ + (a_1 s)⁴

25 x_5(s) = a_5 s + (5a_1 a_4 + 10a_2 a_3) s² + (10a_1² a_3 + 15a_1 a_2²) s³ + 10a_1³ a_2 s⁴ + (a_1 s)⁵
x_6(s) = a_6 s + (6a_1 a_5 + 15a_2 a_4 + 10a_3²) s² + (60a_1 a_2 a_3 + 15a_1² a_4 + 15a_2³) s³ + (20a_1³ a_3 + 45a_1² a_2²) s⁴ + 15a_1⁴ a_2 s⁵ + (a_1 s)⁶,
and in particular B_{6,3}(c•) = 60c_1 c_2 c_3 + 15c_1² c_4 + 15c_2³.

26 The exponential Bell polynomials are given by B_n(c•, a•) and B_n(1•, a•).

27 The Stirling numbers of the second kind we denote by
S_{n,k} := B_{n,k}(1•) = (1/k!) Σ_{j=0}^k (−1)^{k−j} C(k, j) j^n.
The unsigned Stirling numbers of the first kind are given by
s_{n,k} := B_{n,k}((• − 1)!).
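A numerical sanity check of these two identifications (a sketch; the SymPy calls bell(n, k, sequence) and stirling(n, k, kind=...) are assumed to be available):

```python
# Sketch: S_{n,k} = (1/k!) sum_j (-1)^(k-j) C(k,j) j^n = B_{n,k}(1, 1, ...),
# and B_{n,k}(0!, 1!, 2!, ...) = unsigned Stirling numbers of the first kind.
import sympy as sp
from sympy import bell, binomial, factorial
from sympy.functions.combinatorial.numbers import stirling

def S2(n, k):
    return sum((-1) ** (k - j) * binomial(k, j) * j**n for j in range(k + 1)) / factorial(k)

for n in range(1, 7):
    for k in range(1, n + 1):
        assert S2(n, k) == bell(n, k, [1] * n) == stirling(n, k, kind=2)
        assert bell(n, k, [factorial(i) for i in range(n)]) == stirling(n, k, kind=1, signed=False)
print("Stirling identities verified for n <= 6")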

28 For the sequence of factorials •! = (1!, 2!, 3!, ...) we obtain the Lah numbers:
B_{n,k}(•!) = C(n−1, k−1) n!/k!.
The idempotent numbers are equal to
B_{n,k}(•) = C(n, k) k^{n−k}.
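And the same kind of check for the Lah and idempotent numbers (my own sketch, same SymPy assumptions as above):

```python
# Sketch: B_{n,k}(1!, 2!, 3!, ...) = C(n-1, k-1) * n!/k!   (Lah numbers)
#         B_{n,k}(1, 2, 3, ...)    = C(n, k) * k^(n-k)     (idempotent numbers)
import sympy as sp
from sympy import bell, binomial, factorial

for n in range(1, 8):
    for k in range(1, n + 1):
        facts = [factorial(i) for i in range(1, n + 1)]   # 1!, 2!, ..., n!
        assert bell(n, k, facts) == binomial(n - 1, k - 1) * factorial(n) / factorial(k)
        assert bell(n, k, list(range(1, n + 1))) == binomial(n, k) * k ** (n - k)
print("Lah and idempotent identities verified for n <= 7")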

29 For the sequence
(a^{•−1} (•!)) = (a⁰·1!, a·2!, a²·3!, ..., a^{n−1}·n!, ...),
we have
B_{n,k}(a^{•−1} (•!)) = a^{n−k} C(n−1, k−1) n!/k!.

30 For the sequence
((•a)^{•−1}) = ((1a)⁰, (2a)¹, (3a)², ..., (na)^{n−1}, ...),
we have
B_{n,k}((•a)^{•−1}) = C(n−1, k−1) (an)^{n−k},  and  B_n(1•, a• x•) = a^n B_n(1•, x•).
S. Bouroubi, M. Abbas: New identities for Bell's polynomials. New approaches, Rostock. Math. Kolloq. 61, 49-55 (2006).

31 The central moments can be expressed by the cumulants as follows:
E(X − x_1) = 0,
E(X − x_1)² = Var X = a_2,  E(X(s) − x_1(s))² = Var X(s) = a_2 s,
E(X − x_1)³ = a_3,  E(X(s) − x_1(s))³ = a_3 s.

32 Obviously, the first moment coincides with the first cumulant, the second cumulant coincides with the variance and the third cumulant is equal to the third central moment.

33 E(X − x_1)⁴ = a_4 + 3a_2²,  E(X(s) − x_1(s))⁴ = a_4 s + 3a_2² s²
E(X − x_1)⁵ = a_5 + 10a_2 a_3,  E(X(s) − x_1(s))⁵ = a_5 s + 10a_2 a_3 s²
E(X − x_1)⁶ = a_6 + (15a_2 a_4 + 10a_3²) + 15a_2³,  E(X(s) − x_1(s))⁶ = a_6 s + (15a_2 a_4 + 10a_3²) s² + 15a_2³ s³
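These expressions can be reproduced mechanically (a sketch, with s = 1): rebuild the raw moments from symbolic cumulants using the recursion of slide 23 and expand E(X − x_1)^n by the binomial theorem.

```python
# Sketch: central moments from cumulants, E(X - x_1)^n = sum_j C(n,j) (-x_1)^(n-j) x_j.
import sympy as sp
from sympy import binomial

a = sp.symbols("a1:7")
x = [sp.Integer(1)]                                    # raw moments, x[0] = 1
for n in range(1, 7):
    x.append(sp.expand(sum(binomial(n - 1, k) * x[k] * a[n - k - 1] for k in range(n))))

def central(n):
    return sp.expand(sum(binomial(n, j) * (-x[1]) ** (n - j) * x[j] for j in range(n + 1)))

print(central(4))   # a4 + 3*a2**2
print(central(6))   # a6 + 15*a2*a4 + 10*a3**2 + 15*a2**3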

34 Suppose T(t) is any subordinator and let X(s) be any Lévy process with finite moments t_n, x_n and cumulants b_n, a_n. Then the cumulants of the subordinated process are
c_n = B_n(b•, a•)
and the moments are
y_n(t) = Σ_{k=1}^n t^k B_{n,k}(c•) = Σ_{k=1}^n t_k(t) B_{n,k}(a•).

35 We have, in general,
y_n(t) = ∫_0^∞ x^n p_Y(t, dx) = ∫_0^∞ x^n ∫_0^∞ p_X(s, dx) p_T(t, ds).
Consequently,
y_n(t) = ∫_0^∞ x_n(s) p_T(t, ds) = ∫_0^∞ B_n(s•, a•) p_T(t, ds).
Matrix representation of composition (Jabotinsky) and change of basis (Kolchin, Moscow).

36 B_n(t•, c•) = Σ_{k=1}^n B_k(t•, b•) B_{n,k}(a•). Remember that B_n(s•, a•) = Σ_{k=1}^n s^k B_{n,k}(a•). Finally, we have
y_n(t) = B_n(t•(t), a•).
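A purely formal check of the composition rule for cumulants, done on generic symbols a_1, ..., a_4 and b_1, ..., b_4 (placeholders of the sketch): compose the truncated cumulant generating functions K_T(K_X(λ)) and compare the Taylor coefficients with B_n(b•, a•); SymPy's bell is again assumed.

```python
# Sketch: c_n = B_n(b, a) = sum_k b_k B_{n,k}(a) equals n! times the coefficient
# of lam^n in K_T(K_X(lam)), with K_X = sum a_i lam^i/i! and K_T(u) = sum b_k u^k/k!.
import sympy as sp
from sympy import bell, factorial

N = 4
lam = sp.symbols("lam")
a = sp.symbols("a1:5")
b = sp.symbols("b1:5")

K_X = sum(a[i - 1] * lam**i / factorial(i) for i in range(1, N + 1))
K_Y = sp.expand(sum(b[k - 1] * K_X**k / factorial(k) for k in range(1, N + 1)))

for n in range(1, N + 1):
    from_series = factorial(n) * K_Y.coeff(lam, n)
    from_bell = sum(b[k - 1] * bell(n, k, a) for k in range(1, n + 1))
    assert sp.expand(from_series - from_bell) == 0
print("c_n = B_n(b, a) verified formally for n <= 4")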

37 Suppose T(t) is any subordinator and let X(s) be a Poisson process with parameter a, with finite moments t_n, x_n and cumulants b_n, a_n. Then Y(t) is a compound Poisson process with the following moments and cumulants:
y_n(t) = Σ_{k=1}^n t_k(t) a^k S_{n,k},  c_n = Σ_{k=1}^n b_k a^k S_{n,k}.

38 Suppose T(t) is any subordinator and let X(s) be a Gamma process with parameter a, with finite moments t_n, x_n and cumulants b_n, a_n. Then Y(t) is a Lévy process with the moments
y_n(t) = a^n Σ_{k=1}^n t_k(t) B_{n,k}((• − 1)!)
and the cumulants
c_n = a^n Σ_{k=1}^n b_k B_{n,k}((• − 1)!).

39 Suppose T(t) is any subordinator and let X(s) be a Brownian motion with finite moments and cumulants. Then the subordinated process Y(t) has the following moments and cumulants (both vanish for odd n):
c_n = (2k − 1)!! σ^{2k} b_k,  y_n = (2k − 1)!! σ^{2k} t_k,  n = 2k.

40 The transition probability density of Brownian motion is
p_X(s, x) = exp(−x²/(2sσ²)) / √(2πsσ²).
The cumulants of Brownian motion are a_2 = σ² and a_n = 0 for n ≠ 2. The sequence is a• = (0, σ², 0, ...), and
x_n(s) = s^k (2k − 1)!! (σ²)^k,  n = 2k.

41 By the main definition of the partial Bell polynomials B_{n,k} we see that the non-null terms are only those having k_2 ≥ 0 and k_1 = 0, k_3 = 0, ..., k_n = 0. Consequently k = k_2, 2k = n and
B_{n,k}(a•) = n! (σ²)^k / (k! 2^k).
For example, B_{2,1}(a•) = σ², B_{4,2}(a•) = 3σ⁴, B_{6,3}(a•) = 15σ⁶.
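A quick check of this computation (a sketch, assuming SymPy's bell and factorial2): evaluating the partial Bell polynomials on the sequence (0, σ², 0, 0, ...) vanishes unless n = 2k, in which case it gives (2k − 1)!! σ^{2k}.

```python
# Sketch: B_{n,k}(0, sigma^2, 0, ...) = (2k-1)!! * sigma^(2k) when n = 2k, else 0.
import sympy as sp
from sympy import bell, factorial2

sigma = sp.symbols("sigma", positive=True)

for n in range(1, 9):
    seq = [0] * n
    if n >= 2:
        seq[1] = sigma**2                # a_2 = sigma^2, all other a_j = 0
    for k in range(1, n + 1):
        val = sp.expand(bell(n, k, seq))
        expected = factorial2(2 * k - 1) * sigma ** (2 * k) if n == 2 * k else 0
        assert sp.simplify(val - expected) == 0
print("Brownian case: only B_{2k,k} survive and equal (2k-1)!! sigma^(2k)")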

42 (Brow, T), (Brow, Po). If X is Brownian and T is any subordinator, or a Poisson subordinator, then the cumulants are given by:

(X, T)          | T: b_n                              | T is Po: b_n = b
X: a_n          | c_n = B_n(b•, a•)                   | c_n = b B_n(1•, a•)
Bro: a_2 = σ²   | c_n = (2k−1)!! σ^{2k} b_k, n = 2k   | c_n = (2k−1)!! σ^{2k} b

43 (X, Po). Suppose T(t) is a Poisson process with parameter b; then c• = b x• and
t_k(t) = B_k(t•, b•) = Σ_{j=1}^k (tb)^j S_{k,j},
y_n(t) = B_n(t•(t), a•) = Σ_{k=1}^n ( Σ_{j=1}^k (tb)^j S_{k,j} ) B_{n,k}(a•).

44 (Po, Po). If X is a Poisson process and T is a Poisson process:

(X, T)        | T: b_n                      | T is Po: b_n = b
X: a_n        | c_n = B_n(b•, a•)           | c_n = b B_n(1•, a•)
Po: a_n = a   | c_n = Σ_k b_k a^k S_{n,k}   | c_n = b Σ_k a^k S_{n,k}

45 Poisson subordinated by Poisson: the Neyman process. Let X(t) and T(t) be two Poisson processes with intensity a and b respectively. Then the subordinated process Y(t) is a compound Poisson process with intensity equal to the total mass of the Lévy measure, ψ_Y(∞) = b(1 − e^{−a}). The intensity of Y(t) is less than the intensity of T(t). The Lévy measure is a zero-truncated Poisson probability measure (up to the factor b):
Π_Y({k}) = ∫_0^∞ p_X(s, {k}) b δ(s − 1) ds = b a^k e^{−a} / k!,  k = 1, 2, ...

46 Obviously, a_n = a and b_n = b, and the moments are
x_n = B_n(1•, a•) = Σ_{k=1}^n a^k S_{n,k}.
Then
c_n = b Σ_{k=1}^n B_{n,k}(a 1•) = b Σ_{k=1}^n a^k S_{n,k},
y_n(t) = B_n(t•(t), a•) = Σ_{k=1}^n ( Σ_{j=1}^k (tb)^j S_{k,j} ) a^k S_{n,k}.
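A sketch checking these cumulants against the cumulant generating function of Y(1), which for the Neyman process is K_Y(λ) = b(exp(a(e^λ − 1)) − 1):

```python
# Sketch (Neyman process): Taylor coefficients of K_Y(lam) = b*(exp(a*(e^lam - 1)) - 1)
# reproduce c_n = b * sum_k a^k S_{n,k}.
import sympy as sp
from sympy.functions.combinatorial.numbers import stirling

a, b, lam = sp.symbols("a b lam", positive=True)
K_Y = b * (sp.exp(a * (sp.exp(lam) - 1)) - 1)

for n in range(1, 6):
    from_cgf = sp.diff(K_Y, lam, n).subs(lam, 0)
    from_stirling = b * sum(a**k * stirling(n, k) for k in range(1, n + 1))
    assert sp.simplify(from_cgf - from_stirling) == 0
print("Neyman cumulants match for n = 1..5")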

47 The transition probability of the Neyman process is
p_Y(t, {k}) = (a^k e^{−bt} / k!) Σ_{j=0}^∞ j^k (bt e^{−a})^j / j!,  k = 0, 1, 2, ...
This probability distribution was introduced by Neyman in 1939.

48 (Ga, Po). If X is a Gamma process and T is a Poisson process:

(X, T)               | T: b_n                              | T is Po: b_n = b
X: a_n               | c_n = B_n(b•, a•)                   | c_n = b B_n(1•, a•)
Ga: a_n = a^n (n−1)! | c_n = a^n Σ_k b_k B_{n,k}((•−1)!)   | c_n = b a^n n!

49 Let X(s) be a Gamma process with parameter a and T(t) be a Poisson process with parameter b. Then the subordinated process Y(t) is a compound Poisson process with the following moments:
y_n(t) = a^n Σ_{k=1}^n ( Σ_{j=1}^k (bt)^j S_{k,j} ) B_{n,k}((• − 1)!),
or equivalently
y_n(t) = a^n Σ_{k=1}^n (bt)^k C(n−1, k−1) n!/k!.

50 The Gamma process X(s) subordinated by a Poisson process is a compound Poisson process, but not integer-valued, with transition distribution characterized by the Bessel function of the first kind. The jump times of the subordinated process are the same as the jump times of the subordinator.

51 Namely, the Bernstein function and the Lévy measure are equal to
ψ_Y(λ) = b(1 − 1/(1 + aλ)) = baλ/(1 + aλ),  Π_Y(dx) = (b/a) e^{−x/a} dx,
which has total mass b, and it is easy to calculate the cumulants:
c_n = b Σ_{k=1}^n B_{n,k}(a^• (• − 1)!) = b a^n n!.
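Since the cumulants of a driftless compound Poisson process are the moments of its Lévy measure, c_n = ∫ x^n Π_Y(dx); a short SymPy sketch confirms that this integral equals b a^n n! for the first few n.

```python
# Sketch: c_n = int_0^oo x^n * (b/a) * exp(-x/a) dx = b * a^n * n!, checked for n <= 5.
import sympy as sp

a, b, x = sp.symbols("a b x", positive=True)

for n in range(1, 6):
    c_n = sp.integrate(x**n * (b / a) * sp.exp(-x / a), (x, 0, sp.oo))
    assert sp.simplify(c_n - b * a**n * sp.factorial(n)) == 0
print("c_n = b * a^n * n! for n = 1..5")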

52 The transition probability is
p_Y(t, dx) = e^{−(bt + x/a)} (tb/a) Σ_{k=1}^∞ (xbt/a)^{k−1} / (k! Γ(k)) dx.
The Bessel function of the first kind with parameter α, denoted by J_α(x), is a solution of the equation
x² y'' + x y' + (x² − α²) y = 0
and has the following series expression:
J_α(x) = Σ_{k=0}^∞ (−1)^k / (k! Γ(k + α + 1)) (x/2)^{2k+α}.
In our example α = −1, i.e. a negative integer; the first term of the series vanishes and J_{−1}(x) = (−1)¹ J_1(x).

53 Obviously, it is more convenient to calculate the moments via the Bell polynomials and the cumulants via the Lévy measure:
y_n(t) = Σ_{k=1}^n B_{n,k}(a•) Σ_{j=1}^k (bt)^j S_{k,j}.
For completeness we give the direct calculation:
y_n(t) = ∫_0^∞ x^n e^{−(bt + x/a)} (tb/a) Σ_{k=1}^∞ (xbt/a)^{k−1} / (k! Γ(k)) dx,
y_n(t) = a^n Σ_{k=1}^n (bt)^k B_{n,k}(•!).

54 Cumulants of the subordinated process Y(t) = X(T(t)), for a general subordinator T and for T Poisson:

(X, T)               | T: b_n                              | T is Po: b_n = b
X: a_n               | c_n = B_n(b•, a•)                   | c_n = b B_n(1•, a•)
Po: a_n = a          | c_n = Σ_k b_k a^k S_{n,k}           | c_n = b Σ_k a^k S_{n,k}
Ga: a_n = a^n (n−1)! | c_n = a^n Σ_k b_k B_{n,k}((•−1)!)   | c_n = b a^n n!
Bro: a_2 = σ²        | c_n = (2k−1)!! σ^{2k} b_k, n = 2k   | c_n = b (2k−1)!! σ^{2k}

55 Suppose T(t) is a Gamma process — cumulants:

(X, T)               | T: b_n                              | T is Ga: b_n = b^n (n−1)!
X: a_n               | c_n = B_n(b•, a•)                   | c_n = Σ_k b^k (k−1)! B_{n,k}(a•)
Po: a_n = a          | c_n = Σ_k b_k a^k S_{n,k}           | c_n = Σ_k (ab)^k (k−1)! S_{n,k}
Ga: a_n = a^n (n−1)! | c_n = a^n Σ_k b_k B_{n,k}((•−1)!)   | c_n = a^n Σ_k b^k (k−1)! B_{n,k}((•−1)!)
Bro: a_2 = σ²        | c_n = (2k−1)!! σ^{2k} b_k, n = 2k   | c_n = (2k−1)!! σ^{2k} b^k (k−1)!

56 (X, Ga). Let X(s) be any Lévy process with finite moments x_n and cumulants a_n, and suppose T(t) is a Gamma process with finite moments t_n(t) = b^n Γ(t+n)/Γ(t) and cumulants b_n = b^n (n−1)!. Then Y(t) is a Lévy process with the following moments
y_n(t) = Σ_{k=1}^n b^k (Γ(t+k)/Γ(t)) B_{n,k}(a•)
and cumulants
c_n = Σ_{k=1}^n b^k (k−1)! B_{n,k}(a•).

57 (Po, Ga). Let X(s) be a Poisson process with parameter a and suppose T(t) is a Gamma process with parameter b. Then the subordinated process Y(t) is a Negative Binomial process with the following moments and cumulants:
y_n = Σ_{k=1}^n (ab)^k k! S_{n,k},  c_n = Σ_{k=1}^n (ab)^k (k−1)! S_{n,k}.
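Here the cumulant generating function of Y(1) is K_Y(λ) = −log(1 − ab(e^λ − 1)); a sketch comparing its Taylor coefficients with the stated cumulants (same SymPy assumptions as before):

```python
# Sketch (Poisson subordinated by Gamma): coefficients of -log(1 - a*b*(e^lam - 1))
# reproduce c_n = sum_k (a*b)^k (k-1)! S_{n,k}.
import sympy as sp
from sympy import factorial
from sympy.functions.combinatorial.numbers import stirling

a, b, lam = sp.symbols("a b lam", positive=True)
K_Y = -sp.log(1 - a * b * (sp.exp(lam) - 1))

for n in range(1, 6):
    from_cgf = sp.diff(K_Y, lam, n).subs(lam, 0)
    from_stirling = sum((a * b) ** k * factorial(k - 1) * stirling(n, k)
                        for k in range(1, n + 1))
    assert sp.simplify(from_cgf - from_stirling) == 0
print("Negative binomial cumulants match for n = 1..5")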

58 The transition probability density of the Gamma process T(t) with parameter b is given by
p_T(t, dx) = (1/Γ(t)) (x/b)^{t−1} e^{−x/b} dx/b.
The Bernstein function is equal to ψ_T(λ) = log(1 + bλ) and the Lévy measure is
Π_T(dx) = x^{−1} e^{−x/b} dx,  x > 0.
It is easy to calculate the moments and cumulants:
t_k = b^k k!,  b_k = b^k (k−1)!.

59 Namely,
t_k(t) = b^k Γ(t + k)/Γ(t),  b_k = ∫_0^∞ x^k Π_T(dx) = ∫_0^∞ x^{k−1} e^{−x/b} dx = b^k Γ(k) = b^k (k−1)!.
Then
y_n = B_n(t•, a•) = Σ_{k=1}^n t_k B_{n,k}(a•) = Σ_{k=1}^n b^k k! a^k S_{n,k}.
In the same way,
c_n = B_n(b•, a•) = Σ_{k=1}^n b_k B_{n,k}(a•) = Σ_{k=1}^n b^k (k−1)! a^k S_{n,k}.

60 (Ga, Ga). The Gamma process X(s) with parameter a subordinated by a Gamma process with parameter b has the following moments and cumulants:
y_n(t) = a^n Σ_{k=1}^n b^k (Γ(t+k)/Γ(t)) B_{n,k}((• − 1)!),
c_n = a^n Σ_{k=1}^n b^k (k−1)! B_{n,k}((• − 1)!).

61 Cumulants of the subordinated processes Y(t) = X(T(t)):

(X, T)               | Po: b_n = b           | Ga: b_n = b^n (n−1)!
X: a_n               | b B_n(1•, a•)         | Σ_k b^k (k−1)! B_{n,k}(a•)
Po: a_n = a          | b Σ_k a^k S_{n,k}     | Σ_k (ab)^k (k−1)! S_{n,k}
Ga: a_n = a^n (n−1)! | b a^n n!              | a^n Σ_k b^k (k−1)! B_{n,k}((•−1)!)
Bro: a_2 = σ²        | b (2k−1)!! σ^{2k}     | (2k−1)!! σ^{2k} b^k (k−1)!   (n = 2k)

62 The moments of the subordinated Lévy processes are:

(X, T)               | T: t_n(t)                         | T is Po
X: a_n               | y_n(t) = B_n(t•, c•)              | Σ_k (bt)^k B_{n,k}(x•)
Po: a_n = a          | Σ_k t_k(t) a^k S_{n,k}            | Σ_k (Σ_{j=1}^k (tb)^j S_{k,j}) a^k S_{n,k}
Ga: a_n = a^n (n−1)! | a^n Σ_k t_k(t) B_{n,k}((•−1)!)    | a^n Σ_k (bt)^k B_{n,k}(•!)
Bro: a_2 = σ²        | (2k−1)!! σ^{2k} t_k(t), n = 2k    | (2k−1)!! σ^{2k} Σ_{j=1}^k (tb)^j S_{k,j}

63 The moments of the subordinated Lévy processes are (continued):

64
(X, T)               | T is Po                                      | T is Ga
X: a_n               | Σ_k (bt)^k B_{n,k}(x•)                       | Σ_k b^k (Γ(t+k)/Γ(t)) B_{n,k}(a•)
Po: a_n = a          | Σ_k (Σ_{j=1}^k (tb)^j S_{k,j}) a^k S_{n,k}   | Σ_k (ab)^k k! S_{n,k}
Ga: a_n = a^n (n−1)! | a^n Σ_k (bt)^k B_{n,k}(•!)                   | a^n Σ_k b^k k! B_{n,k}((•−1)!)
Bro: a_2 = σ²        | (2k−1)!! σ^{2k} Σ_{j=1}^k (tb)^j S_{k,j}     | (2k−1)!! σ^{2k} b^k k!   (n = 2k)

65 The symmetric Variance-Gamma process (Brow, Ga). The Variance-Gamma process may also be expressed as the difference of two independent Gamma processes. Its cumulants and moments are
c_n = b^k (k−1)! (2k−1)!! σ^{2k},  y_n(t) = (2k−1)!! σ^{2k} b^k Γ(t+k)/Γ(t),  n = 2k.

66 The symmetric Meixner process. Biane, P., Pitman, J. and Yor, M.: Probability laws related to the Jacobi theta and Riemann zeta functions, and Brownian excursions, Bulletin of the American Mathematical Society 38 (2001). The Meixner(α, β, δ) density is given by
p(x) = ((2 cos(β/2))^{2δ} / (2απΓ(2δ))) exp(βx/α) |Γ(δ + ix/α)|²,  x ∈ (−∞, ∞), α > 0, δ > 0, |β| < π.

67 The characteristic function is
E(exp(iλY(t))) = ( cos(β/2) / cosh((αλ − iβ)/2) )^{2δt},  λ ∈ ℝ, t ≥ 0.
Let X(t) be a Brownian motion with Laplace exponent ψ_X(λ) = λ²σ²/2 and let the subordinator be
T(t) = (2/π²) Σ_{n=1}^∞ G_n(t),
where G_n(t) is a Gamma process with mean β_n t

68 and Lévy measure Π_n(dx) = x^{−1} exp(−x/β_n) dx, where the constant β_n = b/(n − 1/2)². Then the subordinated process Y(t) = X(T(t)) is a Meixner(α, 0, δ) process with parameters α = 2σ√b, δ = 1/2.

69 THANKS FOR ATTENTION
