On Kummer s distributions of type two and generalized Beta distributions

Size: px
Start display at page:

Download "On Kummer s distributions of type two and generalized Beta distributions"

Transcription

1 On Kummer s distributions of type two and generalized Beta distributions Marwa Hamza and Pierre Vallois 2 March 20, 205 Université de Lorraine, Institut de Mathématiques Elie Cartan, INRIA-BIGS, CNRS UMR 7502, BP 239, F Vandœuvre-lès-Nancy Cedex, France marouahamza@univ-lorrainefr; 2 pierrevallois@univ-lorrainefr Abstract: We characterize the Kummer distributions of type two and the generalized beta distributions as solutions of an equation involving gamma and respectively beta distributions We give also some almost sure realizations of Kummer s distributions and generalized beta ones using the conditioning method and the rejection method as an application Key words: Generalized beta distribution, Kummer distribution of type two, Letac Wesołowski Matsumoto and Yor property of independence MSC subject classifications: 60B0, 60F5, 60G40, 60G50, 60J0 Introduction According to [2], a bijective and decreasing function f from ]0, + [ to ]0, + [ is said to be a Letac-Wesołowski-Matsumoto-Yor LWMY function if there exist two positive and independent random variables X and Y such that fx + Y and fx fx + Y are independent It is has been proved in [6] and [7] that the function f 0 x = /x is a LWMY function and leaded to the Matsumoto- Yor property of independence which involves the Generalized Inverse Gaussian distributions and the gamma ones Under additional assumptions, Koudou and Vallois [2] have shown that there exist 4 classes of LWMY functions including the one generated by f 0 The three other classes are generated respectively by and f x = e x, x > 0, g x = f x = ln x + x, x > 0, 2 e fδ x + δ x = ln e x, x > 0, δ > The two cases f = f and f = g = f are "equivalent" and lead to gamma and Kummer s of type two distributions Recall that: γλ, cdx = cλ Γλ xλ e cx ]0,+ [ xdx, λ, c > 0,

2 2 βa, b = Γa + b ΓaΓb xa x b [0,] xdx and, as usual, γλ stands for γλ, The Kummer distribution of type two, with parameters a, c > 0 and b IR is the distribution defined over ]0, + [ as: K 2 a, b, cdx = Γa Ψa, b, c xa + x a b e cx ]0,+ [ xdx 4 where Ψ is the hyper-geometric confluent function of type 2: Ψa, b, c = Γa 0 t a + t b a e ct dt We denote by LX the law of the random variable X It is convenient to recall the result of Koudou and Vallois which tells that there exists a relation involving couples of positive and independent random variables with Kummer s and gamma distributions Theorem Theorem 2 in [3] Let X and Y be two independent and positive random variables such that: LX = K 2 a, b, c and LY = γb, c, 5 where a, b, c > 0 Then, the random variables: U := + X + Y + X are independent and X, V := X + Y 6 X + Y LU = βa, b and LV = K 2 a + b, b, c 7 Theorem 2 in [3] says more generally that if X and Y are independent rv s whose log densities are locally integrable, then U and V defined by 6 are independent if and only if 5 holds Since we do not use this result in its full generality we only give its "weak" form, ie the formulation given in Theorem Note that the second relation in 6 and 5 imply the following relation of convolution K 2 a, b, c γb, c = K 2 a + b, b, c 8 The Kummer distribution K 2 a, b, c with positive parameters a and c is close to the γa, c distribution since their densities functions are intertwined via the relation: K 2 a, b, cx = c a + x a b γa, cx x > 0, 9 Ψa, b, c We observe that if b > a, then + x a b for any x > 0 This permits to prove that the probability measure K 2 a, b, c is the conditional distribution of a random variable which is γa, c- distributed see Proposition 25 This property allows us, see Proposition 27, to revisit Theorem and to show that identity 9 is the analog, in some sense, to the well known identity: γa, c γb, c = γa + b, c, a, b, c > 0 0 We can go further in this direction since, when a, c > 0 and b > a, 9 implies K 2 a, b, cx c a Ψa, b, c γa, cx, x > 0

3 3 Therefore, we can apply the rejection method to obtain a second almost sure realization of the Kummer s distribution K 2 a, b, c, see Proposition 20 A third as realization of 5 and 7 will be given in Proposition 22, considering the first time where a Markov chain valued in ]0, [ ]0, + [ enters the domain {x, y, y x < c}, where c > 0 Note that K 2 a, b, c cannot be equal to K 2 a + b, b, c Therefore, 8 does not give a closed identity satisfied by the Kummer distribution K 2 a, b, c We will consider in Proposition 2 a different transformation than +x+y +x x, y involving the gamma ones x x+y, x + y which permits to give a new characterization of Kummer s distributions Theorem 2 Let X, Y and Y 2 independent and positive random variables such that LY = γa, c and LY 2 = γa + b, c, 2 where a, c > 0 and b > a Then LX = L Y + Y2 +X if and only if LX = K 2 a, b, c 3 2 Let Y i i be a collection of independent random variables such that LY 2i = LY = γa, c, LY 2i = LY 2 = γa + b, c, i 4 Then L Y Y Y 3 + = K 2 a, b, c 5 3 Concerning the last class of LMWY functions which is generated by the function fδ defined in 3, Koudou and Vallois in [2] have considered the following change of variable f δ x = exp f δ ln x = x, 0 < x < 6 + δ x The Matsumoto-Yor independence property can be reformulated and takes the following form: we look for two independent random variables X and Y such that f δ XY and f δ X f δ XY are independent 7 Under additional assumptions, the laws of X and Y satisfying the above condition have been determined, see Theorem 24 in [3] For simplicity we only mention the weak part of this result Let us recall the definition introduced in [2] of the generalized beta distribution β δ a, b, c, with parameters a, b > 0 and c IR: β δ a, b, cdx = k δ a, b, c x a x b + δ x c [0,] x dx, 8 Note that β δ a, b, c = H c, a; a + b; δ, where Ha, b; c; z stands for the hyper-geometric distribution

4 4 Theorem 3 Theorem 24 in [3] Let X and Y be two independent random variables valued in [0, ] and such that LX = β δ a + b, λ, λ b and LY = βa, b 9 where a, b and λ > 0 Then f δ X U δ, V δ = f δ XY, f δ XY = XY + δ XY, X + δ X + δ XY XY 20 are independent and LU δ = β δ λ + b, a, a b and LV δ = βλ, b 2 The distribution β δ a, b, c can be compared with the βa, b one, since β δ a, b, cx = k δ a, b, c ΓaΓb Γa + b + δ xc βa, bx 22 Consequently, if c < 0 and δ, then + δ x c for any x ]0, [ That inequality allows us to give two almost sure realizations of the generalized beta distribution The first one as a conditional law of a random variable with beta distribution cf Proposition 35 and the second one is based on the rejection method cf Proposition 38 Using twice Theorem 3 we get a relation satisfied by the β δ a + b, λ, λ b distribution Theorem 4 Let a, b, λ > 0; X, Y and Y 2 independent random variables such that LY = βλ, b and LY 2 = βa, b Then LX = L f δ Y f δ XY 2 LX = β δ a + b, λ, λ b 23 The paper is organized as follows: in Section 2 resp 3 we state our results related to the Kummer distributions resp the generalized beta distributions All the proofs and the intermediate and technical results are postponed in Section 4 2 Results related to Kummer s distributions 2 A characterization of Kummer s distributions The goal of this section is to prove that certain Kummer distributions K 2 a, b, c with a, c > 0 can be characterized as the unique solution of a convolution equation As mentioned in the Introduction, relation 8 does not give an answer to the question Thus, we define a new transformation which is the starting point of our approach y Proposition 2 The transformation T : x, y + x, x + y is an involution + x over ]0, + [ ]0, + [ 2 Let X and Y be two independent random variables Define U, V := T X, Y, ie U := Y and V := X + Y 2 + X + X a Suppose that LX = K 2 a, b, c and LY = γa + b, c, where a, c > 0 and b > a Then U and V are independent and LU = K 2 a + b, b, c and LV = γa, c 22

5 5 b Suppose that LX = K 2 a + b, b, c and LY = γa, c, where a, c > 0 and b > a Then U and V are independent and L U = K 2 a, b, c and L V = γa + b, c 23 Remark 22 Let X and Y be two independent random variables distributed as in item 2 b of Y Proposition 2 Then, using the Mellin transform, Letac have shown in [4], that L = + X K 2 a, b, c However, our proof given in Subsection 4 is different We claim that Proposition 2 implies one part of 3 Indeed, let Y, Y 2 and X be independent random variables such that LY = γa, c, LY 2 = γa + b, c, LX = K 2 a, b, c, a, b, c > 0 Y2 Part 2 a of Proposition 2 implies that L = K 2 a + b, b, c Using item 2 b with + X Y2 X, Y replaced by + X, Y we get L Y + Y 2 + X = LX = K 2 a, b, c, 24 which proves the direct part of the identity 3 given in Theorem 2 The converse is more difficult and requires new ingredients Relation 24 leads to introduce: [y ] = y 25 and [y, y 2,, y n ] = y + [y 2,, y n ], n 2 26 where y n n is a sequence of real numbers in ]0, + [ Note that [Y, Y 2, X] = result Y + Y 2 + X The proof of item 2 of Theorem 2 is based on the following Proposition 23 Let Y, Y 2,, Y n, be a collection of independent and positive random variables such that L Y 2i = L Y, LY 2i = LY 2 i 27 Then [Y,, Y n ] converges almost surely as n to a random variable Y 2 For any positive random variable X 0, the sequence [Y,, Y n, X 0 ] converges almost surely to Y as n It is now easy to prove Item of Theorem 2 Let Y i i be a sequence of rv s satisfying 4 and consider a random variable X, independent from Y i i and such that LX = L Y + Y2 +X 28

6 6 Item of Proposition 23 implies that [Y,, Y n, X] and [Y,, Y n ] converge as to the same random variable Y, as n Since the law of X satisfies 28, it is clear that L [Y,, Y 2n, X] = LX, for any n Thus, LX = LY Using Proposition 2, we deduce then that 28 has a unique solution and LX = K 2 a, b, c Remark 24 In the setting of Generalized Inverse Gaussian and gamma distributions, Letac and Seshadri have developed in [5] a similar scheme using continue fractions 2 Chamayou and Letac have shown in [] a result which looks like 3 and involves the Beta distributions of type 2 Recall the definition of the Beta distribution of type 2 with parameter a, b > 0: β 2 Γa + b a, b dx = ΓaΓb xa + x a b ]0,+ [ xdx Let X, Y and Y 2 be independent random variables such that LX = β 2 a, b, LY = β 2 a + b, a, LY 2 = β 2 a + b, b Then, the authors prove in Example of []: L Y + Y 2 + X = LX 29 The proof of 29 is based on an iterative scheme similar to the one given in Proposition 23 3 Identity 3 permits to prove a part of Item of Example 9 in [] In fact, consider two independent random variables Y and Y 2 such that LY = γa and LY 2 = γa + b We introduce, for c > 0, Y c i := Y i c, i =, 2 It is clear that L Y c = γa, c and L Y c 2 = γa + b, c Consider a collection of random variables X c such that for all c > 0, Xc is independent of Y, Y 2 and with distribution K 2 a, b, c Since K 2 a, b, c converges to β 2 a, b, when c 0, then X c converges, when c 0, to a random variable X with distribution β 2 a, b, we have Y c + Y c 2 + X c Then we deduce from Theorem 2 that L X c = L = c + c + Y Y 2 + X c Y Y2 +X c When c goes to 0, we get LX = L Ŷ + X, where Ŷ is independent of X and LŶ = L Y = β 2 a, a + b This identity has been given in Item of Example in [] Y 2 c>0

7 7 22 Realizations of Kummer s distributions The results of this section are based on the fact that the Kummer distribution K 2 a, b, c with positive parameters is close to the γa, c distribution Inequality allows us to express the distribution K 2 a, b, c as a conditional law of a random variable X 0 with γa, c distribution and to use the rejection schema for simulating a random variable with K 2 a, b, c distribution Since we will use intensively later conditional distributions, we denote LX A the distribution of the random variable X conditionally on the event A Proposition 25 Let X 0, ξ be a couple of independent random variables such that LX 0 = γa, c and Lξ = U [0, ], where a, c > 0 Then, for any b > a, conditionally on {ξ + X 0 a+b < }, LX 0 = K 2 a, b, c 20 The proof of Proposition 25 is straightforward Since Proposition 25 tells us that K 2 a, b, c can be compared with γa, c, it seems natural to ask how 8 is far from 0 As mentioned in the Introduction, 8 is one part of the more general identity 7 This leads us to investigate more generally the distribution of U, V conditionally on {ξ + X 0 a+b < }, where U = + X 0 + Y + X 0 X 0 X 0 + Y, V = X 0 + Y, 2 Y is independent of X 0, ξ and LY = γb, c The key idea is to introduce: Z = X 0 X 0 + Y Consequently, the random variables U and X 0 can be easily expressed in terms of Z and V : U = 22 + V Z + V Z, X 0 = ZV 23 Moreover the law of Z, V is known Indeed, since L X 0, Y = γa, c γb, c then L Z, V X0 = L X 0 + Y, X 0 + Y = βa, b γa + b, c 24 As a result, the previous change of variables leads us to determine the law of + V Z + V Z, V, conditionally on {ξ + V Z a+b < } We consider a more general setting Proposition 26 Let a, b > 0, δ and ξ, V, Z three random variables such that L Z, V, ξ = βa, b LV U[0, ] 25 and V [ δ, ] resp V if δ > resp δ = Then, ZV L + V Z, ξ[ δ + V Z ] a+b ξ [ ] a+b, V δ + V Z < = L Z, ξ L V ξv δ a < 26 The proof of Proposition 26 is given in Section 4 This general result has two consequences The first one is to provide a realization of 5 and 7, see the formulation given in Proposition 27 below The second one is to give a proof of Proposition 37 related to the generalized beta distributions As explained in the Introduction, the Matsumoto-Yor property, leads after transformations to the Kummer distributions and generalized beta distributions and there is a formal link between these two families of laws Proposition 26 gives a more concrete relation

8 8 Proposition 27 Let a, b, c > 0, X 0, ξ, Y three random variables such that: L X 0, Y, ξ = γa, c γb, c U[0, ] 27 Then L X 0, Y ξ + X0 a+b < = K 2 a, b, c γb, c 28 and L U, V ξ + X0 a+b < + V Z ξ = L + V Z, V + V Z a+b < 29 = βa, b K 2 a + b, b, c, where U, V resp Z has been defined by 2 resp 22 Remark 28 Identity 28 is a direct consequence of Proposition 25 2 As shows 23, U can be expressed as an homographic function of Z It is easy to prove that the homographic functions ϕ defined over [0, ] and such that ϕ0 = 0 and ϕ = resp ϕ0 = and ϕ = 0 are of the type ϕ = ϕ c resp ϕx = ϕ c x for some c > 0, where ϕ c x := cx, x [0, ], c > c x Note that the function f δ defined by 6 equals: f δ x = ϕ /δ x, x [0, ], δ > 0 22 From 23, it is clear that U = ϕ +V Z 222 Therefore, we have the following by-product of Proposition 27 Corollary 29 Let Z, V, ξ be three independent random variables such that L Z, V, ξ = βa, b γa + b, c U[0, ] where a, b, c > 0 Then L ϕ V Z ξ + V Z a+b < = βa, b In Proposition 3, we calculate the law of ϕ V Z when V is a constant and Z is β δ a, b, c-distributed We exploit again the vein of conditioning but not in relation with Theorem Identity allows us to use the rejection method which provides a second realization of Kummer s distributions Proposition 20 Let X i, Y i, ξ i i a sequence of independent random variables such that L X i, Y i, ξ i = γa, c γb, c U[0, ], where a, b, c > 0 Let T = inf {i ; ξ i + X i a+b < } 223 Then T is as finite and

9 9 X T and Y T are independent and L X T, Y T = K 2 a, b, c γb, c The random variables U = + X T + Y T X T + X T X T + Y T L U, V = βa, b K 2 a + b, b, c and V = X T + Y T are independent and Remark 2 Obviously Item of Proposition 20 provides a new almost sure realization of Kummer s distributions whose parameters are positive 2 Although Item 2 is a direct consequence of 224 and Theorem, we will prove it directly The random variable T defined by 223 can be considered as a first passage time Vallois [8] gave an almost sure realization of the relation of convolution between the Generalized Inverse Gaussian distribution and the gamma ones He has introduced two-dimensional random walks and he studied the first hitting time of these random walks above an hyperbola Inspired by [8], we give a "similar" realization of Kummer s distributions We begin with notations Let a > 0, α > α 2 > 0, Z 0, Z, Z 2 and Z 3 independent random variables such that LZ 0 = γa, LZ = γα, LZ 2 = γα 2, LZ 3 = γα α Consider two sequences Y i i and Y i i of independent and exponentially distributed rv s We suppose that Z 0, Z, Z 2, Z 3, Y i i, Y i i are independent Then, we set: X n := Z 0 Z 0 + Z + Y + + Y n, X 2 n := Z 2 + Y + + Y n, n 226 It is cleat that X n and X n 2 n are independent and, n L X n, X 2 n = βa, α + n γn + α 2, n 227 We are interested in the following stopping time: τ = inf{n, X 2 n X n > c}, 228 with c > 0 and the convention inf = + Proposition 22 Conditionally on C := {τ < +, X 2 τ X τ < c}, 229 The random variables X and Y defined as X := X τ X τ, Y := X 2τ + Z 3 c X τ 230 are independent and L X, Y = K 2 a, α α 2 +, c γα α 2 +, c 23 where a, c > 0 and α > α 2 > 0

10 0 2 The random variables U and V defined as U = X 2τ + Z 3 X 2 τ + Z 3 c X τ, V = X 2τ + Z c are independent and L U, V = βa, α α 2 + K 2 a + α α 2 +, α + α 2, c 233 Remark 23 In the case C = {X 2 τ X τ > c}, we have not been able to consider the good rv s of interest 2 It is easy to show that the random variables U and V defined in 232 can be written as: U = + X + Y + X X X + Y, V = X + Y Therefore part 2 of Proposition 22 is a direct consequence of Theorem 3 We can give a geometric interpretation of the random variables X, Y, U and V Let us introduce the points: X2 τ + Z 3 c M, X 2 τ + Z 3 X τ X2 τ + Z 3 c X 2 τ + Z 3 M, X 2 τ + Z 3 c M X 2 τ + Z 3 c X τ X 2 τ + Z 3, X 2 τ + Z 3 c Let m be the orthogonal projection of M on the horizontal line y = m M Then m M = X, MM = Y, = U and m M = V Thus Proposition 22 can be m M rewritten as follows: a The two parts m M and MM of the segment m M are independent and their laws are given by 23 b The fraction m M and the total length of m M are independent and the distribution of m M these quantities is given by Results involving generalized beta distributions We give here two different kinds of results: A relation involving the generalized beta distributions and the beta ones which permits to characterize certain generalized beta laws 2 Families of rv s whose law are of the generalized beta type 3 General facts on the generalized beta distributions We give invariance properties verified by the generalized beta distributions Recall that δ is a positive parameter, f δ is the function defined by 6, β δ a, b, c is the distribution defined by 8, and the homographic function ϕ δ is defined by 220 Proposition 3 Let a, b, δ > 0, c IR and Z a random variable such that LZ = β δ a, b, c Then +

11 L Z = β /δ b, a, c 2 L ϕ δ Z = β /δ a, b, a b c 3 L f δ Z = β δ b, a, a b c Corollary 32 Let δ > 0 and Z such that LZ = βa, b Then δz L ϕ δ Z = L = β /δ a, b, a b + δ Z Corollary 32 permits to generate rv s which are β δ -distributed where the third parameter is negative via classical beta random variables It also reveals the specific role of the parameter δ It is clear that Corollary 32 is an application of Proposition 3 with c = 0 The proof of Proposition 3 is direct 32 A characterization of the generalized beta distributions We deal with Theorem 4 and we proceed similarly to Section 2 related to Kummer s distributions However there are differences, for instance we do not need to introduce new identities cf Proposition 2, Theorem 3 suffices Let us start with three independent random variables X, Y and Y 2 such that: LX = β δ a + b, λ, λ b, LY = βλ, b, LY 2 = βa, b, a, b, λ > 0, 3 then Item of Theorem 3 implies that L f δ XY 2 = β δ λ + b, a, a b Since Y and f δ XY 2 are independent and LY = βλ, b, we deduce from Item 2 of Theorem 3 that L f δ Y f δ Y 2 = βa + b, λ, λ b and is therefore equals to LX, which gives the proof of the direct part of identity 23 stated in Theorem 4 Consider a sequence of independent random variables Y i i such that LY 2i = LY = βλ, b, i 32 LY 2i = LY 2 = βa, b, i 33 Then, iterating the identity: LX = L f δ Y f δ XY 2 34 gives L f δ Y f δ Y2 f δ Y 3 f δ Y 2n f δ Y 2n X = β δ a + b, λ, λ b 35 Consider identity 34, in order to prove that LX = β δ a + b, λ, λ b is its only one possible distribution, it is useful to introduce the sequence defined recursively as [y ] = y 36 [y,, y n ] = f δ y [y 2,, y n ] n 37 where y i i is a sequence of real numbers in [0, ] As a consequence, relation 35 can be reformulated as follows: L [Y,, Y 2n, X] = LX = β δ a + b, λ, λ b 38

12 2 Theorem 33 Let Y i i be a sequence of independent random variables such that Then, LY 2i+ = LY = βλ, b, i 0 LY 2i = LY 2 = βa, b, i [Y,, Y n ] converges almost surely as n to a random variable Y 2 For any positive random variable X 0, the sequence [Y,, Y n, X 0 ] converges almost surely to Y as n The proof of Theorem 33 needs technical and preliminaries results which are postponed in Section 4 Remark 34 Theorem 4 is a direct consequence of Theorem 33 In fact,the first part ie comes from the above developments Concerning the second part ie, suppose that 34 holds Let Y i n be a sequence of rv s satisfying assumptions of Theorem 33 and independent from X Then, LX = L [Y,, Y 2n, X] Taking the limit n and using Theorem 33 we deduce that the law of X is unique and equals to LY 2 The method considered in the proof of Theorem 33 is totaly different from the one used by Letac and Seshadri [5] for the Generalized Inverse Gaussian distribution and from the method that we have developed to characterize the Kummer distributions of type two 3 Theorem 4 has been stated in a different form by Chamayou and Letac cf Example 2, [] In fact, let us first observe that if LX = β δ a + b, λ, λ b then Proposition 3 implies that L f δ X = β δ λ, a + b, a Consequently, taking the image by f δ which is an involution, it is easy to prove that 23 is equivalent to LX = L Y f δ X Y 2 LX = β δ λ, a + b, a = Ha, λ; a + b + λ; δ 39 As for the last identity, recall that according to the definition of the hyper-geometric distribution Ha, b; c; z with parameters a > 0, 0 < b < c and z <, we have β δ a, b, c = H c, a; a + b; δ, or equivalently Ha, b; c; z = β z b, c b, a In [], the authors have shown the direct part of 39, ie, but they did not prove the converse Its proof is based on Theorem 33 and is not straightforward 33 Almost sure realizations of the generalized beta distributions We begin by observing that relation 22 allows us to express the β δ a, b, c distribution with a, b > 0 and c < 0 as a conditional law of a random variable X 0 which is βa, b-distributed Recall, see Corollary 32, that such generalized beta-distributions have be obtained via direct images of beta distributions Proposition 35 Let a, b, c > 0, δ and X 0, ξ two independent random variables such that L X 0, ξ = βa, b U [0, ] Then L X 0 ξ [ + δ X0 ] c < = βδ a, b, c 2 L X 0 ξ [ δ + δx0 ] c < = β/δ a, b, c The proof of Proposition 35 is direct This proposition gives a better understanding of Theorem 3, see Proposition 37 below Let us first observe that taking δ = in Theorem 3 permit to recover the classical result involving beta distributions

13 3 Proposition 36 Let a, b, λ > 0, X 0 and Y be two independent random variables such that L X 0, Y = βa + b, λ βa, b 30 Then U = X 0 Y, V = X 0 X 0 Y 3 are independent and L U, V = βλ + b, a βλ, b 32 We now give a realization of Theorem 3, when δ > Let X 0, Y as in Proposition 36 and ξ a random variable independent of X 0, Y and uniformly distributed on [0, ] Consider U and V as defined in 3 From Proposition 35, we have: L X 0, Y ξ + δ X 0 λ+b < = β δ a + b, λ, λ b βa, b The random variables U δ, V δ considered in Theorem 3 and defined by 20 can be expressed in terms of U and V as U δ = U + δ U = f δ U = ϕ /δ U 33 V δ = V [ + δ U ] + δ U V = V δ + δu δ + δu V = ϕ +/δ U V 34 These relations permit to use Proposition 26 and to deduce the law of U δ, V δ conditionally on {ξ + δ X 0 λ+b < } Proposition 37 Let X 0, Y and ξ three random variables such that L X 0, Y, ξ = βa + b, λ βa, b U[0, ] 35, where a, b, λ > 0 Then, for any δ >, L X 0, Y ξ + δ X 0 λ+b < = β δ a + b, λ, λ b βa, b 36 and L U δ, V δ ξ + δ X 0 λ+b < = β δ λ + b, a, a b βλ, b 37 We go on in the direction of conditioning by giving another consequence of relation 22 Indeed, the inequality β δ a, b, cx k δ a, b, c ΓaΓb Γa + b βa, bx, 0 < x < 38 where δ >, c < 0, a, b > 0, allows us to use the rejection method to obtain an almost sure realization of β δ a, b, c distributions

14 4 Proposition 38 Let δ >, a, b, c > 0, X i, ξ i i be a sequence of independent random variables such that L X i, ξ i = βa, b U[0, ] Let T be the random time: T = inf{i ; ξ i + δ X i c < } 39 Then T is finite almost surely and LX T = β δ a, b, c Remark 39 For simplicity we have defined a simple scheme in order to give an almost realization of the β δ a, b, c-distribution whose third parameter is negative Similarly to Proposition 20, it is actually positive to produce a couple X T, Y T such that the two coordinates are independent, LX T = β δ a + b, λ, λ b and LY T = βa, b The proof of Proposition 38 is left to the reader 4 Proofs 4 Proofs of results stated in Section 2 4 Proof of Proposition 2 Proof We only prove Item 2 since Item is obvious Let us start with Item 2 a Let f and g be two bounded Borel functions and [ Y := E f g X + Y ] + X + X Then = c f y + x g x + y + x x a + x a+b e cx y a+b e cy {x>0, y>0} dx dy, where c is the product of the normalizing constants coming from the density functions of K 2 a, b, c and γa + b, c With the change of variables u := y, for x fixed, we get + x = c fu g x + u x a e cx+u u a+b e cu {x>0, y>0} dx du Now we fix u and we make the change of variables v := x + u, then = c fu gv u a+b + u a e cu v a e cv {u>0, v>0} du dv, where c and c are normalizing constants Since T is a bijection and T = T, Item 2 b follows directly 42 Proof of Proposition 23 We begin with two preliminary results, see Lemma 4 and Lemma 42 below Proposition 23 will be a straightforward consequence Let y n n be a sequence of positive number Recall that [y,, y n ] n has been defined in 26 We introduce p n n 0 and q n n 0 defined inductively by p 0 = 0, p = y, p n = p n + y n p n 2, n 2 4 q 0 =, q =, q n = q n + y n q n 2, n 2 42

15 5 Lemma 4 Let x 0 Then for any n : [y,, y n, x] = p n + x p n q n + x q n 43 In particular if x = 0, then [y,, y n, 0] = p n q n Moreover, p n q n p n q n = n n y i, n 44 Proof n+ p n+2 q n p n q n+2 = p n+ q n p n q n+ = n y i, n 45 Let us prove 43 by induction on n It is clear that 43 holds for n =, and [ [y,, y n, x] = y,, y n, y n + x Suppose that 43 is satisfied Using 46 we deduce: [ [y,, y n+, x] = y,, y n, y ] n+ + x = p n + yn+ +x p n q n + yn+ +x q n = p n+ + x p n q n+ + x q n 2 We have p q 0 p 0 q = y Suppose that 44 holds, then ] 46 p n+ q n p n q n+ = p n + y n+ p n q n p n q n + y n+ q n = y n+ p n q n p n q n Relation 45 can be proved similarly Lemma 42 Let u n = p 2n and v n = p 2n+, where p n n 0 and q n n 0 are defined by 4 and q 2n q 2n+ 4 Then u n n 0, respectively v n n 0, is increasing, respectively decreasing and u n < v n p n 2 If y n does not converges to then u n, v n, and [y,, y n, x] converge to the same limit, as q n n Proof Using the definition of p n n 0, q n n 0 and 45, we have 2n+ u n+ u n = p 2n+2 q 2n p 2n q 2n+2 = p 2n+ q 2n p 2n q 2n+ = > 0, q 2n+2 q 2n q 2n+2 q 2n q 2n+2 q 2n y i

16 6 2n+2 y i v n+ v n = < 0 q 2n+3 q 2n+ Then u n is increasing and v n is decreasing Moreover, 44 gives w n = v n u n = p 2n+ q 2n p 2n q 2n+ q 2n q 2n+ = 2 Changing n by n + in 47 leads to: 2n+ y i q 2n q 2n+ > 0 47 w n+ = = 2n+3 y i = q 2n+3 q 2n+2 y 2n+3 q 2n+2 + y 2n+3 q 2n+ y 2n+3 q 2n+2 + y 2n+3 q 2n+ y 2n+2 q 2n+ + y 2n+2 q 2n 2n+ y 2n+2 q 2n+ q 2n + y 2n+2 w n 48 y i Using the fact that q n+ q n = + q n q n y n+ >, we deduce: Inductively we get 2n+3 w n+ < w i=4 y i + y i w n+ < y 2n+3 + y 2n+3 y 2n+2 + y 2n+2 w n 49 Since y n does not converges to, then there exists k > 0, and a subsequence n k such that y nk < k Consequently: ϕ n := {ni 2n+3} +, as n + 4 i 2n+3 y nk It is clear that y nk k implies that ρ with ρ = k < Identity 49 implies + y nk + k that w n+ < w ρ ϕn This prove that w n = 0 Therefore u n n and v n n are two lim n + adjacent sequences and they converge to the same limit Let us define ψx := [y,, y n, x] = p n + x p n q n + x q n, x 0 We have ψ x = q n q n q n + x q n 2 pn p n q n q n Suppose that n is odd, according to item, ψ is increasing Consequently If n is even, we have Therefore [y,, y n, x] and p n q n ψ0 = p n q n ψx p n q n = ψ+ p n q n ψx p n q n converge to the same limit

17 7 Lemma 43 Let Y k k be a sequence of independent and positive random variables such that lim K sup m Then, P lim Y n = + = 0 n P Y m K = 0 40 Proof Starting from the identity { lim Y n = + } = n K n m n {Y m K} We deduce P lim Y n = + n = lim K = lim K lim P n lim n {Y m K} m n m n P Y m K The result follows from 40 Proof of Proposition 23 Let Y i i be a collection of independent random variables satisfying 27 It is clear that 40 holds and therefore Y n does not converges to + as Proposition 23 is a direct consequence of Lemma 42 applied with y n = Y n and x = X 0 43 Proof of Propositions 26 and 27 We begin by showing Proposition 26 in the particular case V = c Proposition 44 Let a, b > 0, δ, c [/δ, ] resp c if δ > resp if δ = Let Z and ξ be two independent random variables such that L Z, ξ = βa, b U[0, ] Then cz L + c Z, ξ[ δ + c Z ] a+b [ ] a+b ξ δ + c Z < = LZ, ξ 4 Proof Let ϕ and ϕ 2 be two bounded Borel functions We denote by cz := E [ϕ + c Z Then = Γa + b ϕ ΓaΓb [0,] 2 cx + c x ϕ 2 ξ [ δ + c Z ] a+b { [ ] a+b< } ξ δ+c Z ϕ 2 u [ δ + c x ] a+b { [ ] a+b< } u δ+c x x a x b du dx cx For a fixed u, the change of variable y = leads to + c x Γa + b cδ a+b = ΓaΓb cb ϕ y ϕ 2 u y a y b [0,] c + cy 2 ]

18 8 c + cy a b { u cδ c+ cy a+b< } du dy c + cy From our assumptions, we have: Now for fixed y, we make the change of cδ cδ cδ a+b, variable t = u we obtain c + cy = Γa + b ΓaΓb δ a b c a Note that if we take ϕ = ϕ 2 = in 42, we get P [0,] 2 ϕ y ϕ 2 t y a y b dt dy 42 ξ [ δ + c Z ] a+b < = δ a b c a 43 Now, we are able to prove Proposition 26 Proof of Proposition 26 Let V be a random variable independent of Z, ξ We set [ V Z 2 := E f + V Z, ξ[ δ + V Z ] ] a+b gv {ξ[δ+v Z] a+b <}, where f and g are Borel bounded functions Then + [ sz 2 = E f + s Z, ξ[ δ + s Z ] a+b 0 gsp V ds {ξ[δ+s Z] a+b <} ] Since V is independent of Z, ξ, Proposition 44 implies that 2 = + 0 where ρs := P Using 43 we deduce: E [f Z, ξ] ρs gsp V ds, ξ [ δ + s Z ] a+b < The result follows from the fact that δv [ ] gv 2 = E [fz, ξ] δ b E δv a Proof of Proposition 27 Let X 0, Y, ξ as in Proposition 27 Recall that U resp Z is defined by 23 resp 22 and V = X 0 + Y Property 24 implies that Z, V +, ξ are independent and 25 holds Using Proposition 26 with V replaced by V + and δ =, we get: L U, V ξ + X0 a+b < = LZ L V ξ + V a < 44 According to 24, LV = γa + b, c We can apply Proposition 25 where a, b, c is replaced by a + b, b, c, we deduce that the right hand-side of 44 equals βa, b K 2 a + b, b, c 44 Proof of Proposition 20 Proof

19 9 Let f and g be two Borel functions Then E fx T gy T {T < } = k E fx k gy k {T =k} Since i, Y i is independent of T and of X i, ξ i, then E fx T gy T {T < } = p k E gy k E fx k {ξk +X k }, a+b k where p = P ξ + X a+b > is the probability of rejection Since X k, Y k, ξ k are iid random variables, then E [ ] fx T gy T {T < } = p k E [gy ] E [ ] fx {ξ+x a+b } k = E [gy ] E [ fx ξ + X a+b } ] Then, Item of Proposition 20 follows from Proposition 25 2 It is clear that the law of U, V conditionally on {T = k} is the one of + Xk + Y k X k, X k + Y k conditionally on {ξ k + X k a+b < } + X k X k + Y k Therefore Item 2 is a direct consequence of Proposition Proof of Proposition 22 We begin with a preliminary result Lemma 45 Consider two random variables W and W such that LW = γa, c and LW = γ, c with a, c > 0 Let W be a positive random variable independent of W, W and f :]0, + [ ]0, + [ a Borel function Then, conditionally on A := {W fw W + W }, the rv s W and W + W fw are independent, W + W fw is γ, c distributed and P W dx A = c a fx a exp cfx P W dx, where c a is the normalizing constant Proof Let ϕ and ϕ 2 be two Borel functions We denote by = E [ ϕ W ϕ 2 W + W fw {W <fw <W +W }] Then = λ ϕ x ϕ 2 y + y fx y a e cy+y {y<fx<y+y } P W dx dydy Taking the change of variable u = y + y for fixed x and y, we get = λ ϕ x ϕ 2 u fx e cu y a {y<fx<u} P W dx dydu = λ ϕ x ϕ 2 u fx e cu fx a {fx<u} P W dx du a Using the change of variable z = u fx for fixed x we obtain = λ ϕ x ϕ 2 z e cz e cfx fx a {z>0} P W dx dz, a

20 20 which ends the proof Proof of Proposition 22 Let ϕ, ϕ 2 : [0, + [ IR be two non-negative Borel functions and = E [ϕ X ϕ 2 Y C ] Using 228 and 229, we can write as = n n where [ X n n = E ϕ X n ϕ 2 X2 n + Z 3 c X n Cn ] and C n = C {τ = n}, ie C n = {X 2 n X n c, X 2 n X n > c, X 2 n X n < c} Since n X n is non-decreasing, C n can be reduced to: C n = {X 2 n X n c < X 2 n X n} { = c X 2n X n < } c X 2n Set W := c X 2n, W := c X 2n X 2 n, W := X n and fx = x, 0 < x < According to 225 and 227 we deduce immediately that W, W and W are independent and Moreover, LW = γα 2 + n, c, LW = γ, c, LW = βa, α + n c X 2n X n = W + W fw Therefore we can use Lemma 45 with a = n + α 2, we get α2+n x n = c ϕ ϕ 2 y x a x α+n x x y α α2 exp { cy + x } {0<x<, y>0} dxdy, where c is constant It is clear that n = c + 0 ϕ 2 y y α α2 e cy dy n where Setting z = x x n = 0 leads directly to x ϕ x x a x α α2 exp { cx x } dx n = + 0 ϕ z z a + z a+α α2+ e cz dz

21 2 42 Proofs of results given in Section 3 Proof of Proposition 3 Let f be a Borel function Let := E[f Z] Then = k δ a, b, c 0 f z z a z b + δ z c dz Setting x = z and observing that + δ x = δ + /δ x gives = k δ a, b, c δ c fx x b x a + /δ x c dx Let 2 := E[fϕ δ Z] Then δx k δ a, b, c f x a x b + δ x c dx + δ x Using the change of variable y = 2 = k δ a, b, c δ c 0 δx + δ x we get fy y a y b + /δ y a b c dy Proof of Corollary 32 We have f δ Z = ϕ /δ Z Then from Item of Proposition 3 we get L Z = β /δ b, a, c Besides, Item 2 of Proposition 3 gives that L f δ Z = β δ b, a, a b c 42 Proof of Theorem 33 In a first step we deal with a deterministic sequence y i i Then, we derive the stochastic case ie the setting of Theorem 33 Let us recall cf 36 and 37 that we have introduced: [y ] = y, and [y,, y n ] = f δ y [y 2,, y n ] n 45 where y i [0, ] for any i Under some additional assumptions, we prove that [y,, y n, x] n converges as n goes to +, for x in [0, ] and the limit does not depend on x, see Propositions 46 and 47 Let us first introduce Y n x := [y,, y n, x], n, x [0, ] 46 g λ,λ 2 z := f δ λ f δ λ 2 z, z [0, ], δ > 0 where λ, λ 2 [0, ] Note that the function g λ,λ 2 is increasing It is clear that Y 2 x = [y, y 2, x] = f δ y f δ y 2 x = g y,y 2 x and Y 2n x = g y,y 2 [y 3,, y 2n, x] Then, reasoning by induction, we have Y 2n x = g y,y 2 g y2n,y 2n x 47

22 22 Proposition 46 Let y i i be a sequence of real numbers valued on ]0, ] such that either or n y 2i lim = 0 48 n y 2i n y 2i+ lim = 0 49 n y 2i Then, for all x [0, ], Y n x converges to a limit which does not depend on x The proof of Proposition 46 is based on Lemma 48 and Proposition 49 below However, in order to handle a specific case, we study the convergence of Y n x under a condition which is different from 48 and 49 Proposition 47 Let y i i be a sequence of real numbers valued on [0, ] If δ and n lim n y 2i δ 2 y 2i + δ y 2i 2 = then Y n x converges to a limit which does not depend on x 2 If 0 δ < and n lim n δ 2 y 2i y 2i + δ y 2i + y 2i y 2i 2 = 0 42 then Y n x converges to a limit which does depend on x Proposition 47 will be proved directly, see below Lemma 48 2 We have Y 2n 0 n is increasing and Y 2n n is decreasing Y 2n 0 Y 2n x Y 2n 422 and Proof Y 2n 0 Y 2n+ x Y 2n 423 Let us prove that Y 2n 0 n is increasing: Y 2n+2 0 = g y,y 2 g y2n,y 2n gy2n+,y 2n+2 0 g y,y 2 g y2n,y 2n 0 = Y 2n 0 We proceed similarly to prove that Y 2n n is decreasing 2 From 47 we have Y 2n x = g y,y 2 g y2n,y 2n x Then 422 is a direct consequence of 47 On the other hand, 423 follows from: and 0 y 2n+ f δ x Y 2n+ x = g y,y 2 g y2n,y 2n y 2n+ f δ x

23 23 Proposition 49 Let y n n be a sequence of real numbers such that lim Y 2n Y 2n 0 = n Then, for all x [0, ], the sequence Y n x converges as n goes to to a limit that does not depend on x Proposition 49 is a straightforward consequence of Lemma 48 conditions 424 holds We now investigate under which Lemma 40 Let y, y 2 ]0, [ and λ = y, y 2 If δ then 0 g λx 2 If 0 < δ <, then Proof We have 0 g λx y y 2 δ 2, x [0, ] δ y 2 y y 2 δ 2, x [0, ] δ y + y y 2 2 g λx = y y 2 δ 2 Nx 2, where Nx = + δ y + δ y y 2 x It is easy to see that if δ then x Nx is increasing, then Nx N0 = + δ y As a consequence relation 425 holds If 0 < δ < then x Nx is decreasing and Nx N = + δ y + y y 2 Then relation 426 holds Lemma 4 Let y, y 2 ]0, [ and λ = y, y 2 then 0 g λx y 2 y, x [0, ] 427 Proof For 0 c <, we set ψ c u := u, u u c We have Then ψ c is increasing and ψ cu = c + u c 2 ψ c u < ψ c =, u [0, [ 429 c As a consequence, That proves 427 for δ y y 2 δ 2 + δ y 2 = y y 2 ψ y δ 2 y 2 y

24 24 Besides, for 0 < δ <, we have where z := y + y y 2 Since z y, then y y 2 δ 2 + δ y + y y 2 2 = y y 2 ψ z δ 2 y y 2 z 2, δ 2 y y 2 + δ y + y y 2 2 y δ2 2 y 2 y y Proof of Proposition 46 Let y n n be a sequence of real numbers in ]0, [ satisfying 48 Proposition 49 tells us that we have only to prove that lim Y 2n Y 2n 0 = 0 Relation 47 n implies that Y 2n Y 2n 0 = g y,y 2 b g y,y 2 a, where b = g y3,y 4 g y2n,y 2n, a = g y3,y 4 g y2n,y 2n We take λ = y, y 2, then Y 2n Y 2n 0 = g λcb a, a < c < b 43 From Lemma 4 we deduce that With iteration we get Y 2n Y 2n 0 y 2 y b a 0 Y 2n Y 2n 0 n y 2i y 2i Thus, if y n n verify 48 then we obtain the desired result Now, suppose that y n n verify 49 We deduce from 37 that Y n+ x = f δ y Ŷ n x, where Ŷ n x := [y 2,, y n+, x] It is clear that if y n n verify 49 then y n+ n satifies 48 and we have only to proceed as above to complete the proof Proof of Proposition 47 We develop a scheme which is similar to the previous one, the goal is to prove that lim n Y 2n Y 2n 0 = 0 If δ, then 43 and Item of Lemma 40 imply that where a and b has been defined by 430 Reasoning by induction, we get Y 2n Y 2n 0 b a y y 2 δ 2 + δ y 2, Y 2n Y 2n 0 n y 2i δ 2 y 2i + δ y 2i 2

25 25 Using 420 we can conclude If 0 < δ <, then Item 2 of Lemma 40 implies that Iterating this scheme leads to Y 2n Y 2n 0 b a Y 2n Y 2n 0 n y y 2 δ 2 + δ y + y y 2 2 y 2i y 2i δ 2 + δ y 2i + y 2i y 2i 2 Relation 42 permits to deduce that Y 2n Y 2n 0 goes to 0 as n Proposition 33 is a direct consequence of Proposition 42 below Proposition 42 Let Y i i be a sequence of independent random variables valued on ]0, [ such that Then LY 2i = LY 2, i 432 LY 2i = LY, i 433 E ln Y i <, i 434 If E ln Y 2 < E ln Y resp E ln Y 2 > E ln Y then Y i i satisfies relation 48 resp relation 49 2 If E ln Y 2 = E ln Y then Y n n satisfies 420 if δ and 42 otherwise Proof Let us introduce We have ln Z n = n n Z n = n Y 2i Y 2i n ln Y 2i n n ln Y 2i Using the law of large numbers, we get, that almost surely lim n n n ln Y 2i = E ln Y 2, lim n n n ln Y 2i = E ln Y As a consequence, if E ln Y 2 < E ln Y then lim ln Z n =, which implies that 48 n holds The case E ln Y 2 > E ln Y can be studied similarly 2 Suppose that E ln Y 2 = E ln Y We will proceed as above We consider two cases δ and 0 < δ < a If δ We define We have ln Z n = n Z n = n n Y 2i n ln Y 2i + n δ 2 Y 2i + δ Y 2i 2 n δ 2 Y 2i ln + δ Y 2i 2

26 26 Using the law of large numbers we get n lim ln Y 2i = E ln Y 2 = E ln Y n n lim n n n On the other hand, we have δ 2 Y 2i ln + δ Y 2i 2 = E ln as δ 2 Y + δ Y 2 as δ 2 Y + δ Y 2 = Y ψ Y δ 2, where ψ c was defined in 428 From 429 we deduce that Thus δ 2 Y + δ Y 2 < Y δ 2 Y E ln Y + E ln + δ Y 2 < 0, which implies that lim n ln Z n = Then, relation 420 holds b If 0 < δ <, the proof is similar 422 Proof of Proposition 37 Proof Let X 0, Y and ξ such that L X 0, Y, ξ = βa + b, λ βa, b U[0, ] Recall that U and V has been defined by 3, and U δ, V δ by relations 33 and 34 We set: Z := V and V := + δ U Then 25 holds and V Z + V Z = V δ, δ [ + V Z ] = + δ X 0 Therefore, we can apply Proposition 26, we get successively: L V δ, + ξ δ U [ + δ X0 ] λ+b < = LV L + ξ δ [ U δ + δ ] λ U < = βλ, b L + ξ δ [ ] λ U δ + δu < Then L V δ, U ξ [ + δ X 0 ] λ+b < ξ [ ] λ = βλ, b L U δ + δu < But, LU = βλ + b, a then, Item 2 of Proposition 35 implies that ξ [ ] λ L U δ + δu < = β /δ λ + b, a, λ Since U δ = ϕ /δ U then Item 2 of Proposition 3 gives [ ] L U δ ξ λ δ + δu < = β δ λ + b, a, a b

27 27 Acknowledgements We sincerely thank Efoevi Angélo Koudou for his discussions during this work References [] Chamayou, JF and Letac, G 99 Explicit stationary distributions for compositions of random functions and products of random matrices J Theoret Probab 4, 3-36 [2] Koudou, E and Vallois, P 202 Independence properties of the Matsumoto-Yor type Bernoulli 8, 9-36 [3] Koudou, E and Vallois, P 20 Which distributions have the Matsumoto-Yor property? Electron Commun Probab 6, [4] Letac, G 2006 Working on Kummer distributions March 30th [5] Letac, G and Seshadri, V 983 A characterization of the generalized inverse gaussian distribution by continued fractions Z Wahrscheinlichkeitstheorie verw Gebiete 62, [6] Letac,G and Wesołowski,J 2000 An independence property for the product of GIG and Gamma laws Ann Probab 28, [7] Matsumoto, H and Yor, M 200 An analogue of Pitman s 2M X theorem for exponential Wiener functional, Part II: the role of the generalized inverse Gaussian laws Nagoya Math J 62, [8] Vallois, P 989 Sur le passage de certaines marches aléatoires au dessus d une hyperbole équilatère Ann Inst H Poincaré Probab Stat

On Kummer s distributions of type two and generalized Beta distributions

On Kummer s distributions of type two and generalized Beta distributions On Kummer s distributions of type two and generalized Beta distributions Marwa Hamza and Pierre Vallois September 8, 205 Université de Lorraine, Institut de Mathématiques Elie Cartan, INRIA-BIGS, CNRS

More information

Independence properties of the Matsumoto-Yor type

Independence properties of the Matsumoto-Yor type Independence properties of the Matsumoto-Yor type A. E. Koudou and P. Vallois Institut Elie Cartan, Laboratoire de Mathématiques, B.P. 239, F-54506 Vandoeuvre-lès-Nancy cedex, Efoevi.Koudou@iecn.u-nancy.fr,

More information

The random continued fractions of Dyson and their extensions. La Sapientia, January 25th, G. Letac, Université Paul Sabatier, Toulouse.

The random continued fractions of Dyson and their extensions. La Sapientia, January 25th, G. Letac, Université Paul Sabatier, Toulouse. The random continued fractions of Dyson and their extensions La Sapientia, January 25th, 2010. G. Letac, Université Paul Sabatier, Toulouse. 1 56 years ago Freeman J. Dyson the Princeton physicist and

More information

Convergence at first and second order of some approximations of stochastic integrals

Convergence at first and second order of some approximations of stochastic integrals Convergence at first and second order of some approximations of stochastic integrals Bérard Bergery Blandine, Vallois Pierre IECN, Nancy-Université, CNRS, INRIA, Boulevard des Aiguillettes B.P. 239 F-5456

More information

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past.

Lecture 5. If we interpret the index n 0 as time, then a Markov chain simply requires that the future depends only on the present and not on the past. 1 Markov chain: definition Lecture 5 Definition 1.1 Markov chain] A sequence of random variables (X n ) n 0 taking values in a measurable state space (S, S) is called a (discrete time) Markov chain, if

More information

Lecture 17: The Exponential and Some Related Distributions

Lecture 17: The Exponential and Some Related Distributions Lecture 7: The Exponential and Some Related Distributions. Definition Definition: A continuous random variable X is said to have the exponential distribution with parameter if the density of X is e x if

More information

Brownian Motion and Stochastic Calculus

Brownian Motion and Stochastic Calculus ETHZ, Spring 17 D-MATH Prof Dr Martin Larsson Coordinator A Sepúlveda Brownian Motion and Stochastic Calculus Exercise sheet 6 Please hand in your solutions during exercise class or in your assistant s

More information

A.Piunovskiy. University of Liverpool Fluid Approximation to Controlled Markov. Chains with Local Transitions. A.Piunovskiy.

A.Piunovskiy. University of Liverpool Fluid Approximation to Controlled Markov. Chains with Local Transitions. A.Piunovskiy. University of Liverpool piunov@liv.ac.uk The Markov Decision Process under consideration is defined by the following elements X = {0, 1, 2,...} is the state space; A is the action space (Borel); p(z x,

More information

Math Stochastic Processes & Simulation. Davar Khoshnevisan University of Utah

Math Stochastic Processes & Simulation. Davar Khoshnevisan University of Utah Math 5040 1 Stochastic Processes & Simulation Davar Khoshnevisan University of Utah Module 1 Generation of Discrete Random Variables Just about every programming language and environment has a randomnumber

More information

A FEW REMARKS ON THE SUPREMUM OF STABLE PROCESSES P. PATIE

A FEW REMARKS ON THE SUPREMUM OF STABLE PROCESSES P. PATIE A FEW REMARKS ON THE SUPREMUM OF STABLE PROCESSES P. PATIE Abstract. In [1], Bernyk et al. offer a power series and an integral representation for the density of S 1, the maximum up to time 1, of a regular

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4. (*. Let independent variables X,..., X n have U(0, distribution. Show that for every x (0,, we have P ( X ( < x and P ( X (n > x as n. Ex. 4.2 (**. By using induction or otherwise,

More information

Sobolev Spaces. Chapter 10

Sobolev Spaces. Chapter 10 Chapter 1 Sobolev Spaces We now define spaces H 1,p (R n ), known as Sobolev spaces. For u to belong to H 1,p (R n ), we require that u L p (R n ) and that u have weak derivatives of first order in L p

More information

THE INVERSE FUNCTION THEOREM

THE INVERSE FUNCTION THEOREM THE INVERSE FUNCTION THEOREM W. PATRICK HOOPER The implicit function theorem is the following result: Theorem 1. Let f be a C 1 function from a neighborhood of a point a R n into R n. Suppose A = Df(a)

More information

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities

PCMI Introduction to Random Matrix Theory Handout # REVIEW OF PROBABILITY THEORY. Chapter 1 - Events and Their Probabilities PCMI 207 - Introduction to Random Matrix Theory Handout #2 06.27.207 REVIEW OF PROBABILITY THEORY Chapter - Events and Their Probabilities.. Events as Sets Definition (σ-field). A collection F of subsets

More information

A slow transient diusion in a drifted stable potential

A slow transient diusion in a drifted stable potential A slow transient diusion in a drifted stable potential Arvind Singh Université Paris VI Abstract We consider a diusion process X in a random potential V of the form V x = S x δx, where δ is a positive

More information

Chapter 3 Second Order Linear Equations

Chapter 3 Second Order Linear Equations Partial Differential Equations (Math 3303) A Ë@ Õæ Aë áöß @. X. @ 2015-2014 ú GA JË@ É Ë@ Chapter 3 Second Order Linear Equations Second-order partial differential equations for an known function u(x,

More information

A Note on the Central Limit Theorem for a Class of Linear Systems 1

A Note on the Central Limit Theorem for a Class of Linear Systems 1 A Note on the Central Limit Theorem for a Class of Linear Systems 1 Contents Yukio Nagahata Department of Mathematics, Graduate School of Engineering Science Osaka University, Toyonaka 560-8531, Japan.

More information

Laplace s Equation. Chapter Mean Value Formulas

Laplace s Equation. Chapter Mean Value Formulas Chapter 1 Laplace s Equation Let be an open set in R n. A function u C 2 () is called harmonic in if it satisfies Laplace s equation n (1.1) u := D ii u = 0 in. i=1 A function u C 2 () is called subharmonic

More information

On the quantiles of the Brownian motion and their hitting times.

On the quantiles of the Brownian motion and their hitting times. On the quantiles of the Brownian motion and their hitting times. Angelos Dassios London School of Economics May 23 Abstract The distribution of the α-quantile of a Brownian motion on an interval [, t]

More information

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2

n! (k 1)!(n k)! = F (X) U(0, 1). (x, y) = n(n 1) ( F (y) F (x) ) n 2 Order statistics Ex. 4.1 (*. Let independent variables X 1,..., X n have U(0, 1 distribution. Show that for every x (0, 1, we have P ( X (1 < x 1 and P ( X (n > x 1 as n. Ex. 4.2 (**. By using induction

More information

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales.

Lecture 2. We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. Lecture 2 1 Martingales We now introduce some fundamental tools in martingale theory, which are useful in controlling the fluctuation of martingales. 1.1 Doob s inequality We have the following maximal

More information

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES

UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES Applied Probability Trust 7 May 22 UPPER DEVIATIONS FOR SPLIT TIMES OF BRANCHING PROCESSES HAMED AMINI, AND MARC LELARGE, ENS-INRIA Abstract Upper deviation results are obtained for the split time of a

More information

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R.

Ergodic Theorems. Samy Tindel. Purdue University. Probability Theory 2 - MA 539. Taken from Probability: Theory and examples by R. Ergodic Theorems Samy Tindel Purdue University Probability Theory 2 - MA 539 Taken from Probability: Theory and examples by R. Durrett Samy T. Ergodic theorems Probability Theory 1 / 92 Outline 1 Definitions

More information

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E,

Riemann integral and volume are generalized to unbounded functions and sets. is an admissible set, and its volume is a Riemann integral, 1l E, Tel Aviv University, 26 Analysis-III 9 9 Improper integral 9a Introduction....................... 9 9b Positive integrands................... 9c Special functions gamma and beta......... 4 9d Change of

More information

Potentials of stable processes

Potentials of stable processes Potentials of stable processes A. E. Kyprianou A. R. Watson 5th December 23 arxiv:32.222v [math.pr] 4 Dec 23 Abstract. For a stable process, we give an explicit formula for the potential measure of the

More information

Entropy dimensions and a class of constructive examples

Entropy dimensions and a class of constructive examples Entropy dimensions and a class of constructive examples Sébastien Ferenczi Institut de Mathématiques de Luminy CNRS - UMR 6206 Case 907, 63 av. de Luminy F3288 Marseille Cedex 9 (France) and Fédération

More information

Chapter 7. Markov chain background. 7.1 Finite state space

Chapter 7. Markov chain background. 7.1 Finite state space Chapter 7 Markov chain background A stochastic process is a family of random variables {X t } indexed by a varaible t which we will think of as time. Time can be discrete or continuous. We will only consider

More information

APPLICATIONS OF DIFFERENTIABILITY IN R n.

APPLICATIONS OF DIFFERENTIABILITY IN R n. APPLICATIONS OF DIFFERENTIABILITY IN R n. MATANIA BEN-ARTZI April 2015 Functions here are defined on a subset T R n and take values in R m, where m can be smaller, equal or greater than n. The (open) ball

More information

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan

Monte-Carlo MMD-MA, Université Paris-Dauphine. Xiaolu Tan Monte-Carlo MMD-MA, Université Paris-Dauphine Xiaolu Tan tan@ceremade.dauphine.fr Septembre 2015 Contents 1 Introduction 1 1.1 The principle.................................. 1 1.2 The error analysis

More information

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER

ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER ON THE FIRST TIME THAT AN ITO PROCESS HITS A BARRIER GERARDO HERNANDEZ-DEL-VALLE arxiv:1209.2411v1 [math.pr] 10 Sep 2012 Abstract. This work deals with first hitting time densities of Ito processes whose

More information

Solutions to Homework # 1 Math 381, Rice University, Fall (x y) y 2 = 0. Part (b). We make a convenient change of variables:

Solutions to Homework # 1 Math 381, Rice University, Fall (x y) y 2 = 0. Part (b). We make a convenient change of variables: Hildebrand, Ch. 8, # : Part (a). We compute Subtracting, we eliminate f... Solutions to Homework # Math 38, Rice University, Fall 2003 x = f(x + y) + (x y)f (x + y) y = f(x + y) + (x y)f (x + y). x = 2f(x

More information

) ) = γ. and P ( X. B(a, b) = Γ(a)Γ(b) Γ(a + b) ; (x + y, ) I J}. Then, (rx) a 1 (ry) b 1 e (x+y)r r 2 dxdy Γ(a)Γ(b) D

) ) = γ. and P ( X. B(a, b) = Γ(a)Γ(b) Γ(a + b) ; (x + y, ) I J}. Then, (rx) a 1 (ry) b 1 e (x+y)r r 2 dxdy Γ(a)Γ(b) D 3 Independent Random Variables II: Examples 3.1 Some functions of independent r.v. s. Let X 1, X 2,... be independent r.v. s with the known distributions. Then, one can compute the distribution of a r.v.

More information
