Marshall-Olkin Bivariate Exponential Distribution: Generalisations and Applications


CHAPTER 6

Marshall-Olkin Bivariate Exponential Distribution: Generalisations and Applications

6.1 Introduction

Exponential distributions were introduced as a simple model for the statistical analysis of lifetimes. The bivariate exponential distribution and the multivariate extension of exponential distributions due to Marshall and Olkin (1967) have received considerable attention, both in describing the statistical dependence of components in a two-component system and in developing statistical inference procedures. The moment generating function of the bivariate generalized exponential distribution is discussed by Ashour et al. (2009). The multivariate generalized exponential distribution is studied by Mu and Wang (2010). Hanagal (1995) studied testing reliability in a bivariate exponential stress-strength model. A bivariate Marshall and Olkin exponential minification process is discussed by Ristic et al. (2008). Reliability of a stress-strength model with a bivariate exponential distribution is discussed by Mokhlis (2006).

Marshall and Olkin (1997) introduced a method of obtaining an extended family of distributions by including one more parameter. For a random variable with distribution function F(x) and survival function F̄(x), we can obtain a new family of distribution functions, called the univariate Marshall-Olkin family, having cumulative distribution function

G(x) = F(x) / (α + (1-α)F(x)), -∞ < x < ∞, 0 < α < ∞.

The corresponding survival function is

Ḡ(x) = αF̄(x) / (1 - (1-α)F̄(x)), -∞ < x < ∞, 0 < α < ∞.

This new family involves an additional parameter α. In the bivariate case, if (X, Y) is a random vector with joint survival function F̄(x, y), then

Ḡ(x, y) = αF̄(x, y) / (1 - (1-α)F̄(x, y)), -∞ < x, y < ∞, 0 < α < ∞,

constitutes the Marshall-Olkin bivariate family of distributions. The new parameter α adds flexibility to the family of distributions and influences its reliability properties.

Autoregressive models are developed with the idea that the present value of the series, X_t, can be explained as a function of the past values X_{t-1}, X_{t-2}, ..., X_{t-p}, where p determines the number of steps into the past needed to forecast the current value. A first order autoregressive time series model with exponential stationary marginal distribution was developed by Gaver and Lewis (1980). Jose et al. (2011) introduced and studied a Marshall-Olkin bivariate Weibull process.

In this chapter we discuss three different structures of minification processes and develop minification processes with the extended Marshall-Olkin bivariate exponential distribution as marginal. First we consider a bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by

X_{1n} = min(p^{-1} X_{1,n-1}, (1-p)^{-1} ε_{1n}),
X_{2n} = min(p^{-1} X_{2,n-1}, (1-p)^{-1} ε_{2n}),

where {(ε_{1n}, ε_{2n})} is a sequence of i.i.d. nonnegative random vectors, (X_{10}, X_{20}) and {(ε_{1i}, ε_{2i}), i ≥ 1} are independent random vectors, and 0 < p < 1.

Next we consider a bivariate autoregressive minification process {(Y_{1n}, Y_{2n})} given by

(Y_{1n}, Y_{2n}) = (ε_{1n}, ε_{2n}) w.p. α,
(Y_{1n}, Y_{2n}) = (min(Y_{1,n-1}, ε_{1n}), min(Y_{2,n-1}, ε_{2n})) w.p. 1-α,

where 0 ≤ α ≤ 1.

Finally we consider a bivariate autoregressive minification process {(X_n, Y_n)} given by

(X_n, Y_n) = (ε_n, η_n) w.p. p,
(X_n, Y_n) = (min(X_{n-1}, ε_n), min(Y_{n-1}, η_n)) w.p. 1-p,

where 0 ≤ p ≤ 1.

This chapter is arranged as follows. In Section 6.2, the Marshall-Olkin bivariate exponential distribution and its properties are discussed. The extended Marshall-Olkin bivariate exponential distribution is introduced and studied in Section 6.3. Multivariate extensions of the models are given in Section 6.4. Conclusions are given in Section 6.5.

6.2 Marshall-Olkin bivariate exponential (MOBVE) distribution

The Marshall-Olkin bivariate exponential distribution with parameters λ_1, λ_2, λ_{12} is defined by the survival function

F̄(x_1, x_2) = e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}, x_1, x_2 > 0, (6.2.1)

and has univariate exponential marginals with survival functions given by

F̄(x_1) = e^{-(λ_1 + λ_{12}) x_1}, F̄(x_2) = e^{-(λ_2 + λ_{12}) x_2}.

The Marshall and Olkin (1967) fatal shock model assumes that the components of a two-component system die after receiving a shock which is always fatal. Independent Poisson processes S_1(t, λ_1), S_2(t, λ_2), S_3(t, λ_{12}) govern the occurrence of shocks: events in the process S_1(t, λ_1) are shocks to component 1, events in the process S_2(t, λ_2) are shocks to component 2, and events in the process S_3(t, λ_{12}) are shocks to both components. The joint survival function of the lifetimes (X_1, X_2) of components 1 and 2 is

F̄(x_1, x_2) = P(X_1 > x_1, X_2 > x_2)
= P{S_1(x_1, λ_1) = 0, S_2(x_2, λ_2) = 0, S_3(max(x_1, x_2), λ_{12}) = 0}
= e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}, x_1, x_2 > 0.

The joint density corresponding to (6.2.1) is

f(x_1, x_2) = λ_1 (λ_2 + λ_{12}) F̄(x_1, x_2), x_1 < x_2;
f(x_1, x_2) = λ_2 (λ_1 + λ_{12}) F̄(x_1, x_2), x_2 < x_1; (6.2.2)
f(x_1, x_2) = λ_{12} F̄(x_1, x_1), x_1 = x_2 > 0.
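The fatal shock construction translates directly into a sampler: draw the three first shock times as independent exponentials and take componentwise minima. The following Python sketch (not part of the original chapter; the function names are ours) checks the marginal means against 1/(λ_1+λ_{12}) and 1/(λ_2+λ_{12}).

```python
import random

def rmobve(lam1, lam2, lam12, rng):
    """One draw from MOBVE(lam1, lam2, lam12) via the fatal shock model:
    component 1 dies at the first shock from S1 or S3, component 2 at the
    first shock from S2 or S3."""
    e1 = rng.expovariate(lam1)    # first event of S1(t, lam1)
    e2 = rng.expovariate(lam2)    # first event of S2(t, lam2)
    e3 = rng.expovariate(lam12)   # first event of the common process S3(t, lam12)
    return min(e1, e3), min(e2, e3)

rng = random.Random(1)
sample = [rmobve(0.5, 1.0, 1.5, rng) for _ in range(100_000)]
m1 = sum(x for x, _ in sample) / len(sample)   # theory: 1/(0.5 + 1.5) = 0.5
m2 = sum(y for _, y in sample) / len(sample)   # theory: 1/(1.0 + 1.5) = 0.4
```

Since min(e1, e3) and min(e2, e3) share the common shock time e3, the construction also produces the singular component of the distribution on the diagonal x_1 = x_2.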

6.2.1 Minification Processes with MOBVE distribution

Model 1

Consider a bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by

X_{1n} = min(p^{-1} X_{1,n-1}, (1-p)^{-1} ε_{1n}),
X_{2n} = min(p^{-1} X_{2,n-1}, (1-p)^{-1} ε_{2n}),

where {(ε_{1n}, ε_{2n})} is a sequence of i.i.d. nonnegative random vectors, (ε_{1n}, ε_{2n}) and (X_{1m}, X_{2m}) are independent random vectors for all m < n, and 0 < p < 1.

Theorem 6.2.1. A bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by Model 1 is a strictly stationary Markov process with MOBVE(λ_1, λ_2, λ_{12}) marginal distribution if and only if (ε_{1n}, ε_{2n}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution and (X_{10}, X_{20}) =d (ε_{11}, ε_{21}).

Proof. First assume that the process {(X_{1n}, X_{2n})} is a strictly stationary process with MOBVE(λ_1, λ_2, λ_{12}) distribution. Let F̄(x_1, x_2) be the survival function of the random vector (X_{1n}, X_{2n}) and Ḡ(x_1, x_2) be the survival function of the random vector (ε_{1n}, ε_{2n}). Then it follows from Model 1 that

F̄(x_1, x_2) = P[X_{1n} > x_1, X_{2n} > x_2] = F̄(p x_1, p x_2) Ḡ((1-p) x_1, (1-p) x_2).

Hence we have

Ḡ((1-p) x_1, (1-p) x_2) = F̄(x_1, x_2) / F̄(p x_1, p x_2) = e^{-λ_1 (1-p) x_1 - λ_2 (1-p) x_2 - λ_{12} (1-p) max(x_1, x_2)}.

This implies that the random vector (ε_{1n}, ε_{2n}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution.

Conversely, assume that (ε_{1n}, ε_{2n}) follows MOBVE(λ_1, λ_2, λ_{12}) and (X_{10}, X_{20}) =d (ε_{11}, ε_{21}). Let F̄_n(x_1, x_2) be the survival function of the random vector (X_{1n}, X_{2n}). Then for n = 1 we have

F̄_1(x_1, x_2) = F̄_0(p x_1, p x_2) Ḡ((1-p) x_1, (1-p) x_2) = e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)},

which implies that (X_{11}, X_{21}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution. Suppose now that (X_{1i}, X_{2i}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution for i = 1, 2, ..., n-1. Then

F̄_n(x_1, x_2) = F̄_{n-1}(p x_1, p x_2) Ḡ((1-p) x_1, (1-p) x_2) = e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)},

i.e. (X_{1n}, X_{2n}) =d MOBVE(λ_1, λ_2, λ_{12}). Thus (X_{1n}, X_{2n}) =d (X_{10}, X_{20}) for every n, and since the process {(X_{1n}, X_{2n})} is a Markov process, it follows that the process {(X_{1n}, X_{2n})} is strictly stationary.

Properties of the Process

First we consider the joint survival function of the random vectors (X_{1,n+h}, X_{2,n+h}) and (X_{1n}, X_{2n}). From Model 1 the joint survival function is given by

S_h(x_1, x_2, z, v) = P(X_{1,n+h} > x_1, X_{2,n+h} > x_2, X_{1n} > z, X_{2n} > v)
= S_{h-1}(p x_1, p x_2, z, v) Ḡ((1-p) x_1, (1-p) x_2)
= S_0(p^h x_1, p^h x_2, z, v) ∏_{i=1}^{h} Ḡ(p^{h-i} (1-p) x_1, p^{h-i} (1-p) x_2)
= F̄(max(p^h x_1, z), max(p^h x_2, v)) · F̄(x_1, x_2) / F̄(p^h x_1, p^h x_2),

where the last step uses Ḡ(p^{h-i}(1-p) x_1, p^{h-i}(1-p) x_2) = F̄(p^{h-i} x_1, p^{h-i} x_2) / F̄(p^{h-i+1} x_1, p^{h-i+1} x_2), so that the product telescopes. We can see that the joint survival function has an absolutely continuous component for z > p^h x_1, v > p^h x_2, z ≠ v and x_1 ≠ x_2.

We now consider the autocovariance structure of the Marshall-Olkin bivariate exponential minification process. Let us first consider the autocovariance function of the random

variables X_{1,n+1} and X_{1n}. From equation (6.2.1) we obtain

P(X_{1,n+1} > x_2 | X_{1n} = x_1) = e^{-(λ_1+λ_{12})(1-p) x_2}, x_1 > p x_2,
P(X_{1,n+1} > x_2 | X_{1n} = x_1) = 0, x_1 ≤ p x_2.

Then the conditional probability density function (of the absolutely continuous part) is

d/dx_2 P(X_{1,n+1} ≤ x_2 | X_{1n} = x_1) = (λ_1+λ_{12})(1-p) e^{-(λ_1+λ_{12})(1-p) x_2}, x_1 > p x_2, and 0 for x_1 ≤ p x_2. (6.2.3)

Also, we have

P(X_{1,n+1} = p^{-1} X_{1n} | X_{1n} = x_1) = P(ε_{1,n+1} ≥ p^{-1}(1-p) x_1) = e^{-(λ_1+λ_{12})(1-p) x_1 / p}. (6.2.4)

Now, using equations (6.2.3) and (6.2.4), the conditional expectation is obtained as

E(X_{1,n+1} | X_{1n} = x_1) = (1 - e^{-(λ_1+λ_{12})(1-p) x_1 / p}) / ((λ_1+λ_{12})(1-p)).

Using the fact that E(X_{1,n+1} X_{1n}) = E(X_{1n} E(X_{1,n+1} | X_{1n})), we get

E(X_{1,n+1} X_{1n}) = (1 - p^2) / ((λ_1+λ_{12})^2 (1-p)) = (1+p) / (λ_1+λ_{12})^2,

so that

Cov(X_{1,n+1}, X_{1n}) = p / (λ_1+λ_{12})^2.

Similarly, we have Cov(X_{2,n+1}, X_{2n}) = p / (λ_2+λ_{12})^2.

Let us now consider the autocovariance function of the random variables X_{1,n+1} and X_{2n}. We need the joint survival function P(X_{1,n+1} > x_1, X_{2n} > x_2). It can be derived as

P(X_{1,n+1} > x_1, X_{2n} > x_2) = P(X_{1n} > p x_1, X_{2n} > x_2) P(ε_{1,n+1} > (1-p) x_1)
= e^{-(λ_1 + λ_{12}(1-p)) x_1 - λ_2 x_2 - λ_{12} max(p x_1, x_2)}.
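The lag-1 autocovariance p/(λ_1+λ_{12})^2 derived above can be cross-checked numerically by simulating a long Model 1 path and comparing the empirical lag-1 covariance of {X_{1n}} with the closed form. A Python sketch (our own illustration, not part of the chapter):

```python
import random

def rmobve(lam1, lam2, lam12, rng):
    # MOBVE draw via the fatal shock model of Section 6.2
    e1, e2, e3 = (rng.expovariate(l) for l in (lam1, lam2, lam12))
    return min(e1, e3), min(e2, e3)

p, lam1, lam2, lam12 = 0.6, 0.5, 1.0, 1.5
rng = random.Random(3)
x1, x2 = rmobve(lam1, lam2, lam12, rng)   # (X_10, X_20) =d (eps_11, eps_21)
xs = [x1]
for _ in range(400_000):
    e1, e2 = rmobve(lam1, lam2, lam12, rng)
    x1 = min(x1 / p, e1 / (1 - p))        # X_1n = min(p^-1 X_1,n-1, (1-p)^-1 eps_1n)
    x2 = min(x2 / p, e2 / (1 - p))
    xs.append(x1)

m = sum(xs) / len(xs)                     # stationary mean, theory: 1/(lam1+lam12) = 0.5
lag1 = sum((xs[i] - m) * (xs[i + 1] - m) for i in range(len(xs) - 1)) / (len(xs) - 1)
theory = p / (lam1 + lam12) ** 2          # = 0.6 / 4 = 0.15
```

With these parameter values the empirical lag-1 covariance settles near 0.15, in agreement with the formula.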

Then, similarly to the derivation of Cov(X_{1,n+1}, X_{1n}), we can obtain the cross covariances. The autocovariance matrix at lag 1 of the Marshall-Olkin bivariate exponential minification process is

C = ( p/(λ_1+λ_{12})^2 , pλ_{12} / ((λ_1+λ_{12})(λ_2+λ_{12})(λ_1+λ_{12}+λ_2 p)) ;
      pλ_{12} / ((λ_1+λ_{12})(λ_2+λ_{12})(λ_2+λ_{12}+λ_1 p)) , p/(λ_2+λ_{12})^2 ).

Estimation of the Parameters

In this section we consider the problem of estimating the parameters p, λ_1, λ_2 and λ_{12}. Let (X_0, Y_0), (X_1, Y_1), ..., (X_n, Y_n) be a sample of size n+1. We consider first the estimation of the parameter p. Easy calculations show that P(X_{n+1} > X_n) = (2-p)^{-1} and P(Y_{n+1} > Y_n) = (2-p)^{-1}. Let U_i = I(X_{i+1} > X_i) and V_i = I(Y_{i+1} > Y_i). Since the process {(X_n, Y_n)} is ergodic, the arithmetic means

Ū_n = (1/n) ∑_{i=0}^{n-1} U_i and V̄_n = (1/n) ∑_{i=0}^{n-1} V_i

are strongly consistent estimators of (2-p)^{-1}. This implies that the estimators

p̂_{1n} = 2 - (Ū_n)^{-1} and p̂_{2n} = 2 - (V̄_n)^{-1}

are strongly consistent estimators of p.

Now we consider the estimation of the parameters λ_1, λ_2 and λ_{12}. Since

E(X_n) = (λ_1+λ_{12})^{-1}, E(Y_n) = (λ_2+λ_{12})^{-1} and
E(X_n Y_n) = (1/(λ_1+λ_2+λ_{12})) (1/(λ_1+λ_{12}) + 1/(λ_2+λ_{12})),

we can take the estimates of the parameters λ_1, λ_2 and λ_{12} as the solutions of the system of equations

(λ̂_1 + λ̂_{12})^{-1} = (1/(n+1)) ∑_{i=0}^{n} X_i,
(λ̂_2 + λ̂_{12})^{-1} = (1/(n+1)) ∑_{i=0}^{n} Y_i,
(1/(λ̂_1+λ̂_2+λ̂_{12})) (1/(λ̂_1+λ̂_{12}) + 1/(λ̂_2+λ̂_{12})) = (1/(n+1)) ∑_{i=0}^{n} X_i Y_i.
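The moment equations can in fact be solved in closed form: writing a = (sample mean of X)^{-1} and b = (sample mean of Y)^{-1} for the first two equations, the third gives λ̂_1 + λ̂_2 + λ̂_{12} = (X̄ + Ȳ) / (sample mean of XY), and since a + b = λ̂_1 + λ̂_2 + 2λ̂_{12}, we get λ̂_{12} by subtraction. A Python sketch of the estimators applied to a simulated Model 1 path (our own illustration; the chapter gives only the equations):

```python
import random

def rmobve(lam1, lam2, lam12, rng):
    # MOBVE draw via the fatal shock model of Section 6.2
    e1, e2, e3 = (rng.expovariate(l) for l in (lam1, lam2, lam12))
    return min(e1, e3), min(e2, e3)

def simulate(n, p, lam1, lam2, lam12, seed=11):
    rng = random.Random(seed)
    x, y = rmobve(lam1, lam2, lam12, rng)
    path = [(x, y)]
    for _ in range(n):
        e1, e2 = rmobve(lam1, lam2, lam12, rng)
        x, y = min(x / p, e1 / (1 - p)), min(y / p, e2 / (1 - p))
        path.append((x, y))
    return path

def estimate(path):
    n1 = len(path)
    mx = sum(x for x, _ in path) / n1          # -> 1/(lam1 + lam12)
    my = sum(y for _, y in path) / n1          # -> 1/(lam2 + lam12)
    mxy = sum(x * y for x, y in path) / n1     # -> (mx + my)/(lam1 + lam2 + lam12)
    u = sum(path[i + 1][0] > path[i][0] for i in range(n1 - 1)) / (n1 - 1)
    p_hat = 2 - 1 / u                          # since P(X_{n+1} > X_n) = 1/(2 - p)
    a, b = 1 / mx, 1 / my                      # a = lam1 + lam12, b = lam2 + lam12
    lam12_hat = a + b - (mx + my) / mxy        # closed-form solution of the system
    return p_hat, a - lam12_hat, b - lam12_hat, lam12_hat

p_hat, l1, l2, l12 = estimate(simulate(200_000, 0.6, 0.5, 1.0, 1.5))
```

With the true values (p, λ_1, λ_2, λ_{12}) = (0.6, 0.5, 1.0, 1.5), the estimates recover the parameters to within sampling error.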

Figure 6.1: The simulated sample path for various values of n and p when λ_1 = 0.5, λ_2 = 1, λ_{12} = 1.5.

Sample Path Properties of MOBVE Process

In order to study the behaviour of the processes, we simulate the sample paths for various values of n and p. In particular we take λ_1 = 0.5, λ_2 = 1 and λ_{12} = 1.5; the paths are given in Figure 6.1. In Fig. 1a, Fig. 1b and Fig. 1c we take n = 200 and p = 0.6. In Fig. 2a, Fig. 2b and Fig. 2c we take n = 300 and p = 0.8. In Fig. 3a, Fig. 3b and Fig. 3c we take n = 400.

Determination of Reliability

Mukherjee and Maiti (2005) discussed the determination of reliability with respect to the MOBVE distribution. They consider the survival function given in (6.2.1) and take Y (strength) as a nonnegative random variable following the exponential distribution with survival function

Ḡ(y) = e^{-y}, 0 < y < ∞.

When the stress components are in series, the reliability is given by

R = (λ_1 + λ_2 + λ_{12}) / (1 + λ_1 + λ_2 + λ_{12}).

When the stress components are in parallel, the reliability is given by

R = 1 - 1/(1 + λ_1 + λ_{12}) - 1/(1 + λ_2 + λ_{12}) + 1/(1 + λ_1 + λ_2 + λ_{12}).

6.3 Extended Marshall-Olkin bivariate exponential (EMOBVE) model

Here we construct the new probability model by applying the technique given by Marshall and Olkin (1997). If F̄(x_1, x_2) is the MOBVE survival function of a bivariate random vector (X_1, X_2), then the member of the Marshall-Olkin family with the additional parameter α is called the extended Marshall-Olkin bivariate exponential distribution and has the new survival function given by

Ḡ(x_1, x_2) = α e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)} / (1 - (1-α) e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}), α > 0, x_1, x_2 > 0.

Theorem 6.3.1. Let {(X_{1n}, X_{2n}), n ≥ 1} be a sequence of i.i.d. random vectors with common survival function F̄(x_1, x_2). Let N be a random variable with a geometric(α) distribution, and suppose that N and (X_{1i}, X_{2i}) are independent for all i ≥ 1. Define U_N = min_{1≤i≤N} X_{1i} and V_N = min_{1≤i≤N} X_{2i}. Then the random vector (U_N, V_N) is distributed as EMOBVE(α, λ_1, λ_2, λ_{12}) if and only if (X_{1i}, X_{2i}) has the MOBVE(λ_1, λ_2, λ_{12}) distribution.

Proof. Let S(x_1, x_2) be the survival function of (U_N, V_N). By definition,

S(x_1, x_2) = P(U_N > x_1, V_N > x_2) = ∑_{n=1}^{∞} [F̄(x_1, x_2)]^n (1-α)^{n-1} α
= α F̄(x_1, x_2) / (1 - (1-α) F̄(x_1, x_2))
= α e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)} / (1 - (1-α) e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}),

which is the EMOBVE(α, λ_1, λ_2, λ_{12}) survival function. The converse follows easily.
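The geometric-minimum characterisation above is easy to verify by simulation: draw N ~ geometric(α), take the componentwise minima of N MOBVE vectors, and compare the empirical joint survival probability with the EMOBVE formula. A Python sketch (ours; it assumes 0 < α ≤ 1 so that N is a proper geometric variable):

```python
import math
import random

def rmobve(lam1, lam2, lam12, rng):
    # MOBVE draw via the fatal shock model
    e1, e2, e3 = (rng.expovariate(l) for l in (lam1, lam2, lam12))
    return min(e1, e3), min(e2, e3)

def geometric_min(alpha, lam1, lam2, lam12, rng):
    # N ~ geometric(alpha) on {1, 2, ...}; return (U_N, V_N)
    u, v = rmobve(lam1, lam2, lam12, rng)
    while rng.random() > alpha:            # continue with probability 1 - alpha
        x, y = rmobve(lam1, lam2, lam12, rng)
        u, v = min(u, x), min(v, y)
    return u, v

alpha, lam1, lam2, lam12 = 0.4, 0.5, 1.0, 1.5
rng = random.Random(5)
draws = [geometric_min(alpha, lam1, lam2, lam12, rng) for _ in range(200_000)]

def emobve_surv(x1, x2):
    f = math.exp(-lam1 * x1 - lam2 * x2 - lam12 * max(x1, x2))
    return alpha * f / (1 - (1 - alpha) * f)

emp = sum(u > 0.3 and v > 0.5 for u, v in draws) / len(draws)
```

The empirical survival probability at (0.3, 0.5) agrees with emobve_surv(0.3, 0.5) to within Monte Carlo error.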

Model 1

Consider a bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by

X_{1n} = min(p^{-1} X_{1,n-1}, (1-p)^{-1} ε_{1n}),
X_{2n} = min(p^{-1} X_{2,n-1}, (1-p)^{-1} ε_{2n}),

where {(ε_{1n}, ε_{2n})} is a sequence of i.i.d. nonnegative random vectors, (X_{10}, X_{20}) and {(ε_{1i}, ε_{2i}), i ≥ 1} are independent random vectors, and 0 < p < 1.

Theorem 6.3.2. A bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} has the extended Marshall-Olkin bivariate exponential (EMOBVE) stationary marginal distribution if and only if {(ε_{1n}, ε_{2n})} has the Marshall-Olkin bivariate exponential (MOBVE) distribution.

Proof. Let F̄(x_1, x_2) be the survival function of the random vector (X_{1n}, X_{2n}) and Ḡ(x_1, x_2) be the survival function of the random vector (ε_{1n}, ε_{2n}). Then

F̄(x_1, x_2) = P[X_{1n} > x_1, X_{2n} > x_2] = F̄(p x_1, p x_2) Ḡ((1-p) x_1, (1-p) x_2).

Hence we have

Ḡ((1-p) x_1, (1-p) x_2) = F̄(x_1, x_2) / F̄(p x_1, p x_2) = e^{-λ_1 (1-p) x_1 - λ_2 (1-p) x_2 - λ_{12} (1-p) max(x_1, x_2)}.

This implies that {(ε_{1n}, ε_{2n})} has the MOBVE(λ_1, λ_2, λ_{12}) distribution. Conversely, assume that {(ε_{1n}, ε_{2n})} has the MOBVE(λ_1, λ_2, λ_{12}) distribution. Then

F̄(x_1, x_2) = e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}.

Model 2

Consider a bivariate autoregressive minification process {(Y_{1n}, Y_{2n})} having the structure

given by

(Y_{1n}, Y_{2n}) = (ε_{1n}, ε_{2n}) w.p. α,
(Y_{1n}, Y_{2n}) = (min(Y_{1,n-1}, ε_{1n}), min(Y_{2,n-1}, ε_{2n})) w.p. 1-α,

where 0 ≤ α ≤ 1.

Theorem 6.3.3. A minification process {(Y_{1n}, Y_{2n}), n ≥ 0} has the EMOBVE(α, λ_1, λ_2, λ_{12}) stationary marginal distribution if and only if (ε_{1n}, ε_{2n}) has the MOBVE(λ_1, λ_2, λ_{12}) distribution and (Y_{10}, Y_{20}) has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution.

Proof. Let Ḡ_n(x_1, x_2) and F̄(x_1, x_2) be the survival functions of (Y_{1n}, Y_{2n}) and (ε_{1n}, ε_{2n}), respectively. From the definition of the process we have

Ḡ_n(x_1, x_2) = P(Y_{1n} > x_1, Y_{2n} > x_2) = α F̄(x_1, x_2) + (1-α) Ḡ_{n-1}(x_1, x_2) F̄(x_1, x_2). (6.3.1)

Under stationarity,

Ḡ(x_1, x_2) = [α + (1-α) Ḡ(x_1, x_2)] F̄(x_1, x_2). (6.3.2)

Replacing Ḡ with the survival function of the random vector with EMOBVE(α, λ_1, λ_2, λ_{12}) distribution and solving the resulting equation for F̄(x_1, x_2), we obtain

F̄(x_1, x_2) = e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}. (6.3.3)

Hence (ε_{1n}, ε_{2n}) follows the MOBVE(λ_1, λ_2, λ_{12}) distribution. Conversely, using (6.3.1) for n = 1, we can show that

Ḡ_1(x_1, x_2) = α e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)} / (1 - (1-α) e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}),

which is the survival function of EMOBVE(α, λ_1, λ_2, λ_{12}). Hence it follows that (Y_{11}, Y_{21}) has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution. Now assume that (Y_{1,n-1}, Y_{2,n-1}) =d EMOBVE(α, λ_1, λ_2, λ_{12}).

Then

Ḡ_n(x_1, x_2) = α F̄(x_1, x_2) + (1-α) Ḡ_{n-1}(x_1, x_2) F̄(x_1, x_2)
= [α + (1-α) Ḡ_{n-1}(x_1, x_2)] F̄(x_1, x_2)
= α e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)} / (1 - (1-α) e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}).

Thus (Y_{1n}, Y_{2n}) follows the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution, and hence by induction (Y_{1n}, Y_{2n}) has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution for every n ≥ 0. This establishes stationarity.

Corollary 6.3.1. If (Y_{10}, Y_{20}) has an arbitrary bivariate distribution and {(ε_{1n}, ε_{2n})} has the MOBVE(λ_1, λ_2, λ_{12}) distribution, then {(Y_{1n}, Y_{2n})} has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution asymptotically.

Proof. Using equation (6.3.1) repeatedly, we find

Ḡ_n(x_1, x_2) = α F̄(x_1, x_2) + (1-α) F̄(x_1, x_2) Ḡ_{n-1}(x_1, x_2)
= α F̄(x_1, x_2) (1 + (1-α) F̄(x_1, x_2)) + (1-α)^2 F̄^2(x_1, x_2) Ḡ_{n-2}(x_1, x_2)
= ... = α F̄(x_1, x_2) ∑_{j=0}^{n-1} (1-α)^j F̄^j(x_1, x_2) + (1-α)^n F̄^n(x_1, x_2) Ḡ_0(x_1, x_2)
= α F̄(x_1, x_2) (1 - (1-α)^n F̄^n(x_1, x_2)) / (1 - (1-α) F̄(x_1, x_2)) + (1-α)^n F̄^n(x_1, x_2) Ḡ_0(x_1, x_2).

Taking the limit as n → ∞, we have

lim_{n→∞} Ḡ_n(x_1, x_2) = α F̄(x_1, x_2) / (1 - (1-α) F̄(x_1, x_2))
= α e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)} / (1 - (1-α) e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}).

Model 3

Consider a bivariate autoregressive minification process {(X_n, Y_n)} having the structure

given by

(X_n, Y_n) = (ε_n, η_n) w.p. p,
(X_n, Y_n) = (min(X_{n-1}, ε_n), min(Y_{n-1}, η_n)) w.p. 1-p.

Theorem 6.3.4. A bivariate autoregressive minification process (X_n, Y_n) has the EMOBVE stationary marginal distribution if and only if (ε_n, η_n) has the MOBVE distribution.

Proof. Let Ḡ(x, y) and F̄(x, y) be the survival functions of (X_n, Y_n) and (ε_n, η_n), respectively. Under stationarity,

Ḡ(x, y) = (p + (1-p) Ḡ(x, y)) F̄(x, y).

If we take F̄(x, y) = e^{-λ_1 x - λ_2 y - λ_{12} max(x, y)}, we get

Ḡ(x, y) = p e^{-λ_1 x - λ_2 y - λ_{12} max(x, y)} / (1 - (1-p) e^{-λ_1 x - λ_2 y - λ_{12} max(x, y)}).

Conversely, solving for F̄ we have

F̄(x, y) = Ḡ(x, y) / (p + (1-p) Ḡ(x, y)),

and substituting the EMOBVE survival function for Ḡ gives

F̄(x, y) = e^{-λ_1 x - λ_2 y - λ_{12} max(x, y)},

which is the survival function of MOBVE.

Determination of Reliability

Let X_1 and X_2 be two nonnegative random variables jointly following the EMOBVE distribution with survival function

Ḡ(x_1, x_2) = α e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)} / (1 - (1-α) e^{-λ_1 x_1 - λ_2 x_2 - λ_{12} max(x_1, x_2)}), α > 0, x_1, x_2 > 0.

Also let Y (strength) be a nonnegative random variable following the exponential distribution with survival function

Ḡ(y) = e^{-y}, 0 < y < ∞,

and probability density function

g(y) = e^{-y}, 0 < y < ∞.

Our objective is to derive the stress-strength reliability R when the stress X has two components X_1 and X_2 and the strength Y is independently distributed.

Case (i): Stress components are in series

We define U = min(X_1, X_2). The survival function of U is given by

Ḡ_U(u) = α e^{-(λ_1+λ_2+λ_{12}) u} / (1 - (1-α) e^{-(λ_1+λ_2+λ_{12}) u}).

Then the reliability can be obtained as

R = P(U < Y) = ∫_0^∞ { ∫_0^y f(u) du } g(y) dy
= ∫_0^∞ [ e^{-y} - α e^{-(1+λ_1+λ_2+λ_{12}) y} / (1 - (1-α) e^{-(λ_1+λ_2+λ_{12}) y}) ] dy.

From Table 6.1 it is clear that the reliability decreases as α increases. Using this, systems with optimal reliability values can be designed.

Case (ii): Stress components are in parallel

In this case we consider V = max(X_1, X_2).
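Before turning to the parallel case: the series-case integral above has no elementary closed form for general α, but it is straightforward to evaluate numerically, and at α = 1 the EMOBVE reduces to the MOBVE, where R = (λ_1+λ_2+λ_{12})/(1+λ_1+λ_2+λ_{12}) provides an exact check. A Python sketch (our own illustration, using the parameter values λ_1 = 1, λ_2 = 2, λ_{12} = 3 of Table 6.1):

```python
import math

def reliability_series(alpha, lam1, lam2, lam12, h=1e-3, ymax=40.0):
    """Trapezoidal evaluation of
    R = int_0^inf [ e^{-y} - alpha e^{-(1+L)y} / (1 - (1-alpha) e^{-Ly}) ] dy,
    where L = lam1 + lam2 + lam12."""
    lam = lam1 + lam2 + lam12
    def f(y):
        return math.exp(-y) - alpha * math.exp(-(1 + lam) * y) / (1 - (1 - alpha) * math.exp(-lam * y))
    n = int(ymax / h)
    s = 0.5 * (f(0.0) + f(ymax)) + sum(f(i * h) for i in range(1, n))
    return s * h

r_half, r_one, r_two = (reliability_series(a, 1, 2, 3) for a in (0.5, 1.0, 2.0))
# alpha = 1 reduces to MOBVE, where R = 6/7; R decreases as alpha grows
```

The computed values decrease monotonically in α, in line with the remark above, and r_one matches the closed form 6/7 to quadrature accuracy.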

Table 6.1: Reliability R under the extended Marshall-Olkin bivariate exponential model where the stress components are in series and λ_1 = 1, λ_2 = 2 and λ_{12} = 3.

The cumulative distribution function of V is given by

F_V(x) = α e^{-(λ_1+λ_2+λ_{12}) x} / (1 - (1-α) e^{-(λ_1+λ_2+λ_{12}) x})
+ (1 - e^{-(λ_1+λ_{12}) x}) / (1 - (1-α) e^{-(λ_1+λ_{12}) x})
+ (1 - e^{-(λ_2+λ_{12}) x}) / (1 - (1-α) e^{-(λ_2+λ_{12}) x}) - 1.

Then

R = P(V < Y) = ∫_0^∞ { ∫_0^y f(v) dv } g(y) dy = ∫_0^∞ F_V(y) e^{-y} dy.

From Table 6.2 it is clear that the reliability decreases as α increases. Using this, systems with optimal reliability values can be designed.

6.4 Multivariate Extensions

Now we extend the results to the multivariate case. For this we consider the Marshall-Olkin fatal shock model, where there are k components subject to failure. Let t_i denote the

Table 6.2: Reliability under the extended Marshall-Olkin bivariate exponential model where the stress components are in parallel and λ_1 = 1, λ_2 = 2 and λ_{12} = 3.

failure time of the i-th component. The joint distribution of the lifetimes (t_1, ..., t_k) is given by the Marshall-Olkin multivariate exponential distribution. The joint survival function of the lifetimes is given by

P(t_1 > x_1, ..., t_k > x_k) = exp(- ∑_{i=1}^{k} λ_i x_i - ∑_{i<j} λ_{ij} max(x_i, x_j) - λ_{1,...,k} max(x_1, ..., x_k)).

The multivariate extended Marshall-Olkin exponential distribution can then be written as

Ḡ(x_1, ..., x_k) = α exp(- ∑_{i=1}^{k} λ_i x_i - ∑_{i<j} λ_{ij} max(x_i, x_j) - λ_{1,...,k} max(x_1, ..., x_k)) / (1 - (1-α) exp(- ∑_{i=1}^{k} λ_i x_i - ∑_{i<j} λ_{ij} max(x_i, x_j) - λ_{1,...,k} max(x_1, ..., x_k))).

Model 1 can be extended to the p-variate case as follows. Let X_n = (X_{1n}, ..., X_{pn}), where

X_{jn} = min(p^{-1} X_{j,n-1}, (1-p)^{-1} ε_{jn}), 0 < p < 1,

and {(ε_{1n}, ..., ε_{pn})} is a sequence of i.i.d. nonnegative random vectors. Then X_n has the extended multivariate Marshall-Olkin exponential marginal distribution. Model 2 can also be extended to the p-variate case.
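For completeness, the k-variate survival function displayed above is simple to evaluate programmatically. A Python sketch (ours; the function names are hypothetical, and only the individual rates, the pairwise rates λ_ij and the top rate λ_{1,...,k} are modelled, matching the displayed form):

```python
import math
from itertools import combinations

def mo_mult_surv(x, lam, lam_pair, lam_top):
    """P(t_1 > x_1, ..., t_k > x_k) for the Marshall-Olkin multivariate
    exponential: individual rates lam[i], pairwise common-shock rates
    lam_pair[(i, j)] for i < j, and the all-component shock rate lam_top."""
    k = len(x)
    s = sum(lam[i] * x[i] for i in range(k))
    s += sum(lam_pair[(i, j)] * max(x[i], x[j]) for i, j in combinations(range(k), 2))
    s += lam_top * max(x)
    return math.exp(-s)

def ext_mo_mult_surv(x, alpha, lam, lam_pair, lam_top):
    # Marshall-Olkin (1997) extension with the additional parameter alpha
    f = mo_mult_surv(x, lam, lam_pair, lam_top)
    return alpha * f / (1 - (1 - alpha) * f)

# with k = 2 and lam_top = 0, lam_pair[(0, 1)] plays the role of lam_12 in (6.2.1)
s2 = mo_mult_surv([0.3, 0.5], [0.5, 1.0], {(0, 1): 1.5}, 0.0)
```

For k = 2 this reduces term by term to the bivariate survival function (6.2.1), and taking α = 1 in the extended form recovers the unextended distribution.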

Let

Y_n = (Y_{1n}, ..., Y_{pn}) = (ε_{1n}, ..., ε_{pn}) w.p. α,
Y_n = (min(Y_{1,n-1}, ε_{1n}), ..., min(Y_{p,n-1}, ε_{pn})) w.p. 1-α,

where 0 < α < 1. Then Y_n has the extended multivariate Marshall-Olkin exponential stationary marginal distribution if and only if (ε_{1n}, ..., ε_{pn}) has the multivariate Marshall-Olkin exponential distribution.

6.5 Conclusions

The extended Marshall-Olkin bivariate exponential distribution is introduced and its properties are studied. Expressions for the stress-strength reliability of a two-component system are derived, and the reliability R is computed for various parameter combinations. From the tables it is clear that the reliability decreases as α increases; using this, systems with optimal reliability values can be designed. Multivariate extensions are also given to model multicomponent systems. We introduced three different forms of minification processes, and necessary and sufficient conditions for stationarity are established.

References

Ashour, S.K., Amin, E.A., Muhammed, H.Z. (2009). Moment generating function of the bivariate generalized exponential distribution. Applied Mathematical Sciences, 3(59).

Gaver, D.P., Lewis, P.A.W. (1980). First order autoregressive gamma sequences and point processes. Advances in Applied Probability, 12.

Hanagal, D.D. (1995). Testing reliability in a bivariate exponential stress-strength model. Journal of the Indian Statistical Association, 33.

Jose, K.K., Ancy Joseph, Ristic, M.M. (2011). Marshall-Olkin Weibull distributions and minification processes. Statistical Papers, 52.

Marshall, A.W., Olkin, I. (1967). A multivariate exponential distribution. Journal of the American Statistical Association, 62.

Marshall, A.W., Olkin, I. (1997). A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika, 84(3).

Mokhlis, N.M. (2006). Reliability of strength model with a bivariate exponential distribution. Journal of the Egyptian Mathematical Society, 14(1).

Mu, J., Wang, Y. (2010). Multivariate generalized exponential distribution. Journal of Dynamical Systems and Geometric Theories, 8(2).

Mukherjee, S.P., Maiti, S.S. (2005). Stress-strength reliability under two component stress system. Journal of Statistical Theory and Applications.

Ristic, M.M., Popovic, B.C., Nastic, A., Dordevic, M. (2008). A bivariate Marshall and Olkin exponential minification process. Filomat, 22(1), 69-77.


More information

IE 230 Probability & Statistics in Engineering I. Closed book and notes. 60 minutes.

IE 230 Probability & Statistics in Engineering I. Closed book and notes. 60 minutes. Closed book and notes. 60 minutes. A summary table of some univariate continuous distributions is provided. Four Pages. In this version of the Key, I try to be more complete than necessary to receive full

More information

Partial Solutions for h4/2014s: Sampling Distributions

Partial Solutions for h4/2014s: Sampling Distributions 27 Partial Solutions for h4/24s: Sampling Distributions ( Let X and X 2 be two independent random variables, each with the same probability distribution given as follows. f(x 2 e x/2, x (a Compute the

More information

Probability Models. 4. What is the definition of the expectation of a discrete random variable?

Probability Models. 4. What is the definition of the expectation of a discrete random variable? 1 Probability Models The list of questions below is provided in order to help you to prepare for the test and exam. It reflects only the theoretical part of the course. You should expect the questions

More information

IDENTIFIABILITY OF THE MULTIVARIATE NORMAL BY THE MAXIMUM AND THE MINIMUM

IDENTIFIABILITY OF THE MULTIVARIATE NORMAL BY THE MAXIMUM AND THE MINIMUM Surveys in Mathematics and its Applications ISSN 842-6298 (electronic), 843-7265 (print) Volume 5 (200), 3 320 IDENTIFIABILITY OF THE MULTIVARIATE NORMAL BY THE MAXIMUM AND THE MINIMUM Arunava Mukherjea

More information

2 Random Variable Generation

2 Random Variable Generation 2 Random Variable Generation Most Monte Carlo computations require, as a starting point, a sequence of i.i.d. random variables with given marginal distribution. We describe here some of the basic methods

More information

LIST OF FORMULAS FOR STK1100 AND STK1110

LIST OF FORMULAS FOR STK1100 AND STK1110 LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function

More information

Random Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline.

Random Variables. Cumulative Distribution Function (CDF) Amappingthattransformstheeventstotherealline. Random Variables Amappingthattransformstheeventstotherealline. Example 1. Toss a fair coin. Define a random variable X where X is 1 if head appears and X is if tail appears. P (X =)=1/2 P (X =1)=1/2 Example

More information

Formulas for probability theory and linear models SF2941

Formulas for probability theory and linear models SF2941 Formulas for probability theory and linear models SF2941 These pages + Appendix 2 of Gut) are permitted as assistance at the exam. 11 maj 2008 Selected formulae of probability Bivariate probability Transforms

More information

Statistics (1): Estimation

Statistics (1): Estimation Statistics (1): Estimation Marco Banterlé, Christian Robert and Judith Rousseau Practicals 2014-2015 L3, MIDO, Université Paris Dauphine 1 Table des matières 1 Random variables, probability, expectation

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

Estimation of the Bivariate Generalized. Lomax Distribution Parameters. Based on Censored Samples

Estimation of the Bivariate Generalized. Lomax Distribution Parameters. Based on Censored Samples Int. J. Contemp. Math. Sciences, Vol. 9, 2014, no. 6, 257-267 HIKARI Ltd, www.m-hikari.com http://dx.doi.org/10.12988/ijcms.2014.4329 Estimation of the Bivariate Generalized Lomax Distribution Parameters

More information

Basics of Stochastic Modeling: Part II

Basics of Stochastic Modeling: Part II Basics of Stochastic Modeling: Part II Continuous Random Variables 1 Sandip Chakraborty Department of Computer Science and Engineering, INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR August 10, 2016 1 Reference

More information

STAT Chapter 5 Continuous Distributions

STAT Chapter 5 Continuous Distributions STAT 270 - Chapter 5 Continuous Distributions June 27, 2012 Shirin Golchi () STAT270 June 27, 2012 1 / 59 Continuous rv s Definition: X is a continuous rv if it takes values in an interval, i.e., range

More information

Lecture 11. Probability Theory: an Overveiw

Lecture 11. Probability Theory: an Overveiw Math 408 - Mathematical Statistics Lecture 11. Probability Theory: an Overveiw February 11, 2013 Konstantin Zuev (USC) Math 408, Lecture 11 February 11, 2013 1 / 24 The starting point in developing the

More information

1.1 Review of Probability Theory

1.1 Review of Probability Theory 1.1 Review of Probability Theory Angela Peace Biomathemtics II MATH 5355 Spring 2017 Lecture notes follow: Allen, Linda JS. An introduction to stochastic processes with applications to biology. CRC Press,

More information

On Extreme Bernoulli and Dependent Families of Bivariate Distributions

On Extreme Bernoulli and Dependent Families of Bivariate Distributions Int J Contemp Math Sci, Vol 3, 2008, no 23, 1103-1112 On Extreme Bernoulli and Dependent Families of Bivariate Distributions Broderick O Oluyede Department of Mathematical Sciences Georgia Southern University,

More information

Stat 5101 Notes: Brand Name Distributions

Stat 5101 Notes: Brand Name Distributions Stat 5101 Notes: Brand Name Distributions Charles J. Geyer September 5, 2012 Contents 1 Discrete Uniform Distribution 2 2 General Discrete Uniform Distribution 2 3 Uniform Distribution 3 4 General Uniform

More information

Asymptotic Statistics-III. Changliang Zou

Asymptotic Statistics-III. Changliang Zou Asymptotic Statistics-III Changliang Zou The multivariate central limit theorem Theorem (Multivariate CLT for iid case) Let X i be iid random p-vectors with mean µ and and covariance matrix Σ. Then n (

More information

Information geometry for bivariate distribution control

Information geometry for bivariate distribution control Information geometry for bivariate distribution control C.T.J.Dodson + Hong Wang Mathematics + Control Systems Centre, University of Manchester Institute of Science and Technology Optimal control of stochastic

More information

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER. Two hours MATH38181 To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer any FOUR

More information

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable

Distributions of Functions of Random Variables. 5.1 Functions of One Random Variable Distributions of Functions of Random Variables 5.1 Functions of One Random Variable 5.2 Transformations of Two Random Variables 5.3 Several Random Variables 5.4 The Moment-Generating Function Technique

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 18-27 Review Scott Sheffield MIT Outline Outline It s the coins, stupid Much of what we have done in this course can be motivated by the i.i.d. sequence X i where each X i is

More information

Mathematical Preliminaries

Mathematical Preliminaries Mathematical Preliminaries Economics 3307 - Intermediate Macroeconomics Aaron Hedlund Baylor University Fall 2013 Econ 3307 (Baylor University) Mathematical Preliminaries Fall 2013 1 / 25 Outline I: Sequences

More information

HW4 : Bivariate Distributions (1) Solutions

HW4 : Bivariate Distributions (1) Solutions STAT/MATH 395 A - PROBABILITY II UW Winter Quarter 7 Néhémy Lim HW4 : Bivariate Distributions () Solutions Problem. The joint probability mass function of X and Y is given by the following table : X Y

More information

4. CONTINUOUS RANDOM VARIABLES

4. CONTINUOUS RANDOM VARIABLES IA Probability Lent Term 4 CONTINUOUS RANDOM VARIABLES 4 Introduction Up to now we have restricted consideration to sample spaces Ω which are finite, or countable; we will now relax that assumption We

More information

1 Review of Probability

1 Review of Probability 1 Review of Probability Random variables are denoted by X, Y, Z, etc. The cumulative distribution function (c.d.f.) of a random variable X is denoted by F (x) = P (X x), < x

More information

Preliminaries. Probability space

Preliminaries. Probability space Preliminaries This section revises some parts of Core A Probability, which are essential for this course, and lists some other mathematical facts to be used (without proof) in the following. Probability

More information

Lecture 21: Convergence of transformations and generating a random variable

Lecture 21: Convergence of transformations and generating a random variable Lecture 21: Convergence of transformations and generating a random variable If Z n converges to Z in some sense, we often need to check whether h(z n ) converges to h(z ) in the same sense. Continuous

More information

Lecture 1: August 28

Lecture 1: August 28 36-705: Intermediate Statistics Fall 2017 Lecturer: Siva Balakrishnan Lecture 1: August 28 Our broad goal for the first few lectures is to try to understand the behaviour of sums of independent random

More information

Weighted Exponential Distribution and Process

Weighted Exponential Distribution and Process Weighted Exponential Distribution and Process Jilesh V Some generalizations of exponential distribution and related time series models Thesis. Department of Statistics, University of Calicut, 200 Chapter

More information

ON THE CHARACTERIZATION OF A BIVARIATE GEOMETRIC DISTRIBUTION ABSTRACT

ON THE CHARACTERIZATION OF A BIVARIATE GEOMETRIC DISTRIBUTION ABSTRACT !" # $ % & '( *),+-./1032547689:=?;@0BA.=C2D032FEG;>HI;@0 JK;@4F;MLN2 zpm{ O#PRQSUTWVYXZPR[\V^]`_a SbVYcPRXZSbVYdfeMShgijeIdfPR[eMPIkSh[l m PR[nVWPIT?_o]`TprQ>Q>gfdsPRlta SbVYcPuXvSbVYdfeMkwSh[lxiyVWSbVYdfk%V*dseMk

More information

. Find E(V ) and var(v ).

. Find E(V ) and var(v ). Math 6382/6383: Probability Models and Mathematical Statistics Sample Preliminary Exam Questions 1. A person tosses a fair coin until she obtains 2 heads in a row. She then tosses a fair die the same number

More information

Continuous Random Variables

Continuous Random Variables 1 / 24 Continuous Random Variables Saravanan Vijayakumaran sarva@ee.iitb.ac.in Department of Electrical Engineering Indian Institute of Technology Bombay February 27, 2013 2 / 24 Continuous Random Variables

More information

Probability and Distributions

Probability and Distributions Probability and Distributions What is a statistical model? A statistical model is a set of assumptions by which the hypothetical population distribution of data is inferred. It is typically postulated

More information

BMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs

BMIR Lecture Series on Probability and Statistics Fall 2015 Discrete RVs Lecture #7 BMIR Lecture Series on Probability and Statistics Fall 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University 7.1 Function of Single Variable Theorem

More information

On Sarhan-Balakrishnan Bivariate Distribution

On Sarhan-Balakrishnan Bivariate Distribution J. Stat. Appl. Pro. 1, No. 3, 163-17 (212) 163 Journal of Statistics Applications & Probability An International Journal c 212 NSP On Sarhan-Balakrishnan Bivariate Distribution D. Kundu 1, A. Sarhan 2

More information

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University

Chapter 3, 4 Random Variables ENCS Probability and Stochastic Processes. Concordia University Chapter 3, 4 Random Variables ENCS6161 - Probability and Stochastic Processes Concordia University ENCS6161 p.1/47 The Notion of a Random Variable A random variable X is a function that assigns a real

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

Basic concepts of probability theory

Basic concepts of probability theory Basic concepts of probability theory Random variable discrete/continuous random variable Transform Z transform, Laplace transform Distribution Geometric, mixed-geometric, Binomial, Poisson, exponential,

More information

Conditional Tail Expectations for Multivariate Phase Type Distributions

Conditional Tail Expectations for Multivariate Phase Type Distributions Conditional Tail Expectations for Multivariate Phase Type Distributions Jun Cai Department of Statistics and Actuarial Science University of Waterloo Waterloo, ON N2L 3G1, Canada Telphone: 1-519-8884567,

More information

COMPOSITE RELIABILITY MODELS FOR SYSTEMS WITH TWO DISTINCT KINDS OF STOCHASTIC DEPENDENCES BETWEEN THEIR COMPONENTS LIFE TIMES

COMPOSITE RELIABILITY MODELS FOR SYSTEMS WITH TWO DISTINCT KINDS OF STOCHASTIC DEPENDENCES BETWEEN THEIR COMPONENTS LIFE TIMES COMPOSITE RELIABILITY MODELS FOR SYSTEMS WITH TWO DISTINCT KINDS OF STOCHASTIC DEPENDENCES BETWEEN THEIR COMPONENTS LIFE TIMES Jerzy Filus Department of Mathematics and Computer Science, Oakton Community

More information

Bivariate Geometric (Maximum) Generalized Exponential Distribution

Bivariate Geometric (Maximum) Generalized Exponential Distribution Bivariate Geometric (Maximum) Generalized Exponential Distribution Debasis Kundu 1 Abstract In this paper we propose a new five parameter bivariate distribution obtained by taking geometric maximum of

More information

Conditional independence of blocked ordered data

Conditional independence of blocked ordered data Conditional independence of blocked ordered data G. Iliopoulos 1 and N. Balakrishnan 2 Abstract In this paper, we prove that blocks of ordered data formed by some conditioning events are mutually independent.

More information

This exam is closed book and closed notes. (You will have access to a copy of the Table of Common Distributions given in the back of the text.

This exam is closed book and closed notes. (You will have access to a copy of the Table of Common Distributions given in the back of the text. TEST #3 STA 536 December, 00 Name: Please read the following directions. DO NOT TURN THE PAGE UNTIL INSTRUCTED TO DO SO Directions This exam is closed book and closed notes. You will have access to a copy

More information

Order Statistics and Distributions

Order Statistics and Distributions Order Statistics and Distributions 1 Some Preliminary Comments and Ideas In this section we consider a random sample X 1, X 2,..., X n common continuous distribution function F and probability density

More information

Reliability of Coherent Systems with Dependent Component Lifetimes

Reliability of Coherent Systems with Dependent Component Lifetimes Reliability of Coherent Systems with Dependent Component Lifetimes M. Burkschat Abstract In reliability theory, coherent systems represent a classical framework for describing the structure of technical

More information

x. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ).

x. Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ 2 ). .8.6 µ =, σ = 1 µ = 1, σ = 1 / µ =, σ =.. 3 1 1 3 x Figure 1: Examples of univariate Gaussian pdfs N (x; µ, σ ). The Gaussian distribution Probably the most-important distribution in all of statistics

More information

ISyE 3044 Fall 2017 Test #1a Solutions

ISyE 3044 Fall 2017 Test #1a Solutions 1 NAME ISyE 344 Fall 217 Test #1a Solutions This test is 75 minutes. You re allowed one cheat sheet. Good luck! 1. Suppose X has p.d.f. f(x) = 4x 3, < x < 1. Find E[ 2 X 2 3]. Solution: By LOTUS, we have

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

Weighted Marshall-Olkin Bivariate Exponential Distribution

Weighted Marshall-Olkin Bivariate Exponential Distribution Weighted Marshall-Olkin Bivariate Exponential Distribution Ahad Jamalizadeh & Debasis Kundu Abstract Recently Gupta and Kundu [9] introduced a new class of weighted exponential distributions, and it can

More information

18.440: Lecture 28 Lectures Review

18.440: Lecture 28 Lectures Review 18.440: Lecture 28 Lectures 17-27 Review Scott Sheffield MIT 1 Outline Continuous random variables Problems motivated by coin tossing Random variable properties 2 Outline Continuous random variables Problems

More information

STAT 6385 Survey of Nonparametric Statistics. Order Statistics, EDF and Censoring

STAT 6385 Survey of Nonparametric Statistics. Order Statistics, EDF and Censoring STAT 6385 Survey of Nonparametric Statistics Order Statistics, EDF and Censoring Quantile Function A quantile (or a percentile) of a distribution is that value of X such that a specific percentage of the

More information

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n

P (x). all other X j =x j. If X is a continuous random vector (see p.172), then the marginal distributions of X i are: f(x)dx 1 dx n JOINT DENSITIES - RANDOM VECTORS - REVIEW Joint densities describe probability distributions of a random vector X: an n-dimensional vector of random variables, ie, X = (X 1,, X n ), where all X is are

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 05 Full points may be obtained for correct answers to eight questions Each numbered question (which may have several parts) is worth

More information

UQ, Semester 1, 2017, Companion to STAT2201/CIVL2530 Exam Formulae and Tables

UQ, Semester 1, 2017, Companion to STAT2201/CIVL2530 Exam Formulae and Tables UQ, Semester 1, 2017, Companion to STAT2201/CIVL2530 Exam Formulae and Tables To be provided to students with STAT2201 or CIVIL-2530 (Probability and Statistics) Exam Main exam date: Tuesday, 20 June 1

More information

Lecture 2: Repetition of probability theory and statistics

Lecture 2: Repetition of probability theory and statistics Algorithms for Uncertainty Quantification SS8, IN2345 Tobias Neckel Scientific Computing in Computer Science TUM Lecture 2: Repetition of probability theory and statistics Concept of Building Block: Prerequisites:

More information

Stochastic Comparisons of Order Statistics from Generalized Normal Distributions

Stochastic Comparisons of Order Statistics from Generalized Normal Distributions A^VÇÚO 1 33 ò 1 6 Ï 2017 c 12 Chinese Journal of Applied Probability and Statistics Dec. 2017 Vol. 33 No. 6 pp. 591-607 doi: 10.3969/j.issn.1001-4268.2017.06.004 Stochastic Comparisons of Order Statistics

More information

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA, 2016 MODULE 1 : Probability distributions Time allowed: Three hours Candidates should answer FIVE questions. All questions carry equal marks.

More information

Modelling Dependence with Copulas and Applications to Risk Management. Filip Lindskog, RiskLab, ETH Zürich

Modelling Dependence with Copulas and Applications to Risk Management. Filip Lindskog, RiskLab, ETH Zürich Modelling Dependence with Copulas and Applications to Risk Management Filip Lindskog, RiskLab, ETH Zürich 02-07-2000 Home page: http://www.math.ethz.ch/ lindskog E-mail: lindskog@math.ethz.ch RiskLab:

More information

Sampling Distributions

Sampling Distributions Sampling Distributions In statistics, a random sample is a collection of independent and identically distributed (iid) random variables, and a sampling distribution is the distribution of a function of

More information

COPYRIGHTED MATERIAL CONTENTS. Preface Preface to the First Edition

COPYRIGHTED MATERIAL CONTENTS. Preface Preface to the First Edition Preface Preface to the First Edition xi xiii 1 Basic Probability Theory 1 1.1 Introduction 1 1.2 Sample Spaces and Events 3 1.3 The Axioms of Probability 7 1.4 Finite Sample Spaces and Combinatorics 15

More information

REVIEW OF MAIN CONCEPTS AND FORMULAS A B = Ā B. Pr(A B C) = Pr(A) Pr(A B C) =Pr(A) Pr(B A) Pr(C A B)

REVIEW OF MAIN CONCEPTS AND FORMULAS A B = Ā B. Pr(A B C) = Pr(A) Pr(A B C) =Pr(A) Pr(B A) Pr(C A B) REVIEW OF MAIN CONCEPTS AND FORMULAS Boolean algebra of events (subsets of a sample space) DeMorgan s formula: A B = Ā B A B = Ā B The notion of conditional probability, and of mutual independence of two

More information

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t,

For a stochastic process {Y t : t = 0, ±1, ±2, ±3, }, the mean function is defined by (2.2.1) ± 2..., γ t, CHAPTER 2 FUNDAMENTAL CONCEPTS This chapter describes the fundamental concepts in the theory of time series models. In particular, we introduce the concepts of stochastic processes, mean and covariance

More information

Chapter 6: Random Processes 1

Chapter 6: Random Processes 1 Chapter 6: Random Processes 1 Yunghsiang S. Han Graduate Institute of Communication Engineering, National Taipei University Taiwan E-mail: yshan@mail.ntpu.edu.tw 1 Modified from the lecture notes by Prof.

More information

Research Reports on Mathematical and Computing Sciences

Research Reports on Mathematical and Computing Sciences ISSN 1342-2804 Research Reports on Mathematical and Computing Sciences Long-tailed degree distribution of a random geometric graph constructed by the Boolean model with spherical grains Naoto Miyoshi,

More information

Stationary particle Systems

Stationary particle Systems Stationary particle Systems Kaspar Stucki (joint work with Ilya Molchanov) University of Bern 5.9 2011 Introduction Poisson process Definition Let Λ be a Radon measure on R p. A Poisson process Π is a

More information

The Multivariate Normal Distribution 1

The Multivariate Normal Distribution 1 The Multivariate Normal Distribution 1 STA 302 Fall 2017 1 See last slide for copyright information. 1 / 40 Overview 1 Moment-generating Functions 2 Definition 3 Properties 4 χ 2 and t distributions 2

More information