Marshall-Olkin Bivariate Exponential Distribution: Generalisations and Applications


CHAPTER 6

Marshall-Olkin Bivariate Exponential Distribution: Generalisations and Applications

6.1 Introduction

Exponential distributions were introduced as a simple model for the statistical analysis of lifetimes. The bivariate exponential distribution and the multivariate extension of exponential distributions due to Marshall and Olkin (1967) have received considerable attention in describing the statistical dependence of components in a two-component system and in developing statistical inference procedures. The moment generating function of the bivariate generalized exponential distribution is discussed by Ashour et al. (2009), and the multivariate generalized exponential distribution is studied by Mu and Wang (2010). Hanagal (1995) studied testing reliability in a bivariate exponential stress-strength model. A bivariate Marshall-Olkin exponential minification process is discussed by Ristic et al. (2008), and the reliability of a stress-strength model with a bivariate exponential distribution is discussed by Mokhlis (2006).

Marshall and Olkin (1997) introduced a method of obtaining an extended family of distributions by including one more parameter. For a random variable with distribution function F(x) and survival function F̄(x), we can obtain a new family of distribution functions, called the univariate Marshall-Olkin family, with cumulative distribution function

    G(x) = F(x) / [α + (1 − α) F(x)],   −∞ < x < ∞, 0 < α < ∞.

The corresponding survival function is

    Ḡ(x) = α F̄(x) / [1 − (1 − α) F̄(x)],   −∞ < x < ∞, 0 < α < ∞.

This new family involves an additional parameter α. In the bivariate case, if (X, Y) is a random vector with joint survival function F̄(x, y), then

    Ḡ(x, y) = α F̄(x, y) / [1 − (1 − α) F̄(x, y)],   −∞ < x, y < ∞, 0 < α < ∞,

constitutes the Marshall-Olkin bivariate family of distributions. The new parameter α adds flexibility to the family of distributions and influences their reliability properties.

Autoregressive models are developed with the idea that the present value of the series, X_t, can be explained as a function of the past values X_{t−1}, X_{t−2}, ..., X_{t−p}, where p determines the number of steps into the past needed to forecast the current value. A first-order autoregressive time series model with exponential stationary marginal distribution was developed by Gaver and Lewis (1980). Jose et al. (2011) introduced and studied a Marshall-Olkin bivariate Weibull process.

In this chapter we discuss three different structures of minification processes and develop a minification process with the extended Marshall-Olkin bivariate exponential distribution as marginal. First we consider a bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by

    X_{1n} = min(p^{−1} X_{1,n−1}, (1 − p)^{−1} ε_{1n}),
    X_{2n} = min(p^{−1} X_{2,n−1}, (1 − p)^{−1} ε_{2n}),

where {(ε_{1n}, ε_{2n})} is a sequence of i.i.d. nonnegative random vectors, (X_{10}, X_{20}) and {(ε_{1i}, ε_{2i}), i ≥ 1} are independent, and 0 < p < 1.

Next we consider a bivariate autoregressive minification process {(Y_{1n}, Y_{2n})} given by

    (Y_{1n}, Y_{2n}) = (ε_{1n}, ε_{2n})                                  w.p. α,
                       (min(Y_{1,n−1}, ε_{1n}), min(Y_{2,n−1}, ε_{2n}))  w.p. 1 − α,

where 0 ≤ α ≤ 1.

Finally we consider a bivariate autoregressive minification process {(X_n, Y_n)} given by

    (X_n, Y_n) = (ε_n, η_n)                              w.p. p,
                 (min(X_{n−1}, ε_n), min(Y_{n−1}, η_n))  w.p. 1 − p,

where 0 ≤ p ≤ 1.

This chapter is arranged as follows. In Section 6.2, the Marshall-Olkin bivariate exponential distribution and its properties are discussed. The extended Marshall-Olkin bivariate exponential distribution is introduced and studied in Section 6.3. Multivariate extensions of the models are given in Section 6.4. Conclusions are given in Section 6.5.
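In the univariate case, the two Marshall-Olkin formulas above can be checked directly against each other. The following sketch (an exponential baseline with rate 2 and α = 0.7; both values are illustrative, not from the text) verifies numerically that G(x) + Ḡ(x) = 1:

```python
import math

def mo_cdf(F, alpha):
    """Marshall-Olkin transform of a cdf F: G(x) = F(x) / (alpha + (1-alpha) F(x))."""
    return lambda x: F(x) / (alpha + (1 - alpha) * F(x))

def mo_sf(Fbar, alpha):
    """Corresponding survival function: Gbar(x) = alpha Fbar(x) / (1 - (1-alpha) Fbar(x))."""
    return lambda x: alpha * Fbar(x) / (1 - (1 - alpha) * Fbar(x))

# exponential baseline with rate 2 and alpha = 0.7 (illustrative values)
F = lambda x: 1 - math.exp(-2 * x)
Fbar = lambda x: math.exp(-2 * x)
G, Gbar = mo_cdf(F, 0.7), mo_sf(Fbar, 0.7)
checks = [G(x) + Gbar(x) for x in (0.1, 0.5, 1.0, 3.0)]  # each ~1.0
```

The identity holds exactly because 1 − (1 − α)F̄(x) = α + (1 − α)F(x), so the two displayed formulas are complementary.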

6.2 Marshall-Olkin bivariate exponential (MOBVE) distribution

The Marshall-Olkin bivariate exponential distribution with parameters λ_1, λ_2, λ_{12} is defined by the survival function

    F̄(x_1, x_2) = e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)},   x_1, x_2 > 0,   (6.2.1)

and has univariate exponential marginals with survival functions

    F̄(x_1) = e^{−(λ_1 + λ_{12}) x_1},
    F̄(x_2) = e^{−(λ_2 + λ_{12}) x_2}.

The Marshall and Olkin (1967) fatal shock model assumes that the components of a two-component system die after receiving a shock which is always fatal. Independent Poisson processes S_1(t; λ_1), S_2(t; λ_2) and S_3(t; λ_{12}) govern the occurrence of shocks: events in S_1(t; λ_1) are shocks to component 1, events in S_2(t; λ_2) are shocks to component 2, and events in S_3(t; λ_{12}) are shocks to both components. The joint survival function of the lifetimes (X_1, X_2) of components 1 and 2 is

    F̄(x_1, x_2) = P(X_1 > x_1, X_2 > x_2)
                = P{S_1(x_1; λ_1) = 0, S_2(x_2; λ_2) = 0, S_3(max(x_1, x_2); λ_{12}) = 0}
                = e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)},   x_1, x_2 > 0.

The joint density corresponding to (6.2.1) is

    f(x_1, x_2) = λ_1 (λ_2 + λ_{12}) F̄(x_1, x_2),   0 < x_1 < x_2,
                  λ_2 (λ_1 + λ_{12}) F̄(x_1, x_2),   0 < x_2 < x_1,     (6.2.2)
                  λ_{12} F̄(x_1, x_1),               x_1 = x_2 > 0.
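The fatal shock construction gives a direct way to simulate MOBVE vectors: draw three independent exponential shock times and take componentwise minima. The sketch below is illustrative (function name, seed and parameter values are assumptions, not from the text); it checks the marginal means 1/(λ_1+λ_{12}) and 1/(λ_2+λ_{12}), and the mass on the diagonal, P(X_1 = X_2) = λ_{12}/(λ_1+λ_2+λ_{12}):

```python
import random
import statistics

def rmobve(lam1, lam2, lam12, rng):
    """One (X1, X2) draw from the Marshall-Olkin fatal shock model:
    X1 = min(Z1, Z12), X2 = min(Z2, Z12) with independent exponential shocks."""
    z1 = rng.expovariate(lam1)
    z2 = rng.expovariate(lam2)
    z12 = rng.expovariate(lam12)
    return min(z1, z12), min(z2, z12)

rng = random.Random(1)
sample = [rmobve(0.5, 1.0, 1.5, rng) for _ in range(200_000)]
m1 = statistics.fmean(x for x, _ in sample)           # ~ 1/(0.5+1.5) = 0.5
m2 = statistics.fmean(y for _, y in sample)           # ~ 1/(1.0+1.5) = 0.4
ties = sum(x == y for x, y in sample) / len(sample)   # ~ 1.5/3 = 0.5
```

The exact ties X_1 = X_2 occur precisely when the common shock Z_12 arrives first, which is why the distribution has a singular component on the diagonal.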

6.2.1 Minification Processes with MOBVE distribution

Model 1. Consider a bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by

    X_{1n} = min(p^{−1} X_{1,n−1}, (1 − p)^{−1} ε_{1n}),
    X_{2n} = min(p^{−1} X_{2,n−1}, (1 − p)^{−1} ε_{2n}),

where {(ε_{1n}, ε_{2n})} is a sequence of i.i.d. nonnegative random vectors, (ε_{1n}, ε_{2n}) and (X_{1m}, X_{2m}) are independent for all m < n, and 0 < p < 1.

Theorem 6.2.1. A bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by Model 1 is a strictly stationary Markov process with MOBVE(λ_1, λ_2, λ_{12}) marginal distribution if and only if (ε_{1n}, ε_{2n}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution and (X_{10}, X_{20}) =_d (ε_{11}, ε_{21}).

Proof. First assume that {(X_{1n}, X_{2n})} is a strictly stationary process with MOBVE(λ_1, λ_2, λ_{12}) marginal distribution. Let F̄(x_1, x_2) be the survival function of (X_{1n}, X_{2n}) and Ḡ(x_1, x_2) be the survival function of (ε_{1n}, ε_{2n}). It follows from Model 1 that

    F̄(x_1, x_2) = P[X_{1n} > x_1, X_{2n} > x_2] = F̄(p x_1, p x_2) Ḡ((1 − p) x_1, (1 − p) x_2).

Hence we have

    Ḡ((1 − p) x_1, (1 − p) x_2) = F̄(x_1, x_2) / F̄(p x_1, p x_2)
                                = e^{−λ_1 (1 − p) x_1 − λ_2 (1 − p) x_2 − λ_{12} (1 − p) max(x_1, x_2)},

which implies that (ε_{1n}, ε_{2n}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution.

Conversely, assume that (ε_{1n}, ε_{2n}) follows MOBVE(λ_1, λ_2, λ_{12}) and (X_{10}, X_{20}) =_d (ε_{11}, ε_{21}). Let F̄_n(x_1, x_2) be the survival function of (X_{1n}, X_{2n}). For n = 1 we have

    F̄_1(x_1, x_2) = F̄_0(p x_1, p x_2) Ḡ((1 − p) x_1, (1 − p) x_2) = e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)},

which implies that (X_{11}, X_{21}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution. Suppose now that (X_{1i}, X_{2i}) has a MOBVE(λ_1, λ_2, λ_{12}) distribution for i = 1, 2, ..., n − 1. Then

    F̄_n(x_1, x_2) = F̄_{n−1}(p x_1, p x_2) Ḡ((1 − p) x_1, (1 − p) x_2) = e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)},

i.e. (X_{1n}, X_{2n}) =_d MOBVE(λ_1, λ_2, λ_{12}). Thus (X_{1n}, X_{2n}) =_d (X_{10}, X_{20}) for every n, and since {(X_{1n}, X_{2n})} is a Markov process, it follows that the process is strictly stationary.

6.2.2 Properties of the Process

First we consider the joint survival function of the random vectors (X_{1,n+h}, X_{2,n+h}) and (X_{1n}, X_{2n}). From Model 1 the joint survival function is

    S̄_h(x_1, x_2, z, v) = P(X_{1,n+h} > x_1, X_{2,n+h} > x_2, X_{1n} > z, X_{2n} > v)
      = S̄_{h−1}(p x_1, p x_2, z, v) Ḡ((1 − p) x_1, (1 − p) x_2)
      = S̄_0(p^h x_1, p^h x_2, z, v) ∏_{i=1}^{h} Ḡ(p^{h−i} (1 − p) x_1, p^{h−i} (1 − p) x_2)
      = S̄_0(p^h x_1, p^h x_2, z, v) ∏_{i=1}^{h} [F̄(p^{h−i} x_1, p^{h−i} x_2) / F̄(p^{h−i+1} x_1, p^{h−i+1} x_2)]
      = F̄(max(p^h x_1, z), max(p^h x_2, v)) F̄(x_1, x_2) / F̄(p^h x_1, p^h x_2).

We can see that the joint survival function has an absolutely continuous component for z > p^h x_1, v > p^h x_2, z ≠ v and x_1 ≠ x_2.

We now consider the autocovariance structure of the Marshall-Olkin bivariate exponential minification process. Consider first the autocovariance of the random variables X_{1,n+1} and X_{1n}. From (6.2.1) we obtain

    P(X_{1,n+1} > x_2 | X_{1n} = x_1) = e^{−(λ_1+λ_{12})(1 − p) x_2},   x_1 > p x_2,
                                        0,                              x_1 ≤ p x_2.

The absolutely continuous part of the conditional distribution therefore has density

    (d/dx_2) P(X_{1,n+1} ≤ x_2 | X_{1n} = x_1) = (λ_1+λ_{12})(1 − p) e^{−(λ_1+λ_{12})(1 − p) x_2},   x_1 > p x_2,   (6.2.3)

and there is an atom at x_2 = p^{−1} x_1:

    P(X_{1,n+1} = p^{−1} X_{1n} | X_{1n} = x_1) = P(ε_{1,n+1} ≥ p^{−1}(1 − p) x_1) = e^{−(λ_1+λ_{12})(1 − p) x_1 / p}.   (6.2.4)

Using (6.2.3) and (6.2.4), the conditional expectation is obtained as

    E(X_{1,n+1} | X_{1n} = x_1) = [1 − e^{−(λ_1+λ_{12})(1 − p) x_1 / p}] / [(λ_1+λ_{12})(1 − p)].

Using the fact that E(X_{1,n+1} X_{1n}) = E(X_{1n} E(X_{1,n+1} | X_{1n})), we get

    E(X_{1,n+1} X_{1n}) = (1 − p^2) / [(λ_1+λ_{12})^2 (1 − p)] = (1 + p) / (λ_1+λ_{12})^2,

so that

    Cov(X_{1,n+1}, X_{1n}) = (1 + p)/(λ_1+λ_{12})^2 − 1/(λ_1+λ_{12})^2 = p / (λ_1+λ_{12})^2.

Similarly, we have Cov(X_{2,n+1}, X_{2n}) = p / (λ_2+λ_{12})^2.

Let us now consider the cross-covariance of the random variables X_{1,n+1} and X_{2n}. We need the joint survival function P(X_{1,n+1} > x_1, X_{2n} > x_2), which can be derived as

    P(X_{1,n+1} > x_1, X_{2n} > x_2) = P(X_{1n} > p x_1, X_{2n} > x_2) P(ε_{1,n+1} > (1 − p) x_1)
                                     = e^{−(λ_1 + λ_{12}(1 − p)) x_1 − λ_2 x_2 − λ_{12} max(p x_1, x_2)}.
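The stationarity of Model 1 and the lag-1 autocovariance p/(λ_1+λ_{12})^2 can be checked by simulation. A minimal sketch (function name, seed and parameter values are illustrative, not from the text):

```python
import random
import statistics

def simulate_model1(lam1, lam2, lam12, p, n, rng):
    """Simulate Model 1, X_{jn} = min(X_{j,n-1}/p, eps_{jn}/(1-p)), with MOBVE
    innovations generated via the fatal shock construction."""
    def mobve():
        z1 = rng.expovariate(lam1)
        z2 = rng.expovariate(lam2)
        z12 = rng.expovariate(lam12)
        return min(z1, z12), min(z2, z12)

    x1, x2 = mobve()  # by Theorem 6.2.1 this starts the chain in the stationary law
    path = []
    for _ in range(n):
        e1, e2 = mobve()
        x1 = min(x1 / p, e1 / (1 - p))
        x2 = min(x2 / p, e2 / (1 - p))
        path.append((x1, x2))
    return path

rng = random.Random(7)
path = simulate_model1(0.5, 1.0, 1.5, 0.6, 200_000, rng)
x1s = [a for a, _ in path]
m = statistics.fmean(x1s)  # ~ 1/(lam1+lam12) = 0.5
cov1 = statistics.fmean((a - m) * (b - m) for a, b in zip(x1s, x1s[1:]))
# theory: Cov(X_{1,n+1}, X_{1n}) = p/(lam1+lam12)^2 = 0.6/4 = 0.15
```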

Proceeding as in the derivation of Cov(X_{1,n+1}, X_{1n}), we can obtain Cov(X_{1,n+1}, X_{2n}) and Cov(X_{2,n+1}, X_{1n}) from this joint survival function. The autocovariance matrix at lag 1 of the Marshall-Olkin bivariate exponential minification process is

    C = [ p/(λ_1+λ_{12})^2                                            p λ_{12} / [(λ_1+λ_{12})(λ_2+λ_{12})(λ_1+λ_{12}+λ_2 p)] ]
        [ p λ_{12} / [(λ_1+λ_{12})(λ_2+λ_{12})(λ_2+λ_{12}+λ_1 p)]    p/(λ_2+λ_{12})^2                                         ].

6.2.3 Estimation of the Parameters

In this section we consider the problem of estimating the parameters p, λ_1, λ_2 and λ_{12}. Let (X_0, Y_0), (X_1, Y_1), ..., (X_n, Y_n) be a sample of size n + 1. Consider first the estimation of the parameter p. Easy calculations show that P(X_{n+1} > X_n) = (2 − p)^{−1} and P(Y_{n+1} > Y_n) = (2 − p)^{−1}. Let U_i = I(X_{i+1} > X_i) and V_i = I(Y_{i+1} > Y_i). Since the process {(X_n, Y_n)} is ergodic, the arithmetic means

    Ū_n = (1/n) Σ_{i=0}^{n−1} U_i   and   V̄_n = (1/n) Σ_{i=0}^{n−1} V_i

are strongly consistent estimators of (2 − p)^{−1}. This implies that the estimators p̂_{1n} = 2 − (Ū_n)^{−1} and p̂_{2n} = 2 − (V̄_n)^{−1} are strongly consistent estimators of p.

Now consider the estimation of the parameters λ_1, λ_2 and λ_{12}. Since E(X_n) = (λ_1+λ_{12})^{−1}, E(Y_n) = (λ_2+λ_{12})^{−1} and

    E(X_n Y_n) = [1/(λ_1+λ_2+λ_{12})] [1/(λ_1+λ_{12}) + 1/(λ_2+λ_{12})],

we can take the estimates of λ_1, λ_2 and λ_{12} as the solutions of the system of equations

    (λ_1 + λ_{12})^{−1} = (1/(n+1)) Σ_{i=0}^{n} X_i,
    (λ_2 + λ_{12})^{−1} = (1/(n+1)) Σ_{i=0}^{n} Y_i,
    [1/(λ_1+λ_2+λ_{12})] [1/(λ_1+λ_{12}) + 1/(λ_2+λ_{12})] = (1/(n+1)) Σ_{i=0}^{n} X_i Y_i.
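This system can in fact be solved in closed form: writing m_1, m_2 and m_{12} for the three sample moments, the third equation gives λ_1+λ_2+λ_{12} = (m_1+m_2)/m_{12}, and hence λ_{12} = 1/m_1 + 1/m_2 − (m_1+m_2)/m_{12}. A sketch (the helper name is illustrative):

```python
def mobve_moment_estimates(m1, m2, m12):
    """Solve 1/(l1+l12) = m1, 1/(l2+l12) = m2 and
    (1/(l1+l2+l12)) * (1/(l1+l12) + 1/(l2+l12)) = m12 in closed form."""
    total = (m1 + m2) / m12             # l1 + l2 + l12
    l12 = 1.0 / m1 + 1.0 / m2 - total   # (l1+l12) + (l2+l12) - (l1+l2+l12)
    return 1.0 / m1 - l12, 1.0 / m2 - l12, l12

# sanity check with the exact theoretical moments for (l1, l2, l12) = (0.5, 1, 1.5)
m1, m2 = 1 / 2.0, 1 / 2.5
m12 = (m1 + m2) / 3.0
l1, l2, l12 = mobve_moment_estimates(m1, m2, m12)  # ~ (0.5, 1.0, 1.5)
```

In practice m_1, m_2, m_{12} are replaced by the sample means of X_i, Y_i and X_i Y_i, giving strongly consistent moment estimators.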

Figure 6.1: Simulated sample paths for various values of n and p when λ_1 = 0.5, λ_2 = 1, λ_{12} = 1.5

6.2.4 Sample Path Properties of the MOBVE Process

To study the behaviour of the process, we simulate sample paths for various values of n and p; in particular we take λ_1 = 0.5, λ_2 = 1 and λ_{12} = 1.5, and the results are given in Figure 6.1. In Figs. 1a-1c we take n = 200 and p = 0.6; in Figs. 2a-2c, n = 300 and p = 0.8; and in Figs. 3a-3c, n = 400 and p = 0.9.

6.2.5 Determination of Reliability

Mukherjee and Maiti (2005) discussed the determination of reliability with respect to the MOBVE distribution. They consider the survival function given in (6.2.1) and take the strength Y to be a nonnegative random variable following the exponential distribution with survival function

    Ḡ(y) = e^{−y},   0 < y < ∞.

When the stress components are in series, the reliability is given by

    R = (λ_1+λ_2+λ_{12}) / (1 + λ_1+λ_2+λ_{12}).

When the stress components are in parallel, the reliability is given by

    R = 1 − 1/(λ_1+λ_{12}+1) − 1/(λ_2+λ_{12}+1) + 1/(λ_1+λ_2+λ_{12}+1).

6.3 Extended Marshall-Olkin bivariate exponential (EMOBVE) model

Here we construct a new probability model by applying the technique of Marshall and Olkin (1997). If F̄(x_1, x_2) is the MOBVE survival function of a bivariate random vector (X_1, X_2), then the member of the Marshall-Olkin family with the additional parameter α is called the extended Marshall-Olkin bivariate exponential distribution, with survival function

    Ḡ(x_1, x_2) = α e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)} / [1 − (1 − α) e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}],   α > 0, x_1, x_2 > 0.

Theorem 6.3.1. Let {(X_{1n}, X_{2n}), n ≥ 1} be a sequence of i.i.d. random vectors with common survival function F̄(x_1, x_2). Let N be a random variable with a geometric(α) distribution such that N and (X_{1i}, X_{2i}) are independent for all i ≥ 1, and define U_N = min_{1≤i≤N} X_{1i} and V_N = min_{1≤i≤N} X_{2i}. Then the random vector (U_N, V_N) is distributed as EMOBVE(α, λ_1, λ_2, λ_{12}) if and only if (X_{1i}, X_{2i}) has the MOBVE(λ_1, λ_2, λ_{12}) distribution.

Proof. Let S̄(x_1, x_2) be the survival function of (U_N, V_N). By definition,

    S̄(x_1, x_2) = P(U_N > x_1, V_N > x_2) = Σ_{n=1}^{∞} [F̄(x_1, x_2)]^n (1 − α)^{n−1} α
                = α F̄(x_1, x_2) / [1 − (1 − α) F̄(x_1, x_2)]
                = α e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)} / [1 − (1 − α) e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}],

which is the EMOBVE(α, λ_1, λ_2, λ_{12}) survival function. The converse follows easily.
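The geometric-minimum representation in Theorem 6.3.1 can be checked numerically by comparing a truncation of the geometric sum with the closed form. The sketch below is illustrative (function names, the evaluation point and α = 0.3 are assumptions; the sum interprets (1 − α)^{n−1} α as geometric probabilities, so it takes 0 < α ≤ 1):

```python
import math

def mobve_sf(x1, x2, l1=0.5, l2=1.0, l12=1.5):
    """MOBVE survival function; default parameter values are illustrative."""
    return math.exp(-l1 * x1 - l2 * x2 - l12 * max(x1, x2))

def emobve_sf(x1, x2, alpha):
    """Closed-form EMOBVE survival function from Theorem 6.3.1."""
    f = mobve_sf(x1, x2)
    return alpha * f / (1 - (1 - alpha) * f)

# geometric-minimum representation: S(x1,x2) = sum_{n>=1} F^n (1-alpha)^{n-1} alpha
alpha, x1, x2 = 0.3, 0.4, 0.7
f = mobve_sf(x1, x2)
series = sum(f ** n * (1 - alpha) ** (n - 1) * alpha for n in range(1, 200))
closed = emobve_sf(x1, x2, alpha)  # series and closed agree to machine precision
```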

Model 1. Consider a bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by

    X_{1n} = min(p^{−1} X_{1,n−1}, (1 − p)^{−1} ε_{1n}),
    X_{2n} = min(p^{−1} X_{2,n−1}, (1 − p)^{−1} ε_{2n}),

where {(ε_{1n}, ε_{2n})} is a sequence of i.i.d. nonnegative random vectors, (X_{10}, X_{20}) and {(ε_{1i}, ε_{2i}), i ≥ 1} are independent, and 0 < p < 1.

Theorem 6.3.2. A bivariate minification process {(X_{1n}, X_{2n}), n ≥ 0} given by Model 1 has the extended Marshall-Olkin bivariate exponential (EMOBVE) stationary marginal distribution if and only if {(ε_{1n}, ε_{2n})} has the Marshall-Olkin bivariate exponential (MOBVE) distribution.

Proof. Let F̄(x_1, x_2) be the survival function of (X_{1n}, X_{2n}) and Ḡ(x_1, x_2) be the survival function of (ε_{1n}, ε_{2n}), so that

    F̄(x_1, x_2) = P[X_{1n} > x_1, X_{2n} > x_2] = F̄(p x_1, p x_2) Ḡ((1 − p) x_1, (1 − p) x_2).

Hence we have

    Ḡ((1 − p) x_1, (1 − p) x_2) = F̄(x_1, x_2) / F̄(p x_1, p x_2) = e^{−λ_1 (1 − p) x_1 − λ_2 (1 − p) x_2 − λ_{12}(1 − p) max(x_1, x_2)},

which implies that {(ε_{1n}, ε_{2n})} has the MOBVE(λ_1, λ_2, λ_{12}) distribution. Conversely, assume that {(ε_{1n}, ε_{2n})} has the MOBVE(λ_1, λ_2, λ_{12}) distribution; then

    F̄(x_1, x_2) = e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}.

Model 2. Consider a bivariate autoregressive minification process {(Y_{1n}, Y_{2n})} given by

    (Y_{1n}, Y_{2n}) = (ε_{1n}, ε_{2n})                                  w.p. α,
                       (min(Y_{1,n−1}, ε_{1n}), min(Y_{2,n−1}, ε_{2n}))  w.p. 1 − α,

where 0 ≤ α ≤ 1.

Theorem 6.3.3. A minification process {(Y_{1n}, Y_{2n}), n ≥ 0} given by Model 2 has the EMOBVE(α, λ_1, λ_2, λ_{12}) stationary marginal distribution if and only if (ε_{1n}, ε_{2n}) has the MOBVE(λ_1, λ_2, λ_{12}) distribution and (Y_{10}, Y_{20}) has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution.

Proof. Let Ḡ_n(x_1, x_2) and F̄(x_1, x_2) be the survival functions of (Y_{1n}, Y_{2n}) and (ε_{1n}, ε_{2n}), respectively. From the definition of the process we have

    Ḡ_n(x_1, x_2) = P(Y_{1n} > x_1, Y_{2n} > x_2) = α F̄(x_1, x_2) + (1 − α) Ḡ_{n−1}(x_1, x_2) F̄(x_1, x_2).   (6.3.1)

Under stationarity,

    Ḡ(x_1, x_2) = [α + (1 − α) Ḡ(x_1, x_2)] F̄(x_1, x_2).   (6.3.2)

Replacing Ḡ with the survival function of the random vector with EMOBVE(α, λ_1, λ_2, λ_{12}) distribution and solving the resulting equation for F̄(x_1, x_2), we obtain

    F̄(x_1, x_2) = e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}.   (6.3.3)

Hence (ε_{1n}, ε_{2n}) follows the MOBVE(λ_1, λ_2, λ_{12}) distribution. Conversely, using (6.3.1) for n = 1, we can show that

    Ḡ_1(x_1, x_2) = α e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)} / [1 − (1 − α) e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}],

which is the survival function of the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution; hence (Y_{11}, Y_{21}) has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution. Now assume that (Y_{1,n−1}, Y_{2,n−1}) =_d EMOBVE(α, λ_1, λ_2, λ_{12}).

Then

    Ḡ_n(x_1, x_2) = α F̄(x_1, x_2) + (1 − α) Ḡ_{n−1}(x_1, x_2) F̄(x_1, x_2)
                  = [α + (1 − α) Ḡ_{n−1}(x_1, x_2)] F̄(x_1, x_2)
                  = α e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)} / [1 − (1 − α) e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}].

Thus (Y_{1n}, Y_{2n}) follows the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution, and hence by induction (Y_{1n}, Y_{2n}) has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution for every n ≥ 0. This establishes stationarity.

Corollary 6.3.1. If (Y_{10}, Y_{20}) has an arbitrary bivariate distribution and {(ε_{1n}, ε_{2n})} has the MOBVE(λ_1, λ_2, λ_{12}) distribution, then {(Y_{1n}, Y_{2n})} has the EMOBVE(α, λ_1, λ_2, λ_{12}) distribution asymptotically.

Proof. Using equation (6.3.1) repeatedly, we find

    Ḡ_n(x_1, x_2) = α F̄(x_1, x_2) + (1 − α) F̄(x_1, x_2) Ḡ_{n−1}(x_1, x_2)
      = α F̄(x_1, x_2) [1 + (1 − α) F̄(x_1, x_2)] + (1 − α)^2 F̄^2(x_1, x_2) Ḡ_{n−2}(x_1, x_2)
      = α F̄(x_1, x_2) Σ_{j=0}^{n−1} (1 − α)^j F̄^j(x_1, x_2) + (1 − α)^n F̄^n(x_1, x_2) Ḡ_0(x_1, x_2)
      = α F̄(x_1, x_2) [1 − (1 − α)^n F̄^n(x_1, x_2)] / [1 − (1 − α) F̄(x_1, x_2)] + (1 − α)^n F̄^n(x_1, x_2) Ḡ_0(x_1, x_2).

Taking the limit as n → ∞, we have

    lim_{n→∞} Ḡ_n(x_1, x_2) = α F̄(x_1, x_2) / [1 − (1 − α) F̄(x_1, x_2)]
      = α e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)} / [1 − (1 − α) e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}].

Model 3. Consider a bivariate autoregressive minification process {(X_n, Y_n)} given by

    (X_n, Y_n) = (ε_n, η_n)                              w.p. p,
                 (min(X_{n−1}, ε_n), min(Y_{n−1}, η_n))  w.p. 1 − p.

Theorem 6.3.4. A bivariate autoregressive minification process {(X_n, Y_n)} given by Model 3 has the EMOBVE stationary marginal distribution if and only if (ε_n, η_n) has the MOBVE distribution.

Proof. Let Ḡ(x, y) and F̄(x, y) be the survival functions of (X_n, Y_n) and (ε_n, η_n), respectively. Under stationarity,

    Ḡ(x, y) = [p + (1 − p) Ḡ(x, y)] F̄(x, y).

If we take F̄(x, y) = e^{−λ_1 x − λ_2 y − λ_{12} max(x, y)}, we get

    Ḡ(x, y) = p e^{−λ_1 x − λ_2 y − λ_{12} max(x, y)} / [1 − (1 − p) e^{−λ_1 x − λ_2 y − λ_{12} max(x, y)}].

Conversely, we have

    F̄(x, y) = Ḡ(x, y) / [p + (1 − p) Ḡ(x, y)],

so that if Ḡ is the EMOBVE survival function, then

    F̄(x, y) = e^{−λ_1 x − λ_2 y − λ_{12} max(x, y)},

which is the survival function of the MOBVE distribution.

6.3.1 Determination of Reliability

Let X_1 and X_2 be two nonnegative random variables jointly following the EMOBVE distribution with survival function

    Ḡ(x_1, x_2) = α e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)} / [1 − (1 − α) e^{−λ_1 x_1 − λ_2 x_2 − λ_{12} max(x_1, x_2)}],   α > 0, x_1, x_2 > 0.

Also let the strength Y be a nonnegative random variable following the exponential distribution with survival function

    Ḡ_Y(y) = e^{−y},   0 < y < ∞,

and probability density function g(y) = e^{−y}, 0 < y < ∞. Our objective is to derive the stress-strength reliability R when the stress X has two components X_1 and X_2, and the strength Y is independently distributed.

Case (i): stress components in series. We define

    U = min(X_1, X_2).

The survival function of U is given by

    Ḡ_U(u) = α e^{−(λ_1+λ_2+λ_{12}) u} / [1 − (1 − α) e^{−(λ_1+λ_2+λ_{12}) u}].

The reliability can then be obtained as

    R = P(U < Y) = ∫_0^∞ {∫_0^y f_U(u) du} g(y) dy
      = ∫_0^∞ [e^{−y} − α e^{−(1+λ_1+λ_2+λ_{12}) y} / (1 − (1 − α) e^{−(λ_1+λ_2+λ_{12}) y})] dy.

From Table 6.1 it is clear that the reliability decreases as α increases. Using this, systems with optimal reliability values can be designed.

Case (ii): stress components in parallel. In this case we consider

    V = max(X_1, X_2).
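The series-case integral above has no simple closed form for general α, but it is easy to evaluate numerically. A minimal sketch using a simple trapezoidal rule (the function name and grid choices are illustrative; parameters follow Table 6.1, λ_1 = 1, λ_2 = 2, λ_{12} = 3):

```python
import math

def series_reliability(alpha, l1=1.0, l2=2.0, l12=3.0, h=1e-4, ymax=40.0):
    """R = P(U < Y) with U the EMOBVE series stress and Y ~ Exp(1), via
    R = integral of e^{-y} - a e^{-(1+ls)y} / (1 - (1-a) e^{-ls y}) over y >= 0."""
    ls = l1 + l2 + l12
    def integrand(y):
        return math.exp(-y) - alpha * math.exp(-(1 + ls) * y) / (1 - (1 - alpha) * math.exp(-ls * y))
    n = int(ymax / h)
    total = 0.5 * (integrand(0.0) + integrand(ymax))
    total += sum(integrand(i * h) for i in range(1, n))
    return total * h

# alpha = 1 reduces to the MOBVE case: R = ls/(1+ls) = 6/7
print(round(series_reliability(1.0), 4))  # ~0.8571, matching Table 6.1
```

The same quadrature applied at α = 0.5 reproduces the tabulated value 0.898504 to several decimals.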

Table 6.1: Reliability R under the extended Marshall-Olkin bivariate exponential model when the stress components are in series, with λ_1 = 1, λ_2 = 2 and λ_{12} = 3

    α        R
    0.005    0.995763
    0.500    0.898504
    1        0.857142
    2        0.807543
    3        0.775411
    5        0.732582
    10       0.672124
    50       0.534067
    100      0.479660
    2000     0.200940

The cumulative distribution function of V is given by

    F_V(x) = 1 − α e^{−(λ_1+λ_{12}) x} / [1 − (1 − α) e^{−(λ_1+λ_{12}) x}]
               − α e^{−(λ_2+λ_{12}) x} / [1 − (1 − α) e^{−(λ_2+λ_{12}) x}]
               + α e^{−(λ_1+λ_2+λ_{12}) x} / [1 − (1 − α) e^{−(λ_1+λ_2+λ_{12}) x}].

Then

    R = P(V < Y) = ∫_0^∞ {∫_0^y f_V(v) dv} g(y) dy = ∫_0^∞ F_V(y) e^{−y} dy.

From Table 6.2 it is clear that the reliability decreases as α increases. Using this, systems with optimal reliability values can be designed.

Table 6.2: Reliability under the extended Marshall-Olkin bivariate exponential model when the stress components are in parallel, with λ_1 = 1, λ_2 = 2 and λ_{12} = 3

    α       R
    0.05    0.960070
    0.50    0.838880
    2       0.703029
    3       0.656812
    5       0.596680
    20      0.436225
    100     0.280610
    1000    0.133670

6.4 Multivariate Extensions

We now extend the results to the multivariate case. For this, consider the Marshall-Olkin fatal shock model in which k components are subject to failure, and let t_i denote the failure time of the i-th component. The joint distribution of the lifetimes (t_1, ..., t_k) is the Marshall-Olkin multivariate exponential distribution, with joint survival function

    P(t_1 > x_1, ..., t_k > x_k) = exp(−Σ_{i=1}^{k} λ_i x_i − Σ_{i<j} λ_{ij} max{x_i, x_j} − ⋯ − λ_{1...k} max{x_1, ..., x_k}).

The multivariate extended Marshall-Olkin exponential survival function can then be written as

    Ḡ(x_1, ..., x_k) = α exp(−Σ_{i=1}^{k} λ_i x_i − Σ_{i<j} λ_{ij} max{x_i, x_j} − ⋯ − λ_{1...k} max{x_1, ..., x_k})
                       / [1 − (1 − α) exp(−Σ_{i=1}^{k} λ_i x_i − Σ_{i<j} λ_{ij} max{x_i, x_j} − ⋯ − λ_{1...k} max{x_1, ..., x_k})].

Model 1 can be extended to the p-variate case as follows. Let X_n = (X_{1n}, ..., X_{pn}), where

    X_{jn} = min(p^{−1} X_{j,n−1}, (1 − p)^{−1} ε_{jn}),   0 < p < 1,

and {(ε_{1n}, ..., ε_{pn})} is a sequence of i.i.d. nonnegative random vectors. Then X_n has the extended multivariate Marshall-Olkin exponential distribution as its marginal. Model 2 can also be extended to the p-variate case.

Let

    Y_n = (Y_{1n}, ..., Y_{pn}) = (ε_{1n}, ..., ε_{pn})                                  w.p. α,
                                  (min(Y_{1,n−1}, ε_{1n}), ..., min(Y_{p,n−1}, ε_{pn}))  w.p. 1 − α,

where 0 < α < 1. Then Y_n has the extended multivariate Marshall-Olkin exponential stationary marginal distribution if and only if (ε_{1n}, ..., ε_{pn}) has the multivariate Marshall-Olkin exponential distribution.

6.5 Conclusions

The extended Marshall-Olkin bivariate exponential distribution is introduced and its properties are studied. Expressions for the stress-strength reliability of a two-component system are derived, and the reliability R is computed for various parameter combinations. From the tables it is clear that the reliability decreases as α increases; using this, systems with optimal reliability values can be designed. Multivariate extensions are given to model multicomponent systems. Three different forms of minification processes are introduced, and necessary and sufficient conditions for stationarity are established.

References

Ashour, S.K., Amin, E.A., Muhammed, H.Z. (2009). Moment generating function of the bivariate generalized exponential distribution. Applied Mathematical Sciences, 3(59), 2911-2918.

Gaver, D.P., Lewis, P.A.W. (1980). First order autoregressive gamma sequences and point processes. Advances in Applied Probability, 12, 727-745.

Hanagal, D.D. (1995). Testing reliability in a bivariate exponential stress-strength model. Journal of the Indian Statistical Association, 33, 41-45.

Jose, K.K., Ancy Joseph, Ristic, M.M. (2011). Marshall-Olkin Weibull distributions and minification processes. Statistical Papers, 52, 789-798.

Marshall, A.W., Olkin, I. (1967). A multivariate exponential distribution. Journal of the American Statistical Association, 62, 30-44.

Marshall, A.W., Olkin, I. (1997). A new method for adding a parameter to a family of distributions with application to the exponential and Weibull families. Biometrika, 84(3), 641-652.

Mokhlis, N.M. (2006). Reliability of a stress-strength model with a bivariate exponential distribution. Journal of the Egyptian Mathematical Society, 14(1), 69-78.

Mu, J., Wang, Y. (2010). Multivariate generalized exponential distribution. Journal of Dynamical Systems and Geometric Theories, 8(2), 189-199.

Mukherjee, S.P., Maiti, S.S. (2005). Stress-strength reliability under two-component stress system. Journal of Statistical Theory and Applications, 341-347.

Ristic, M.M., Popovic, B.C., Nastic, A., Dordevic, M. (2008). Bivariate Marshall-Olkin exponential minification processes. Filomat, 22(1), 67-75.