The Kumaraswamy Inverse Weibull Poisson Distribution With Applications

Indiana University of Pennsylvania Knowledge Repository @ IUP Theses and Dissertations (All) The Kumaraswamy Inverse Weibull Poisson Distribution With Applications Walter T. Bera Indiana University of Pennsylvania Recommended Citation: Bera, Walter T., "The Kumaraswamy Inverse Weibull Poisson Distribution With Applications" (2015). Theses and Dissertations (All). This Thesis is brought to you for free and open access by Knowledge Repository @ IUP. It has been accepted for inclusion in Theses and Dissertations (All) by an authorized administrator of Knowledge Repository @ IUP. For more information, please contact cclouser@iup.edu, sara.parme@iup.edu.

THE KUMARASWAMY INVERSE WEIBULL POISSON DISTRIBUTION WITH APPLICATIONS A Thesis Submitted to the School of Graduate Studies and Research in Partial Fulfillment of the Requirements for the Degree Master of Science Walter T. Bera Indiana University of Pennsylvania August 2015

Indiana University of Pennsylvania School of Graduate Studies and Research Department of Mathematics We hereby approve the thesis of Walter T. Bera, Candidate for the degree of Master of Science. Mavis Pararai, Ph.D., Associate Professor of Mathematics; Christoph E. Maier, Ph.D., Associate Professor of Mathematics; Russell Stocker, Ph.D., Assistant Professor of Mathematics. ACCEPTED: Randy L. Martin, Ph.D., Dean, School of Graduate Studies and Research

Title: The Kumaraswamy Inverse Weibull Poisson Distribution with Applications Author: Walter T. Bera Thesis Chair: Dr. Mavis Pararai Thesis Committee Members: Dr. Russell Stocker, Dr. Christoph E. Maier The aim of this thesis is to propose a new distribution called the Kumaraswamy inverse Weibull Poisson (KIWP) distribution. The distributional properties, including the hazard function, reverse hazard function, survival function, quantile function, moments, distributions of order statistics, mean deviations, Lorenz and Bonferroni curves, and Fisher information, are presented. The maximum likelihood method is used to estimate the model parameters of this new distribution. Special cases of the KIWP distribution, including the inverse Weibull Poisson (IWP), Kumaraswamy Frechet Poisson (KFP), Kumaraswamy inverse exponential Poisson (KIEP) and Kumaraswamy inverse Rayleigh Poisson (KIRP) distributions, are presented. A Monte Carlo simulation study is presented to exhibit the performance and accuracy of the maximum likelihood estimates of the KIWP model parameters. Real data examples are used to show the usefulness of the proposed model.

ACKNOWLEDGMENTS I owe my deepest gratitude to my thesis adviser, Dr. Mavis Pararai, for her support and guidance during the completion of my thesis. I would also like to thank my committee members, Dr. Russell Stocker and Dr. Christoph Maier, for their investment of time in providing guidance and direction. I am thankful to Dr. John Chrispell for assisting me with the formatting of my document and also to my colleague Gayan Warahena Liyanage, who taught me a lot. Finally, I am grateful to my family for their unwavering support and encouragement.

TABLE OF CONTENTS
Chapter 1: INTRODUCTION (Introduction; Thesis Outline)
Chapter 2: THE KUMARASWAMY INVERSE WEIBULL POISSON DISTRIBUTION (General Class of Distribution; Expansion of the Density Function; Monotonicity Properties; Survival, Hazard and Reverse Hazard Functions; Moments, Moment Generating Function and Conditional Moments; Quantile Function; Moments; Moment Generating Function; Conditional Moments; Order Statistics; Sub-Models of the KIWP Distribution; Mean and Median Deviations; Bonferroni and Lorenz Curves)
Chapter 3: MEASURES OF UNCERTAINTY (Rényi Entropy; Shannon Entropy; Reliability; Maximum Likelihood Estimation; Fisher Information Matrix; Asymptotic Confidence Intervals; Concluding Remarks)
Chapter 4: MONTE CARLO SIMULATION STUDY (Monte Carlo Simulation Study; Concluding Remarks)
Chapter 5: APPLICATIONS TO LIFETIME DATA (Cancer Patients Data Set; Guinea Pigs Data Set; Yarn Specimen Data Set; Glass Fibers Data Set; Concluding Remarks)

Chapter 6: CONCLUSIONS AND SUGGESTIONS FOR FURTHER STUDY (Suggestions for Further Study)
REFERENCES
APPENDICES (R Algorithms; Code for Simulation Study)

LIST OF TABLES
Table 1. Moments of the KIWP Distribution for α = 1.5, β = 6.5 and θ = 1.5
Table 2. Moments of the KIWP Distribution for a = 1.0 and b = 1.0
Table 3. Monte Carlo Simulation Results: Average Bias, RMSE and AW
Table 4. Remission Times (in months) of Bladder Cancer Patients
Table 5. Estimates of Models for Cancer Patients Data
Table 6. Survival Times (in days) of Guinea Pigs Infected with Virulent Tubercle Bacilli
Table 7. Estimates of Models for Guinea Pigs Data
Table 8. Number of Cycles of Failure for Yarn Specimens
Table 9. Estimates of Models for Yarn Specimens Data
Table 10. Strengths of Glass Fibers Measured by the National Physical Laboratory
Table 11. Estimates of Models for Glass Fibers Data

LIST OF FIGURES
Figure 1. Plot of the cdf for different parameters
Figure 2. Plot of KIWP density for different parameters
Figure 3. Plot of the hazard function for different parameter values
Figure 4. Fitted densities of cancer patients data
Figure 5. Probability plots of cancer patients data
Figure 6. Fitted densities of guinea pigs data
Figure 7. Probability plots of guinea pigs data
Figure 8. Fitted densities of yarn specimens data
Figure 9. Probability plots of yarn specimen data
Figure 10. Fitted densities of glass fibers data
Figure 11. Probability plots of glass fibers data

CHAPTER 1 INTRODUCTION

Introduction

Statistical lifetime distributions are employed broadly in data modeling. They are greatly utilized in areas such as reliability engineering, survival analysis, social sciences and a host of other applications. Of particular interest are applications of lifetime distributions in reliability engineering, including the measurement of survival time of electrical components. In medicine, an interesting application of lifetime distributions is in the calculation of survival times of patients post surgery. Khan et al. (2008) defined reliability as the probability that a system or process performs its prescribed duty without failure for a given time, given that it is operated correctly in a specified environment. In social sciences, an interesting application is modelling the lifetime of marriages (Almalki and Nadarajah, 2014). One of the most important distributions used in modeling lifetime data is the Weibull distribution, whose cumulative distribution function (cdf) and probability density function (pdf) are respectively given by

F(x) = 1 - exp(-αx^β) and f(x) = αβx^(β-1) exp(-αx^β),

where x > 0, α > 0, and β > 0. Notice that α is a scale parameter and β is a shape parameter. Liu (1997) explained the use of the Weibull distribution as being an appropriate model of failure in instances where an item consists of numerous components, each component has an identical failure time distribution, and the item fails when the weakest part fails. If X is a random variable following a Weibull distribution with parameters α, β > 0, under the

transformation 1/X, the inverse Weibull distribution proposed by Keller and Kamath (1982) is obtained. The cdf and pdf of the inverse Weibull distribution are respectively given by

F(x; α, β) = exp(-αx^(-β)), x > 0, α > 0, β > 0, (1.1)

f(x; α, β) = αβx^(-β-1) exp(-αx^(-β)), x > 0, α > 0, β > 0. (1.2)

The hazard function of the inverse Weibull distribution is given by

h(x; α, β) = αβx^(-β-1) exp(-αx^(-β)) / [1 - exp(-αx^(-β))].

The inverse Weibull distribution has proved to be very useful in the modeling of lifetime data. It has been compounded with other continuous distributions to produce new lifetime distributions. Pararai et al. (2014) proposed the gamma inverse Weibull distribution by compounding the inverse Weibull density function with the gamma generator proposed by Ristić and Balakrishnan (2012). The gamma inverse Weibull (GIW) distribution is a three-parameter distribution with parameters λ, β and δ. Some of the submodels of the GIW distribution include the inverse Weibull distribution, the gamma Frechet (GF) distribution and the inverse Rayleigh (IR) distribution. Its hazard rate function assumes unimodal and upside down bathtub shapes. The cdf and pdf of the GIW distribution are respectively given by

F_GIW(x) = 1 - γ(δ, -log(exp[-λx^(-β)]))/Γ(δ) = 1 - γ(δ, λx^(-β))/Γ(δ)

and

g_GIW(x) = (β/(x Γ(δ))) [λx^(-β)]^δ exp[-λx^(-β)],

where λ > 0, β > 0, and δ > 0. The GIW distribution was applied to a data set from Bjerkedal et al. (1960) on the survival times in days of guinea pigs injected with different doses of tubercle bacilli. The GIW distribution was also applied to a data set from Lawless (1982) on the number of millions of revolutions before failure of each of 23 ball bearings in a life testing experiment. The GIW distribution was found to be superior to all the models it was compared against.

The Kumaraswamy inverse Weibull (KIW) distribution was proposed and studied by Shahbaz et al. (2012). Its cdf and pdf are respectively given by

F_KIW(x) = 1 - [1 - exp(-aαx^(-β))]^b

and

f_KIW(x) = abαβx^(-β-1) exp(-αx^(-β)) [exp(-αx^(-β))]^(a-1) [1 - {exp(-αx^(-β))}^a]^(b-1).

The KIW distribution was applied to survival time data and salary data by Shahbaz et al. (2012). Cordeiro and de Castro (2011) proposed a generalized class of distributions called the Kum-G distribution, whose cdf has the form

F_Kum-G(x) = 1 - [1 - G(x)^a]^b,

and the corresponding pdf of this generalized class of distributions is

f_Kum-G(x) = a b g(x) G(x)^(a-1) [1 - G(x)^a]^(b-1).

Shahbaz et al. (2012) substituted the cumulative distribution function of the inverse Weibull distribution into the probability density function of the Kum-G distribution to

obtain the Kumaraswamy inverse Weibull distribution. The beta inverse Weibull (BIW) distribution was proposed and studied by Khan (2010). Hanook et al. (2013) further investigated the properties of the beta inverse Weibull distribution, which is defined by

G(x) = B_{F(x)}(a, b) / B(a, b),

where B(a, b) = Γ(a)Γ(b)/Γ(a + b) and F(x) is the baseline cdf. The cdf of the BIW distribution is given by

F_BIW(x) = (1/B(a, b)) ∫_0^{F_IW(x)} t^(a-1) (1 - t)^(b-1) dt.

Another interesting modification of the inverse Weibull distribution is the length-biased inverse Weibull (LBIW) distribution proposed by Kersey and Oluyede (2013). Consider the weight function ω(x) = x and the inverse Weibull distribution with the pdf given in equation (1.2). The general form of the LBIW pdf is given by

g_ω(x; α, β) = x f(x; α, β) / μ_F,

where x ≥ 0, α > 0, β > 1 and μ_F is the mean of the inverse Weibull distribution. The corresponding cdf of the LBIW distribution is given by

G_LBIW(x) = (βα^(1-β) / Γ(1 - 1/β)) ∫_0^x t^(-β) exp(-(αt)^(-β)) dt.

The hazard function of the LBIW distribution is given by

h(x; α, β) = x^(-β) exp(-(αx)^(-β)) / ∫_x^∞ t^(-β) exp(-(αt)^(-β)) dt.

Khan et al. (2008) studied the inverse Weibull distribution and ascertained its flexibility in modelling reliability data. They further discovered that the inverse Weibull (IW) distribution approaches different distributions when its shape parameter changes. New families of distributions have been formulated by compounding the Poisson distribution with other continuous distributions to provide more flexibility for lifetime data. The zero-truncated Poisson distribution used for this purpose is the Poisson distribution conditioned on taking only positive values. Jones (2009) advocated for the Kumaraswamy distribution on the grounds that, firstly, it has closed-form distribution and quantile functions; secondly, its formulas for the moments are simple; and thirdly, its normalizing constant is simple. Jones further contrasted the Kumaraswamy distribution with the beta distribution. The motivation of this study arises from the advantages observed in the Kumaraswamy inverse Weibull distribution in modelling lifetime data, as well as the flexibility of its hazard function, which takes numerous shapes such as bathtub, increasing and decreasing. We propose and study a new class of distribution which inherits these important and desirable properties and contains several submodels with a large number of shapes.

Thesis Outline

The outline of this thesis is as follows. Chapter 2 introduces the Kumaraswamy inverse Weibull Poisson (KIWP) distribution and derives its properties, including the expansion of the density, the hazard function, monotonicity properties, shapes, moments, order statistics, sub-models, mean deviations, and Bonferroni and Lorenz curves. Chapter 3 presents measures of uncertainty such as Rényi entropy and Shannon entropy, reliability, Fisher information, and maximum likelihood estimation of the parameters of the KIWP and related distributions. Chapter 4 presents a Monte Carlo simulation study, and Chapter 5 discusses real data examples to illustrate the applicability of this class of models. Chapter 6 concludes and suggests directions for further study.

CHAPTER 2 THE KUMARASWAMY INVERSE WEIBULL POISSON DISTRIBUTION

General Class of Distribution

The probability density function (pdf) and corresponding cumulative distribution function (cdf) of the two-parameter inverse Weibull distribution are respectively given by

f(x; α, β) = αβx^(-β-1) exp(-αx^(-β)), x > 0, α > 0, β > 0, (2.1)

and

F(x; α, β) = exp(-αx^(-β)), x > 0, α > 0, β > 0. (2.2)

Suppose that the random variable X has the inverse Weibull (IW) distribution with pdf and cdf given in equations (2.1) and (2.2). Given N, let X_1, ..., X_N be independent and identically distributed random variables from the IW distribution, and let N be distributed according to the zero-truncated Poisson distribution with probability mass function

P(N = n) = θ^n e^(-θ) / [n!(1 - e^(-θ))], n = 1, 2, ..., θ > 0.

Let X = max(X_1, ..., X_N). Then the conditional cdf of X given N = n is

G_{X|N=n}(x) = exp(-nαx^(-β)), x > 0, α > 0, β > 0,

which is an inverse Weibull distribution with scale parameter nα. The inverse Weibull Poisson distribution, denoted by IWP(α, β, θ), is defined by the marginal cdf of X, that is,

G_IWP(x; α, β, θ) = (1 - exp[θe^(-αx^(-β))]) / (1 - e^θ), (2.3)

for x > 0, α > 0, β > 0, θ > 0. The graph of the distribution function for various values of the parameters α, β, θ, a and b is given in Figure 1.

Figure 1. Plot of the cdf for different parameters.

The IWP density function is given by

g_IWP(x; α, β, θ) = θαβx^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1), (2.4)

for x > 0, α > 0, β > 0, θ > 0, where α is the scale parameter and β is the shape parameter. The graph of the density function for various values of the parameters α, β, θ, a and b is given in Figure 2.

Figure 2. Plot of KIWP density for different parameters.

Expansion of the Density Function

The expansion of the pdf of the KIWP distribution is presented in this section. For b > 0 a real non-integer, we use the series representation

(1 - z)^(b-1) = Σ_{j=0}^∞ C(b-1, j) (-1)^j z^j, |z| < 1. (2.5)

We thus have

f_KIWP(x) = a b g(x) [G(x)]^(a-1) [1 - G(x)^a]^(b-1) = Σ_{j=0}^∞ C(b-1, j) (-1)^j a b g_IWP(x) [G_IWP(x)]^(aj+a-1),

so that

f_KIWP(x) = Σ_{j=0}^∞ C(b-1, j) (-1)^j abαβθ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1) · [(1 - exp(θe^(-αx^(-β)))) / (1 - e^θ)]^(aj+a-1) (2.6)

= Σ_{j=0}^∞ C(b-1, j) (-1)^(aj+a+j-1) abαβθ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] [1 - exp(θe^(-αx^(-β)))]^(aj+a-1) / (e^θ - 1)^(aj+a)

= Σ_{j=0}^∞ Σ_{k=0}^∞ C(b-1, j) C(aj+a-1, k) (-1)^(aj+a+j+k-1) abαβθ x^(-β-1) e^(-αx^(-β)) exp[θ(k+1)e^(-αx^(-β))] / (e^θ - 1)^(aj+a).

Notice that

e^t = Σ_{m=0}^∞ t^m / m!.

Applying the above identity to the last exponential factor in (2.6) yields

f_KIWP(x) = Σ_{j=0}^∞ Σ_{k=0}^∞ Σ_{m=0}^∞ C(b-1, j) C(aj+a-1, k) (-1)^(aj+a+j+k-1) ab θ^(m+1) (k+1)^m / [(e^θ - 1)^(aj+a) m!] · αβ x^(-β-1) e^(-(m+1)αx^(-β))

= Σ_{j=0}^∞ Σ_{k=0}^∞ Σ_{m=0}^∞ C(b-1, j) C(aj+a-1, k) (-1)^(aj+a+j+k-1) ab θ^(m+1) (k+1)^m / [(e^θ - 1)^(aj+a) (m+1)!] · αβ(m+1) x^(-β-1) e^(-(m+1)αx^(-β))

= Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) g(x; α(m+1), β), (2.7)

where

ω_{j,k,m}(a, b, θ) = C(b-1, j) C(aj+a-1, k) (-1)^(aj+a+j+k-1) ab θ^(m+1) (k+1)^m / [(e^θ - 1)^(aj+a) (m+1)!] (2.8)

are the weights and g(x; α(m+1), β) is the inverse Weibull pdf with scale parameter α(m+1) and shape parameter β. Consequently, the KIWP density can be written as a linear combination of inverse Weibull density functions, and many mathematical properties of the KIWP distribution follow directly from those of the IW distribution.
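Because the cdf and pdf of the KIWP distribution are also available in closed form, they can be evaluated directly; the following minimal R sketch (the function names dkiwp and pkiwp are illustrative and are not taken from the thesis appendix) codes equations (2.3) and (2.4) together with the Kum-G construction, and can be used to reproduce plots such as Figures 1 and 2 or to check expansions such as (2.7) numerically.

# IWP cdf (2.3) enters as G; the Kum-G construction then gives the KIWP cdf and pdf.
pkiwp <- function(x, alpha, beta, theta, a, b) {
  G <- (1 - exp(theta * exp(-alpha * x^(-beta)))) / (1 - exp(theta))
  1 - (1 - G^a)^b
}
dkiwp <- function(x, alpha, beta, theta, a, b) {
  G <- (1 - exp(theta * exp(-alpha * x^(-beta)))) / (1 - exp(theta))
  g <- theta * alpha * beta * x^(-beta - 1) * exp(-alpha * x^(-beta)) *
    exp(theta * exp(-alpha * x^(-beta))) / (exp(theta) - 1)   # IWP pdf (2.4)
  a * b * g * G^(a - 1) * (1 - G^a)^(b - 1)
}
# Example: plot the density for one illustrative parameter setting.
curve(dkiwp(x, alpha = 1.5, beta = 6.5, theta = 1.5, a = 1, b = 1.5), from = 0.5, to = 3)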

Monotonicity Properties

The monotonicity properties of the KIWP distribution are discussed in this section. Let

V(x) = (1 - exp[θe^(-αx^(-β))]) / (1 - e^θ). (2.9)

From equation (2.9) we can rewrite the KIWP pdf as

f_KIWP(x; α, β, θ, a, b) = abαβθ x^(-β-1) e^(-αx^(-β)) [1 - V(x)(1 - e^θ)] / (e^θ - 1) · V(x)^(a-1) [1 - V(x)^a]^(b-1),

for x > 0, α > 0, β > 0, θ > 0, a > 0, b > 0. It follows that

log f_KIWP(x) = log(abαβθ) - (β+1) log x - αx^(-β) + log[1 - V(x) + V(x)e^θ] - log(e^θ - 1) + (a-1) log V(x) + (b-1) log(1 - V(x)^a),

and, substituting V'(x) = dV(x)/dx = θαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1),

d log f_KIWP(x)/dx = -(β+1)/x + αβx^(-β-1) + V'(x)(e^θ - 1) / [1 - V(x) + V(x)e^θ] + (a-1) V'(x)/V(x) - a(b-1) V(x)^(a-1) V'(x) / [1 - V(x)^a].

Taking the second derivative gives

d² log f_KIWP(x)/dx² = (β+1)/x² - αβ(β+1)x^(-β-2) + q(x),

where q(x) collects the terms involving V(x):

q(x) = [V''(x)(e^θ - 1)(1 - V(x) + V(x)e^θ) - (V'(x))²(e^θ - 1)²] / [1 - V(x) + V(x)e^θ]²
+ (a-1)[V''(x)V(x) - (V'(x))²] / V(x)²
- a(b-1){[(a-1)V(x)^(a-2)(V'(x))² + V(x)^(a-1)V''(x)][1 - V(x)^a] + aV(x)^(2a-2)(V'(x))²} / [1 - V(x)^a]².

Since α > 0, β > 0, θ > 0, a > 0 and b > 0, we have

V'(x) = θαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1) > 0

for all x > 0. If x → 0⁺, then exp[θe^(-αx^(-β))] → 1 and V(x) → 0; if x → ∞, then exp[θe^(-αx^(-β))] → e^θ and V(x) → 1. Thus V(x) is monotonically increasing from 0 to 1. Note that, since 0 < V(x) < 1, we have 0 < V(x)^(a-1) < 1 for all a > 1, 0 < 1 - V(x)^a < 1 for all a > 0, and hence 0 < [1 - V(x)^a]^(b-1) < 1 for all a > 1 and b > 1. For α > 0, f_KIWP(x; α, β, θ, a, b) may attain a maximum, a minimum

or a point of inflection according to whether d² log f_KIWP(x)/dx² < 0, d² log f_KIWP(x)/dx² > 0 or d² log f_KIWP(x)/dx² = 0.

Survival, Hazard and Reverse Hazard Functions

The hazard and reverse hazard functions of the KIWP distribution are presented in this section. The survival function of the KIWP distribution is given by

S(x) = 1 - F_KIWP(x) = [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^b.

The hazard and reverse hazard functions of the KIWP distribution are given respectively by

h(x) = f_KIWP(x; α, β, θ, a, b) / [1 - F_KIWP(x; α, β, θ, a, b)]
= abθαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(a-1) · [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^(-1)
= abθαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] (exp[θe^(-αx^(-β))] - 1)^(a-1) / [(e^θ - 1)^a - (exp[θe^(-αx^(-β))] - 1)^a],

and

τ(x) = f_KIWP(x; α, β, θ, a, b) / F_KIWP(x; α, β, θ, a, b)
= abθαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(a-1) [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^(b-1) · {1 - [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^b}^(-1),

for x > 0, α, β > 0, θ > 0, a > 0 and b > 0. The graph of the hazard function for various values of the parameters α, β, θ, a and b is given in Figure 3.

Figure 3. Plot of the hazard function for different parameter values.

The graph of the hazard function for different values of the parameters exhibits various shapes such as monotonically increasing, bathtub, increasing-decreasing, monotone decreasing and upside down bathtub shapes. This is an attractive feature that renders the KIWP distribution suitable for monotonic and non-monotonic hazard behaviors, which are more likely to be encountered in real life situations.
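The hazard shapes in Figure 3 can be explored numerically from the relation h(x) = f_KIWP(x)/[1 - F_KIWP(x)]; a minimal R sketch, reusing the dkiwp and pkiwp functions sketched earlier (the parameter values below are illustrative only, since the settings used in Figure 3 are not reproduced here), is as follows.

# Hazard function of the KIWP distribution, h(x) = f(x) / (1 - F(x)).
hkiwp <- function(x, alpha, beta, theta, a, b) {
  dkiwp(x, alpha, beta, theta, a, b) / (1 - pkiwp(x, alpha, beta, theta, a, b))
}
# Plotting h(x) over a grid for a few settings displays monotone and non-monotone shapes.
curve(hkiwp(x, alpha = 1.0, beta = 1.5, theta = 1.0, a = 0.8, b = 1.2), from = 0.05, to = 5)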

Moments, Moment Generating Function and Conditional Moments

In this section, we present the moments, moment generating function and conditional moments of the KIWP distribution. Moments are necessary and important in any statistical analysis, especially in applications. They can be used to study the most important features and characteristics of a distribution such as central tendency, dispersion, skewness and kurtosis.

Quantile Function

The quantile function of the KIWP distribution is obtained by solving the equation F(x) = u, where 0 < u < 1. We therefore have

1 - [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^b = u.

Solving for the IWP cdf term yields

(1 - exp(θe^(-αx^(-β)))) / (1 - e^θ) = [1 - (1 - u)^(1/b)]^(1/a),

and isolating the term exp(θe^(-αx^(-β))) gives

exp(θe^(-αx^(-β))) = 1 - (1 - e^θ)[1 - (1 - u)^(1/b)]^(1/a).

Taking natural logarithms on both sides and dividing by θ yields

e^(-αx^(-β)) = (1/θ) log{1 - (1 - e^θ)[1 - (1 - u)^(1/b)]^(1/a)}.

Thus,

-αx^(-β) = log[(1/θ) log{1 - (1 - e^θ)(1 - (1 - u)^(1/b))^(1/a)}],

and dividing both sides by -α gives

x^(-β) = -(1/α) log[(1/θ) log{1 - (1 - e^θ)(1 - (1 - u)^(1/b))^(1/a)}].

The quantile function of the KIWP distribution is therefore

x = (-(1/α) log[(1/θ) log{1 - (1 - e^θ)(1 - (1 - u)^(1/b))^(1/a)}])^(-1/β).

Moments

In this section, moments and related measures, including the coefficients of variation, skewness and kurtosis of the KIWP distribution, are presented. A table of values for the mean, standard deviation, coefficient of variation (CV), coefficient of skewness (CS) and coefficient of kurtosis (CK) is also presented. The r-th moment of a random variable X following the KIWP distribution, denoted by E(X^r) = μ_r, is given by

E(X^r) = ∫_0^∞ x^r f_KIWP(x; α, β, θ, a, b) dx = Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) αβ(m+1) ∫_0^∞ x^(r-β-1) e^(-(m+1)αx^(-β)) dx = Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) ∫_0^∞ x^r e^(-(m+1)αx^(-β)) αβ(m+1) x^(-β-1) dx.

By letting u = (m+1)αx^(-β), we have du = -αβ(m+1)x^(-β-1) dx and x = u^(-1/β)[α(m+1)]^(1/β). Thus

∫_0^∞ x^r e^(-(m+1)αx^(-β)) αβ(m+1) x^(-β-1) dx = [α(m+1)]^(r/β) ∫_0^∞ u^((1-r/β)-1) e^(-u) du = [α(m+1)]^(r/β) Γ(1 - r/β),

where β > r. Hence, E(X^r) is given by

μ_r = Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) [α(m+1)]^(r/β) Γ(1 - r/β), (2.10)

where β > r and ω_{j,k,m}(a, b, θ) is defined in equation (2.8). The mean of the KIWP distribution is

μ = μ_1 = Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) [α(m+1)]^(1/β) Γ(1 - 1/β),

for β > 1. The variance, CV, CS and CK are given by

σ² = μ_2 - μ²,

CV = σ/μ = sqrt(μ_2 - μ²)/μ = sqrt(μ_2/μ² - 1),

CS = E[(X - μ)³] / [E(X - μ)²]^(3/2) = (μ_3 - 3μμ_2 + 2μ³) / (μ_2 - μ²)^(3/2),

and

CK = E[(X - μ)⁴] / [E(X - μ)²]² = (μ_4 - 4μμ_3 + 6μ²μ_2 - 3μ⁴) / (μ_2 - μ²)²,

respectively. Table 1 lists the first six moments of the KIWP distribution for selected values of the parameters obtained by fixing α = 1.5, β = 6.5 and θ = 1.5. Table 2 lists the first six

moments of the KIWP distribution for selected values of the parameters obtained by fixing a = 1.0 and b = 1.0. These values can be determined numerically using R and MATLAB.

Table 1. Moments of the KIWP Distribution for α = 1.5, β = 6.5 and θ = 1.5. For each of the settings (a, b) = (1.0, 1.5), (1.0, 3.5), (1.0, 1.0) and (2.5, 1.0), the table reports μ_1 through μ_6, SD, CV, CS and CK.

Table 2. Moments of the KIWP Distribution for a = 1.0 and b = 1.0. For each of the settings (α, β) = (0.8, 6.5), (1.3, 7.5), (2.0, 8.5) and (1.5, 7.0), the table reports μ_1 through μ_6, SD, CV, CS and CK.
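The quantile function derived above also gives a direct way to simulate from the KIWP distribution by the inverse-transform method, and the moments reported in Tables 1 and 2 can be checked by numerical integration. A minimal R sketch follows (qkiwp, rkiwp and kiwp_moment are illustrative names; dkiwp is the density sketched earlier), using the Table 1 setting α = 1.5, β = 6.5, θ = 1.5 with a = 1.0, b = 1.5 as an example.

# Quantile function: closed form obtained by inverting F(x) = u.
qkiwp <- function(u, alpha, beta, theta, a, b) {
  inner <- 1 - (1 - exp(theta)) * (1 - (1 - u)^(1 / b))^(1 / a)
  (-(1 / alpha) * log((1 / theta) * log(inner)))^(-1 / beta)
}
# Inverse-transform random generation.
rkiwp <- function(n, alpha, beta, theta, a, b) qkiwp(runif(n), alpha, beta, theta, a, b)
# Numerical r-th raw moment E(X^r), valid for r < beta.
kiwp_moment <- function(r, alpha, beta, theta, a, b) {
  integrate(function(x) x^r * dkiwp(x, alpha, beta, theta, a, b), 0, Inf)$value
}
m1 <- kiwp_moment(1, 1.5, 6.5, 1.5, 1.0, 1.5)
m2 <- kiwp_moment(2, 1.5, 6.5, 1.5, 1.0, 1.5)
c(mean = m1, SD = sqrt(m2 - m1^2), CV = sqrt(m2 - m1^2) / m1)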

Moment Generating Function

The moment generating function of the KIWP distribution can be obtained from the r-th moment as follows:

E(e^{tX}) = Σ_{i=0}^∞ (t^i/i!) E(X^i) = Σ_{i=0}^∞ Σ_{j,k,m=0}^∞ (t^i/i!) ω_{j,k,m}(a, b, θ) [α(m+1)]^(i/β) Γ(1 - i/β), β > i, (2.11)

where ω_{j,k,m}(a, b, θ) is defined by equation (2.8).

Conditional Moments

For income and lifetime distributions, it is of interest to obtain the conditional moments and the mean residual life function. The r-th conditional moment of the KIWP distribution is given by

E(X^r | X > t) = [1/(1 - F_KIWP(t))] ∫_t^∞ x^r f_KIWP(x) dx = [1/(1 - F_KIWP(t))] Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) ∫_t^∞ αβ(m+1) x^(r-β-1) e^(-(m+1)αx^(-β)) dx. (2.12)

Considering the integral part, we thus have

∫_t^∞ αβ(m+1) x^(r-β-1) e^(-(m+1)αx^(-β)) dx = ∫_t^∞ x^r e^(-(m+1)αx^(-β)) αβ(m+1) x^(-β-1) dx.

By letting u = (m+1)αx^(-β), we have du = -αβ(m+1)x^(-β-1) dx and x = u^(-1/β)[α(m+1)]^(1/β). When x = t, u = α(m+1)t^(-β), and when x = ∞, u = 0. We therefore have

∫_t^∞ x^r e^(-(m+1)αx^(-β)) αβ(m+1) x^(-β-1) dx = [α(m+1)]^(r/β) ∫_0^{α(m+1)t^(-β)} u^(-r/β) e^(-u) du = [α(m+1)]^(r/β) γ(1 - r/β, α(m+1)t^(-β)),

after applying the lower incomplete gamma function γ(s, a) = ∫_0^a t^(s-1) e^(-t) dt. The r-th conditional moment of the KIWP distribution is therefore given by

E(X^r | X > t) = [1 - ((1 - exp[θe^(-αt^(-β))]) / (1 - e^θ))^a]^(-b) Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) [α(m+1)]^(r/β) γ(1 - r/β, α(m+1)t^(-β)),

where ω_{j,k,m}(a, b, θ) is defined by equation (2.8). The mean residual life function is E(X | X > t) - t.

Order Statistics

Order statistics are used to find characteristics such as the minimum and maximum time to failure of electronic components, for example light bulbs, in reliability theory. This may be achieved by simultaneously putting n light bulbs on test and recording the times to failure as the successive failures occur. The observations may then be ordered

as X_1, X_2, ..., X_n, whereby X_1 denotes the minimum time to failure and X_n denotes the maximum time to failure. The trials are independent and identically distributed. The pdf of the k-th order statistic from the KIWP distribution is

f_{k:n}(x) = n! f_KIWP(x) / [(k-1)!(n-k)!] · [F_KIWP(x)]^(k-1) [1 - F_KIWP(x)]^(n-k).

Using the identity

(1 - z)^(n-k) = Σ_{p=0}^∞ C(n-k, p) (-1)^p z^p,

we have

f_{k:n}(x) = n! f_KIWP(x) / [(k-1)!(n-k)!] Σ_{p=0}^∞ C(n-k, p) (-1)^p [F_KIWP(x)]^(p+k-1)

= n! f_KIWP(x) / [(k-1)!(n-k)!] Σ_{p=0}^∞ C(n-k, p) (-1)^p {1 - [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^b}^(p+k-1)

= n! f_KIWP(x) / [(k-1)!(n-k)!] Σ_{p=0}^∞ Σ_{q=0}^∞ C(n-k, p) C(p+k-1, q) (-1)^(p+q) [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^(bq).

Substituting the KIWP pdf into the above equation gives

f_{k:n}(x) = n! / [(k-1)!(n-k)!] Σ_{p=0}^∞ Σ_{q=0}^∞ C(n-k, p) C(p+k-1, q) (-1)^(p+q) abθαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(a-1) [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^(bq+b-1)

= n! / [(k-1)!(n-k)!] Σ_{p=0}^∞ Σ_{q=0}^∞ Σ_{s=0}^∞ C(n-k, p) C(p+k-1, q) C(bq+b-1, s) (-1)^(p+q+s) abθαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(as+a-1)

= n! / [(k-1)!(n-k)!] Σ_{p=0}^∞ Σ_{q=0}^∞ Σ_{s=0}^∞ C(n-k, p) C(p+k-1, q) C(bq+b-1, s) (-1)^(p+q+s+a+as-1) abθαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] [1 - exp(θe^(-αx^(-β)))]^(as+a-1) / (e^θ - 1)^(as+a)

= n! / [(k-1)!(n-k)!] Σ_{p=0}^∞ Σ_{q=0}^∞ Σ_{s=0}^∞ Σ_{i=0}^∞ C(n-k, p) C(p+k-1, q) C(bq+b-1, s) C(as+a-1, i) (-1)^(p+q+s+a+as+i-1) abθαβ x^(-β-1) e^(-αx^(-β)) exp[θ(i+1)e^(-αx^(-β))] / (e^θ - 1)^(as+a).

Using the expansion

e^x = Σ_{t=0}^∞ x^t / t!,

we obtain

e^(-αx^(-β)) exp[θ(i+1)e^(-αx^(-β))] = Σ_{j=0}^∞ [θ(i+1)]^j e^(-α(j+1)x^(-β)) / j! = Σ_{j=0}^∞ [θ(i+1)]^j (j+1) e^(-α(j+1)x^(-β)) / [j!(j+1)].

Thus,

f_{k:n}(x) = n! / [(k-1)!(n-k)!] Σ_{p,q,s,i,j=0}^∞ C(n-k, p) C(p+k-1, q) C(bq+b-1, s) C(as+a-1, i) (-1)^(p+q+s+a+as+i-1) ab θ^(j+1) (i+1)^j / [(j+1)!(e^θ - 1)^(as+a)] · α(j+1)β x^(-β-1) e^(-α(j+1)x^(-β))

= Σ_{p,q,s,i,j=0}^∞ Q(p, q, s, i, j) f_IW(x; α(j+1), β),

where

Q(p, q, s, i, j) = n! / [(k-1)!(n-k)!] · (-1)^(p+q+s+a+as+i-1) ab θ^(j+1) (i+1)^j / [(j+1)!(e^θ - 1)^(as+a)] · C(n-k, p) C(p+k-1, q) C(bq+b-1, s) C(as+a-1, i).

Thus, the pdf of the k-th order statistic from the KIWP distribution is a linear combination of inverse Weibull pdfs with parameters α(j+1) and β > 0. The r-th moment of the distribution of the k-th order statistic is given by

E(X^r_{k:n}) = Σ_{p,q,s,i,j=0}^∞ Q(p, q, s, i, j) ∫_0^∞ x^r f_IW(x; α(j+1), β) dx = Σ_{p,q,s,i,j=0}^∞ Q(p, q, s, i, j) [α(j+1)]^(r/β) Γ(1 - r/β),

where β > r.
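The mixture representation of the order-statistic density can be checked by simulation, since the k-th order statistic of a KIWP sample is easy to generate with the rkiwp function sketched earlier; the following short R sketch (function name illustrative) estimates E(X^r_{k:n}) by Monte Carlo.

# Monte Carlo estimate of the r-th moment of the k-th order statistic from a KIWP sample.
order_stat_moment <- function(r, k, n, alpha, beta, theta, a, b, nsim = 10000) {
  sims <- replicate(nsim, sort(rkiwp(n, alpha, beta, theta, a, b))[k])
  mean(sims^r)
}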

Sub-Models of the KIWP Distribution

In this section, some sub-models of the KIWP distribution for selected values of the parameters α, β, θ, a and b are presented.

(1) a = b = 1. When a = b = 1, we obtain the inverse Weibull Poisson (IWP) distribution, whose cdf and pdf are given in equations (2.3) and (2.4), that is,

F(x; α, β, θ) = (1 - exp[θe^(-αx^(-β))]) / (1 - e^θ) and f(x; α, β, θ) = θαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1).

(2) b = 1. When b = 1, we obtain the exponentiated inverse Weibull Poisson (EIWP) distribution, which belongs to the resilience parameter family and whose cdf is given by

F(x; α, β, θ, a) = [(1 - exp(θe^(-αx^(-β)))) / (1 - e^θ)]^a,

with corresponding pdf

f(x; α, β, θ, a) = aθαβ x^(-β-1) e^(-αx^(-β)) exp(θe^(-αx^(-β))) / (e^θ - 1) · [(1 - exp(θe^(-αx^(-β)))) / (1 - e^θ)]^(a-1),

for x > 0, α > 0, β > 0, θ > 0, a > 0.

(3) a = 1. When a = 1, we obtain a new lifetime distribution belonging to the frailty parameter family with cdf

F(x; α, β, θ, b) = 1 - [1 - (1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^b,

and pdf

f(x) = bθαβ x^(-β-1) e^(-αx^(-β)) exp[θe^(-αx^(-β))] / (e^θ - 1) · [1 - (1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(b-1),

for x > 0, α > 0, β > 0, θ > 0, b > 0.

(4) β = 2. When β = 2, we obtain the Kumaraswamy inverse Rayleigh Poisson (KIRP) distribution, whose cdf is

F(x; α, θ, a, b) = 1 - [1 - ((1 - exp[θe^(-αx^(-2))]) / (1 - e^θ))^a]^b,

with corresponding pdf

f(x) = 2abθα x^(-3) e^(-αx^(-2)) exp[θe^(-αx^(-2))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-2))]) / (1 - e^θ)]^(a-1) [1 - ((1 - exp[θe^(-αx^(-2))]) / (1 - e^θ))^a]^(b-1),

for x > 0, α > 0, θ > 0, a > 0, b > 0.

(5) β = 1. When β = 1, we obtain the Kumaraswamy inverse exponential Poisson (KIEP) distribution, whose cdf is

F(x; α, θ, a, b) = 1 - [1 - ((1 - exp[θe^(-αx^(-1))]) / (1 - e^θ))^a]^b,

with corresponding pdf

f(x) = abθα x^(-2) e^(-αx^(-1)) exp[θe^(-αx^(-1))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-1))]) / (1 - e^θ)]^(a-1) [1 - ((1 - exp[θe^(-αx^(-1))]) / (1 - e^θ))^a]^(b-1),

for x > 0, α > 0, θ > 0, a > 0, b > 0.

(6) α = 1. When α = 1, we obtain the Kumaraswamy Frechet Poisson (KFP) distribution, whose cdf is

F(x; β, θ, a, b) = 1 - [1 - ((1 - exp[θe^(-x^(-β))]) / (1 - e^θ))^a]^b,

with corresponding pdf

f_KFP(x) = abθβ x^(-β-1) e^(-x^(-β)) exp[θe^(-x^(-β))] / (e^θ - 1) · [(1 - exp[θe^(-x^(-β))]) / (1 - e^θ)]^(a-1) [1 - ((1 - exp[θe^(-x^(-β))]) / (1 - e^θ))^a]^(b-1),

for x > 0, β > 0, θ > 0, a > 0, b > 0.

(7) α = b = 1. When α = b = 1, we get the exponentiated Frechet Poisson (EFP) distribution, whose cdf is

F_EFP(x; β, θ, a) = [(1 - exp[θe^(-x^(-β))]) / (1 - e^θ)]^a.

The corresponding pdf is

f_EFP(x) = aθβ x^(-β-1) e^(-x^(-β)) exp[θe^(-x^(-β))] / (e^θ - 1) · [(1 - exp[θe^(-x^(-β))]) / (1 - e^θ)]^(a-1), (2.13)

for x > 0, a > 0, β > 0, θ > 0.

(8) α = a = b = 1. When α = a = b = 1, we get the Frechet Poisson (FP) distribution, whose cdf is

F_FP(x; β, θ) = (1 - exp[θe^(-x^(-β))]) / (1 - e^θ).

The corresponding pdf is

f_FP(x) = θβ x^(-β-1) e^(-x^(-β)) exp[θe^(-x^(-β))] / (e^θ - 1),

for x > 0, β > 0, θ > 0.

(9) β = b = 1. When β = b = 1, we get the exponentiated inverse exponential Poisson (EIEP) distribution, whose cdf is

F_EIEP(x; α, θ, a) = [(1 - exp[θe^(-αx^(-1))]) / (1 - e^θ)]^a.

The corresponding pdf is

f_EIEP(x) = aθα x^(-2) e^(-αx^(-1)) exp[θe^(-αx^(-1))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-1))]) / (1 - e^θ)]^(a-1),

for x > 0, α > 0, θ > 0, a > 0.

(10) β = a = b = 1. When β = a = b = 1, we get the inverse exponential Poisson (IEP) distribution, whose cdf is

F_IEP(x; α, θ) = (1 - exp[θe^(-αx^(-1))]) / (1 - e^θ).

The corresponding pdf is

f_IEP(x) = θα x^(-2) e^(-αx^(-1)) exp[θe^(-αx^(-1))] / (e^θ - 1),

for x > 0, α > 0, θ > 0.

(11) β = 2, b = 1. When β = 2 and b = 1, we get the exponentiated inverse Rayleigh Poisson (EIRP) distribution, whose cdf is

F_EIRP(x; α, θ, a) = [(1 - exp[θe^(-αx^(-2))]) / (1 - e^θ)]^a.

The corresponding pdf is

f_EIRP(x) = 2aθα x^(-3) e^(-αx^(-2)) exp[θe^(-αx^(-2))] / (e^θ - 1) · [(1 - exp[θe^(-αx^(-2))]) / (1 - e^θ)]^(a-1),

for x > 0, α > 0, θ > 0, a > 0.

(12) β = 2, a = b = 1. When β = 2 and a = b = 1, we get the inverse Rayleigh Poisson (IRP) distribution, whose cdf is

F_IRP(x; α, θ) = (1 - exp[θe^(-αx^(-2))]) / (1 - e^θ).

The corresponding pdf is

f_IRP(x) = 2θα x^(-3) e^(-αx^(-2)) exp[θe^(-αx^(-2))] / (e^θ - 1),

for x > 0, α > 0, θ > 0.

(13) When a = b = 1 and θ → 0⁺, we get the inverse Weibull (IW) distribution.

(14) When b = 1 and θ → 0⁺, we get the exponentiated inverse Weibull (EIW) distribution.

(15) When β = 2 and θ → 0⁺, we get the Kumaraswamy inverse Rayleigh (KIR) distribution.

(16) When β = 1 and θ → 0⁺, we get the Kumaraswamy inverse exponential (KIE) distribution.

(17) When α = 1 and θ → 0⁺, we get the Kumaraswamy Frechet (KF) distribution.

(18) When α = b = 1 and θ → 0⁺, we get the exponentiated Frechet (EF) distribution.

(19) When β = b = 1 and θ → 0⁺, we get the exponentiated inverse exponential (EIE) distribution.

(20) When β = a = b = 1 and θ → 0⁺, we get the inverse exponential (IE) distribution.

(21) When β = 2, b = 1 and θ → 0⁺, we get the exponentiated inverse Rayleigh (EIR) distribution.

(22) When β = 2, a = b = 1 and θ → 0⁺, we get the inverse Rayleigh (IR) distribution.

Mean and Median Deviations

The amount of dispersion in a population can be measured by the totality of deviations from the mean and from the median. If X has the KIWP distribution, we can derive the mean deviation about the mean μ = E(X) and the mean deviation about the median M = Median(X) = F^(-1)(1/2) from

δ_1 = ∫_0^∞ |x - μ| f_KIWP(x) dx = ∫_0^μ (μ - x) f(x) dx + ∫_μ^∞ (x - μ) f(x) dx = 2μF(μ) - 2μ + 2 ∫_μ^∞ x f(x) dx = 2μF(μ) - 2μ + 2 Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) [α(m+1)]^(1/β) γ(1 - 1/β, α(m+1)μ^(-β)), (2.14)

and

δ_2 = ∫_0^∞ |x - M| f_KIWP(x) dx = ∫_0^M (M - x) f(x) dx + ∫_M^∞ (x - M) f(x) dx = 2MF(M) - M - μ + 2 ∫_M^∞ x f(x) dx = -μ + 2 ∫_M^∞ x f(x) dx = -μ + 2 Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) [α(m+1)]^(1/β) γ(1 - 1/β, α(m+1)M^(-β)),

by using the incomplete gamma integral employed in equation (2.14) together with the fact that F(M) = 1/2.

Bonferroni and Lorenz Curves

Lorenz curves are income inequality measures that are also useful and applicable to other areas, including reliability, demography, medicine and insurance. The Lorenz and Bonferroni curves are given by

L(F(x)) = ∫_0^x t f(t) dt / E(X) and B(F(x)) = L(F(x)) / F(x),

or equivalently

L(p) = (1/μ) ∫_0^q t f(t) dt and B(p) = (1/(pμ)) ∫_0^q t f(t) dt,

respectively, where q = F^(-1)(p). Now, using the incomplete gamma integral as in equation (2.14), we can rewrite the Bonferroni and Lorenz curves as

B(p) = (1/(pμ)) ∫_0^q t f(t) dt = (1/(pμ)) [∫_0^∞ x f(x) dx - ∫_q^∞ x f(x) dx] = (1/(pμ)) [μ - Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) [α(m+1)]^(1/β) γ(1 - 1/β, α(m+1)q^(-β))]

and

L(p) = (1/μ) ∫_0^q t f(t) dt = (1/μ) [∫_0^∞ x f(x) dx - ∫_q^∞ x f(x) dx] = (1/μ) [μ - Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) [α(m+1)]^(1/β) γ(1 - 1/β, α(m+1)q^(-β))],

where ω_{j,k,m}(a, b, θ) is defined in equation (2.8).
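For a given parameter setting, the Lorenz and Bonferroni curves can also be evaluated by direct numerical integration of L(p) = (1/μ)∫_0^q t f(t) dt with q = F^(-1)(p); a minimal R sketch, reusing dkiwp, qkiwp and kiwp_moment from the earlier sketches, is given below.

# Numerical Lorenz and Bonferroni curves for the KIWP distribution.
lorenz_kiwp <- function(p, alpha, beta, theta, a, b) {
  mu <- kiwp_moment(1, alpha, beta, theta, a, b)
  q  <- qkiwp(p, alpha, beta, theta, a, b)
  sapply(q, function(qi)
    integrate(function(t) t * dkiwp(t, alpha, beta, theta, a, b), 0, qi)$value) / mu
}
bonferroni_kiwp <- function(p, alpha, beta, theta, a, b)
  lorenz_kiwp(p, alpha, beta, theta, a, b) / p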

CHAPTER 3 MEASURES OF UNCERTAINTY

Rényi Entropy

In this section we present some measures of uncertainty. The concept of entropy plays a vital role in information theory. The entropy of a random variable is defined in terms of its probability distribution and can be shown to be a good measure of randomness or uncertainty. Rényi entropy is an extension of Shannon entropy and is defined as

I_R(v) = (1/(1 - v)) log[∫_0^∞ [f_KIWP(x; a, b, α, β, θ)]^v dx], v ≠ 1, v > 0.

Rényi entropy tends to Shannon entropy as v → 1. Notice that

[f(x)]^v = [abαβθ / (e^θ - 1)]^v x^(-βv-v) e^(-αvx^(-β)) exp[θv e^(-αx^(-β))] [(1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(av-v) [1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^(bv-v).

Expanding the last two factors, we have

[1 - ((1 - exp[θe^(-αx^(-β))]) / (1 - e^θ))^a]^(bv-v) [(1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(av-v)
= Σ_{i=0}^∞ C(bv-v, i) (-1)^i [(1 - exp[θe^(-αx^(-β))]) / (1 - e^θ)]^(ai+av-v)
= Σ_{i=0}^∞ C(bv-v, i) (-1)^(i+ai+av-v) [1 - exp(θe^(-αx^(-β)))]^(ai+av-v) / (e^θ - 1)^(ai+av-v)
= Σ_{i=0}^∞ Σ_{j=0}^∞ C(bv-v, i) C(ai+av-v, j) (-1)^(i+j+ai+av-v) exp(θj e^(-αx^(-β))) / (e^θ - 1)^(ai+av-v).

Thus

[f(x)]^v = [abαβθ / (e^θ - 1)^a]^v Σ_{i=0}^∞ Σ_{j=0}^∞ C(bv-v, i) C(ai+av-v, j) (-1)^(i+j+ai+av-v) / (e^θ - 1)^(ai) · x^(-βv-v) exp(θ(j+v)e^(-αx^(-β))) e^(-αvx^(-β)).

Using the series expansion of the exponential function, we obtain

[f(x)]^v = [abαβθ / (e^θ - 1)^a]^v Σ_{i=0}^∞ Σ_{j=0}^∞ Σ_{k=0}^∞ C(bv-v, i) C(ai+av-v, j) (-1)^(i+j+ai+av-v) θ^k (j+v)^k / [k!(e^θ - 1)^(ai)] · x^(-βv-v) e^(-α(v+k)x^(-β)).

Let u = α(v+k)x^(-β), so that du = -αβ(v+k)x^(-β-1) dx and x = [α(v+k)]^(1/β) u^(-1/β). Then

∫_0^∞ x^(-βv-v) e^(-α(v+k)x^(-β)) dx = (1/β) [α(v+k)]^((1-v-vβ)/β) ∫_0^∞ u^((vβ+v-1)/β - 1) e^(-u) du = (1/β) [α(v+k)]^((1-v-vβ)/β) Γ((vβ+v-1)/β).

Therefore the Rényi entropy of the KIWP distribution is given by

I_R(v) = (1/(1-v)) log{[abαβθ / (e^θ - 1)^a]^v Σ_{i=0}^∞ Σ_{j=0}^∞ Σ_{k=0}^∞ C(bv-v, i) C(ai+av-v, j) (-1)^(i+j+ai+av-v) θ^k (j+v)^k / [k!(e^θ - 1)^(ai)] · (1/β) [α(v+k)]^((1-v-vβ)/β) Γ((vβ+v-1)/β)}.
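As a numerical cross-check of the series expression above, the Rényi entropy can be computed directly from its defining integral; a minimal R sketch, reusing the dkiwp density sketched earlier, is as follows.

# Renyi entropy I_R(v) = log( integral of f^v ) / (1 - v), for v > 0, v != 1.
renyi_kiwp <- function(v, alpha, beta, theta, a, b) {
  stopifnot(v > 0, v != 1)
  log(integrate(function(x) dkiwp(x, alpha, beta, theta, a, b)^v, 0, Inf)$value) / (1 - v)
}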

Shannon Entropy

Shannon entropy is a crucial concept in information theory. It measures the uncertainty associated with a random variable, or the expected value of the information encoded in a message. Shannon entropy is defined as

H[f_KIWP] = E[-log(f_KIWP(X; α, β, θ, a, b))].

Thus we have

H[f_KIWP(X)] = log[(e^θ - 1)/(abαβθ)] + (β+1)E[log X] + αE[X^(-β)] - θE[e^(-αX^(-β))] - (a-1)E[log{(1 - exp(θe^(-αX^(-β)))) / (1 - e^θ)}] - (b-1)E[log{1 - ((1 - exp[θe^(-αX^(-β))]) / (1 - e^θ))^a}].

Note that

log(x + 1) = Σ_{q=1}^∞ (-1)^(q+1) x^q / q, -1 < x ≤ 1.

For the first logarithmic expectation, a formal expansion gives

E[log{(1 - exp(θe^(-αX^(-β)))) / (1 - e^θ)}] = -log(1 - e^θ) - Σ_{q=1}^∞ (1/q) E[exp(qθe^(-αX^(-β)))]
= -log(1 - e^θ) - Σ_{q=1}^∞ Σ_{s=0}^∞ (qθ)^s E[e^(-αsX^(-β))] / (q s!)
= -log(1 - e^θ) - Σ_{q=1}^∞ Σ_{s=0}^∞ Σ_{u=0}^∞ (-1)^u θ^s q^(s-1) (αs)^u E[X^(-βu)] / (s! u!).

Similarly, writing log(1 - w) = -Σ_{t=1}^∞ w^t / t with w = ((1 - exp[θe^(-αX^(-β))]) / (1 - e^θ))^a, we obtain

E[log{1 - ((1 - exp[θe^(-αX^(-β))]) / (1 - e^θ))^a}] = -Σ_{t=1}^∞ (1/t) E[((1 - exp(θe^(-αX^(-β)))) / (1 - e^θ))^(at)]
= -Σ_{t=1}^∞ (-1)^(at) E[(1 - exp(θe^(-αX^(-β))))^(at)] / [t(e^θ - 1)^(at)].

Using the identity (1 - z)^(at) = Σ_{c=0}^∞ C(at, c) (-1)^c z^c, this becomes

-Σ_{t=1}^∞ Σ_{c=0}^∞ C(at, c) (-1)^(at+c) E[exp(cθe^(-αX^(-β)))] / [t(e^θ - 1)^(at)]
= -Σ_{t=1}^∞ Σ_{c=0}^∞ Σ_{d=0}^∞ Σ_{g=0}^∞ C(at, c) (-1)^(at+c+g) (θc)^d (αd)^g E[X^(-βg)] / [t d! g! (e^θ - 1)^(at)].

Also,

E[e^(-αX^(-β))] = Σ_{l=0}^∞ (-1)^l α^l E[X^(-βl)] / l!.

Finally, writing X = 1 + (X - 1) and expanding formally,

E[log X] = E[log(1 + (X - 1))] = Σ_{p=1}^∞ (-1)^(p+1) E[(X - 1)^p] / p = Σ_{p=1}^∞ Σ_{w=0}^p C(p, w) (-1)^(p+1) (-1)^(p-w) E[X^w] / p = Σ_{p=1}^∞ Σ_{w=0}^p C(p, w) (-1)^(w+1) E[X^w] / p.

Using the r-th moment in equation (2.10) to evaluate the moments above, the Shannon entropy of the KIWP distribution is given by

H[f(X)] = log[(e^θ - 1)/(abαβθ)]
+ (β+1) Σ_{p=1}^∞ Σ_{w=0}^p Σ_{j,k,m=0}^∞ C(p, w) (-1)^(w+1) ω_{j,k,m}(a, b, θ) [α(m+1)]^(w/β) Γ(1 - w/β) / p
+ Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) / (m+1)
- θ Σ_{l=0}^∞ Σ_{j,k,m=0}^∞ (-1)^l α^l ω_{j,k,m}(a, b, θ) Γ(1 + l) / [l! [α(m+1)]^l]
+ (a-1) {log(1 - e^θ) + Σ_{q=1}^∞ Σ_{s=0}^∞ Σ_{u=0}^∞ Σ_{j,k,m=0}^∞ ω_{j,k,m}(a, b, θ) (-1)^u θ^s q^(s-1) (αs)^u Γ(1 + u) / [s! u! [α(m+1)]^u]}
+ (b-1) Σ_{t=1}^∞ Σ_{c=0}^∞ Σ_{d=0}^∞ Σ_{g=0}^∞ Σ_{j,k,m=0}^∞ C(at, c) ω_{j,k,m}(a, b, θ) (-1)^(at+c+g) (θc)^d (αd)^g Γ(1 + g) / [t d! g! (e^θ - 1)^(at) [α(m+1)]^g].

Note that Γ(n + 1) = n! for a nonnegative integer n.
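Likewise, the Shannon entropy H[f] = E[-log f(X)] can be approximated directly by numerical integration, which provides a sanity check on the series representation; a minimal R sketch using the dkiwp density sketched earlier is given below.

# Shannon entropy H = -integral of f(x) log f(x) dx.
shannon_kiwp <- function(alpha, beta, theta, a, b) {
  -integrate(function(x) {
    fx <- dkiwp(x, alpha, beta, theta, a, b)
    ifelse(fx > 0, fx * log(fx), 0)
  }, 0, Inf)$value
}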

Reliability

Reliability may be defined as the probability that a piece of equipment operated under specific conditions performs to a specified standard for a particular period of time; it is a number between 0 and 1. In research design, reliability refers to the repeatability or consistency of an experiment: to maintain internal reliability, a researcher will use as many repeat sample groups as possible, and if a test is not reliable, its validity is questionable. The reliability R when X and Y have independent KIWP(α_1, β_1, θ_1, a_1, b_1) and KIWP(α_2, β_2, θ_2, a_2, b_2) distributions is given by

R = P(X > Y) = ∫_0^∞ f_X(x; α_1, β_1, θ_1, a_1, b_1) F_Y(x; α_2, β_2, θ_2, a_2, b_2) dx
= ∫_0^∞ a_1 b_1 θ_1 α_1 β_1 x^(-β_1-1) e^(-α_1 x^(-β_1)) exp(θ_1 e^(-α_1 x^(-β_1))) / (e^(θ_1) - 1) · V_1(x)^(a_1-1) [1 - V_1(x)^(a_1)]^(b_1-1) · {1 - [1 - V_2(x)^(a_2)]^(b_2)} dx,

where V_1(x) = (exp(θ_1 e^(-α_1 x^(-β_1))) - 1)/(e^(θ_1) - 1) and V_2(x) = (exp(θ_2 e^(-α_2 x^(-β_2))) - 1)/(e^(θ_2) - 1).

Using the binomial expansion (1 - z)^c = Σ_{i=0}^∞ C(c, i) (-1)^i z^i,

[1 - V_1(x)^(a_1)]^(b_1-1) = Σ_{i=0}^∞ C(b_1-1, i) (-1)^i V_1(x)^(a_1 i)

and

1 - [1 - V_2(x)^(a_2)]^(b_2) = Σ_{j=1}^∞ C(b_2, j) (-1)^(j+1) V_2(x)^(a_2 j).

Next, the powers of V_1(x) and V_2(x) are expanded. Using the formal expansion (w - 1)^c = (-1)^c Σ_{m=0}^∞ C(c, m) (-1)^m w^m, we obtain

V_1(x)^(a_1 i) = (e^(θ_1) - 1)^(-a_1 i) (-1)^(a_1 i) Σ_{m=0}^∞ C(a_1 i, m) (-1)^m exp(m θ_1 e^(-α_1 x^(-β_1))),

V_1(x)^(a_1 - 1) = (e^(θ_1) - 1)^(1 - a_1) (-1)^(a_1 - 1) Σ_{n=0}^∞ C(a_1 - 1, n) (-1)^n exp(n θ_1 e^(-α_1 x^(-β_1))),

and

V_2(x)^(a_2 j) = (e^(θ_2) - 1)^(-a_2 j) (-1)^(a_2 j) Σ_{k=0}^∞ C(a_2 j, k) (-1)^k exp(k θ_2 e^(-α_2 x^(-β_2))).

Collecting these expansions, the x-dependent part of every term of the integrand is

x^(-β_1-1) e^(-α_1 x^(-β_1)) exp(θ_1 e^(-α_1 x^(-β_1))) exp(m θ_1 e^(-α_1 x^(-β_1))) exp(n θ_1 e^(-α_1 x^(-β_1))) exp(k θ_2 e^(-α_2 x^(-β_2))).

Notice again that e^t = Σ_{m=0}^∞ t^m / m!.

Applying this identity to each of the remaining exponential factors gives

exp(θ_1 e^(-α_1 x^(-β_1))) = Σ_{p=0}^∞ θ_1^p e^(-p α_1 x^(-β_1)) / p!,
exp(m θ_1 e^(-α_1 x^(-β_1))) = Σ_{q=0}^∞ (m θ_1)^q e^(-q α_1 x^(-β_1)) / q!,
exp(n θ_1 e^(-α_1 x^(-β_1))) = Σ_{r=0}^∞ (n θ_1)^r e^(-r α_1 x^(-β_1)) / r!,
exp(k θ_2 e^(-α_2 x^(-β_2))) = Σ_{v=0}^∞ (k θ_2)^v e^(-v α_2 x^(-β_2)) / v!,

and, finally, expanding e^(-v α_2 x^(-β_2)) = Σ_{t=0}^∞ (-1)^t (v α_2)^t x^(-t β_2) / t!, every term of R reduces to a constant multiple of the integral

∫_0^∞ x^(-β_1 - t β_2 - 1) e^(-α_1(1+p+q+r) x^(-β_1)) dx.

For the integration of the expression ∫_0^∞ x^(-β_1 - t β_2 - 1) e^(-α_1(1+p+q+r) x^(-β_1)) dx, we let u = α_1(1+p+q+r) x^(-β_1), so that du = -α_1 β_1 (1+p+q+r) x^(-β_1-1) dx. Then

∫_0^∞ x^(-β_1 - t β_2 - 1) e^(-α_1(1+p+q+r) x^(-β_1)) dx = (1/β_1) [α_1(1+p+q+r)]^(-t β_2/β_1 - 1) ∫_0^∞ u^(t β_2/β_1) e^(-u) du = (1/β_1) [α_1(1+p+q+r)]^(-t β_2/β_1 - 1) Γ(t β_2/β_1 + 1).

Therefore, we arrive at the following expression for the reliability:

R = a_1 b_1 θ_1 α_1 β_1 Σ_{i,m,n,k=0}^∞ Σ_{j=1}^∞ Σ_{p,q,r,v,t=0}^∞ C(b_1-1, i) C(b_2, j) C(a_1 i, m) C(a_1-1, n) C(a_2 j, k) (-1)^(i+j+a_1 i+m+a_1+n+a_2 j+k+t) θ_1^(p+q+r) m^q n^r (k θ_2)^v (v α_2)^t / [(e^(θ_1) - 1)^(a_1(i+1)) (e^(θ_2) - 1)^(a_2 j) p! q! r! v! t!] · (1/β_1) [α_1(1+p+q+r)]^(-t β_2/β_1 - 1) Γ(t β_2/β_1 + 1).
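The reliability R = P(X > Y) can also be evaluated by one-dimensional numerical integration of the defining integral, which offers a practical alternative to the multiple series; a minimal R sketch, reusing the dkiwp and pkiwp functions sketched earlier, is shown below (each parameter vector is ordered as (α, β, θ, a, b)).

# R = P(X > Y) = integral of f_X(x) F_Y(x) dx for independent KIWP variables.
reliability_kiwp <- function(par1, par2) {
  integrate(function(x)
    dkiwp(x, par1[1], par1[2], par1[3], par1[4], par1[5]) *
    pkiwp(x, par2[1], par2[2], par2[3], par2[4], par2[5]),
    0, Inf)$value
}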

Maximum Likelihood Estimation

In this section, the maximum likelihood method is used to estimate the parameters a, b, α, β and θ. The pdf of the KIWP distribution can be rewritten as

f(x) = abθαβ x^(-β-1) e^(-αx^(-β)) exp(θe^(-αx^(-β))) / (e^θ - 1) · [(exp(θe^(-αx^(-β))) - 1) / (e^θ - 1)]^(a-1) [1 - ((exp(θe^(-αx^(-β))) - 1) / (e^θ - 1))^a]^(b-1)
= [abθαβ / (e^θ - 1)^(ab)] x^(-β-1) e^(-αx^(-β)) exp(θe^(-αx^(-β))) [exp(θe^(-αx^(-β))) - 1]^(a-1) [(e^θ - 1)^a - (exp(θe^(-αx^(-β))) - 1)^a]^(b-1)
= [abθαβ / (e^θ - 1)^(ab)] x^(-β-1) e^(-αx^(-β)) exp(θe^(-αx^(-β))) V(x)^(a-1) [(e^θ - 1)^a - V(x)^a]^(b-1),

where V(x) = exp(θe^(-αx^(-β))) - 1. Let x_1, ..., x_n be a random sample from the KIWP distribution. The log-likelihood function is given by

L = n log(a) + n log(b) + n log(α) + n log(β) + n log(θ) - nab log(e^θ - 1) - (β+1) Σ_{i=1}^n log(x_i) - α Σ_{i=1}^n x_i^(-β) + θ Σ_{i=1}^n e^(-αx_i^(-β)) + (a-1) Σ_{i=1}^n log[V(x_i)] + (b-1) Σ_{i=1}^n log{(e^θ - 1)^a - V^a(x_i)}. (3.1)

The associated score function is given by

U(Θ) = (∂L/∂a, ∂L/∂b, ∂L/∂α, ∂L/∂β, ∂L/∂θ). (3.2)

The components of the score function are the partial derivatives of L with respect to each of the parameters. The maximum likelihood estimate of Θ is obtained by solving the non-linear system of equations U_n(Θ) = 0. Since these equations are not in closed form, the solutions can be found by use of a numerical method such as the Newton-Raphson method. The elements of the score vector are given by

∂L/∂a = n/a - nb log(e^θ - 1) + Σ_{i=1}^n log[V(x_i)] + (b-1) Σ_{i=1}^n [(e^θ - 1)^a log(e^θ - 1) - V^a(x_i) log V(x_i)] / [(e^θ - 1)^a - V^a(x_i)],

∂L/∂b = n/b - na log(e^θ - 1) + Σ_{i=1}^n log{(e^θ - 1)^a - V^a(x_i)},

∂L/∂α = n/α - Σ_{i=1}^n x_i^(-β) - θ Σ_{i=1}^n x_i^(-β) exp(-αx_i^(-β)) + (a-1) Σ_{i=1}^n [∂V(x_i)/∂α] / V(x_i) - a(b-1) Σ_{i=1}^n V^(a-1)(x_i) [∂V(x_i)/∂α] / [(e^θ - 1)^a - V^a(x_i)],

∂L/∂β = n/β - Σ_{i=1}^n log(x_i) + α Σ_{i=1}^n x_i^(-β) log(x_i) + αθ Σ_{i=1}^n x_i^(-β) log(x_i) exp(-αx_i^(-β)) + (a-1) Σ_{i=1}^n [∂V(x_i)/∂β] / V(x_i) - a(b-1) Σ_{i=1}^n V^(a-1)(x_i) [∂V(x_i)/∂β] / [(e^θ - 1)^a - V^a(x_i)],

and

∂L/∂θ = n/θ - nab e^θ/(e^θ - 1) + Σ_{i=1}^n exp(-αx_i^(-β)) + (a-1) Σ_{i=1}^n [∂V(x_i)/∂θ] / V(x_i) + (b-1) Σ_{i=1}^n [a(e^θ - 1)^(a-1) e^θ - aV^(a-1)(x_i) ∂V(x_i)/∂θ] / [(e^θ - 1)^a - V^a(x_i)].

For the second-order partial derivatives, write for brevity V_i = V(x_i) and D_i = (e^θ - 1)^a - V_i^a. Then

∂²L/∂a² = -n/a² + (b-1) Σ_{i=1}^n {[(e^θ - 1)^a (log(e^θ - 1))² - V_i^a (log V_i)²] D_i - [(e^θ - 1)^a log(e^θ - 1) - V_i^a log V_i]²} / D_i²,

∂²L/∂a∂b = -n log(e^θ - 1) + Σ_{i=1}^n [(e^θ - 1)^a log(e^θ - 1) - V_i^a log V_i] / D_i,

∂²L/∂a∂α = Σ_{i=1}^n (∂V_i/∂α)/V_i - (b-1) Σ_{i=1}^n (∂V_i/∂α) V_i^(a-1) {(1 + a log V_i) D_i - a[(e^θ - 1)^a log(e^θ - 1) - V_i^a log V_i]} / D_i²,

∂²L/∂a∂β = Σ_{i=1}^n (∂V_i/∂β)/V_i - (b-1) Σ_{i=1}^n (∂V_i/∂β) V_i^(a-1) {(1 + a log V_i) D_i - a[(e^θ - 1)^a log(e^θ - 1) - V_i^a log V_i]} / D_i²,

∂²L/∂a∂θ = -nb e^θ/(e^θ - 1) + Σ_{i=1}^n (∂V_i/∂θ)/V_i + (b-1) Σ_{i=1}^n {A_i D_i - [(e^θ - 1)^a log(e^θ - 1) - V_i^a log V_i][a(e^θ - 1)^(a-1) e^θ - aV_i^(a-1) ∂V_i/∂θ]} / D_i²,

where

A_i = a(e^θ - 1)^(a-1) e^θ log(e^θ - 1) + (e^θ - 1)^(a-1) e^θ - aV_i^(a-1)(∂V_i/∂θ) log V_i - V_i^(a-1) ∂V_i/∂θ,

∂²L/∂b² = -n/b²,

∂²L/∂b∂α = -a Σ_{i=1}^n V_i^(a-1) (∂V_i/∂α) / D_i,

∂²L/∂b∂β = -a Σ_{i=1}^n V_i^(a-1) (∂V_i/∂β) / D_i,

∂²L/∂b∂θ = -na e^θ/(e^θ - 1) + a Σ_{i=1}^n [(e^θ - 1)^(a-1) e^θ - V_i^(a-1) ∂V_i/∂θ] / D_i,

∂²L/∂α² = -n/α² + θ Σ_{i=1}^n x_i^(-2β) exp(-αx_i^(-β)) + (a-1) Σ_{i=1}^n [V_i ∂²V_i/∂α² - (∂V_i/∂α)²] / V_i² - a(b-1) Σ_{i=1}^n {[(a-1)V_i^(a-2)(∂V_i/∂α)² + V_i^(a-1) ∂²V_i/∂α²] D_i + aV_i^(2a-2)(∂V_i/∂α)²} / D_i²,

∂²L/∂α∂β = Σ_{i=1}^n x_i^(-β) log(x_i) + θ Σ_{i=1}^n x_i^(-β) log(x_i) exp(-αx_i^(-β)) (1 - αx_i^(-β)) + (a-1) Σ_{i=1}^n [V_i ∂²V_i/∂α∂β - (∂V_i/∂α)(∂V_i/∂β)] / V_i² - a(b-1) Σ_{i=1}^n {[(a-1)V_i^(a-2)(∂V_i/∂α)(∂V_i/∂β) + V_i^(a-1) ∂²V_i/∂α∂β] D_i + aV_i^(2a-2)(∂V_i/∂α)(∂V_i/∂β)} / D_i²,

∂²L/∂α∂θ = -Σ_{i=1}^n x_i^(-β) exp(-αx_i^(-β)) + (a-1) Σ_{i=1}^n [V_i ∂²V_i/∂α∂θ - (∂V_i/∂α)(∂V_i/∂θ)] / V_i² - a(b-1) Σ_{i=1}^n {[(a-1)V_i^(a-2)(∂V_i/∂θ)(∂V_i/∂α) + V_i^(a-1) ∂²V_i/∂α∂θ] D_i - V_i^(a-1)(∂V_i/∂α)[a(e^θ - 1)^(a-1) e^θ - aV_i^(a-1) ∂V_i/∂θ]} / D_i²,

∂²L/∂β² = -n/β² - α Σ_{i=1}^n x_i^(-β) [log(x_i)]² - αθ Σ_{i=1}^n x_i^(-β) [log(x_i)]² exp(-αx_i^(-β)) (1 - αx_i^(-β)) + (a-1) Σ_{i=1}^n [V_i ∂²V_i/∂β² - (∂V_i/∂β)²] / V_i² - a(b-1) Σ_{i=1}^n {[(a-1)V_i^(a-2)(∂V_i/∂β)² + V_i^(a-1) ∂²V_i/∂β²] D_i + aV_i^(2a-2)(∂V_i/∂β)²} / D_i²,

∂²L/∂β∂θ = α Σ_{i=1}^n x_i^(-β) log(x_i) exp(-αx_i^(-β)) + (a-1) Σ_{i=1}^n [V_i ∂²V_i/∂β∂θ - (∂V_i/∂β)(∂V_i/∂θ)] / V_i² - a(b-1) Σ_{i=1}^n {[(a-1)V_i^(a-2)(∂V_i/∂θ)(∂V_i/∂β) + V_i^(a-1) ∂²V_i/∂β∂θ] D_i - V_i^(a-1)(∂V_i/∂β)[a(e^θ - 1)^(a-1) e^θ - aV_i^(a-1) ∂V_i/∂θ]} / D_i²,

and

∂²L/∂θ² = -n/θ² + nab e^θ/(e^θ - 1)² + (a-1) Σ_{i=1}^n [V_i ∂²V_i/∂θ² - (∂V_i/∂θ)²] / V_i² + (b-1) Σ_{i=1}^n {[a(a-1)(e^θ - 1)^(a-2) e^(2θ) + a(e^θ - 1)^(a-1) e^θ - a(a-1)V_i^(a-2)(∂V_i/∂θ)² - aV_i^(a-1) ∂²V_i/∂θ²] D_i - [a(e^θ - 1)^(a-1) e^θ - aV_i^(a-1) ∂V_i/∂θ]²} / D_i².

For the partial derivatives of V(x_i) with respect to the individual parameters, we obtain

∂V(x_i)/∂a = 0, ∂V(x_i)/∂b = 0,
∂V(x_i)/∂α = -θ x_i^(-β) e^(-αx_i^(-β)) exp[θe^(-αx_i^(-β))],
∂V(x_i)/∂θ = e^(-αx_i^(-β)) exp[θe^(-αx_i^(-β))],
∂V(x_i)/∂β = θα x_i^(-β) log(x_i) e^(-αx_i^(-β)) exp[θe^(-αx_i^(-β))].

Fisher Information Matrix

In this section, we present a measure for the amount of information. This information can be used for interval estimation and hypothesis testing for the model parameters a, b, α, β, θ. Let X be a random variable with the KIWP density f_KIWP(·; Θ), where Θ = (θ_1, θ_2, θ_3, θ_4, θ_5)^T = (a, b, α, β, θ)^T. The Fisher information matrix (FIM) is the 5 × 5 symmetric matrix built from the second-order partial derivatives of the log-likelihood, arranged row by row as

(∂²L/∂a², ∂²L/∂a∂b, ∂²L/∂a∂α, ∂²L/∂a∂β, ∂²L/∂a∂θ),
(∂²L/∂b∂a, ∂²L/∂b², ∂²L/∂b∂α, ∂²L/∂b∂β, ∂²L/∂b∂θ),
(∂²L/∂α∂a, ∂²L/∂α∂b, ∂²L/∂α², ∂²L/∂α∂β, ∂²L/∂α∂θ),
(∂²L/∂β∂a, ∂²L/∂β∂b, ∂²L/∂β∂α, ∂²L/∂β², ∂²L/∂β∂θ),
(∂²L/∂θ∂a, ∂²L/∂θ∂b, ∂²L/∂θ∂α, ∂²L/∂θ∂β, ∂²L/∂θ²).

The elements of the Fisher information matrix,

I_ij(Θ) = -E[∂² log f_KIWP(X; Θ) / ∂θ_i ∂θ_j],

can be obtained by considering the second-order partial derivatives of the log-likelihood function in equation (3.1). The elements can be obtained numerically by utilizing MATLAB software. The total FIM I_n(Θ) can be approximated by the observed information matrix

J_n(Θ̂) = [-∂² log L / ∂θ_i ∂θ_j] evaluated at Θ = Θ̂ (a 5 × 5 matrix).

When dealing with real-world data, this matrix is obtained after convergence of the Newton-Raphson method in MATLAB.

Asymptotic Confidence Intervals

In this section, we present the asymptotic confidence intervals for the parameters of the KIWP distribution. The expectations in the Fisher information matrix (FIM) can be obtained numerically. Let Θ̂ = (â, b̂, α̂, β̂, θ̂) be the maximum likelihood estimate of Θ = (a, b, α, β, θ). Under the usual regularity conditions, and provided that the parameters are in the interior of the parameter space and not on its boundary, we have

√n (Θ̂ - Θ) →_d N_5(0, I^(-1)(Θ)),

where I(Θ) is the expected Fisher information matrix. The asymptotic behavior remains valid if I(Θ) is replaced by the observed information matrix evaluated at Θ̂, that is, J(Θ̂). The multivariate normal distribution with mean vector 0 = (0, 0, 0, 0, 0)^T and covariance matrix I^(-1)(Θ) can be used to construct confidence intervals for the model parameters.
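In practice, the maximum likelihood estimates and their approximate standard errors can be obtained by numerically maximizing the log-likelihood (3.1); the following minimal R sketch (not the thesis appendix code) uses optim() with a log-parameterization to keep all five parameters positive and returns the numerical Hessian, from which the observed information, and hence Wald-type confidence intervals, can be approximated (on the log scale, with the delta method used to return to the original scale).

# Negative log-likelihood of the KIWP model; par holds log(a, b, alpha, beta, theta).
kiwp_negloglik <- function(par, x) {
  p <- exp(par)
  -sum(log(dkiwp(x, p[3], p[4], p[5], p[1], p[2])))
}
fit_kiwp <- function(x, start = c(a = 1, b = 1, alpha = 1, beta = 1, theta = 1)) {
  fit <- optim(log(start), kiwp_negloglik, x = x, method = "BFGS", hessian = TRUE)
  list(estimates = exp(fit$par),   # MLEs on the original scale
       negloglik = fit$value,
       hessian   = fit$hessian)    # observed information for the log-parameters
}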


More information

Noninformative Priors for the Ratio of the Scale Parameters in the Inverted Exponential Distributions

Noninformative Priors for the Ratio of the Scale Parameters in the Inverted Exponential Distributions Communications for Statistical Applications and Methods 03, Vol. 0, No. 5, 387 394 DOI: http://dx.doi.org/0.535/csam.03.0.5.387 Noninformative Priors for the Ratio of the Scale Parameters in the Inverted

More information

For iid Y i the stronger conclusion holds; for our heuristics ignore differences between these notions.

For iid Y i the stronger conclusion holds; for our heuristics ignore differences between these notions. Large Sample Theory Study approximate behaviour of ˆθ by studying the function U. Notice U is sum of independent random variables. Theorem: If Y 1, Y 2,... are iid with mean µ then Yi n µ Called law of

More information

Three hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.

Three hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER. Three hours To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer QUESTION 1, QUESTION

More information

Final Exam # 3. Sta 230: Probability. December 16, 2012

Final Exam # 3. Sta 230: Probability. December 16, 2012 Final Exam # 3 Sta 230: Probability December 16, 2012 This is a closed-book exam so do not refer to your notes, the text, or any other books (please put them on the floor). You may use the extra sheets

More information

Slides 8: Statistical Models in Simulation

Slides 8: Statistical Models in Simulation Slides 8: Statistical Models in Simulation Purpose and Overview The world the model-builder sees is probabilistic rather than deterministic: Some statistical model might well describe the variations. An

More information

The Distributions of Sums, Products and Ratios of Inverted Bivariate Beta Distribution 1

The Distributions of Sums, Products and Ratios of Inverted Bivariate Beta Distribution 1 Applied Mathematical Sciences, Vol. 2, 28, no. 48, 2377-2391 The Distributions of Sums, Products and Ratios of Inverted Bivariate Beta Distribution 1 A. S. Al-Ruzaiza and Awad El-Gohary 2 Department of

More information

STAT 3610: Review of Probability Distributions

STAT 3610: Review of Probability Distributions STAT 3610: Review of Probability Distributions Mark Carpenter Professor of Statistics Department of Mathematics and Statistics August 25, 2015 Support of a Random Variable Definition The support of a random

More information

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM

SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM SOLUTIONS TO MATH68181 EXTREME VALUES AND FINANCIAL RISK EXAM Solutions to Question A1 a) The marginal cdfs of F X,Y (x, y) = [1 + exp( x) + exp( y) + (1 α) exp( x y)] 1 are F X (x) = F X,Y (x, ) = [1

More information

On Weighted Exponential Distribution and its Length Biased Version

On Weighted Exponential Distribution and its Length Biased Version On Weighted Exponential Distribution and its Length Biased Version Suchismita Das 1 and Debasis Kundu 2 Abstract In this paper we consider the weighted exponential distribution proposed by Gupta and Kundu

More information

Applied Statistics and Probability for Engineers. Sixth Edition. Chapter 4 Continuous Random Variables and Probability Distributions.

Applied Statistics and Probability for Engineers. Sixth Edition. Chapter 4 Continuous Random Variables and Probability Distributions. Applied Statistics and Probability for Engineers Sixth Edition Douglas C. Montgomery George C. Runger Chapter 4 Continuous Random Variables and Probability Distributions 4 Continuous CHAPTER OUTLINE Random

More information

Chapter 4 Continuous Random Variables and Probability Distributions

Chapter 4 Continuous Random Variables and Probability Distributions Applied Statistics and Probability for Engineers Sixth Edition Douglas C. Montgomery George C. Runger Chapter 4 Continuous Random Variables and Probability Distributions 4 Continuous CHAPTER OUTLINE 4-1

More information

The Gamma Modified Weibull Distribution

The Gamma Modified Weibull Distribution Chilean Journal of Statistics Vol. 6, No. 1, April 2015, 37 48 Distribution theory Research Paper The Gamma Modified Weibull Distribution Gauss M. Cordeiro, William D. Aristizábal, Dora M. Suárez and Sébastien

More information

Dagum-Poisson Distribution: Model, Properties and Application

Dagum-Poisson Distribution: Model, Properties and Application Georgia Southern University Digital Commons@Georgia Southern Mathematical Sciences Faculty Publications Mathematical Sciences, Department of 4-26-2016 Dagum-Poisson Distribution: Model, Properties and

More information

STA 2201/442 Assignment 2

STA 2201/442 Assignment 2 STA 2201/442 Assignment 2 1. This is about how to simulate from a continuous univariate distribution. Let the random variable X have a continuous distribution with density f X (x) and cumulative distribution

More information

MARSHALL-OLKIN EXTENDED POWER FUNCTION DISTRIBUTION I. E. Okorie 1, A. C. Akpanta 2 and J. Ohakwe 3

MARSHALL-OLKIN EXTENDED POWER FUNCTION DISTRIBUTION I. E. Okorie 1, A. C. Akpanta 2 and J. Ohakwe 3 MARSHALL-OLKIN EXTENDED POWER FUNCTION DISTRIBUTION I. E. Okorie 1, A. C. Akpanta 2 and J. Ohakwe 3 1 School of Mathematics, University of Manchester, Manchester M13 9PL, UK 2 Department of Statistics,

More information

THE ODD LINDLEY BURR XII DISTRIBUTION WITH APPLICATIONS

THE ODD LINDLEY BURR XII DISTRIBUTION WITH APPLICATIONS Pak. J. Statist. 2018 Vol. 34(1), 15-32 THE ODD LINDLEY BURR XII DISTRIBUTION WITH APPLICATIONS T.H.M. Abouelmagd 1&3, Saeed Al-mualim 1&2, Ahmed Z. Afify 3 Munir Ahmad 4 and Hazem Al-Mofleh 5 1 Management

More information

Examining the accuracy of the normal approximation to the poisson random variable

Examining the accuracy of the normal approximation to the poisson random variable Eastern Michigan University DigitalCommons@EMU Master's Theses and Doctoral Dissertations Master's Theses, and Doctoral Dissertations, and Graduate Capstone Projects 2009 Examining the accuracy of the

More information

Actuarial Science Exam 1/P

Actuarial Science Exam 1/P Actuarial Science Exam /P Ville A. Satopää December 5, 2009 Contents Review of Algebra and Calculus 2 2 Basic Probability Concepts 3 3 Conditional Probability and Independence 4 4 Combinatorial Principles,

More information

Ching-Han Hsu, BMES, National Tsing Hua University c 2015 by Ching-Han Hsu, Ph.D., BMIR Lab. = a + b 2. b a. x a b a = 12

Ching-Han Hsu, BMES, National Tsing Hua University c 2015 by Ching-Han Hsu, Ph.D., BMIR Lab. = a + b 2. b a. x a b a = 12 Lecture 5 Continuous Random Variables BMIR Lecture Series in Probability and Statistics Ching-Han Hsu, BMES, National Tsing Hua University c 215 by Ching-Han Hsu, Ph.D., BMIR Lab 5.1 1 Uniform Distribution

More information

Graduate Econometrics I: Maximum Likelihood II

Graduate Econometrics I: Maximum Likelihood II Graduate Econometrics I: Maximum Likelihood II Yves Dominicy Université libre de Bruxelles Solvay Brussels School of Economics and Management ECARES Yves Dominicy Graduate Econometrics I: Maximum Likelihood

More information

BMIR Lecture Series on Probability and Statistics Fall, 2015 Uniform Distribution

BMIR Lecture Series on Probability and Statistics Fall, 2015 Uniform Distribution Lecture #5 BMIR Lecture Series on Probability and Statistics Fall, 2015 Department of Biomedical Engineering and Environmental Sciences National Tsing Hua University s 5.1 Definition ( ) A continuous random

More information

Problem 1. Problem 2. Problem 3. Problem 4

Problem 1. Problem 2. Problem 3. Problem 4 Problem Let A be the event that the fungus is present, and B the event that the staph-bacteria is present. We have P A = 4, P B = 9, P B A =. We wish to find P AB, to do this we use the multiplication

More information

TABLE OF CONTENTS CHAPTER 1 COMBINATORIAL PROBABILITY 1

TABLE OF CONTENTS CHAPTER 1 COMBINATORIAL PROBABILITY 1 TABLE OF CONTENTS CHAPTER 1 COMBINATORIAL PROBABILITY 1 1.1 The Probability Model...1 1.2 Finite Discrete Models with Equally Likely Outcomes...5 1.2.1 Tree Diagrams...6 1.2.2 The Multiplication Principle...8

More information

Further results involving Marshall Olkin log logistic distribution: reliability analysis, estimation of the parameter, and applications

Further results involving Marshall Olkin log logistic distribution: reliability analysis, estimation of the parameter, and applications DOI 1.1186/s464-16-27- RESEARCH Open Access Further results involving Marshall Olkin log logistic distribution: reliability analysis, estimation of the parameter, and applications Arwa M. Alshangiti *,

More information

Practice Exam 1. (A) (B) (C) (D) (E) You are given the following data on loss sizes:

Practice Exam 1. (A) (B) (C) (D) (E) You are given the following data on loss sizes: Practice Exam 1 1. Losses for an insurance coverage have the following cumulative distribution function: F(0) = 0 F(1,000) = 0.2 F(5,000) = 0.4 F(10,000) = 0.9 F(100,000) = 1 with linear interpolation

More information

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY

EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY EXAMINATIONS OF THE ROYAL STATISTICAL SOCIETY GRADUATE DIPLOMA, 2016 MODULE 1 : Probability distributions Time allowed: Three hours Candidates should answer FIVE questions. All questions carry equal marks.

More information

THE FIVE PARAMETER LINDLEY DISTRIBUTION. Dept. of Statistics and Operations Research, King Saud University Saudi Arabia,

THE FIVE PARAMETER LINDLEY DISTRIBUTION. Dept. of Statistics and Operations Research, King Saud University Saudi Arabia, Pak. J. Statist. 2015 Vol. 31(4), 363-384 THE FIVE PARAMETER LINDLEY DISTRIBUTION Abdulhakim Al-Babtain 1, Ahmed M. Eid 2, A-Hadi N. Ahmed 3 and Faton Merovci 4 1 Dept. of Statistics and Operations Research,

More information

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER.

Two hours. To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER. Two hours MATH38181 To be supplied by the Examinations Office: Mathematical Formula Tables and Statistical Tables THE UNIVERSITY OF MANCHESTER EXTREME VALUES AND FINANCIAL RISK Examiner: Answer any FOUR

More information

Two Weighted Distributions Generated by Exponential Distribution

Two Weighted Distributions Generated by Exponential Distribution Journal of Mathematical Extension Vol. 9, No. 1, (2015), 1-12 ISSN: 1735-8299 URL: http://www.ijmex.com Two Weighted Distributions Generated by Exponential Distribution A. Mahdavi Vali-e-Asr University

More information

Some Results on Moment of Order Statistics for the Quadratic Hazard Rate Distribution

Some Results on Moment of Order Statistics for the Quadratic Hazard Rate Distribution J. Stat. Appl. Pro. 5, No. 2, 371-376 (2016) 371 Journal of Statistics Applications & Probability An International Journal http://dx.doi.org/10.18576/jsap/050218 Some Results on Moment of Order Statistics

More information

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3.

t x 1 e t dt, and simplify the answer when possible (for example, when r is a positive even number). In particular, confirm that EX 4 = 3. Mathematical Statistics: Homewor problems General guideline. While woring outside the classroom, use any help you want, including people, computer algebra systems, Internet, and solution manuals, but mae

More information

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued

Chapter 3 sections. SKIP: 3.10 Markov Chains. SKIP: pages Chapter 3 - continued Chapter 3 sections Chapter 3 - continued 3.1 Random Variables and Discrete Distributions 3.2 Continuous Distributions 3.3 The Cumulative Distribution Function 3.4 Bivariate Distributions 3.5 Marginal Distributions

More information

The Kumaraswamy Gompertz distribution

The Kumaraswamy Gompertz distribution Journal of Data Science 13(2015), 241-260 The Kumaraswamy Gompertz distribution Raquel C. da Silva a, Jeniffer J. D. Sanchez b, F abio P. Lima c, Gauss M. Cordeiro d Departamento de Estat ıstica, Universidade

More information

Beta-Linear Failure Rate Distribution and its Applications

Beta-Linear Failure Rate Distribution and its Applications JIRSS (2015) Vol. 14, No. 1, pp 89-105 Beta-Linear Failure Rate Distribution and its Applications A. A. Jafari, E. Mahmoudi Department of Statistics, Yazd University, Yazd, Iran. Abstract. We introduce

More information

Northwestern University Department of Electrical Engineering and Computer Science

Northwestern University Department of Electrical Engineering and Computer Science Northwestern University Department of Electrical Engineering and Computer Science EECS 454: Modeling and Analysis of Communication Networks Spring 2008 Probability Review As discussed in Lecture 1, probability

More information

P n. This is called the law of large numbers but it comes in two forms: Strong and Weak.

P n. This is called the law of large numbers but it comes in two forms: Strong and Weak. Large Sample Theory Large Sample Theory is a name given to the search for approximations to the behaviour of statistical procedures which are derived by computing limits as the sample size, n, tends to

More information

Robust Parameter Estimation in the Weibull and the Birnbaum-Saunders Distribution

Robust Parameter Estimation in the Weibull and the Birnbaum-Saunders Distribution Clemson University TigerPrints All Theses Theses 8-2012 Robust Parameter Estimation in the Weibull and the Birnbaum-Saunders Distribution Jing Zhao Clemson University, jzhao2@clemson.edu Follow this and

More information

Midterm Examination. STA 215: Statistical Inference. Due Wednesday, 2006 Mar 8, 1:15 pm

Midterm Examination. STA 215: Statistical Inference. Due Wednesday, 2006 Mar 8, 1:15 pm Midterm Examination STA 215: Statistical Inference Due Wednesday, 2006 Mar 8, 1:15 pm This is an open-book take-home examination. You may work on it during any consecutive 24-hour period you like; please

More information

March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS

March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS March 10, 2017 THE EXPONENTIAL CLASS OF DISTRIBUTIONS Abstract. We will introduce a class of distributions that will contain many of the discrete and continuous we are familiar with. This class will help

More information

Parameter Estimation

Parameter Estimation Parameter Estimation Chapters 13-15 Stat 477 - Loss Models Chapters 13-15 (Stat 477) Parameter Estimation Brian Hartman - BYU 1 / 23 Methods for parameter estimation Methods for parameter estimation Methods

More information

International Journal of Scientific & Engineering Research, Volume 7, Issue 8, August ISSN MM Double Exponential Distribution

International Journal of Scientific & Engineering Research, Volume 7, Issue 8, August ISSN MM Double Exponential Distribution International Journal of Scientific & Engineering Research, Volume 7, Issue 8, August-216 72 MM Double Exponential Distribution Zahida Perveen, Mubbasher Munir Abstract: The exponential distribution is

More information

Topic 4: Continuous random variables

Topic 4: Continuous random variables Topic 4: Continuous random variables Course 3, 216 Page Continuous random variables Definition (Continuous random variable): An r.v. X has a continuous distribution if there exists a non-negative function

More information

Random Variables and Their Distributions

Random Variables and Their Distributions Chapter 3 Random Variables and Their Distributions A random variable (r.v.) is a function that assigns one and only one numerical value to each simple event in an experiment. We will denote r.vs by capital

More information

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R

Random Variables. Random variables. A numerically valued map X of an outcome ω from a sample space Ω to the real line R In probabilistic models, a random variable is a variable whose possible values are numerical outcomes of a random phenomenon. As a function or a map, it maps from an element (or an outcome) of a sample

More information

Truncated Weibull-G More Flexible and More Reliable than Beta-G Distribution

Truncated Weibull-G More Flexible and More Reliable than Beta-G Distribution International Journal of Statistics and Probability; Vol. 6 No. 5; September 217 ISSN 1927-732 E-ISSN 1927-74 Published by Canadian Center of Science and Education Truncated Weibull-G More Flexible and

More information

The Lomax-Exponential Distribution, Some Properties and Applications

The Lomax-Exponential Distribution, Some Properties and Applications Statistical Research and Training Center دوره ی ١٣ شماره ی ٢ پاییز و زمستان ١٣٩۵ صص ١۵٣ ١٣١ 131 153 (2016): 13 J. Statist. Res. Iran DOI: 10.18869/acadpub.jsri.13.2.131 The Lomax-Exponential Distribution,

More information

arxiv: v1 [stat.ap] 12 Jun 2014

arxiv: v1 [stat.ap] 12 Jun 2014 On Lindley-Exponential Distribution: Properties and Application Deepesh Bhati 1 and Mohd. Aamir Malik 2 arxiv:146.316v1 [stat.ap] 12 Jun 214 1 Assistant Professor, Department of Statistics, Central University

More information

Parameters Estimation for a Linear Exponential Distribution Based on Grouped Data

Parameters Estimation for a Linear Exponential Distribution Based on Grouped Data International Mathematical Forum, 3, 2008, no. 33, 1643-1654 Parameters Estimation for a Linear Exponential Distribution Based on Grouped Data A. Al-khedhairi Department of Statistics and O.R. Faculty

More information

A New Class of Generalized Dagum. Distribution with Applications. to Income and Lifetime Data

A New Class of Generalized Dagum. Distribution with Applications. to Income and Lifetime Data Journal of Statistical and Econometric Methods, vol.3, no.2, 2014, 125-151 ISSN: 1792-6602 (print), 1792-6939 (online) Scienpress Ltd, 2014 A New Class of Generalized Dagum Distribution with Applications

More information

A GENERALIZED CLASS OF EXPONENTIATED MODI ED WEIBULL DISTRIBUTION WITH APPLICATIONS

A GENERALIZED CLASS OF EXPONENTIATED MODI ED WEIBULL DISTRIBUTION WITH APPLICATIONS Journal of Data Science 14(2016), 585-614 A GENERALIZED CLASS OF EXPONENTIATED MODI ED WEIBULL DISTRIBUTION WITH APPLICATIONS Shusen Pu 1 ; Broderick O. Oluyede 2, Yuqi Qiu 3 and Daniel Linder 4 Abstract:

More information

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A.

Fall 2017 STAT 532 Homework Peter Hoff. 1. Let P be a probability measure on a collection of sets A. 1. Let P be a probability measure on a collection of sets A. (a) For each n N, let H n be a set in A such that H n H n+1. Show that P (H n ) monotonically converges to P ( k=1 H k) as n. (b) For each n

More information

Package LindleyR. June 23, 2016

Package LindleyR. June 23, 2016 Type Package Package LindleyR June 23, 2016 Title The Lindley Distribution and Its Modifications Version 1.1.0 License GPL (>= 2) Date 2016-05-22 Author Josmar Mazucheli, Larissa B. Fernandes and Ricardo

More information

Basics of Stochastic Modeling: Part II

Basics of Stochastic Modeling: Part II Basics of Stochastic Modeling: Part II Continuous Random Variables 1 Sandip Chakraborty Department of Computer Science and Engineering, INDIAN INSTITUTE OF TECHNOLOGY KHARAGPUR August 10, 2016 1 Reference

More information

A New Method for Generating Distributions with an Application to Exponential Distribution

A New Method for Generating Distributions with an Application to Exponential Distribution A New Method for Generating Distributions with an Application to Exponential Distribution Abbas Mahdavi & Debasis Kundu Abstract Anewmethodhasbeenproposedtointroduceanextraparametertoafamilyofdistributions

More information

1. Fisher Information

1. Fisher Information 1. Fisher Information Let f(x θ) be a density function with the property that log f(x θ) is differentiable in θ throughout the open p-dimensional parameter set Θ R p ; then the score statistic (or score

More information

Statistics. Lecture 2 August 7, 2000 Frank Porter Caltech. The Fundamentals; Point Estimation. Maximum Likelihood, Least Squares and All That

Statistics. Lecture 2 August 7, 2000 Frank Porter Caltech. The Fundamentals; Point Estimation. Maximum Likelihood, Least Squares and All That Statistics Lecture 2 August 7, 2000 Frank Porter Caltech The plan for these lectures: The Fundamentals; Point Estimation Maximum Likelihood, Least Squares and All That What is a Confidence Interval? Interval

More information

IE 230 Probability & Statistics in Engineering I. Closed book and notes. 120 minutes.

IE 230 Probability & Statistics in Engineering I. Closed book and notes. 120 minutes. Closed book and notes. 10 minutes. Two summary tables from the concise notes are attached: Discrete distributions and continuous distributions. Eight Pages. Score _ Final Exam, Fall 1999 Cover Sheet, Page

More information

Qualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama

Qualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama Qualifying Exam CS 661: System Simulation Summer 2013 Prof. Marvin K. Nakayama Instructions This exam has 7 pages in total, numbered 1 to 7. Make sure your exam has all the pages. This exam will be 2 hours

More information

On the Comparison of Fisher Information of the Weibull and GE Distributions

On the Comparison of Fisher Information of the Weibull and GE Distributions On the Comparison of Fisher Information of the Weibull and GE Distributions Rameshwar D. Gupta Debasis Kundu Abstract In this paper we consider the Fisher information matrices of the generalized exponential

More information

Mathematical statistics

Mathematical statistics October 1 st, 2018 Lecture 11: Sufficient statistic Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation

More information

Point and Interval Estimation for Gaussian Distribution, Based on Progressively Type-II Censored Samples

Point and Interval Estimation for Gaussian Distribution, Based on Progressively Type-II Censored Samples 90 IEEE TRANSACTIONS ON RELIABILITY, VOL. 52, NO. 1, MARCH 2003 Point and Interval Estimation for Gaussian Distribution, Based on Progressively Type-II Censored Samples N. Balakrishnan, N. Kannan, C. T.

More information

Mathematical statistics

Mathematical statistics October 4 th, 2018 Lecture 12: Information Where are we? Week 1 Week 2 Week 4 Week 7 Week 10 Week 14 Probability reviews Chapter 6: Statistics and Sampling Distributions Chapter 7: Point Estimation Chapter

More information

LIST OF FORMULAS FOR STK1100 AND STK1110

LIST OF FORMULAS FOR STK1100 AND STK1110 LIST OF FORMULAS FOR STK1100 AND STK1110 (Version of 11. November 2015) 1. Probability Let A, B, A 1, A 2,..., B 1, B 2,... be events, that is, subsets of a sample space Ω. a) Axioms: A probability function

More information

Generalized Transmuted -Generalized Rayleigh Its Properties And Application

Generalized Transmuted -Generalized Rayleigh Its Properties And Application Journal of Experimental Research December 018, Vol 6 No 4 Email: editorinchief.erjournal@gmail.com editorialsecretary.erjournal@gmail.com Received: 10th April, 018 Accepted for Publication: 0th Nov, 018

More information

Chapter 2. Discrete Distributions

Chapter 2. Discrete Distributions Chapter. Discrete Distributions Objectives ˆ Basic Concepts & Epectations ˆ Binomial, Poisson, Geometric, Negative Binomial, and Hypergeometric Distributions ˆ Introduction to the Maimum Likelihood Estimation

More information

Probability Distributions for Continuous Variables. Probability Distributions for Continuous Variables

Probability Distributions for Continuous Variables. Probability Distributions for Continuous Variables Probability Distributions for Continuous Variables Probability Distributions for Continuous Variables Let X = lake depth at a randomly chosen point on lake surface If we draw the histogram so that the

More information