Exact Linear Likelihood Inference for Laplace


Exact Linear Likelihood Inference for Laplace
Prof. N. Balakrishnan, McMaster University, Hamilton, Canada (bala@mcmaster.ca)

Pierre-Simon Laplace (1749-1827)

Laplace's Biography
Born: March 23, 1749, in Normandy, France
Died: March 5, 1827, in Paris, France
Nationality: French
Alma mater: University of Caen, France
Advisors: Jean d'Alembert, Christophe Gadbled
Student: Siméon Denis Poisson

In collaboration with:
Colleen Cutler (University of Waterloo)
G. Iliopoulos (University of Piraeus)
Aaron Childs (McMaster University)
X. Zhu (Xi'an Jiaotong-Liverpool University)
R. Ambagaspitiya (University of Calgary)
Kai Liu (McMaster University)

Goals
1. To provide historical details on the Laplace distribution;
2. To describe results on order statistics;
3. To present results for Type-II censoring;
4. To present results on outlier-models and robustness;
5. To present results for Type-I censoring;
6. To present some numerical results and examples;
7. To mention briefly some other work carried out recently.

Historical Details!

Distribution
The Laplace distribution has pdf
    f(x) = (1/(2σ)) e^{-|x-µ|/σ},   -∞ < x < ∞,
where µ and σ are the location and scale parameters.
It is also known as the Double Exponential distribution.
The distribution is symmetric about µ.
The corresponding cdf is
    F(x) = (1/2) e^{(x-µ)/σ},         x ≤ µ,
    F(x) = 1 - (1/2) e^{-(x-µ)/σ},    x > µ.

Distribution
The quantile function is
    Q(u) = µ + σ ln(2u),         0 < u ≤ 1/2,
    Q(u) = µ - σ ln{2(1-u)},     1/2 < u < 1.
Chapter 24 of Johnson, Kotz and Balakrishnan (1995) and the book by Kotz, Kozubowski and Podgórski (2001) both provide detailed overviews of developments on the theory, methods and applications of the Laplace distribution and its generalizations.
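The pdf, cdf and quantile function above fit together as an inverse pair, which a minimal plain-Python sketch makes easy to check (the function names are illustrative, not from the talk):

```python
import math

def laplace_pdf(x, mu=0.0, sigma=1.0):
    # f(x) = (1/(2*sigma)) * exp(-|x - mu| / sigma)
    return math.exp(-abs(x - mu) / sigma) / (2.0 * sigma)

def laplace_cdf(x, mu=0.0, sigma=1.0):
    # F(x) = (1/2) e^{(x-mu)/sigma} for x <= mu,
    #        1 - (1/2) e^{-(x-mu)/sigma} for x > mu
    if x <= mu:
        return 0.5 * math.exp((x - mu) / sigma)
    return 1.0 - 0.5 * math.exp(-(x - mu) / sigma)

def laplace_quantile(u, mu=0.0, sigma=1.0):
    # Q(u) = mu + sigma*ln(2u) for u <= 1/2,
    #        mu - sigma*ln(2(1-u)) for u > 1/2
    if not 0.0 < u < 1.0:
        raise ValueError("u must lie in (0, 1)")
    if u <= 0.5:
        return mu + sigma * math.log(2.0 * u)
    return mu - sigma * math.log(2.0 * (1.0 - u))
```

By symmetry about µ, Q(1/2) returns µ, and Q(F(x)) returns x for any x.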

ML Estimation
Let x_1, ..., x_n be a random sample from the Laplace distribution.
Then, the likelihood function is
    L = ∏_{i=1}^{n} f(x_i) = (1/(2^n σ^n)) exp{ -(1/σ) Σ_{i=1}^{n} |x_i - µ| }.
Evidently, L is maximized with respect to µ when Σ_{i=1}^{n} |x_i - µ| is minimized.
This happens when µ is the central most value (if n is odd), or any value in the interval of the two middle most values (if n is even).

ML Estimation
Thus, the MLE of µ is taken to be the sample median:
    µ̂ = x_{m+1:n},                      n = 2m+1,
    µ̂ = (1/2)(x_{m:n} + x_{m+1:n}),     n = 2m.
Next, upon maximizing L with respect to σ, we obtain the MLE of σ to be
    σ̂ = (1/n) Σ_{i=1}^{n} |x_i - Median|.
Note that the MLEs in this case naturally turn out to be linear estimators, meaning they are linear functions of order statistics.
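The complete-sample MLEs above can be sketched in a few lines of plain Python (the helper name `laplace_mle` is an assumption, not from the slides; for even n the usual average of the two central order statistics is used for the median):

```python
def laplace_mle(xs):
    # MLEs from the slides: mu-hat is the sample median and sigma-hat is
    # the mean absolute deviation about that median.
    x = sorted(xs)
    n = len(x)
    m, odd = divmod(n, 2)
    # n = 2m+1: median is x_{m+1:n}; n = 2m: midpoint of x_{m:n}, x_{m+1:n}
    mu_hat = x[m] if odd else 0.5 * (x[m - 1] + x[m])
    sigma_hat = sum(abs(xi - mu_hat) for xi in x) / n
    return mu_hat, sigma_hat
```

For example, laplace_mle([1, 2, 3, 4, 100]) gives µ̂ = 3 and σ̂ = 101/5 = 20.2; the extreme observation shifts σ̂ but not µ̂.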

Historical Note
In fact, the Laplace distribution was discovered by Laplace (1774) as the distributional form for which the likelihood function is maximized by setting the location parameter µ equal to the median of the observed values of an odd number of i.i.d. observations.
Furthermore, Laplace went on to replace the median by the arithmetic mean as the value maximizing the likelihood function, and derived the corresponding distribution to be the normal distribution; see Stigler (1975).

Here comes censoring!

Symmetric Censoring
Suppose we have available only a Type-II symmetrically censored sample of the form x_{r+1:n} < ... < x_{n-r:n}, with the smallest r and the largest r order statistics having been censored.
Then, the corresponding likelihood function is
    L = (n!/(r!)^2) [F(x_{r+1:n}) {1 - F(x_{n-r:n})}]^r ∏_{i=r+1}^{n-r} f(x_{i:n}).
Now, let us first maximize L with respect to µ.

Symmetric Censoring
1. Case µ < x_{r+1:n}: In this case, the likelihood is
    L = (C/σ^{n-2r}) [ (1/2) e^{(µ - x_{n-r:n})/σ} - (1/4) e^{(2µ - x_{r+1:n} - x_{n-r:n})/σ} ]^r
        × e^{ -{ Σ_{i=r+1}^{n-r} x_{i:n} - (n-2r)µ }/σ }.
It is easy to show that the second and third terms are both increasing functions of µ for -∞ < µ < x_{r+1:n}.
So, L is increasing over the range -∞ < µ < x_{r+1:n}.

Symmetric Censoring
2. Case µ > x_{n-r:n}: In this case, the likelihood is
    L = (C/σ^{n-2r}) [ (1/2) e^{(x_{r+1:n} - µ)/σ} - (1/4) e^{(x_{r+1:n} + x_{n-r:n} - 2µ)/σ} ]^r
        × e^{ { Σ_{i=r+1}^{n-r} x_{i:n} - (n-2r)µ }/σ }.
In this case, it is easy to show that the second and the third terms are both decreasing functions of µ for x_{n-r:n} < µ < ∞.
So, L is decreasing over the range x_{n-r:n} < µ < ∞.

Symmetric Censoring
3. Case x_{r+1:n} ≤ µ ≤ x_{n-r:n}: In this case,
    L = (C/σ^{n-2r}) e^{ -r(x_{n-r:n} - x_{r+1:n})/σ } e^{ -Σ_{i=r+1}^{n-r} |x_{i:n} - µ|/σ }.
From this expression, it is clear that the MLE of µ is once again the sample median, given by
    µ̂ = x_{m+1:n},                              n = 2m+1,
    µ̂ = any value in [x_{m:n}, x_{m+1:n}],      n = 2m.
So, the MLE of µ in the case of symmetric censoring is exactly the same as that based on the complete sample.

Symmetric Censoring
Next, upon maximizing L with respect to σ, we obtain
    σ̂ = (1/(2m+1-2r)) [ Σ_{i=m+2}^{2m+1-r} x_{i:2m+1} + r x_{2m+1-r:2m+1}
                         - Σ_{i=r+1}^{m} x_{i:2m+1} - r x_{r+1:2m+1} ],    n = 2m+1;
    σ̂ = (1/(2m-2r)) [ Σ_{i=m+1}^{2m-r} x_{i:2m} + r x_{2m-r:2m}
                       - Σ_{i=r+1}^{m} x_{i:2m} - r x_{r+1:2m} ],          n = 2m.
Interestingly, the MLEs of µ and σ in this case are also linear estimators [Balakrishnan and Cutler (1996)].
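Setting the σ-derivative of the Case 3 log-likelihood to zero shows that both displayed estimators are the same compact quantity, σ̂ = [ r(x_{n-r:n} - x_{r+1:n}) + Σ_{i=r+1}^{n-r} |x_{i:n} - µ̂| ] / (n - 2r), which a short sketch can use directly (the function name is illustrative, not from the slides):

```python
def laplace_mle_symmetric(x_obs, n, r):
    # x_obs = [x_{r+1:n}, ..., x_{n-r:n}]: Type-II symmetrically censored
    # sample with the r smallest and r largest order statistics removed.
    # mu-hat is still the sample median; sigma-hat is the compact form
    #   [ r(x_{n-r:n} - x_{r+1:n}) + sum |x_{i:n} - mu-hat| ] / (n - 2r),
    # equivalent to the two displayed n = 2m+1 / n = 2m expressions.
    assert len(x_obs) == n - 2 * r and 2 * r < n
    x = sorted(x_obs)                 # x[i - r - 1] is x_{i:n}
    m, odd = divmod(n, 2)
    mu_hat = x[m - r] if odd else 0.5 * (x[m - 1 - r] + x[m - r])
    sigma_hat = (r * (x[-1] - x[0])
                 + sum(abs(xi - mu_hat) for xi in x)) / (n - 2 * r)
    return mu_hat, sigma_hat
```

For instance, with n = 5, r = 1 and observed values [2, 3, 4], this gives µ̂ = 3 and σ̂ = (2 + 2)/3 = 4/3, matching the n = 2m+1 formula above term by term.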

Right Censoring
Let us now consider a Type-II right censored sample of the form x_{1:n} < ... < x_{n-r:n}, with the largest r order statistics having been censored.
Then, the corresponding likelihood function is
    L = (n!/r!) {1 - F(x_{n-r:n})}^r ∏_{i=1}^{n-r} f(x_{i:n}).
In this case, the MLEs depend on whether r ≤ n/2 or r > n/2.

Right Censoring
1. Case r ≤ n/2: In this case, note that the median of the sample is available. So, the MLE of µ is once again the sample median.
2. Case r > n/2: In this case, the median of the sample is not available.
It is easy to show in this case that the likelihood function is increasing over µ in the interval (-∞, x_{1:n}) and also increasing in the interval (x_{1:n}, x_{n-r:n}].
So, the MLE of µ must be in the interval (x_{n-r:n}, ∞).

Right Censoring
The likelihood function in this case is
    L = (C/σ^{n-r}) { 1 - (1/2) e^{(x_{n-r:n} - µ)/σ} }^r e^{ -Σ_{i=1}^{n-r} (µ - x_{i:n})/σ }.
When this function is maximized for µ, we determine the MLE of µ (when r > n/2) as
    µ̂ = x_{n-r:n} + σ̂ log{ n/(2(n-r)) }.
Substituting this expression of the MLE of µ in L and then maximizing with respect to σ, we get the expression for the MLE of σ.

Right Censoring
For example, in the case when r ≤ n/2, we obtain the MLE of σ as follows:
    σ̂ = (1/(n-r)) [ Σ_{i=m+2}^{n-r} x_{i:n} - Σ_{i=1}^{m} x_{i:n} + r x_{n-r:n} ],    n = 2m+1;
    σ̂ = (1/(n-r)) [ Σ_{i=m+1}^{n-r} x_{i:n} - Σ_{i=1}^{m} x_{i:n} + r x_{n-r:n} ],    n = 2m.
Similar expressions for σ̂ can be presented for the case when r > n/2 by using the MLE of µ presented earlier [Balakrishnan and Cutler (1996)].
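Both right-censoring cases can be sketched together. For r ≤ n/2 the displayed σ̂ collapses to [ r(x_{n-r:n} - µ̂) + Σ |x_{i:n} - µ̂| ] / (n - r); for r > n/2, plugging µ̂ = x_{n-r:n} + σ̂ log{n/(2(n-r))} back into L makes the censored factor constant in σ, and the profile maximum is σ̂ = x_{n-r:n} minus the mean of the observed values. That last step is a derivation sketch under the slides' setup, not a formula displayed on them; the function name is illustrative:

```python
import math

def laplace_mle_right_censored(x_obs, n):
    # x_obs = [x_{1:n}, ..., x_{n-r:n}]: Type-II right censored sample,
    # the largest r = n - len(x_obs) order statistics are unobserved.
    x = sorted(x_obs)
    r = n - len(x)
    m, odd = divmod(n, 2)
    if 2 * r < n:
        # Median observable: mu-hat is the sample median, and sigma-hat is
        # [ r(x_{n-r:n} - mu-hat) + sum |x_{i:n} - mu-hat| ] / (n - r),
        # equivalent to the displayed n = 2m+1 / n = 2m expressions.
        mu_hat = x[m] if odd else 0.5 * (x[m - 1] + x[m])
        sigma_hat = (r * (x[-1] - mu_hat)
                     + sum(abs(xi - mu_hat) for xi in x)) / (n - r)
    else:
        # Median censored: profile out mu via
        # mu-hat = x_{n-r:n} + sigma-hat * log(n / (2(n-r))); the profile
        # likelihood in sigma then peaks at x_{n-r:n} - mean(x_obs)
        # (derivation sketch, not displayed on the slides).
        sigma_hat = ((n - r) * x[-1] - sum(x)) / (n - r)
        mu_hat = x[-1] + sigma_hat * math.log(n / (2 * (n - r)))
    return mu_hat, sigma_hat
```

With n = 5 and observed [1, 2, 3, 4] (r = 1), this returns µ̂ = 3, σ̂ = 5/4, agreeing with the n = 2m+1 formula above.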

How about order statistics and moments?

Order Statistics
Let X_{1:n} < ... < X_{n:n} be the order statistics from a standard Laplace distribution.
Let Y_{1:n} < ... < Y_{n:n} be the order statistics from the folded distribution with pdf and cdf
    p(x) = 2f(x) and P(x) = 2F(x) - 1, x > 0.
In other words, these order statistics are from the standard exponential distribution.
Let (µ^{(k)}_{i:n}, µ_{i,j:n}) and (ν^{(k)}_{i:n}, ν_{i,j:n}) denote the single and product moments of order statistics from the Laplace and exponential distributions, respectively.

Order Statistics
Then, Govindarajulu (1963) established the following relationships between these two sets of moments:
    µ^{(k)}_{i:n} = (1/2^n) { Σ_{r=0}^{i-1} C(n,r) ν^{(k)}_{i-r:n-r} + (-1)^k Σ_{r=i}^{n} C(n,r) ν^{(k)}_{r-i+1:r} };
    µ_{i,j:n} = (1/2^n) { Σ_{r=0}^{i-1} C(n,r) ν_{i-r,j-r:n-r} - Σ_{r=i}^{j-1} C(n,r) ν_{r-i+1:r} ν_{j-r:n-r}
                          + Σ_{r=j}^{n} C(n,r) ν_{r-j+1,r-i+1:r} },
where C(n,r) denotes the binomial coefficient.
The proof provided by Govindarajulu (1963) is purely an algebraic one, through integration.
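The single-moment relation is easy to check numerically for k = 1, since exponential order-statistic means have the classical closed form ν_{i:n} = Σ_{l=n-i+1}^{n} 1/l. A sketch (the function names are illustrative, not from the talk):

```python
import math
from math import comb

def exp_os_mean(i, n):
    # E[Y_{i:n}] for standard exponential order statistics:
    # the classical closed form sum_{l = n-i+1}^{n} 1/l.
    return sum(1.0 / l for l in range(n - i + 1, n + 1))

def laplace_os_mean(i, n):
    # Govindarajulu's relation with k = 1: the mean of the i-th standard
    # Laplace order statistic via exponential order-statistic means.
    first = sum(comb(n, r) * exp_os_mean(i - r, n - r) for r in range(0, i))
    second = sum(comb(n, r) * exp_os_mean(r - i + 1, r) for r in range(i, n + 1))
    return (first - second) / 2 ** n
```

For example, laplace_os_mean(1, 2) returns -0.75, the known mean of the minimum of two standard Laplace variates, and for any n the means sum to zero by the symmetry of the distribution.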

Order Statistics
A simple and elegant probabilistic proof was given by Balakrishnan, Govindarajulu and Balasubramanian (1993).
Let D denote the number of X_i's that are ≤ 0. Then, the following properties hold:
1. D ~ Bin(n, 1/2);
2. Given D = i-1, X_{i:n} < ... < X_{n:n} are distributed exactly as order statistics from a sample of size n-i+1 from the exponential distribution;
3. Also, given D = i-1, -X_{i-1:n} < ... < -X_{1:n} are distributed exactly as order statistics from a sample of size i-1 from the exponential distribution;
4. Furthermore, given D = i-1, these two sets of order statistics are mutually independent.
Then, the above four properties readily yield the relations presented earlier; in addition, this approach leads to some generalizations of Govindarajulu's results as well.
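The four properties above amount to a two-block recipe for sampling an entire vector of Laplace order statistics; a Monte Carlo sketch (the function name is illustrative, not from the talk):

```python
import random

def laplace_os_sample(n, rng):
    # Sample (X_{1:n}, ..., X_{n:n}) for the standard Laplace using the
    # decomposition above: D ~ Bin(n, 1/2) observations fall below 0;
    # given D = d, the negatives are minus an exponential order-statistics
    # sample of size d, the positives an independent exponential
    # order-statistics sample of size n - d.
    d = sum(rng.random() < 0.5 for _ in range(n))
    neg = sorted(rng.expovariate(1.0) for _ in range(d))       # Z_{1:d} < ... < Z_{d:d}
    pos = sorted(rng.expovariate(1.0) for _ in range(n - d))
    # X_{i:n} = -Z_{d-i+1:d} for i <= d, then X_{d+j:n} = Z_{j:n-d}
    return [-z for z in reversed(neg)] + pos
```

With a fixed seed, the empirical mean of X_{1:2} over many draws settles near -3/4, the exact value recovered from Govindarajulu's relation.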

What about BLUEs?

Bias, MSE & JRE
BLUEs of µ and σ were tabulated by Govindarajulu (1966) for Type-II symmetrically censored samples.
The MLEs µ̂ and σ̂ are symmetric and skew-symmetric estimators, respectively.
The same is true for the BLUEs µ* and σ*.
So, both pairs of estimators are uncorrelated.
While µ̂, µ* and σ* are all unbiased, σ̂ is biased.
So, we may define the Joint Relative Efficiency as
    JRE = 100 × [Var(µ*) + Var(σ*)] / [Var(µ̂) + MSE(σ̂)].

Bias, MSE & JRE

 n   r   V(µ̂)/σ²   B(σ̂)/σ   MSE(σ̂)/σ²     JRE
 5   0    0.3512    0.1354     0.1895      100.96
     1              0.2361     0.3117      114.06
10   0    0.1452    0.0551     0.0790      109.77
     1              0.0689     0.1127      106.59
     2              0.0921     0.1618      106.06
     3              0.1488     0.2434      114.44
15   0    0.0963    0.0392     0.0445      111.72
     1              0.0452     0.0614      106.79
     2              0.0534     0.0799      104.37
     3              0.0654     0.1031      103.66
     4              0.0853     0.1363      105.12
     5              0.1258     0.1935      110.63
     6              0.2441     0.3259      131.62

Bias, MSE & JRE
Bias in σ̂ decreases as n increases, for fixed r.
(µ̂, σ̂) is jointly more efficient than (µ*, σ*) for the small censored sample sizes considered here.
Furthermore, unlike the BLUEs µ* and σ*, the MLEs µ̂ and σ̂ are explicit linear estimators.
The JRE of the MLEs increases in general as the censoring proportion increases.

Outliers? No problem!!

Outlier-Model
Let X_{1:n} < ... < X_{n:n} denote the order statistics from a single-outlier Laplace model, with the outlier having a different scale parameter α.
Let Y_{1:n} < ... < Y_{n:n} denote the order statistics from a single-outlier exponential model, with the outlier having the same scale parameter α.
Let Z_{1:n} < ... < Z_{n:n} denote the order statistics from the standard exponential distribution.
Let (µ^{(k)}_{i:n}, µ_{i,j:n}), (ν^{(k)}_{i:n}, ν_{i,j:n}) and (ν*^{(k)}_{i:n}, ν*_{i,j:n}) denote the single and product moments of these three sets of order statistics, respectively.

Outlier-Model
Then, Balakrishnan (1989) established the following generalized relations between these three sets of moments of order statistics:
    µ^{(k)}_{i:n} = (1/2^n) { Σ_{r=1}^{i-1} C(n-1,r-1) ν*^{(k)}_{i-r:n-r} + (-1)^k Σ_{r=i}^{n-1} C(n-1,r) ν*^{(k)}_{r-i+1:r}
                              + Σ_{r=0}^{i-1} C(n-1,r) ν^{(k)}_{i-r:n-r} + (-1)^k Σ_{r=i}^{n} C(n-1,r-1) ν^{(k)}_{r-i+1:r} };
    µ_{i,j:n} = (1/2^n) { Σ_{r=1}^{i-1} C(n-1,r-1) ν*_{i-r,j-r:n-r} + Σ_{r=0}^{i-1} C(n-1,r) ν_{i-r,j-r:n-r}
                          - Σ_{r=i}^{j-1} [ C(n-1,r-1) ν_{r-i+1:r} ν*_{j-r:n-r} + C(n-1,r) ν*_{r-i+1:r} ν_{j-r:n-r} ]
                          + Σ_{r=j}^{n-1} C(n-1,r) ν*_{r-j+1,r-i+1:r} + Σ_{r=j}^{n} C(n-1,r-1) ν_{r-j+1,r-i+1:r} }.

Outlier-Model
These results were used by Balakrishnan and Ambagaspitiya (1988) to study the robustness features of various linear estimators of both the location and scale parameters of the Laplace distribution with respect to the presence of a scale-outlier in the sample.
Of course, these developments can also be extended to the case of multiple outliers.
They compared various estimators of the location and scale parameters in the presence of a single outlier with scale parameter ασ, and these results are presented next for the case n = 10.

Outlier-Model (1/σ²) × Variance of µ̂, when n = 10

                           α
  µ̂             1.0     2.0     4.0     6.0     8.0    10.0
  Mean         0.2000  0.2600  0.5000  0.9000  1.4600  2.1800
  Median       0.1452  0.1630  0.1751  0.1797  0.1822  0.1837
  BLUE(0)      0.1399  0.1581  0.1717  0.1773  0.1805  0.1825
  BLUE(1)      0.1399  0.1580  0.1716  0.1771  0.1801  0.1819
  BLUE(2)      0.1399  0.1580  0.1714  0.1768  0.1797  0.1815
  TrimMean(1)  0.1617  0.1889  0.2191  0.2344  0.2435  0.2495
  TrimMean(2)  0.1463  0.1670  0.1845  0.1921  0.1962  0.1989
  WinsMean(1)  0.1430  0.1632  0.1806  0.1882  0.1926  0.1953
  WinsMean(2)  0.1611  0.1857  0.2086  0.2189  0.2247  0.2284
  LinWMean(1)  0.1430  0.1632  0.1806  0.1882  0.1926  0.1953
  LinWMean(2)  0.1401  0.1585  0.1724  0.1780  0.1811  0.1830
  GastMean     0.1420  0.1605  0.1743  0.1798  0.1828  0.1847

Outlier-Model (1/σ²) × MSE of σ̂, when n = 10

                        α
  σ̂          1.0     2.0     4.0     6.0     8.0    10.0
  BLUE(0)   0.1062  0.1500  0.3735  0.7768  1.3613  2.1266
  BLUE(1)   0.1350  0.1342  0.2091  0.2342  0.2498  0.2604
  BLUE(2)   0.1857  0.1804  0.2520  0.2682  0.2776  0.2836
  RSE(0)    0.1064  0.1511  0.3823  0.8025  1.4118  2.2102
  RSE(1)    0.1378  0.1675  0.2097  0.2332  0.2478  0.2576
  RSE(2)    0.1908  0.2250  0.2625  0.2807  0.2912  0.2980
  RSE(3)    0.3053  0.3501  0.3887  0.4054  0.4146  0.4204
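The qualitative pattern in the variance table — the sample mean's variance growing rapidly with α while the median barely moves — is easy to reproduce by simulation. A rough Monte Carlo sketch (the sample size n = 10 and the α = 4 setting are taken from the tables; the helper names and replication count are ours):

```python
import math
import random

random.seed(2024)

def laplace(mu=0.0, sigma=1.0):
    # inverse-CDF draw from a Laplace(mu, sigma) distribution
    u = random.random() - 0.5
    return mu - sigma * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def outlier_sample(n=10, alpha=4.0):
    # n - 1 regular observations plus a single scale-outlier with scale alpha
    return [laplace() for _ in range(n - 1)] + [laplace(sigma=alpha)]

def variance(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

R = 50_000
means, medians = [], []
for _ in range(R):
    x = sorted(outlier_sample())
    means.append(sum(x) / len(x))
    medians.append(0.5 * (x[4] + x[5]))   # sample median for n = 10

print(variance(means), variance(medians))  # roughly 0.50 vs 0.175, cf. the table
```

The theoretical variance of the mean here is (9·2 + 2α²)/100 = 0.50 at α = 4, exactly the table entry, while the median's variance stays near 0.175.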

Exact Inference for Type-II Right Censoring

Exact Inference 1. Case r ≥ n/2: In this case, the joint MGF of (µ̂, σ̂) can be derived as follows [Balakrishnan and Zhu (2016)]:

\[
E\big(e^{t_1\hat\mu + t_2\hat\sigma}\big)
= \sum_{j=0}^{r-1}\sum_{l=0}^{r-1-j} p_1\, e^{t_1\mu}\,(1-s_2\sigma)^{-j}(1+s_2\sigma)^{-(r-1-j)}
\left\{1-\frac{(s_1-s_2 l)\sigma}{r(n-r+1+l)}\right\}^{-1}
+ \sum_{l=0}^{n-r} p_2\, e^{t_1\mu}\,(1-s_2\sigma)^{-(r-1)}\left(1+\frac{t_1\sigma}{l+r}\right)^{-1},
\]

where

Exact Inference

\[
p_1 = (-1)^l \binom{r-1-j}{l} c_{j,r}\, 2^{-n}\,(n-r+1+l)^{-1},
\qquad
p_2 = (-1)^l \binom{n-r}{l} c_r\, 2^{-(r+l)}\,(l+r)^{-1},
\]
\[
c_{j,r} = \frac{n!}{j!\,(r-1-j)!\,(n-r)!},
\qquad
c_r = \frac{n!}{(r-1)!\,(n-r)!},
\]
\[
s_1 = \left(\frac{r-1}{r}\ln\frac{n}{2r}+1\right)t_1 + \frac{r-1}{r}\,t_2,
\qquad
s_2 = \frac{1}{r}\ln\frac{n}{2r}\,t_1 + \frac{1}{r}\,t_2.
\]

Analogous results for the other cases have also been derived by Balakrishnan and Zhu (2016).

Exact Inference From the joint MGF of (µ̂, σ̂), the bias and MSE of the estimators µ̂ and σ̂ can be easily obtained. Also, the correlation between the estimators µ̂ and σ̂ can be obtained. From the joint MGF, upon setting t2 = 0 and t1 = 0, respectively, the marginal MGFs of µ̂ and σ̂ can be deduced. Upon inverting these marginal MGFs, the exact distributions of the estimators µ̂ and σ̂ can be derived. In particular, it can be shown that

Exact Inference

\[
\hat\mu \stackrel{d}{=} \mu
+\sum_{j=0}^{r-1}\sum_{l=0}^{r-1-j} p_1\left[\Gamma\!\left(j,\tfrac{b\sigma}{r}\right)
+\bar\Gamma\!\left(r-1-j,\tfrac{b\sigma}{r}\right)
+E\!\left(\tfrac{[\,r+b(r-l-1)\,]\sigma}{r(n-r+l+1)}\right)\right]
+\sum_{l=0}^{n-r} p_2\left[\bar\Gamma\!\left(r-1,\tfrac{b\sigma}{r}\right)
+\bar E\!\left(\tfrac{\sigma}{l+r}\right)\right],
\]
\[
\hat\sigma \stackrel{d}{=}
\sum_{j=0}^{r-1}\sum_{l=0}^{r-1-j} p_1\left[\Gamma\!\left(j,\tfrac{\sigma}{r}\right)
+\bar\Gamma\!\left(r-1-j,\tfrac{\sigma}{r}\right)
+E\!\left(\tfrac{(r-l-1)\sigma}{r(n-r+l+1)}\right)\right]
+\sum_{l=0}^{n-r} p_2\,\bar\Gamma\!\left(r-1,\tfrac{\sigma}{r}\right);
\]

in the above, Γ and E denote Gamma and Exponential random variables, Γ̄ and Ē denote the negatives of these variables, and Γ(0, σ/r), Γ̄(0, σ/r), E(0) and Ē(0) are all degenerate at the point 0.
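The moment-extraction step behind the bias and MSE computations is mechanical: E[θ̂] = M′(0) and E[θ̂²] = M″(0) for a marginal MGF M. As an illustration of the mechanics only — using the exponential MGF M(t) = λ/(λ − t) as a simple stand-in, since the Laplace-MLE MGFs above are lengthy — central finite differences at t = 0 recover the moments:

```python
def moments_from_mgf(M, h=1e-4):
    # first two raw moments of a distribution from its MGF via
    # central finite differences at t = 0
    m1 = (M(h) - M(-h)) / (2 * h)
    m2 = (M(h) - 2 * M(0.0) + M(-h)) / h ** 2
    return m1, m2

lam = 2.0
M = lambda t: lam / (lam - t)      # exponential(rate = 2) MGF, valid for t < 2
mean, second = moments_from_mgf(M)
var = second - mean ** 2
print(mean, var)                   # close to the exact values 0.5 and 0.25
```

In exact work one differentiates the MGF analytically, of course; the numerical version is just a cheap sanity check on a derived MGF.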

Example The data below, given by Mann and Fertig (1973), are the lifetimes of 13 aeroplane components, with the last 3 components censored:

0.22 0.50 0.88 1.00 1.32 1.33 1.54 1.76 2.50 3.00

Here, we analyze these data by assuming a Laplace distribution. We computed the MLEs based on this Type-II censored sample, their mean square errors (MSEs) and correlation, and the 95% confidence intervals based on the exact formulae presented earlier.

Example MLEs of µ and σ and their MSEs and correlation coefficient

  r      µ̂       σ̂      MSE(µ̂)   MSE(σ̂)   Corr(µ̂, σ̂)
 10    1.5400   1.1010   0.1379    0.1182     0.0039

Exact and simulated 95% CIs for µ and σ

        Exact results                            Simulated results
  r     µ                 σ                      µ                 σ
 10   (0.7828, 2.2972)  (0.4827, 1.7936)      (0.7843, 2.2920)  (0.4837, 1.7893)
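The point estimates in this table can be reproduced directly. For n = 13 odd and r = 10 > n/2, the MLE of µ is the sample median X_{7:13}, and σ̂ takes the same form as in the Type-I case with the termination time replaced by the last observed failure x_{r:n} (a sketch under that reading of the Balakrishnan–Zhu case analysis; variable names are ours):

```python
# Mann-Fertig data: 10 observed failures out of n = 13 (Type-II censoring at r = 10)
data = [0.22, 0.50, 0.88, 1.00, 1.32, 1.33, 1.54, 1.76, 2.50, 3.00]
n, r = 13, 10
x = sorted(data)
m = n // 2                     # n = 2m + 1 = 13, so m = 6

# r >= m + 1: the MLE of mu is the sample median X_{m+1:n}
mu_hat = x[m]

# sigma_hat = [ (n - r) x_{r:n} + sum_{i=m+2}^{r} x_{i:n} - sum_{i=1}^{m} x_{i:n} ] / r
sigma_hat = ((n - r) * x[r - 1] + sum(x[m + 1:r]) - sum(x[:m])) / r

print(mu_hat, round(sigma_hat, 4))   # 1.54 and 1.101, matching 1.5400 and 1.1010
```

The agreement with the tabulated values 1.5400 and 1.1010 is a useful check on the case split used.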

How about Type-I Censoring?

Likelihood Function Let x_{1:n} < x_{2:n} < ⋯ < x_{n:n} denote the ordered lifetimes of n units under a life-test. Suppose the life-test is terminated at a fixed time T. Then, the data observed will be (x_{1:n} < ⋯ < x_{D:n}), where D is the random number of failures observed up to time T. The corresponding likelihood function is

\[
L = C_d \prod_{i=1}^{d} f(x_{i:n})\,\{1-F(T)\}^{n-d},
\qquad x_{1:n} < \cdots < x_{d:n} < T.
\]

The MLEs of µ and σ exist only when D ≥ 1, and so all inferential results are based on this condition of at least one failure.
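This likelihood is straightforward to code for the Laplace model; a sketch (the normalizing constant C_d is omitted since it does not affect maximization, and the function names are ours):

```python
from math import exp, log

def laplace_cdf(t, mu, sigma):
    # Laplace(mu, sigma) distribution function
    if t < mu:
        return 0.5 * exp((t - mu) / sigma)
    return 1.0 - 0.5 * exp(-(t - mu) / sigma)

def loglik(mu, sigma, obs, T, n):
    # Type-I censored Laplace log-likelihood: log-densities of the d observed
    # failures plus (n - d) contributions from units still alive at T
    d = len(obs)
    ll = sum(-log(2.0 * sigma) - abs(xi - mu) / sigma for xi in obs)
    ll += (n - d) * log(1.0 - laplace_cdf(T, mu, sigma))
    return ll

obs = [0.22, 0.50, 0.88, 1.00, 1.32, 1.33, 1.54, 1.76, 2.50]
print(loglik(1.54, 1.1122, obs, T=2.75, n=13))
```

A quick check is that perturbing either argument away from the exact MLEs (1.5400, 1.1122) reported for these data can only decrease this value.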

MLEs By maximizing L, the MLEs of (µ, σ) have been derived by Zhu and Balakrishnan (2016) as follows:

\[
\hat\mu=\begin{cases}
\text{any point in }[X_{m:n},X_{m+1:n}], & n=2m,\ d\ge m+1;\\[2pt]
X_{m+1:n}, & n=2m+1,\ d\ge m+1;\\[2pt]
\text{any point in }[X_{m:n},T], & n=2m,\ d=m;\\[2pt]
T+\hat\sigma\log\frac{n}{2d}, & d<\frac{n}{2};
\end{cases}
\]
\[
\hat\sigma=\begin{cases}
\dfrac1d\left[(n-d)T+\sum_{i=m+1}^{d}X_{i:n}-\sum_{i=1}^{m}X_{i:n}\right], & n=2m,\ d\ge m;\\[6pt]
\dfrac1d\left[(n-d)T+\sum_{i=m+2}^{d}X_{i:n}-\sum_{i=1}^{m}X_{i:n}\right], & n=2m+1,\ d\ge m+1;\\[6pt]
\dfrac1d\sum_{i=1}^{d}(T-X_{i:n}), & d<\frac{n}{2}.
\end{cases}
\]

They have then derived the exact conditional joint MGF, marginal MGFs and marginal distributions of the MLEs µ̂ and σ̂.
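The case analysis transcribes directly into code. A sketch restricted, for brevity, to odd n (for even n with d ≥ m the MLE of µ is any point of an interval, so a convention for picking one would be needed; the function name is ours):

```python
from math import log

def laplace_mle_type1(obs, T, n):
    # Type-I censored Laplace MLEs for odd n (sketch of the case analysis;
    # obs holds the d failure times observed by time T out of n units on test)
    x = sorted(obs)
    d = len(x)
    assert d >= 1 and n % 2 == 1
    if 2 * d < n:
        # fewer than half the units failed by T: mu_hat lies beyond T
        sigma = sum(T - xi for xi in x) / d
        return T + sigma * log(n / (2.0 * d)), sigma
    m = n // 2                    # n = 2m + 1 and d >= m + 1
    mu = x[m]                     # sample median X_{m+1:n}
    sigma = ((n - d) * T + sum(x[m + 1:d]) - sum(x[:m])) / d
    return mu, sigma

mu_hat, sigma_hat = laplace_mle_type1(
    [0.22, 0.50, 0.88, 1.00, 1.32, 1.33, 1.54, 1.76, 2.50], T=2.75, n=13)
print(mu_hat, round(sigma_hat, 4))   # 1.54 and 1.1122, matching the later table
```

On the Mann–Fertig data with T = 2.75 this reproduces the tabulated values µ̂ = 1.5400 and σ̂ = 1.1122 exactly (σ̂ = 10.01/9).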

Example Let us again consider the data of Mann and Fertig (1973), but with the life-test terminating at time T = 2.75:

0.22 0.50 0.88 1.00 1.32 1.33 1.54 1.76 2.50

Let us analyze these data by assuming a Laplace distribution. We computed the MLEs based on this Type-I censored sample, their mean square errors (MSEs) and covariance, and the 90% confidence intervals based on the exact results.

Example MLEs of µ and σ and their MSEs and covariance

   T      µ̂       σ̂      MSE(µ̂)   MSE(σ̂)   Cov(µ̂, σ̂)
 2.75   1.5400   1.1122   0.1407    0.1164     0.0021

Exact and simulated 90% CIs for µ and σ

         Exact results                           Simulated results
   T     µ                 σ                     µ                 σ
 2.75  (0.9228, 2.1572)  (0.6053, 1.7073)     (0.9188, 2.1583)  (0.6066, 1.6872)

What else?

Other Work
Results for Type-II symmetric censoring, developed by Iliopoulos and Balakrishnan (2011), are simpler;
The results on Type-II and Type-I censoring have been generalized to Type-II and Type-I hybrid censoring;
From the joint MGF of the MLEs, exact inference has been developed for the reliability, quantile and cumulative hazard functions, as well as for likelihood predictors;
Exact inference under progressive Type-II censoring, based on BLUEs, has been developed by Liu, Zhu and Balakrishnan (2018);
Exact predictive likelihood inference has been developed by Zhu and Balakrishnan (2018).

References
Balakrishnan, N. (1989). Ann. Inst. Stat. Math., 41.
Balakrishnan, N. and Ambagaspitiya, R.S. (1988). Commun. Stat. - Theor. Meth., 17.
Balakrishnan, N. and Cramer, E. (2014). The Art of Progressive Censoring, Birkhäuser.
Balakrishnan, N. and Cutler, C.D. (1996). Festschrift Volume for H.A. David, Springer.
Balakrishnan, N., Govindarajulu, Z. and Balasubramanian, K. (1993). Ann. Inst. Stat. Math., 17.
Balakrishnan, N. and Zhu, X. (2016). J. Stat. Comp. Simul., 86.
Govindarajulu, Z. (1963). Technometrics, 5.
Govindarajulu, Z. (1966). J. Amer. Stat. Assoc., 61.
Iliopoulos, G. and Balakrishnan, N. (2011). J. Stat. Plann. Inf., 141.
Johnson, N., Kotz, S. and Balakrishnan, N. (1995). Continuous Univariate Distributions, Vol. 2, Wiley.

References
Kotz, S., Kozubowski, T.J. and Podgórski, K. (2001). The Laplace Distribution and Generalizations, Birkhäuser.
Laplace, P.S. (1774). Mémoires de mathématique et de physique présentés à l'Académie royale des sciences, 6.
Liu, K., Zhu, X. and Balakrishnan, N. (2018). Metrika, 81.
Mann, N.R. and Fertig, K.W. (1973). Technometrics, 15.
Stigler, S.M. (1975). Biometrika, 62.
Zhu, X. and Balakrishnan, N. (2016). IEEE Trans. Reliab., 65.