Exact Linear Likelihood Inference for Laplace
Exact Linear Likelihood Inference for Laplace
Prof. N. Balakrishnan, McMaster University, Hamilton, Canada
Pierre-Simon Laplace

Laplace's Biography
Born: March 23, 1749, in Normandy, France
Died: March 5, 1827, in Paris, France
Nationality: French
Alma mater: University of Caen, France
Advisors: Jean d'Alembert, Christophe Gadbled
Student: Siméon Denis Poisson
In collaboration with:
Colleen Cutler (Univ. of Waterloo)
G. Iliopoulos (Univ. of Piraeus)
Aaron Childs (McMaster Univ.)
X. Zhu (Xi'an Jiaotong-Liverpool Univ.)
R. Ambagaspitiya (Univ. of Calgary)
Kai Liu (McMaster Univ.)
Goals
1. To provide historical details on the Laplace distribution;
2. To describe results on order statistics;
3. To present results for Type-II censoring;
4. To present results on outlier-models and robustness;
5. To present results for Type-I censoring;
6. To present some numerical results and examples;
7. To mention briefly some other work carried out recently.
Historical Details!
Distribution
The Laplace distribution has pdf
$$f(x) = \frac{1}{2\sigma}\, e^{-|x-\mu|/\sigma}, \quad -\infty < x < \infty,$$
where $\mu$ and $\sigma$ are the location and scale parameters. It is also known as the Double Exponential distribution. The distribution is symmetric about $\mu$. The corresponding cdf is
$$F(x) = \begin{cases} \frac{1}{2} e^{(x-\mu)/\sigma}, & x \le \mu, \\ 1 - \frac{1}{2} e^{-(x-\mu)/\sigma}, & x > \mu. \end{cases}$$
Distribution
The quantile function is
$$Q(u) = \begin{cases} \mu + \sigma \ln(2u), & 0 < u \le \frac{1}{2}, \\ \mu - \sigma \ln\{2(1-u)\}, & \frac{1}{2} < u < 1. \end{cases}$$
Chapter 24 of Johnson, Kotz and Balakrishnan (1995) and the book by Kotz, Kozubowski and Podgórski (2001) both provide detailed overviews of developments on theory, methods and applications of the Laplace distribution and its generalizations.
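The quantile function gives a direct inverse-CDF sampler. A minimal Python sketch (function names are ours, not from the talk):

```python
import math
import random

def laplace_quantile(u, mu=0.0, sigma=1.0):
    """Quantile function Q(u) of the Laplace(mu, sigma) distribution."""
    if not 0.0 < u < 1.0:
        raise ValueError("u must lie in (0, 1)")
    if u <= 0.5:
        return mu + sigma * math.log(2.0 * u)
    return mu - sigma * math.log(2.0 * (1.0 - u))

def laplace_sample(n, mu=0.0, sigma=1.0, rng=random):
    """Draw n i.i.d. Laplace variates by inverse-CDF sampling."""
    return [laplace_quantile(rng.random(), mu, sigma) for _ in range(n)]
```

Symmetry about $\mu$ shows up as $Q(u) + Q(1-u) = 2\mu$.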
ML Estimation
Let $x_1, \ldots, x_n$ be a random sample from the Laplace distribution. Then, the likelihood function is
$$L = \prod_{i=1}^{n} f(x_i) = \frac{1}{2^n \sigma^n} \exp\left\{-\frac{1}{\sigma} \sum_{i=1}^{n} |x_i - \mu|\right\}.$$
Evidently, $L$ is maximized with respect to $\mu$ when $\sum_{i=1}^{n} |x_i - \mu|$ is minimized. This happens when $\mu$ is the middle value (if $n$ is odd), or any value in the interval between the two middle values (if $n$ is even).
ML Estimation
Thus, the MLE of $\mu$ is taken to be the sample median:
$$\hat{\mu} = \begin{cases} x_{m+1:n}, & n = 2m+1, \\ \frac{1}{2}(x_{m:n} + x_{m+1:n}), & n = 2m. \end{cases}$$
Next, upon maximizing $L$ with respect to $\sigma$, we obtain the MLE of $\sigma$ as
$$\hat{\sigma} = \frac{1}{n} \sum_{i=1}^{n} |x_i - \mathrm{Median}|.$$
Note that the MLEs in this case naturally turn out to be linear estimators, meaning they are linear functions of order statistics.
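The complete-sample MLEs above can be sketched in a few lines (function name is ours):

```python
import statistics

def laplace_mle(xs):
    """Complete-sample Laplace MLEs: mu-hat is the sample median,
    sigma-hat is the mean absolute deviation about the median."""
    mu_hat = statistics.median(xs)   # midpoint of the two middle values when n is even
    sigma_hat = sum(abs(x - mu_hat) for x in xs) / len(xs)
    return mu_hat, sigma_hat
```

For example, `laplace_mle([1, 2, 4, 7, 10])` gives $\hat\mu = 4$ and $\hat\sigma = 14/5 = 2.8$.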
Historical Note
In fact, the Laplace distribution was discovered by Laplace (1774) as the distributional form for which the likelihood function is maximized by setting the location parameter $\mu$ equal to the median of an odd number of i.i.d. observations. Laplace then replaced the median by the arithmetic mean as the value maximizing the likelihood function and derived the corresponding distribution to be the normal distribution; see Stigler (1975).
Here comes censoring!
Symmetric Censoring
Suppose we have available only a Type-II symmetrically censored sample of the form $x_{r+1:n} < \cdots < x_{n-r:n}$, with the smallest $r$ and the largest $r$ order statistics having been censored. Then, the corresponding likelihood function is
$$L = \frac{n!}{(r!)^2} \left[ F(x_{r+1:n}) \{1 - F(x_{n-r:n})\} \right]^r \prod_{i=r+1}^{n-r} f(x_{i:n}).$$
Now, let us first maximize $L$ with respect to $\mu$.
Symmetric Censoring
1. Case $\mu < x_{r+1:n}$: In this case, the likelihood is
$$L = \frac{C_1}{\sigma^{n-2r}} \left[ \frac{1}{2} e^{(\mu - x_{n-r:n})/\sigma} - \frac{1}{4} e^{(2\mu - x_{r+1:n} - x_{n-r:n})/\sigma} \right]^r e^{-\{\sum_{i=r+1}^{n-r} x_{i:n} - (n-2r)\mu\}/\sigma}.$$
It is easy to show that the second and third factors are both increasing functions of $\mu$ for $-\infty < \mu < x_{r+1:n}$. So, $L$ is increasing over the range $-\infty < \mu < x_{r+1:n}$.
Symmetric Censoring
2. Case $\mu > x_{n-r:n}$: In this case, the likelihood is
$$L = \frac{C}{\sigma^{n-2r}} \left[ \frac{1}{2} e^{(x_{r+1:n} - \mu)/\sigma} - \frac{1}{4} e^{(x_{r+1:n} + x_{n-r:n} - 2\mu)/\sigma} \right]^r e^{-\{(n-2r)\mu - \sum_{i=r+1}^{n-r} x_{i:n}\}/\sigma}.$$
In this case, it is easy to show that the second and third factors are both decreasing functions of $\mu$ for $x_{n-r:n} < \mu < \infty$. So, $L$ is decreasing over the range $x_{n-r:n} < \mu < \infty$.
Symmetric Censoring
3. Case $x_{r+1:n} \le \mu \le x_{n-r:n}$: In this case,
$$L = \frac{C}{\sigma^{n-2r}}\, e^{-r(x_{n-r:n} - x_{r+1:n})/\sigma}\, e^{-\sum_{i=r+1}^{n-r} |x_{i:n} - \mu|/\sigma}.$$
From this expression, it is clear that the MLE of $\mu$ is once again the sample median, given by
$$\hat{\mu} = \begin{cases} x_{m+1:n}, & n = 2m+1, \\ \text{any value in } [x_{m:n}, x_{m+1:n}], & n = 2m. \end{cases}$$
So, the MLE of $\mu$ in the case of symmetric censoring is exactly the same as that based on the complete sample.
Symmetric Censoring
Next, upon maximizing $L$ with respect to $\sigma$, we obtain
$$\hat{\sigma} = \frac{1}{2m+1-2r} \left[ \sum_{i=m+2}^{2m+1-r} x_{i:2m+1} + r x_{2m+1-r:2m+1} - \sum_{i=r+1}^{m} x_{i:2m+1} - r x_{r+1:2m+1} \right], \quad n = 2m+1;$$
$$\hat{\sigma} = \frac{1}{2m-2r} \left[ \sum_{i=m+1}^{2m-r} x_{i:2m} + r x_{2m-r:2m} - \sum_{i=r+1}^{m} x_{i:2m} - r x_{r+1:2m} \right], \quad n = 2m.$$
Interestingly, the MLEs of $\mu$ and $\sigma$ in this case are also linear estimators [Balakrishnan and Cutler (1996)].
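The closed-form $\hat\sigma$ under symmetric censoring can be implemented directly. A minimal sketch (function name and argument layout are ours); with $r = 0$ it reduces to the complete-sample MLE:

```python
def laplace_sigma_mle_symcens(x, n, r):
    """MLE of sigma from a Type-II symmetrically censored Laplace sample.
    x: the observed order statistics x_{r+1:n} <= ... <= x_{n-r:n}, sorted."""
    if len(x) != n - 2 * r:
        raise ValueError("expected n - 2r observed values")
    def os(k):                        # order statistic x_{k:n}
        return x[k - r - 1]
    m = n // 2                        # n = 2m or n = 2m + 1
    up_start = m + 2 if n % 2 == 1 else m + 1
    upper = sum(os(i) for i in range(up_start, n - r + 1)) + r * os(n - r)
    lower = sum(os(i) for i in range(r + 1, m + 1)) + r * os(r + 1)
    return (upper - lower) / (n - 2 * r)
```

Sanity check: for $n = 3$, $r = 0$ the formula gives $(x_{3} - x_{1})/3$, which agrees with $\frac{1}{3}\sum |x_i - x_2|$.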
Right Censoring
Let us now consider a Type-II right censored sample of the form $x_{1:n} < \cdots < x_{n-r:n}$, with the largest $r$ order statistics having been censored. Then, the corresponding likelihood function is
$$L = \frac{n!}{r!} \{1 - F(x_{n-r:n})\}^r \prod_{i=1}^{n-r} f(x_{i:n}).$$
In this case, the MLEs depend on whether $r \le \frac{n}{2}$ or $r > \frac{n}{2}$.
Right Censoring
1. Case $r \le \frac{n}{2}$: In this case, the median of the sample is available, and so the MLE of $\mu$ is once again the sample median.
2. Case $r > \frac{n}{2}$: In this case, the median of the sample is not available. It is easy to show that the likelihood function is increasing in $\mu$ over the interval $(-\infty, x_{1:n})$ and also over the interval $(x_{1:n}, x_{n-r:n}]$. So, the MLE of $\mu$ must lie in the interval $(x_{n-r:n}, \infty)$.
Right Censoring
The likelihood function in this case is
$$L = \frac{C}{\sigma^{n-r}} \left\{ 1 - \frac{1}{2} e^{(x_{n-r:n} - \mu)/\sigma} \right\}^r e^{-\sum_{i=1}^{n-r} (\mu - x_{i:n})/\sigma}.$$
When this function is maximized with respect to $\mu$, we obtain the MLE of $\mu$ (when $r > \frac{n}{2}$) as
$$\hat{\mu} = x_{n-r:n} + \hat{\sigma} \log\left(\frac{n/2}{n-r}\right).$$
Substituting this expression for $\hat{\mu}$ in $L$ and then maximizing with respect to $\sigma$, we obtain an expression for the MLE of $\sigma$.
Right Censoring
For example, in the case $r \le \frac{n}{2}$, we obtain the MLE of $\sigma$ as follows:
$$\hat{\sigma} = \frac{1}{n-r} \left[ \sum_{i=m+2}^{n-r} x_{i:n} - \sum_{i=1}^{m} x_{i:n} + r x_{n-r:n} \right], \quad n = 2m+1;$$
$$\hat{\sigma} = \frac{1}{n-r} \left[ \sum_{i=m+1}^{n-r} x_{i:n} - \sum_{i=1}^{m} x_{i:n} + r x_{n-r:n} \right], \quad n = 2m.$$
Similar expressions for $\hat{\sigma}$ can be presented for the case $r > \frac{n}{2}$ by using the MLE of $\mu$ presented earlier [Balakrishnan and Cutler (1996)].
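For the case $r \le n/2$ both estimators are explicit. A minimal sketch (names ours), implementing the sample median and the $\hat\sigma$ formula above:

```python
def laplace_mle_right_censored(xobs, n):
    """MLEs from a Type-II right-censored Laplace sample, case r <= n/2.
    xobs: the n - r smallest order statistics x_{1:n} <= ... <= x_{n-r:n}."""
    r = n - len(xobs)
    m = n // 2
    if len(xobs) < m + 1:
        raise ValueError("sample median not observed; use the r > n/2 estimator")
    if n % 2 == 1:
        mu_hat = xobs[m]                        # x_{m+1:n},  n = 2m + 1
        upper = sum(xobs[m + 1:])               # sum_{i=m+2}^{n-r} x_{i:n}
    else:
        mu_hat = 0.5 * (xobs[m - 1] + xobs[m])  # (x_{m:n} + x_{m+1:n}) / 2
        upper = sum(xobs[m:])                   # sum_{i=m+1}^{n-r} x_{i:n}
    lower = sum(xobs[:m])                       # sum_{i=1}^{m} x_{i:n}
    sigma_hat = (upper - lower + r * xobs[-1]) / (n - r)
    return mu_hat, sigma_hat
```

With $r = 0$ this again reduces to the complete-sample MLEs.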
How about order statistics and moments?
Order Statistics
Let $X_{1:n} < \cdots < X_{n:n}$ be the order statistics from a standard Laplace distribution. Let $Y_{1:n} < \cdots < Y_{n:n}$ be the order statistics from the folded distribution with pdf and cdf
$$p(x) = 2f(x) \quad \text{and} \quad P(x) = 2F(x) - 1, \quad x > 0;$$
in other words, these order statistics are from the standard exponential distribution. Let $(\mu^{(k)}_{i:n}, \mu_{i,j:n})$ and $(\nu^{(k)}_{i:n}, \nu_{i,j:n})$ denote the single and product moments of order statistics from the Laplace and exponential distributions, respectively.
Order Statistics
Then, Govindarajulu (1963) established the following relationships between these two sets of moments:
$$\mu^{(k)}_{i:n} = \frac{1}{2^n} \left\{ \sum_{r=0}^{i-1} \binom{n}{r} \nu^{(k)}_{i-r:n-r} + (-1)^k \sum_{r=i}^{n} \binom{n}{r} \nu^{(k)}_{r-i+1:r} \right\};$$
$$\mu_{i,j:n} = \frac{1}{2^n} \left\{ \sum_{r=0}^{i-1} \binom{n}{r} \nu_{i-r,j-r:n-r} - \sum_{r=i}^{j-1} \binom{n}{r} \nu_{r-i+1:r}\, \nu_{j-r:n-r} + \sum_{r=j}^{n} \binom{n}{r} \nu_{r-j+1,r-i+1:r} \right\}.$$
The proof provided by Govindarajulu (1963) is purely algebraic, through integration.
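The first relation (with $k = 1$) is easy to check numerically, since exponential order-statistic means are partial harmonic sums. A minimal sketch (function names are ours):

```python
import math

def exp_os_mean(i, n):
    """E[Y_{i:n}] for the standard exponential: sum_{j=n-i+1}^{n} 1/j."""
    return sum(1.0 / j for j in range(n - i + 1, n + 1))

def laplace_os_mean(i, n):
    """E[X_{i:n}] for the standard Laplace via Govindarajulu's (1963) relation, k = 1."""
    pos = sum(math.comb(n, r) * exp_os_mean(i - r, n - r) for r in range(0, i))
    neg = sum(math.comb(n, r) * exp_os_mean(r - i + 1, r) for r in range(i, n + 1))
    return (pos - neg) / 2 ** n
```

For instance, the relation gives $E[X_{2:2}] = \frac{1}{4}(1.5 + 2 - 0.5) = 0.75$, the known mean of the maximum of two standard Laplace variates, and the symmetry $\mu_{i:n} = -\mu_{n-i+1:n}$ holds automatically.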
Order Statistics
A simple and elegant probabilistic proof was given by Balakrishnan, Govindarajulu and Balasubramanian (1993). Let $D$ denote the number of $X_i$'s that are $\le 0$. Then, the following properties hold:
1. $D \sim \mathrm{Bin}\left(n, \frac{1}{2}\right)$;
2. Given $D = i-1$, $X_{i:n} < \cdots < X_{n:n}$ are distributed exactly as order statistics from a sample of size $n-i+1$ from the exponential distribution;
Order Statistics
3. Also, given $D = i-1$, $-X_{i-1:n} < \cdots < -X_{1:n}$ are distributed exactly as order statistics from a sample of size $i-1$ from the exponential distribution;
4. Furthermore, given $D = i-1$, these two sets of order statistics are mutually independent.
These four properties readily yield the relations presented earlier; in addition, this approach leads to some generalizations of Govindarajulu's results as well.
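Properties 1-4 are easy to see in simulation: a random sign times an Exp(1) magnitude is standard Laplace, so the count of non-positive values is Binomial(n, 1/2) and the positive part behaves like an exponential sample. A minimal sketch (all names ours):

```python
import random
import statistics

random.seed(1)

def laplace_variate(rng):
    """Standard Laplace draw: a random sign times an Exp(1) magnitude."""
    sign = 1.0 if rng.random() < 0.5 else -1.0
    return sign * rng.expovariate(1.0)

n, reps = 10, 20000
neg_counts, pos_values = [], []
for _ in range(reps):
    xs = [laplace_variate(random) for _ in range(n)]
    neg_counts.append(sum(x <= 0 for x in xs))
    pos_values.extend(x for x in xs if x > 0)

# D = #{X_i <= 0} should behave like Bin(n, 1/2): mean n/2 = 5;
# the positive values should behave like Exp(1): mean 1.
print(statistics.mean(neg_counts), statistics.mean(pos_values))
```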
What about BLUEs?
Bias, MSE & JRE
BLUEs of $\mu$ and $\sigma$ were tabulated by Govindarajulu (1966) for Type-II symmetrically censored samples. The MLEs $\hat{\mu}$ and $\hat{\sigma}$ are symmetric and skew-symmetric estimators, respectively. The same is true of the BLUEs $\mu^*$ and $\sigma^*$. So, both pairs of estimators are uncorrelated. While $\hat{\mu}$, $\mu^*$ and $\sigma^*$ are all unbiased, $\hat{\sigma}$ is biased. So, we may define the Joint Relative Efficiency as
$$\mathrm{JRE} = 100 \times \frac{\mathrm{Var}(\mu^*) + \mathrm{Var}(\sigma^*)}{\mathrm{Var}(\hat{\mu}) + \mathrm{MSE}(\hat{\sigma})}.$$
Bias, MSE & JRE
[Table: $V(\hat{\mu})/\sigma^2$, $B(\hat{\sigma})/\sigma$, $\mathrm{MSE}(\hat{\sigma})/\sigma^2$ and JRE, tabulated against $n$ and $r$.]
Bias, MSE & JRE
Bias in $\hat{\sigma}$ decreases as $n$ increases, for fixed $r$.
$(\hat{\mu}, \hat{\sigma})$ is jointly more efficient than $(\mu^*, \sigma^*)$ for the small censored sample sizes considered here.
Furthermore, unlike the BLUEs $\mu^*$ and $\sigma^*$, the MLEs $\hat{\mu}$ and $\hat{\sigma}$ are explicit linear estimators.
The JRE of the MLEs generally increases as the censoring proportion increases.
Outliers? No problem!!
Outlier-Model
Let $X_{1:n} < \cdots < X_{n:n}$ denote the order statistics from a single-outlier Laplace model, with the outlier having a different scale parameter $\alpha$.
Let $Y_{1:n} < \cdots < Y_{n:n}$ denote the order statistics from a single-outlier exponential model, with the outlier having the same scale parameter $\alpha$.
Let $Z_{1:n} < \cdots < Z_{n:n}$ denote the order statistics from the standard exponential distribution.
Let $(\mu^{(k)}_{i:n}, \mu_{i,j:n})$, $(\nu^{(k)}_{i:n}, \nu_{i,j:n})$ and $(\nu^{*(k)}_{i:n}, \nu^*_{i,j:n})$ denote the single and product moments of these three sets of order statistics, respectively.
Outlier-Model
Then, Balakrishnan (1989) established generalized relations between these three sets of moments of order statistics. For the single moments,
$$\mu^{(k)}_{i:n} = \frac{1}{2^n} \left\{ \sum_{r=0}^{i-1} \binom{n-1}{r} \nu^{(k)}_{i-r:n-r} + (-1)^k \sum_{r=i}^{n} \binom{n-1}{r-1} \nu^{(k)}_{r-i+1:r} + \sum_{r=1}^{i-1} \binom{n-1}{r-1} \nu^{*(k)}_{i-r:n-r} + (-1)^k \sum_{r=i}^{n-1} \binom{n-1}{r} \nu^{*(k)}_{r-i+1:r} \right\}.$$
An analogous (longer) relation expresses the product moments $\mu_{i,j:n}$ in terms of the product moments $\nu_{i,j:n}$ and $\nu^*_{i,j:n}$ and products of single moments from the two exponential models.
Outlier-Model
These results were used by Balakrishnan and Ambagaspitiya (1988) to study the robustness features of various linear estimators of both the location and scale parameters of the Laplace distribution with respect to the presence of a scale-outlier in the sample. Of course, these developments can also be extended to the case of multiple outliers.
They compared various estimators of the location and scale parameters in the presence of a single outlier with scale parameter $\alpha\sigma$; these results are presented next for the case $n = 10$.
Outlier-Model
[Table: $\frac{1}{\sigma^2} \times$ Variance of $\hat{\mu}$ for $n = 10$, as a function of $\alpha$, for the estimators Mean, Median, BLUE(0), BLUE(1), BLUE(2), TrimMean(1), TrimMean(2), WinsMean(1), WinsMean(2), LinWMean(1), LinWMean(2) and GastMean.]
Outlier-Model
[Table: $\frac{1}{\sigma^2} \times$ MSE of $\hat{\sigma}$ for $n = 10$, as a function of $\alpha$, for the estimators BLUE(0), BLUE(1), BLUE(2), RSE(0), RSE(1), RSE(2) and RSE(3).]
Exact Inference for Type-II Right Censoring
Exact Inference
1. Case $r \le \frac{n}{2}$: In this case, the joint MGF of $(\hat{\mu}, \hat{\sigma})$ can be derived as follows [Balakrishnan and Zhu (2016)]:
$$E\left(e^{t_1\hat{\mu} + t_2\hat{\sigma}}\right) = \sum_{j=0}^{r-1} \sum_{l=0}^{r-1-j} p_1\, e^{t_1\mu} (1 - s_2\sigma)^{-j} (1 + s_2\sigma)^{-(r-1-j)} \left\{1 - \frac{(s_1 - s_2 l)\sigma}{r(n-r+1+l)}\right\}^{-1} + \sum_{l=0}^{n-r} p_2\, e^{t_1\mu} (1 - s_2\sigma)^{-(r-1)} \left(1 + \frac{t_1\sigma}{l+r}\right)^{-1},$$
where
$$c_{jr} = \frac{n!}{j!\,(r-1-j)!\,(n-r)!}, \qquad c_r = \frac{n!}{(r-1)!\,(n-r)!},$$
and where $p_1$, $p_2$, $s_1$ and $s_2$ are explicit coefficients (with $s_1$ and $s_2$ linear functions of $t_1$ and $t_2$) given in Balakrishnan and Zhu (2016). Analogous results for the other cases have also been derived by Balakrishnan and Zhu (2016).
Exact Inference
From the joint MGF of $(\hat{\mu}, \hat{\sigma})$, the bias and MSE of the estimators $\hat{\mu}$ and $\hat{\sigma}$ can be easily obtained, as can the correlation between them.
From the joint MGF, upon setting $t_2 = 0$ and $t_1 = 0$, respectively, the marginal MGFs of $\hat{\mu}$ and $\hat{\sigma}$ can be deduced. Upon inverting these marginal MGFs, the exact distributions of the estimators $\hat{\mu}$ and $\hat{\sigma}$ can be derived. In particular, it can be shown that
each of $\hat{\mu}$ and $\hat{\sigma}$ is distributed exactly as a finite mixture (with the weights $p_1$ and $p_2$ above) of sums of independent Gamma and Exponential random variables and their negatives, shifted by $\mu$ in the case of $\hat{\mu}$. In these representations, $\Gamma$ and $E$ denote Gamma and Exponential random variables, $\Gamma^*$ and $E^*$ denote the negatives of such variables, and the degenerate components $\Gamma(0, \frac{\sigma}{r})$, $\Gamma^*(0, \frac{\sigma}{r})$, $E(0)$ and $E^*(0)$ are all point masses at 0.
Example
The data given by Mann and Fertig (1973) are the lifetimes of 13 aeroplane components, with the last 3 components having been censored. Here, we analyze these data by assuming a Laplace distribution. We computed the MLEs based on this Type-II censored sample, their mean square errors (MSEs) and correlation, and the 95% confidence intervals based on the exact formulae presented earlier.
Example
[Table: MLEs of $\mu$ and $\sigma$ and their MSEs and correlation coefficient, for various $r$.]
[Table: exact vs. simulated 95% CIs for $\mu$ and $\sigma$; for $r = 10$: exact $\mu$: (0.7828, ), $\sigma$: (0.4827, ); simulated $\mu$: (0.7843, ), $\sigma$: (0.4837, ).]
How about Type-I Censoring?
Likelihood Function
Let $x_{1:n} < x_{2:n} < \cdots < x_{n:n}$ denote the ordered lifetimes of $n$ units under a life-test. Suppose the life-test is terminated at a fixed time $T$. Then, the data observed will be $(x_{1:n} < \cdots < x_{D:n})$, where $D$ is the random number of failures up to time $T$. The corresponding likelihood function is
$$L = C_d \prod_{i=1}^{d} f(x_{i:n}) \{1 - F(T)\}^{n-d}, \quad x_{1:n} < \cdots < x_{d:n} < T.$$
The MLEs of $\mu$ and $\sigma$ exist only when $D \ge 1$, and so all inferential results are based on this condition of at least one failure.
130 MLEs By maximizing L, the MLEs of (µ,σ) have been derived by Zhu and Balakrishnan (2016) as follows: p. 46/52
131 MLEs By maximizing L, the MLEs of (µ, σ) have been derived by Zhu and Balakrishnan (2016) as follows:
$$\hat{\mu} = \begin{cases} \text{any value in } [X_{m:n}, X_{m+1:n}], & n = 2m,\ d \geq m+1;\\ X_{m+1:n}, & n = 2m+1,\ d \geq m+1;\\ \text{any value in } [X_{m:n}, T], & n = 2m,\ d = m;\\ T + \hat{\sigma}\log\!\left(\frac{n}{2d}\right), & d < \frac{n}{2}; \end{cases}$$
$$\hat{\sigma} = \begin{cases} \frac{1}{d}\left[(n-d)T + \sum_{i=m+1}^{d} X_{i:n} - \sum_{i=1}^{m} X_{i:n}\right], & n = 2m,\ d \geq m;\\ \frac{1}{d}\left[(n-d)T + \sum_{i=m+2}^{d} X_{i:n} - \sum_{i=1}^{m} X_{i:n}\right], & n = 2m+1,\ d \geq m+1;\\ \frac{1}{d}\sum_{i=1}^{d} (T - X_{i:n}), & d < \frac{n}{2}. \end{cases}$$
p. 46/52
132 MLEs By maximizing L, the MLEs of (µ, σ) have been derived by Zhu and Balakrishnan (2016) as follows:
$$\hat{\mu} = \begin{cases} \text{any value in } [X_{m:n}, X_{m+1:n}], & n = 2m,\ d \geq m+1;\\ X_{m+1:n}, & n = 2m+1,\ d \geq m+1;\\ \text{any value in } [X_{m:n}, T], & n = 2m,\ d = m;\\ T + \hat{\sigma}\log\!\left(\frac{n}{2d}\right), & d < \frac{n}{2}; \end{cases}$$
$$\hat{\sigma} = \begin{cases} \frac{1}{d}\left[(n-d)T + \sum_{i=m+1}^{d} X_{i:n} - \sum_{i=1}^{m} X_{i:n}\right], & n = 2m,\ d \geq m;\\ \frac{1}{d}\left[(n-d)T + \sum_{i=m+2}^{d} X_{i:n} - \sum_{i=1}^{m} X_{i:n}\right], & n = 2m+1,\ d \geq m+1;\\ \frac{1}{d}\sum_{i=1}^{d} (T - X_{i:n}), & d < \frac{n}{2}. \end{cases}$$
They have then derived the exact conditional joint MGF, marginal MGFs and marginal distributions of the MLEs µ̂ and σ̂. p. 46/52
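To make the case analysis concrete, the closed-form MLEs above can be transcribed into a short routine. This is a sketch under my own conventions, not the authors' code: x holds the d ordered failure times observed before T (0-indexed), and for the interval-valued cases the midpoint of the maximizing interval is returned (any point of the interval maximizes L).

```python
import math

def laplace_type1_mles(x, n, T):
    """Closed-form MLEs (mu_hat, sigma_hat) for a Type-I censored Laplace
    sample, following the case structure of Zhu and Balakrishnan (2016).
    x: sorted list of the d observed failure times (d >= 1), out of n units
    on test until time T.  Interval-valued cases return the midpoint."""
    d = len(x)
    m = n // 2
    if 2 * d < n:
        # Fewer than half failed before T: the median exceeds T
        sigma = sum(T - xi for xi in x) / d
        mu = T + sigma * math.log(n / (2.0 * d))
        return mu, sigma
    if n == 2 * m:
        # n even, d >= m
        sigma = ((n - d) * T + sum(x[m:d]) - sum(x[:m])) / d
        # d = m: any value in [X_{m:n}, T]; d >= m+1: any value in
        # [X_{m:n}, X_{m+1:n}] -- midpoint returned in either case
        mu = (x[m - 1] + T) / 2.0 if d == m else (x[m - 1] + x[m]) / 2.0
    else:
        # n odd, d >= m+1: mu_hat is the sample median X_{m+1:n}
        sigma = ((n - d) * T + sum(x[m + 1:d]) - sum(x[:m])) / d
        mu = x[m]
    return mu, sigma
```

As a sanity check, with no censoring (d = n, T beyond the largest observation) these formulas reduce to the usual complete-sample MLEs: the sample median and the mean absolute deviation about it.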
133 Example Let us again consider the data of Mann and Fertig (1973), but with the life-test terminating at time T = 2.75: p. 47/52
134 Example Let us again consider the data of Mann and Fertig (1973), but with the life-test terminating at time T = 2.75: p. 47/52
135 Example Let us again consider the data of Mann and Fertig (1973), but with the life-test terminating at time T = 2.75: Let us analyze these data by assuming a Laplace distribution. p. 47/52
136 Example Let us again consider the data of Mann and Fertig (1973), but with the life-test terminating at time T = 2.75: Let us analyze these data by assuming a Laplace distribution. We computed the MLEs based on this Type-I censored sample, their mean square errors (MSE) and covariance, and the 95% confidence intervals based on the exact results. p. 47/52
137 Example
MLEs of µ and σ and their MSEs and covariance:
    T    µ̂    σ̂    MSE(µ̂)    MSE(σ̂)    Cov(µ̂, σ̂)
p. 48/52
138 Example
MLEs of µ and σ and their MSEs and covariance:
    T    µ̂    σ̂    MSE(µ̂)    MSE(σ̂)    Cov(µ̂, σ̂)
Exact and simulated 90% CIs for µ and σ:
                Exact results                   Simulated results
    T       µ               σ               µ               σ
    2.75  (0.9228, )    (0.6053, )      (0.9188, )    (0.6066, )
p. 48/52
139 What else? p. 49/52
140 Other Work Results for Type-II symmetric censoring developed by Iliopoulos and Balakrishnan (2011) are simpler; p. 50/52
141 Other Work Results for Type-II symmetric censoring developed by Iliopoulos and Balakrishnan (2011) are simpler; The results on Type-II and Type-I censoring have been generalized to hybrid Type-II and Type-I censoring; p. 50/52
142 Other Work Results for Type-II symmetric censoring developed by Iliopoulos and Balakrishnan (2011) are simpler; The results on Type-II and Type-I censoring have been generalized to hybrid Type-II and Type-I censoring; From the joint MGF of the MLEs, exact inference has been developed for reliability, quantile and cumulative hazard functions, and likelihood predictors; p. 50/52
143 Other Work Results for Type-II symmetric censoring developed by Iliopoulos and Balakrishnan (2011) are simpler; The results on Type-II and Type-I censoring have been generalized to hybrid Type-II and Type-I censoring; From the joint MGF of the MLEs, exact inference has been developed for reliability, quantile and cumulative hazard functions, and likelihood predictors; Exact inference under progressive Type-II censoring based on BLUEs has been developed by Liu, Zhu and Balakrishnan (2018). p. 50/52
144 Other Work Results for Type-II symmetric censoring developed by Iliopoulos and Balakrishnan (2011) are simpler; The results on Type-II and Type-I censoring have been generalized to hybrid Type-II and Type-I censoring; From the joint MGF of the MLEs, exact inference has been developed for reliability, quantile and cumulative hazard functions, and likelihood predictors; Exact inference under progressive Type-II censoring based on BLUEs has been developed by Liu, Zhu and Balakrishnan (2018). Exact predictive likelihood inference has been developed by Zhu and Balakrishnan (2018). p. 50/52
145 References Balakrishnan, N. (1989). Ann. Inst. Stat. Math., 41. Balakrishnan, N. and Ambagaspitiya, R.S. (1988). Commun. Stat. - Theor. Meth., 17. Balakrishnan, N. and Cramer, E. (2014). The Art of Progressive Censoring, Birkhäuser. Balakrishnan, N. and Cutler, C.D. (1996). Festschrift Volume for H.A. David, Springer. Balakrishnan, N., Govindarajulu, Z. and Balasubramanian, K. (1993). Ann. Inst. Stat. Math., 17. Balakrishnan, N. and Zhu, X. (2016). J. Stat. Comp. Simul., 86. Govindarajulu, Z. (1963). Technometrics, 5. Govindarajulu, Z. (1966). J. Amer. Stat. Assoc., 61. Iliopoulos, G. and Balakrishnan, N. (2011). J. Stat. Plann. Inf., 141. Johnson, N., Kotz, S. and Balakrishnan, N. (1995). Continuous Univariate Distributions Vol. 2, Wiley. p. 51/52
146 References Kotz, S., Kozubowski, T.J. and Podgórski, K. (2001). The Laplace Distribution and Generalizations, Birkhäuser. Laplace, P.S. (1774). Mémoires de mathématique et de physique présentés à l'Académie royale des sciences, 6. Liu, K., Zhu, X. and Balakrishnan, N. (2018). Metrika, 81. Mann, N.R. and Fertig, K.W. (1973). Technometrics, 15. Stigler, S.M. (1975). Biometrika, 62. Zhu, X. and Balakrishnan, N. (2016). IEEE Trans. Reliab., 65. p. 52/52