Generalized inference for the common location parameter of several location-scale families


Fuqi Chen and Sévérien Nkurunziza

Abstract

In this paper, we are interested in an inference problem concerning the common location parameter of $k$ location-scale families, with $k\geq 2$. More specifically, we study the case where the scale parameters of the families are unknown and possibly heterogeneous. The proposed solution is derived by using the generalized inference method. To this end, we present a method of constructing the required generalized pivotal quantity (GPQ) and generalized p-value (GPV) for the common location parameter. The proposed approach is based on the minimum risk equivariant estimators (MREs), which are more general and more efficient than the maximum likelihood estimators (MLEs). Thus, we extend the approaches based on MLEs and conditional inference, which have so far been applied to some specific distributions. Also, through intensive simulation studies, we illustrate the performance of the proposed approach in small and moderate sample sizes. Finally, the approach is applied to analyse normal body temperature data.

Keywords: GCIs; generalized p-value; location-scale family; MRE; Pitman estimator

University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4. chen111n@uwindsor.ca
University of Windsor, 401 Sunset Avenue, Windsor, Ontario, N9B 3P4. severien@uwindsor.ca

Introduction

Testing the common location parameter of several location-scale families with unknown scale parameters is one of the most interesting statistical inference problems, and there are many applications in which this problem arises. For instance, this situation occurs in statistical analyses that combine the information from several independent studies, i.e., meta-analysis. Indeed, meta-analysis is frequent in clinical trials as well as in the social and behavioral sciences. The problem is also commonly encountered in many statistical areas and designs, such as balanced incomplete block designs, panel models, and some regression models; in each of these scenarios, practitioners are often interested in inference concerning the common location parameter of several distributions with unknown scale parameters. For other applications and scenarios, we refer to Krishnamoorthy and Lu (2003).

In particular, the normal location-scale family is the family which has received the most attention in the statistical literature, and for this family much research has been done since the last century. For instance, one of the methods for constructing an approximate confidence interval for the common mean, say $\mu$, is based on the well-known Graybill and Deal (1959) estimator for $\mu$. This approach is extensively discussed in the statistical literature; to give some other references, we quote Maric and Graybill (1979), Pagurova and Gurskii (1979), and Sinha (1985), among others. Another method, used by Fairweather (1972) and Jordan and Krishnamoorthy (1996), consists in constructing an exact confidence interval for $\mu$ by inverting weighted linear combinations of Student's $t$ statistics and of Fisher-Snedecor's $F$ statistics, respectively. Simulation studies showed that the methods of Fairweather (1972) and Jordan and Krishnamoorthy (1996) are reliable under different situations. Further, Yu et al. (1999) verified that only the Fairweather (1972) method always produces nonempty confidence intervals.

In this paper, we are interested in the generalized inference method concerning the common location parameter of several location-scale families.

As introduced by Tsui and Weerahandi (1989) and Weerahandi (1993), generalized inference is based on the concepts of the generalized test variable (GTV), the generalized pivotal quantity (GPQ), the generalized p-value (GPV) and the generalized confidence interval (GCI). It turns out that the GCI and the GPV perform well for some small-sample problems where classical procedures are not optimal (see Weerahandi, 1993, and Bebu and Mathew, 2007, among others). In Krishnamoorthy and Lu (2003), the authors propose a procedure based on inverting weighted linear combinations of generalized pivotal quantities. However, in the quoted paper, the authors considered the normal family only. In fact, the existing literature does not provide a systematic method of constructing a GPQ applicable to all parametric families. In this paper, we present a method of constructing the GPQ and the GTV for the common location parameter of several location-scale families with unknown scale parameters. The suggested method is based on the minimum risk equivariant estimator (MRE), which is more efficient and more general than the maximum likelihood estimator (MLE). Also, the proposed approach is more flexible than that based on MLEs since, as is well known, the MLE does not exist in some location-scale families.

The rest of this paper is organized as follows. In Section 1, we present some background about generalized inference. Section 2 deals with the generalized pivotal quantity and the generalized test variable in location-scale families. Section 3 gives the framework and the main result. In Section 4, we illustrate the application of the method to some specific location-scale families. In Section 5, we present some numerical examples and simulation studies as well as the analysis of a real data set. Finally, Section 6 gives discussion and concluding remarks. Details and technical results are outlined in the Appendix.

1 Background and preliminary results

For the convenience of the reader, this section recalls some concepts of generalized inference. For more details about these concepts, we refer to Tsui and Weerahandi (1989), Weerahandi (1993), and Krishnamoorthy et al. (2007), among others.

Let $g_1,\dots,g_k$ be pdfs and let $(X_{11},\dots,X_{1n_1})$, $(X_{21},\dots,X_{2n_2})$, ..., $(X_{k1},\dots,X_{kn_k})$ be $k$ independent samples, and assume that, for each $i=1,2,\dots,k$, $X_{i1},\dots,X_{in_i}$ are iid from the pdf
\[
f_i(x\mid\mu_i,\sigma_i)=\sigma_i^{-1}\,g_i\big((x-\mu_i)/\sigma_i\big), \qquad (1.1)
\]
where $\mu_1,\mu_2,\dots,\mu_k,\sigma_1,\sigma_2,\dots,\sigma_k$ are unknown parameters, $-\infty<\mu_i<\infty$, $\sigma_i>0$, $i=1,2,\dots,k$. Thereafter, we consider that $\mu_1=\mu_2=\dots=\mu_k=\mu$. With the above statistical model, we are interested in inference problems concerning the common location parameter $\mu$, with the scale parameters $\sigma_i$, $i=1,\dots,k$, unknown and possibly heterogeneous. Namely,

1. we would like to establish the GCI for $\mu$, based on some GPQ;

2. given a real value $\mu_0$, we would like to derive the GPV for testing
\[
H_0:\ \mu\geq\mu_0 \quad\text{versus}\quad H_1:\ \mu<\mu_0. \qquad (1.2)
\]

To simplify the notation, let $\theta_2=(\sigma_1,\sigma_2,\dots,\sigma_k)$ and let $\theta=(\mu,\theta_2)$.

Definition 1.1 Let $R(X,x,\theta)$ be a function of $X$, $x$, $\theta$, with $\theta=(\mu,\theta_2)$. The function $R(X,x,\theta)$ is said to be a generalized pivotal quantity for $\mu$ if

1. given $x$, the distribution of $R(X,x,\theta)$ is free of unknown parameters;

2. the observed value, defined as $R(x,x,\theta)$, does not depend on the nuisance parameter $\theta_2$.

Thereafter, we consider a subclass of GPQs for which $R(x,x,\theta)$ is a bijective function of $\mu$. In particular, we consider, without loss of generality, that the GPQ satisfies $R(x,x,\theta)=\mu$.

Definition 1.2 Let $R(X,x,\theta)$ be a GPQ for a scalar parameter $\mu$. Then, an equal-tailed $(1-\alpha)100\%$ generalized confidence interval (GCI) for $\mu$ is $[R_{\mu,\alpha/2}(x),\,R_{\mu,1-\alpha/2}(x)]$, where the quantity $R_{\mu,\gamma}(x)$ satisfies
\[
P\big[R(X,x,\theta)\leq R_{\mu,\gamma}(x)\big]=\gamma. \qquad (1.3)
\]
Also, one-sided generalized confidence bounds are defined in a similar way.

Definition 1.3 Let $T(X,x,\theta)$ be a function of $X$, $x$, $\theta$. The function $T(X,x,\theta)$ is said to be a generalized test variable for $\mu$ if

1. $t=T(x,x,\theta)$ does not depend on $\theta_2$;

2. for fixed $x$, $\theta$, the distribution of $T(X,x,\theta)$ is free of the nuisance parameter $\theta_2$;

3. for fixed $x$ and $\theta_2$, $P[T(X,x,\theta)\geq T(x,x,\theta)]$ is stochastically monotone in $\mu$.

It is noticed that a GTV can be derived from a GPQ $R(X,x,\theta)$ by taking $T(X,x,\theta)=R(X,x,\theta)-R(x,x,\theta)$. Thus, if $R(x,x,\theta)=\mu$, we have $T(X,x,\theta)=R(X,x,\theta)-\mu$.

Definition 1.4 Let $T(X,x,\theta)$ be a GTV and consider the testing problem in (1.2). The generalized p-value (GPV) is defined as $p=\sup_{H_0}P[T(X,x,\theta)\geq 0]$. More specifically, if $T(X,x,\theta)=R(X,x,\theta)-\mu$, the GPV for the testing problem in (1.2) becomes $p=\sup_{H_0}P[R(X,x,\theta)-\mu\geq 0]$. Also, since the distribution of $R(X,x,\theta)$ does not depend on $\theta$, and since $P[R(X,x,\theta)-\mu\geq 0]$ is a decreasing function of $\mu$, we have
\[
p=P\big(R(X,x,\theta)\geq\mu_0\big). \qquad (1.4)
\]
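Relations (1.3) and (1.4) translate directly into Monte Carlo estimates once draws of the GPQ are available. The short sketch below (Python; the function name and interface are ours, not from the paper) computes the equal-tailed GCI and the GPV for the one-sided problem (1.2) from such draws.

```python
import numpy as np

def gci_and_gpv(r_draws, mu0, alpha=0.05):
    """Equal-tailed (1 - alpha) GCI of (1.3) and GPV of (1.4) for
    H0: mu >= mu0 vs H1: mu < mu0, from Monte Carlo draws of the GPQ."""
    r = np.asarray(r_draws, dtype=float)
    lower, upper = np.quantile(r, [alpha / 2.0, 1.0 - alpha / 2.0])
    gpv = float(np.mean(r >= mu0))   # empirical version of P(R >= mu0)
    return (lower, upper), gpv
```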

Note that the relations in (1.3) and (1.4) do not always lead to closed-form solutions. Nevertheless, since the distribution of $R(X,x,\theta)$ is free of unknown parameters, the GCI and the GPV can be obtained by a numerical method or by Monte Carlo simulation.

Let $\hat\mu_i$ and $\hat\sigma_i$ denote the MREs of $\mu_i$ and $\sigma_i$ respectively, for each $i=1,\dots,k$. Further, let $\hat\mu_{si}$, $\hat\sigma_{si}$ denote the observed values of $\hat\mu_i$ and $\hat\sigma_i$ respectively. We close this section by recalling a result which is used in computing the MREs $\hat\mu_i$ and $\hat\sigma_i$. To this end, let $Y_1,\dots,Y_n$ be iid from the population probability density function (pdf) $f(x\mid\lambda,\tau)=\tau^{-1}g((x-\lambda)/\tau)$, where $\tau$, $\lambda$ are unknown parameters and $g$ is a pdf. Then, under the quadratic loss function, the MREs of $\lambda$ and $\tau$ are respectively (see Theorem A.3 in the Appendix)
\[
\hat\lambda(y)=\int_0^{\infty}\!\!\int_{-\infty}^{\infty}u\,v^{-n-3}\prod_{i=1}^{n}g\big((y_i-u)/v\big)\,du\,dv\ \Big/\ \int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{-n-3}\prod_{i=1}^{n}g\big((y_i-u)/v\big)\,du\,dv,
\]
\[
\hat\tau(y)=\int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{-n-2}\prod_{i=1}^{n}g\big((y_i-u)/v\big)\,du\,dv\ \Big/\ \int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{-n-3}\prod_{i=1}^{n}g\big((y_i-u)/v\big)\,du\,dv. \qquad (1.5)
\]

2 GCI and GTV for the common $\mu$

As mentioned in the introduction, the proposed GPQ and GTV are based on MREs. Let $\hat\mu_i$ and $\hat\sigma_i$ denote respectively the MREs of $\mu$ and $\sigma_i$ based on the $i$th sample, $i=1,\dots,k$. Further, let $\hat\mu_{si}$ and $\hat\sigma_{si}$ denote respectively the observed values of $\hat\mu_i$ and $\hat\sigma_i$, $i=1,\dots,k$. By using the fact that $\hat\mu_i$ and $\hat\sigma_i$ are equivariant estimators, we conclude that, for each $i=1,2,\dots,k$, $(\hat\mu_i-\mu)/\hat\sigma_i$ and $\hat\sigma_i/\sigma_i$ are pivotal quantities for $\mu$ and $\sigma_i$ respectively. Then, based on the $i$th sample, $i=1,2,\dots,k$, we consider the following GPQs for $\mu$ and $\sigma_i$ respectively,
\[
R_{\mu_i}=\hat\mu_{si}-\hat\sigma_{si}\,(\hat\mu_i-\mu)/\hat\sigma_i, \qquad\text{and}\qquad R_{\sigma_i}=\hat\sigma_{si}\,(\hat\sigma_i/\sigma_i)^{-1}. \qquad (2.1)
\]
Further, it can be verified that a weighted average of the $R_{\mu_i}$ is a GPQ for $\mu$ (see also Krishnamoorthy and Lu, 2003). Thus, the proposed GPQ is formally stated in the following proposition.
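The double integrals in (1.5) rarely have closed forms, but for moderate $n$ they can be evaluated numerically. The following sketch (Python; the helper name `mre_location_scale`, the rectangle rule and the data-driven integration ranges are our own choices, not part of the paper) computes $\hat\lambda(y)$ and $\hat\tau(y)$ for a user-supplied standard pdf $g$; these are the per-sample MREs used in step (i) of the algorithm of Section 3.

```python
import numpy as np

def mre_location_scale(y, g, grid=400, span=10.0):
    """Rectangle-rule evaluation of the MREs (Pitman-type estimators) in (1.5):
       lambda_hat = I(u v^{-n-3}) / I(v^{-n-3}),  tau_hat = I(v^{-n-2}) / I(v^{-n-3}),
    where I(.) integrates the weight times prod_i g((y_i - u)/v) over u in R, v > 0,
    and g is the standard pdf of the location-scale family."""
    y = np.asarray(y, dtype=float)
    n = y.size
    centre, spread = np.median(y), np.std(y) + 1e-8
    u = np.linspace(centre - span * spread, centre + span * spread, grid)
    v = np.linspace(spread / 20.0, span * spread, grid)
    U, V = np.meshgrid(u, v)
    prod_g = np.prod(g((y[:, None, None] - U) / V), axis=0)   # prod_i g((y_i - u)/v)
    base = V ** (-n - 3) * prod_g
    denom = base.sum()                       # constant cell area cancels in the ratios
    lam_hat = (U * base).sum() / denom
    tau_hat = (V ** (-n - 2) * prod_g).sum() / denom
    return lam_hat, tau_hat

# e.g. for a normal sample:
# mre_location_scale(data, lambda x: np.exp(-x**2 / 2) / np.sqrt(2 * np.pi))
```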

Proposition 2.1 If the $k$ samples are from the pdf in (1.1), then the GPQ for $\mu$ is
\[
R(X,x,\theta)=\sum_{i=1}^{k}W_iR_{\mu_i}, \qquad\text{with}\quad W_i=\big(n_i/R_{\sigma_i}^2\big)\Big(\sum_{j=1}^{k}n_j/R_{\sigma_j}^2\Big)^{-1},\quad i=1,\dots,k, \qquad (2.2)
\]
where $R_{\mu_i}$, $R_{\sigma_i}$ are given in (2.1). Furthermore, the GTV is
\[
T(X,x,\theta)=\sum_{i=1}^{k}W_iR_{\mu_i}-\mu. \qquad (2.3)
\]
The proof follows directly from the fact that $R_{\mu_i}$ and $R_{\sigma_i}$ are GPQs for $\mu$ and $\sigma_i$ respectively. In closing this section, we note that the $100\gamma\%$ GCI is obtained by combining (1.3) and (2.2). Further, in solving the testing problem in (1.2), the GPV is obtained by combining (1.4) and (2.2). In general, equations (1.3) and (1.4) do not have a closed-form solution; thus, we use a Monte Carlo method, with an algorithm that is given in the following section.

3 Algorithm and main result

To set up notation, let $b_i=\big(b_{i1},\dots,b_{i(n_i-2)}\big)'$ with $b_{ij}=(x_{ij}-\hat\mu_i)/\hat\sigma_i$, $i=1,\dots,k$, $j=1,\dots,n_i$. Also, let $Z_{1i}=(\hat\mu_i-\mu)/\hat\sigma_i$, $Z_{2i}=\hat\sigma_i/\sigma_i$, and let
\[
h_i(b)=\int_0^{\infty}\!\!\int_{-\infty}^{\infty}y^{\,n_i-1}\prod_{j=1}^{n_i}g_i\big((x+b_{ij})y\big)\,dx\,dy,\qquad i=1,2,\dots,k. \qquad (3.1)
\]
The established algorithm uses extensively the following proposition.

Proposition 3.1 If (1.1) holds, then, conditionally on $b$, the pdf of $Z_{1i}$ is
\[
f_{1i}(z_{1i}\mid b)=\int_0^{\infty}y^{\,n_i-1}\prod_{j=1}^{n_i}g_i\big((z_{1i}+b_{ij})y\big)\,dy\ \Big/\ h_i(b),\qquad i=1,2,\dots,k, \qquad (3.2)
\]
and the pdf of $Z_{2i}$ is
\[
f_{2i}(z_{2i}\mid b)=z_{2i}^{\,n_i-1}\int_{-\infty}^{\infty}\prod_{j=1}^{n_i}g_i\big((x+b_{ij})z_{2i}\big)\,dx\ \Big/\ h_i(b),\qquad i=1,2,\dots,k, \qquad (3.3)
\]
where $h_i(b)$ is given in (3.1).
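For a generic family, the integrals in (3.2)-(3.3) must be evaluated numerically before the quantile steps (iv) and (vi) of the algorithm below can be carried out. A grid-based sketch is given next (Python; the function name, the rectangle rule and the grid arguments are ours); once the two densities are tabulated, the quantile steps reduce to inverting their cumulative sums, e.g. with `np.interp`. For the logistic case of Section 4.2 one would pass $g(x)=e^{-x}/(1+e^{-x})^2$.

```python
import numpy as np

def conditional_densities(b_i, g, z1_grid, z2_grid, y_grid, x_grid):
    """Evaluate the conditional pdfs (3.2) and (3.3) of Proposition 3.1, i.e. the
    pdfs of Z_1i = (mu_hat_i - mu)/sigma_hat_i and Z_2i = sigma_hat_i/sigma_i given
    the residuals b_i, by a rectangle rule over y > 0 and x in R.  Both are
    renormalized on their grids, so the common constant h_i(b) is not needed."""
    b = np.asarray(b_i, dtype=float)
    n = b.size
    dy, dx = y_grid[1] - y_grid[0], x_grid[1] - x_grid[0]
    dz1, dz2 = z1_grid[1] - z1_grid[0], z2_grid[1] - z2_grid[0]

    # f_1i(z1 | b): integrate y^{n-1} * prod_j g((z1 + b_j) y) over y > 0
    arg1 = (z1_grid[:, None, None] + b[None, :, None]) * y_grid[None, None, :]
    f1 = (y_grid ** (n - 1) * np.prod(g(arg1), axis=1)).sum(axis=1) * dy
    f1 /= f1.sum() * dz1

    # f_2i(z2 | b): z2^{n-1} times the integral of prod_j g((x + b_j) z2) over x
    arg2 = (x_grid[None, None, :] + b[None, :, None]) * z2_grid[:, None, None]
    f2 = z2_grid ** (n - 1) * np.prod(g(arg2), axis=1).sum(axis=1) * dx
    f2 /= f2.sum() * dz2
    return f1, f2
```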

Proof From Proposition A.3 in the Appendix, it follows directly that, conditionally on $b$, the joint pdf of $(Z_{1i},Z_{2i})$ is
\[
f_i(z_{1i},z_{2i}\mid b)=z_{2i}^{\,n_i-1}\prod_{j=1}^{n_i}g_i\big((z_{1i}+b_{ij})z_{2i}\big)\ \Big/\ h_i(b), \qquad (3.4)
\]
where $h_i(b)$ is given in (3.1). Therefore, conditionally on $b$, the marginal pdfs of $Z_{1i}$ and $Z_{2i}$ are given by (3.2) and (3.3) respectively, which completes the proof.

Algorithm for the proposed GCI and GPV

For a given $(n_1,\dots,n_k)$ and data set $x$:

(i). From (1.5), find $\hat\mu_{si}(x)$, $\hat\sigma_{si}(x)$, the observed values of $\hat\mu_i(X)$, $\hat\sigma_i(X)$, respectively.

(ii). Compute $\{b_{ij}=(x_{ij}-\hat\mu_i)/\hat\sigma_i\}$, $i=1,\dots,k$, $j=1,\dots,n_i$.

(iii). Generate $U_{1i}\sim U(0,1)$, $i=1,2,\dots,k$.

(iv). For each $U_{1i}$, find the quantity $Z_{1i}$ such that $\int_{-\infty}^{Z_{1i}}f_{1i}(x\mid b)\,dx=U_{1i}$, where $f_{1i}(z_{1i}\mid b)$ is given in (3.2).

(v). Generate $U_{2i}\sim U(0,1)$, $i=1,2,\dots,k$.

(vi). For each $U_{2i}$, find the quantity $Z_{2i}$ such that $\int_{0}^{Z_{2i}}f_{2i}(x\mid b)\,dx=U_{2i}$, where $f_{2i}(z_{2i}\mid b)$ is given in (3.3).

(vii). By using (2.1), compute $R_{\mu_i}$ and $R_{\sigma_i}$.

(viii). By using (2.2), compute $W_i$ and $R(X,x,\theta)$.

(ix). Repeat steps (iii) to (viii) $M$ times (with $M$ large), and let $R_l(X,x,\theta)$ denote the value of $R(X,x,\theta)$ obtained at the $l$th replicate, $l=1,2,\dots,M$.

(x). Find $R_{\mu,\alpha/2}(x)$ and $R_{\mu,1-\alpha/2}(x)$ as, respectively, the $100(\alpha/2)$th and $100(1-\alpha/2)$th percentiles of $R_1(X,x,\theta),R_2(X,x,\theta),\dots,R_M(X,x,\theta)$.

(xi). Using (1.4), estimate the GPV $p$ by $\hat p=M^{-1}\sum_{l=1}^{M}I\{R_l(X,x,\theta)\geq\mu_0\}$, where $I_A$ denotes the indicator function of the event $A$.

It is noticed that, for the normal sample case, the proposed algorithm corresponds to that in Krishnamoorthy and Lu (2003). Indeed, in the normal case, the pdfs (3.2) and (3.3) correspond respectively to the pdfs of the Student $t$ and chi-square distributions with $n_i-1$ degrees of freedom.

4 Some illustrative k-sample location-scale families

In this section, we highlight the application of Proposition 2.1 along with formulas (1.3), (1.4) and (3.4). In particular, we apply the proposed method to the logistic location-scale families. Also, we apply the method to a case where the MLEs of the location and scale parameters do not exist. First, we consider the $k$-sample location-scale normal family in order to highlight the fact that the above algorithm generalizes that in Krishnamoorthy and Lu (2003).

4.1 k-sample from normal distributions

Assume that, for $i=1,\dots,k$, $X_{i1},X_{i2},\dots,X_{in_i}$ are iid with the pdf given by
\[
f_{X_{ij}}(x_{ij})=\sigma_i^{-1}(2\pi)^{-1/2}\exp\big[-(2\sigma_i^2)^{-1}(x_{ij}-\mu)^2\big], \qquad (4.1)
\]
where $i=1,\dots,k$, $j=1,\dots,n_i$, and $\mu,\sigma_1,\dots,\sigma_k$ are unknown parameters. Under the model in (4.1), we apply the method to construct the GPQ for the $k$-sample normal family. We also illustrate the computation of the GCI and the GPV based on the constructed GPQ. To this end, let $b_{ij}=(X_{ij}-\hat\mu_i)/\hat\sigma_i$, $j=1,2,\dots,n_i$, $i=1,2,\dots,k$, let $\bar b_i=n_i^{-1}\sum_{j=1}^{n_i}b_{ij}$, let $S_i^2=\sum_{j=1}^{n_i}(b_{ij}-\bar b_i)^2$, and let $s_i$ be the observed value of $S_i$, $i=1,\dots,k$.

If $X_{ij}\sim N(\mu,\sigma_i)$, $j=1,\dots,n_i$, then, by using the relation in (1.5) (or see Theorem A.3 in the Appendix), one can verify that, based on the $i$th sample, the MRE for $\mu$ is $\bar X_i$, which is the same as the MLE. Also, the MRE of $\sigma_i$ is given by
\[
\hat\sigma_i(X)=\frac{\Gamma(n_i/2)}{\Gamma((n_i+1)/2)}\sqrt{2^{-1}\sum_{j=1}^{n_i}(X_{ij}-\bar X_i)^2},
\]
and then $\bar b_i=0$ and $S_i^2=\sum_{j=1}^{n_i}b_{ij}^2=2\,\Gamma^2((n_i+1)/2)\big/\Gamma^2(n_i/2)$. Further, the GPQ $R_{\mu_i}$ given in (2.1) becomes
\[
R_{\mu_i}=\hat\mu_{si}-\hat\sigma_{si}\,\frac{s_i}{\sqrt{n_i(n_i-1)}}\cdot\frac{\sqrt{n_i}\,(\hat\mu_i-\mu)}{\big(S_i/\sqrt{n_i-1}\big)\hat\sigma_i}
=\hat\mu_{si}-\hat\sigma_{si}\,\frac{s_i\,T_{n_i-1}}{\sqrt{n_i(n_i-1)}}, \qquad (4.2)
\]
where $T_{n_i-1}$ denotes a Student $t$ random variable with $n_i-1$ degrees of freedom, and the GPQ $R_{\sigma_i}$ given in (2.1) can be rewritten as
\[
R_{\sigma_i}=\hat\sigma_{si}\,(\hat\sigma_i/\sigma_i)^{-1}=s_i\,\hat\sigma_{si}\Big/\sqrt{\chi^2_{n_i-1}},\qquad i=1,\dots,k. \qquad (4.3)
\]
Therefore, if the data are from normal distributions, the GPQ for $\mu$ is given in (2.2), with $R_{\mu_i}$ and $R_{\sigma_i}$ given by (4.2) and (4.3), respectively. Thus, from (4.2) and (4.3), we conclude that, for $k$-sample normal distributions, the proposed method is equivalent to that in Krishnamoorthy and Lu (2003). Below, we consider some other examples which illustrate that the proposed approach is applicable to other members of the location-scale family.

4.2 k-sample from Logistic distributions

Here we apply the proposed method to $k$-sample logistic families. In this case, we directly apply the methods described above with the pdfs $g_i$, $i=1,\dots,k$, set equal to the standard logistic pdf. That is, the pdfs of $Z_{1i}$ and $Z_{2i}$ have the forms in (3.2) and (3.3) with $g_i(x)=\exp(-x)/[1+\exp(-x)]^2$, $i=1,\dots,k$.
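To make the normal case concrete, the sketch below implements steps (iii)-(xi) of the algorithm for $k$ normal samples using (4.2), (4.3) and the weights of (2.2); in this case the conditional draws reduce to Student $t$ and chi-square variates, so the procedure coincides with the Krishnamoorthy-Lu (2003) generalized-variable method. The code is in Python; the function name and interface are our own, and the number of replicates is illustrative.

```python
import numpy as np

def normal_common_mean_gci_gpv(samples, mu0, alpha=0.05, M=10_000, seed=None):
    """Monte Carlo GCI for the common mean mu and GPV for H0: mu >= mu0 vs
    H1: mu < mu0, for k normal samples with possibly unequal variances
    (GPQ of (2.2) with R_mu_i, R_sigma_i as in (4.2)-(4.3))."""
    rng = np.random.default_rng(seed)
    xbar = np.array([np.mean(x) for x in samples])                    # MRE of mu: sample means
    ss = np.array([np.sum((np.asarray(x) - np.mean(x)) ** 2) for x in samples])
    n = np.array([len(x) for x in samples])

    R = np.empty(M)
    for l in range(M):
        T = rng.standard_t(df=n - 1)                     # one t_(n_i - 1) draw per sample
        V = rng.chisquare(df=n - 1)                      # one chi-square_(n_i - 1) draw per sample
        R_mu = xbar - np.sqrt(ss) * T / np.sqrt(n * (n - 1))   # eq. (4.2)
        R_sig2 = ss / V                                  # R_{sigma_i}^2 from eq. (4.3)
        W = (n / R_sig2) / np.sum(n / R_sig2)            # weights of (2.2)
        R[l] = np.sum(W * R_mu)

    gci = tuple(np.quantile(R, [alpha / 2.0, 1.0 - alpha / 2.0]))    # step (x)
    gpv = float(np.mean(R >= mu0))                                   # step (xi), cf. (1.4)
    return gci, gpv
```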

4.3 k-sample location-scale families where the MLE does not exist

As mentioned above, the proposed method is also applicable to cases where the MLEs do not exist. In order to illustrate this last point, we consider the following example, which is based on a result due to Pitman (1979). Let the location-scale families be $\sigma_i^{-1}g_i((x-\mu)/\sigma_i)$, $i=1,2,\dots,k$, where
\[
g_i(x)=\Big(2(1+|x|)\big(1+\log(1+|x|)\big)^2\Big)^{-1},\qquad -\infty<x<\infty,\quad i=1,\dots,k. \qquad (4.4)
\]
As proved in Pitman (1979), the MLEs of $\sigma_i$, $\mu$, $i=1,\dots,k$, do not exist. Another illustrative example corresponds to the location-scale family studied in Gupta and Székely (1994). We consider $\sigma_i^{-1}g_i((x-\mu_i)/\sigma_i)$, where $g_i(x)=c_i\,(x\log^2 x)^{-1}$, $0<x\leq l_i<1$, $i=1,2,\dots,k$, with $0<l_i<1$ a constant and $c_i=-1/\log(l_i)$ a constant. Gupta and Székely (1994) proved that, for such families, the MLEs of the location and scale parameters do not exist.

5 Simulation study and data analysis

5.1 Simulation study

In this section, we carry out intensive simulation studies in order to evaluate the performance of the suggested approach in small and moderate sample sizes. To this end, we set $k=3$ and generate samples from the related distributions of interest. Namely, the simulated coverage probabilities of the 95% GCI are presented in Tables 1-3 and, at significance level $\alpha=.05$, the simulated powers of the proposed test are given in Tables 4-6. From Tables 1-3, the empirical confidence level of the proposed GCI for $\mu$ is close to the nominal confidence level of 95%. Further, it is noticed that, as the sample size increases, the coverage probability gets closer to the nominal confidence level (95%). Also, we study the performance of the solution to the testing problem (1.2) for the case where $k=3$ and $\mu_0=2$.
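For completeness, the family in (4.4) is easy to work with numerically: its standard pdf has an explicit cdf, so draws can be obtained by inversion and then fed into the MRE and density sketches given earlier. In the Python snippet below, the inverse-cdf derivation is ours and rests on the observation that $g$ in (4.4) is symmetric about zero with $F(x)=\tfrac12+\tfrac12\big(1-1/(1+\log(1+x))\big)$ for $x\geq 0$.

```python
import numpy as np

def g44(x):
    """Standard pdf of the family in (4.4); its location/scale MLEs do not exist."""
    ax = np.abs(x)
    return 1.0 / (2.0 * (1.0 + ax) * (1.0 + np.log1p(ax)) ** 2)

def sample_g44(size, rng=None):
    """Inverse-cdf sampling from g44: for u ~ U(0,1) and q = |2u - 1|,
    the draw is sign(2u - 1) * (exp(q / (1 - q)) - 1)."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=size)
    q = np.abs(2.0 * u - 1.0)
    return np.sign(2.0 * u - 1.0) * np.expm1(q / (1.0 - q))

# location-scale data: x = mu + sigma * sample_g44(n); g44 can be passed to the
# mre_location_scale and conditional_densities sketches above.
```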

Table 1: The coverage probabilities (CPR) of the 95% GCI (normal samples), for sample sizes $(n_1,n_2,n_3)\in\{(5,5,5),(10,10,10),(20,20,20),(5,10,20)\}$ and parameter configurations $(\mu,\sigma_1,\sigma_2,\sigma_3)$ including $(2,2,2,2)$, $(2,2,4,6)$, $(2,4,6,2)$, $(2,0.5,100,500)$, $(2,2,100,200)$ and $(2,500,100,0.5)$.

Table 2: The coverage probabilities (CPR) of the 95% GCI (logistic samples), for $(\mu,\sigma_1,\sigma_2,\sigma_3)\in\{(2,2,2,2),(2,2,4,6)\}$ and $(n_1,n_2,n_3)\in\{(5,5,5),(10,10,10),(20,20,20)\}$.

Tables 4-6 show that the power function varies with the values of $\mu$, $n_i$ and $\sigma_i$, $i=1,2,3$. More specifically, from these tables, it can be seen that when $\mu=\mu_0=2$ the powers are all approximately equal to the significance level 0.05. As the exact value of $\mu$ decreases, the power steadily increases to 1 as the distance between $\mu$ and $\mu_0$ increases; also, the power decreases to 0 as the exact value of $\mu$ increases. Furthermore, when the exact value of $\mu$ is less than $\mu_0$, the power increases with the sample size, for each value of $\mu$. Figure 5.1 confirms the monotonicity of the power as well as the consistency of the proposed test.
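The coverage entries of Tables 1 and 2 can be reproduced, approximately, by a standard two-level Monte Carlo loop: simulate data under a given configuration, compute the GCI, and record whether it covers the true $\mu$. A sketch for the normal case is given below (Python); it reuses the hypothetical `normal_common_mean_gci_gpv` helper sketched in Section 4, and the replication counts are illustrative rather than those used in the paper.

```python
import numpy as np

def simulate_coverage(n_sizes, mu, sigmas, reps=2000, alpha=0.05, M=2000, seed=2):
    """Empirical coverage of the (1 - alpha) GCI for the common normal mean,
    in the spirit of Table 1."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(reps):
        samples = [rng.normal(mu, s, size=n) for n, s in zip(n_sizes, sigmas)]
        (lo, hi), _ = normal_common_mean_gci_gpv(samples, mu0=mu, alpha=alpha, M=M)
        hits += int(lo <= mu <= hi)
    return hits / reps

# e.g. simulate_coverage((5, 5, 5), mu=2.0, sigmas=(2.0, 4.0, 6.0))
```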

Table 3: The coverage probabilities (CPR) of the 95% GCI for $\mu$ (the family in (4.4)), for $(\mu,\sigma_1,\sigma_2,\sigma_3)\in\{(2,2,2,2),(2,2,4,6)\}$ and $(n_1,n_2,n_3)\in\{(5,5,5),(10,10,10),(20,20,20)\}$.

Table 4: The power function of $\mu$ versus sample size (normal family), for $(n_1,n_2,n_3)\in\{(5,5,5),(20,20,20),(5,10,20)\}$ and configurations $(\mu,\sigma_1,\sigma_2,\sigma_3)$ with $\mu\in\{0,1,2,3\}$ and $(\sigma_1,\sigma_2,\sigma_3)\in\{(2,2,2),(2,4,6),(4,6,2)\}$.

Table 5: The power function of $\mu$ versus sample size (logistic family), for $(n_1,n_2,n_3)\in\{(5,5,5),(20,20,20)\}$ and configurations $(\mu,\sigma_1,\sigma_2,\sigma_3)$ with $\mu\in\{0,1,2,3\}$ and $(\sigma_1,\sigma_2,\sigma_3)=(2,4,6)$.

Table 6: The simulated powers for $\mu$ (the location-scale family in (4.4)), for $(n_1,n_2,n_3)\in\{(5,5,5),(20,20,20)\}$ and configurations $(\mu,\sigma_1,\sigma_2,\sigma_3)$ with $\mu\in\{0,1,2,3\}$ and $(\sigma_1,\sigma_2,\sigma_3)=(2,4,6)$.

Figure 5.1: Simulated power of the proposed test versus the exact value of $\mu$, for sample sizes $(5,5,5)$, $(20,20,20)$ and, in some panels, $(5,10,20)$: (a) normal family with $(\sigma_1,\sigma_2,\sigma_3)=(2,2,2)$; (b) normal family with $(\sigma_1,\sigma_2,\sigma_3)=(2,4,6)$; (c) logistic family; (d) normal family with $(\sigma_1,\sigma_2,\sigma_3)=(0.5,100,500)$; (e) the family in (4.4).

5.2 Illustrative examples and data analysis

In this subsection, we illustrate the application of the proposed method with a data set that corresponds to the case where $k=2$. In this example, we consider that the samples are taken from normal populations.

Normal Body Temperature data set. This data set is presented in Mackowiak et al. (1992). In this data set, a total of 130 subjects, 65 males and 65 females, had their body temperatures recorded. Furthermore, it has already been confirmed that the temperatures in these two gender groups are normally distributed. In particular, for the male group one can consider $X_1\sim N(\mu_1,\sigma_1)$ and for the female group one can consider $X_2\sim N(\mu_2,\sigma_2)$. In addition, it is reasonable to assume that the average body temperatures for males and females are equal; this can be confirmed by the fact that the 95% GCI for $\mu_1-\mu_2$ contains 0. Then, by applying the proposed method, the 95% GCI of the common parameter $\mu$ excludes the value 98.6. This contrasts with the fact that, for many years, the value 98.6 has been considered the normal average body temperature (see Mackowiak et al., 1992, and references therein). However, in the quoted paper, the authors concluded that this value is erroneous; thus, the obtained GCI corroborates this finding.

Also, we consider testing $H_0:\mu\geq 98.6$ versus $H_1:\mu<98.6$ at the 5% significance level. By applying the proposed method, the GPV is 0. Thus, since the GPV is smaller than 0.05, we reject the null hypothesis at the 0.05 level of significance. Further, Mackowiak et al. (1992) concluded that the average normal body temperature is 98.2°F. Accordingly, we consider testing $H_0:\mu\geq 98.2$ versus $H_1:\mu<98.2$. The obtained GPV is greater than 0.05, and hence we fail to reject $H_0$ at the 0.05 level of significance.

6 Conclusion

In this paper, we studied an inference problem concerning the common location parameter of several location-scale families where the scale parameters are unknown and possibly unequal. In solving this problem, we presented a general approach for establishing the GPQ and the GTV for the common location parameter. In particular, the proposed GPQ and GTV are functions of the MREs, which are known to be more general and more efficient than the MLEs. Also, we carried out intensive simulation studies which showed that the proposed approach gives confidence intervals with high coverage probability; further, the resulting tests have high power and preserve the significance level. In order to illustrate the application of the proposed method, we analysed the normal body temperature data. In particular, our findings corroborate those in Mackowiak et al. (1992), for which the average body temperature of 98.6°F is erroneous although this value has been used as a standard for many years. In contrast, the value of 98.2°F, given in Mackowiak et al. (1992) as the average normal body temperature, seems to be reliable. Finally, it is noticed that the proposed approach is applicable to all members of location-scale families, as opposed to the method in Krishnamoorthy and Lu (2003), which is designed only for the normal case.

A Appendix

A.1 Minimum risk equivariant estimator of location-scale parameters

In this subsection, we present some results which are useful in deriving the MRE. Since we consider the quadratic loss function, the MRE corresponds to the Pitman estimator (Pitman, 1939).

Theorem A.1 Let $X_1,\dots,X_n$ be an iid random sample from a location family with pdf $f(x\mid\lambda)=g(x-\lambda)$, where $\lambda$ is unknown. Also, consider the loss function $L(\lambda,a)=(\lambda-a)^2$ and suppose that there exists an equivariant estimator $\delta_0$ with finite risk. Then, the MRE of $\lambda$ is
\[
\hat\lambda_p(x)=\int_{-\infty}^{\infty}t\prod_{i=1}^{n}g(x_i-t)\,dt\ \Big/\ \int_{-\infty}^{\infty}\prod_{i=1}^{n}g(x_i-t)\,dt. \qquad (A.1)
\]
The proof follows directly from Theorem 1.20 in Lehmann and Casella (1998, p. 154) and Theorem 6.10 in Schervish (1997, p. 348).

Theorem A.2 Let $X_1,\dots,X_n$ be an iid random sample from a scale family with pdf $f(x\mid\tau)=\tau^{-1}g(x/\tau)$, where $\tau$ is unknown. Also, let the loss function be $L(\tau,a)=(a-\tau)^2/\tau^2$ and suppose that there exists an equivariant estimator $\delta_0$ with finite risk. Then, the MRE of $\tau$ is
\[
\hat\tau_p(x)=\int_0^{\infty}t^{\,n}\prod_{i=1}^{n}g(tx_i)\,dt\ \Big/\ \int_0^{\infty}t^{\,n+1}\prod_{i=1}^{n}g(tx_i)\,dt. \qquad (A.2)
\]
For a proof, we refer to Lehmann and Casella (1998, p. 170) and Schervish (1997, p. 352).

Theorem A.3 Let $X_1,X_2,\dots,X_n$ be an iid random sample from a location-scale family with pdf $f(x\mid\lambda,\tau)=\tau^{-1}g((x-\lambda)/\tau)$, where $\lambda$ and $\tau$ are unknown. Also, suppose that there exists an equivariant estimator $\delta_0$ with finite risk. Then, under the quadratic loss function, the MREs of $\lambda$ and $\tau$ are respectively
\[
\hat\lambda(x)=\int_0^{\infty}\!\!\int_{-\infty}^{\infty}u\,v^{-n-3}\prod_{i=1}^{n}g\big((x_i-u)/v\big)\,du\,dv\ \Big/\ \int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{-n-3}\prod_{i=1}^{n}g\big((x_i-u)/v\big)\,du\,dv,
\]
\[
\hat\tau(x)=\int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{-n-2}\prod_{i=1}^{n}g\big((x_i-u)/v\big)\,du\,dv\ \Big/\ \int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{-n-3}\prod_{i=1}^{n}g\big((x_i-u)/v\big)\,du\,dv.
\]

Proof The result is given in Lehmann and Casella (1998, Chapter 3). However, for our paper to be self-contained, the main steps of a proof are outlined here. Let $\delta_0(X)$ and $\delta_1(X)$ be equivariant estimators of $\mu$ and $\sigma$ respectively. By Theorem 3.17 in Lehmann and Casella (1998, p. 174), the MRE for $\mu$ is
\[
\hat\mu(x)=\delta_0(x)-w(z)\,\delta_1(x), \qquad (A.3)
\]
where, by relation (3.44) in Lehmann and Casella (1998, p. 175),
\[
w(z)=E\big[\delta_0(X)\delta_1(X)\mid Z\big]\big/E\big[\delta_1^2(X)\mid Z\big], \qquad (A.4)
\]
with $Z=(Z_1,Z_2,\dots,Z_{n-1})$, where $Z_i=(X_i-X_n)/(X_{n-1}-X_n)$, $i=1,2,\dots,n-2$, and $Z_{n-1}=(X_{n-1}-X_n)/|X_{n-1}-X_n|$. Further, as equivariant estimators of $\mu$ and $\sigma$, we choose $\delta_0(X)=X_n$ and $\delta_1(X)=\delta_2$ with $\delta_2=X_{n-1}-X_n$, respectively. One can verify that the transformation from $x$ to $(z,\delta_0,\delta_2)$ has Jacobian $\delta_2^{\,n-2}$, and then, for $\mu=0$, $\sigma=1$, the joint pdf of $(Z,\delta_0,\delta_2)$ is given by
\[
f_{Z,\delta_0,\delta_2}(z,\delta_0,\delta_2)=|\delta_2|^{\,n-2}\,g(\delta_2+\delta_0)\,g(\delta_0)\prod_{i=1}^{n-2}g(z_i\delta_2+\delta_0),
\]
both for $z_{n-1}=1$ and $z_{n-1}=-1$. Then, by some algebraic computations, we get
\[
w(z)=\frac{\displaystyle\int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{\,n-1}\,u\,g(u)\prod_{i=1}^{n-1}g\big((x_i-x_n)v+u\big)\,du\,dv}{\displaystyle(x_{n-1}-x_n)\int_0^{\infty}\!\!\int_{-\infty}^{\infty}v^{\,n}\,g(u)\prod_{i=1}^{n-1}g\big((x_i-x_n)v+u\big)\,du\,dv}.
\]
Further, by the change of variables $v=1/s$ and $u=x_nv-tv$, we get
\[
w(z)=\frac{\displaystyle x_n\int_0^{\infty}\!\!\int_{-\infty}^{\infty}s^{-n-3}\prod_{i=1}^{n}g\big((x_i-t)/s\big)\,dt\,ds-\int_0^{\infty}\!\!\int_{-\infty}^{\infty}t\,s^{-n-3}\prod_{i=1}^{n}g\big((x_i-t)/s\big)\,dt\,ds}{\displaystyle(x_{n-1}-x_n)\int_0^{\infty}\!\!\int_{-\infty}^{\infty}s^{-n-3}\prod_{i=1}^{n}g\big((x_i-t)/s\big)\,dt\,ds}. \qquad (A.5)
\]
Therefore, combining (A.3) and (A.5), we get the first statement of the theorem.

To prove the second statement, note that $\hat\sigma_p(X)$ is the MRE of $\sigma$ if and only if it is a function of the differences $Y_i=X_i-X_n$, $i=1,2,\dots,n-1$ (see Lehmann and Casella, 1998). Further, the joint pdf of $(Y_1,Y_2,\dots,Y_{n-1})$ is
\[
\sigma^{-n}\int_{-\infty}^{\infty}g(t/\sigma)\prod_{i=1}^{n-1}g\big((y_i+t)/\sigma\big)\,dt=\sigma^{-n+1}\int_{-\infty}^{\infty}g(u)\prod_{i=1}^{n-1}g\big((y_i/\sigma)+u\big)\,du,
\]
and this is a joint density of $n-1$ observations from a scale family with scale parameter $\sigma$. Therefore, it suffices to apply Theorem A.2, replacing $t^n$ and $\prod_{i=1}^{n}g(tx_i)$ by $t^{\,n-1}$ and $\int_{-\infty}^{\infty}g(u)\prod_{i=1}^{n-1}g(ty_i+u)\,du$ respectively. Further, we replace $y_i$ by $x_i-x_n$, $i=1,2,\dots,n-1$, and then the desired result follows from the transformation $t=1/s$ and $u=(x_n-v)/s$, which completes the proof.

A.2 Distributions of pivotal quantities

Let $\hat\mu_i$ be the equivariant estimator of $\mu$ based on the $i$th sample, $i=1,\dots,k$. Also, let
\[
a_{ij}=X_{ij}-\hat\mu_i,\qquad j=1,\dots,n_i;\ i=1,\dots,k, \qquad (A.6)
\]
where the $k$ samples $(X_{i1},X_{i2},\dots,X_{in_i})$, $i=1,\dots,k$, are independent and from location families with common location parameter $\mu$. Also, let $a_i=\big(a_{i1},a_{i2},\dots,a_{i(n_i-1)}\big)'$, $i=1,2,\dots,k$. Similarly, for the case of scale families, the $a_{ij}$ are replaced by $c_{ij}=X_{ij}/\hat\sigma_i$, $j=1,2,\dots,n_i$, $i=1,2,\dots,k$, respectively.

Proposition A.1 Assume the $k$ random samples are from $k$ independent location families and assume that relation (A.6) holds. Then $a=\big(a_1',\dots,a_k'\big)'$ is an ancillary statistic. Furthermore, the joint pdf of $a$ is
\[
f(a)=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\prod_{i=1}^{k}\Big(\prod_{j=1}^{n_i}g_i(a_{ij}+z_i)\Big)\,dz_1\cdots dz_k.
\]

Proof Let $e_m$ denote an $m$-column vector with all entries equal to 1. We have $X_i-\hat\mu_ie_{n_i}=(X_i-\mu_ie_{n_i})-(\hat\mu_i-\mu_i)e_{n_i}$, $i=1,2,\dots,k$. Further, let $\delta(X_i)=\hat\mu_i$. Since $\delta(X_i)$ is an equivariant estimator, we have $\hat\mu_i-\mu_i=\delta(X_i)-\mu_i=\delta(X_i-\mu_ie_{n_i})$.

Therefore, since the distribution of $X_i-\mu_ie_{n_i}=\big(X_{ij}-\mu_i,\ j=1,2,\dots,n_i\big)$ does not depend on the parameter, we conclude that the distributions of $X_i-\hat\mu_ie_{n_i}$, $i=1,2,\dots,k$, do not depend on the parameter, and this proves that $a$ is an ancillary statistic.

To establish the joint pdf of $a$, we assume, without loss of generality, that $\sigma_1=\sigma_2=\dots=\sigma_k=1$. Also, let us define $a_{in_i}$ by $X_{in_i}=a_{in_i}+\hat\mu_i$, $i=1,2,\dots,k$. Then, since $\hat\mu_i$, $i=1,2,\dots,k$, are equivariant, $a_{in_i}$ can be expressed as a function of $a_{i1},\dots,a_{i(n_i-1)}$ for each $i=1,2,\dots,k$, and thus one can set $a_{in_i}=T_i\big(a_{i1},\dots,a_{i(n_i-1)}\big)$. Then $x_{ij}=a_{ij}+\hat\mu_i$, $j=1,\dots,n_i-1$, and $x_{in_i}=a_{in_i}+\hat\mu_i$, $i=1,\dots,k$. Let $X=\big(X_1',\dots,X_k'\big)'$ with $X_i=(X_{i1},X_{i2},\dots,X_{in_i})'$, $i=1,2,\dots,k$, and let $x=\big(x_1',\dots,x_k'\big)'$ with $x_i=(x_{i1},x_{i2},\dots,x_{in_i})'$, $i=1,2,\dots,k$. We have $f(x)=\prod_{i=1}^{k}\prod_{j=1}^{n_i}g_i(x_{ij}-\mu_i)$. Also, let $\hat\mu=(\hat\mu_1,\hat\mu_2,\dots,\hat\mu_k)'$. The transformation from $x$ to $(a',\hat\mu')'$ has Jacobian matrix
\[
J=\begin{pmatrix}J_1 & & \\ & \ddots & \\ & & J_k\end{pmatrix}
\quad\text{with}\quad
J_i=\begin{pmatrix}\dfrac{\partial x_{i1}}{\partial a_{i1}} & \cdots & \dfrac{\partial x_{i1}}{\partial a_{i(n_i-1)}} & \dfrac{\partial x_{i1}}{\partial\hat\mu_i}\\[2pt] \vdots & & \vdots & \vdots\\[2pt] \dfrac{\partial x_{in_i}}{\partial a_{i1}} & \cdots & \dfrac{\partial x_{in_i}}{\partial a_{i(n_i-1)}} & \dfrac{\partial x_{in_i}}{\partial\hat\mu_i}\end{pmatrix}
=\begin{pmatrix}I_{n_i-1} & e_{n_i-1}\\ 0' & 1\end{pmatrix},\qquad i=1,2,\dots,k,
\]
where $I_m$ stands for the identity matrix of size $m$ and $e_m$ is an $m$-column vector with all entries equal to 1. Then, since $|J|=\prod_{i=1}^{k}|J_i|=1$, the joint pdf of $(a',\hat\mu')'$ is
\[
f(a,\hat\mu)=\prod_{i=1}^{k}\prod_{j=1}^{n_i}g_i\big(a_{ij}+\hat\mu_i-\mu_i\big). \qquad (A.7)
\]
Therefore, combining (A.7) and the change of variables $z_i=\hat\mu_i-\mu_i$, $i=1,2,\dots,k$, we get
\[
f(a)=\int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\prod_{i=1}^{k}\Big(\prod_{j=1}^{n_i}g_i(a_{ij}+z_i)\Big)\,dz_1\cdots dz_k,
\]
which completes the proof.

Proposition A.2 Assume that the $k$ samples are taken from the pdfs in (1.1) with $\sigma_i=1$. Then, conditionally on $a$, the joint pdf of $\hat\mu-\mu$ is
\[
f(x\mid a)=\prod_{i=1}^{k}\prod_{j=1}^{n_i}g_i(a_{ij}+x_i)\ \Big/\ \int_{-\infty}^{\infty}\cdots\int_{-\infty}^{\infty}\prod_{i=1}^{k}\Big(\prod_{j=1}^{n_i}g_i(a_{ij}+z_i)\Big)\,dz_1\cdots dz_k,\qquad x\in\mathbb{R}^k.
\]

Proof From (A.7), the joint pdf of $\big(a',(\hat\mu-\mu)'\big)'$ is $f(a,x)=\prod_{i=1}^{k}\prod_{j=1}^{n_i}g_i(a_{ij}+x_i)$, $x=(x_1,x_2,\dots,x_k)'\in\mathbb{R}^k$, which completes the proof.

In a similar way, we establish the following proposition, which gives the corresponding result for the general case where the $k$ samples are from location-scale families. To this end, let $Z_1=(Z_{11},Z_{12},\dots,Z_{1k})'$, $Z_2=(Z_{21},Z_{22},\dots,Z_{2k})'$ with $Z_{1i}=(\hat\mu_i-\mu_i)/\hat\sigma_i$, $Z_{2i}=\hat\sigma_i/\sigma_i$, $i=1,2,\dots,k$. Also, let $b=\big(b_1',b_2',\dots,b_k'\big)'$ with $b_i=\big(b_{i1},b_{i2},\dots,b_{i(n_i-2)}\big)'$,
\[
b_{ij}=\big(X_{ij}-\hat\mu_i\big)\big/\hat\sigma_i,\qquad j=1,2,\dots,n_i,\ i=1,2,\dots,k. \qquad (A.8)
\]

Proposition A.3 Assume that the $k$ random samples are from $k$ independent location-scale families and assume that relation (A.8) holds. Then $b=\big(b_1',\dots,b_k'\big)'$ is an ancillary statistic. Further, conditionally on $b$, the joint pdf of $(Z_1,Z_2)$ is
\[
f(x,y\mid b)=\prod_{i=1}^{k}y_i^{\,n_i-1}\prod_{j=1}^{n_i}g_i\big((b_{ij}+x_i)y_i\big)\ \Big/\ k(b),\qquad x\in\mathbb{R}^k,\ y\in\mathbb{R}^{+k},
\]
with
\[
k(b)=\int\cdots\int\prod_{i=1}^{k}\Big(w_i^{\,n_i-1}\prod_{j=1}^{n_i}g_i\big((b_{ij}+z_i)w_i\big)\Big)\,dw_1\,dz_1\cdots dw_k\,dz_k,
\]
where the integration is over $w_i>0$ and $z_i\in\mathbb{R}$, $i=1,\dots,k$.

Proof From the equivariance of $\hat\mu_i$, $\hat\sigma_i$, $i=1,2,\dots,k$, and by using similar arguments as in the proof of Proposition A.1, we prove that $b$ is an ancillary statistic. Further, let $\hat\mu=(\hat\mu_1,\hat\mu_2,\dots,\hat\mu_k)'$ and $\hat\sigma=(\hat\sigma_1,\hat\sigma_2,\dots,\hat\sigma_k)'$. In a similar way as in the proof of Proposition A.1, the joint pdf of $(b',\hat\mu',\hat\sigma')'$ is given by
\[
f(b,\hat\mu,\hat\sigma)=\prod_{i=1}^{k}|b_{in_i}|\,\hat\sigma_i^{\,n_i-2}\,\sigma_i^{-n_i}\prod_{j=1}^{n_i}g_i\big((\hat\sigma_ib_{ij}+\hat\mu_i-\mu_i)/\sigma_i\big).
\]
Further, the transformation from $(b',\hat\mu',\hat\sigma')'$ to $(b',z_1',z_2')'$ has Jacobian $\prod_{i=1}^{k}\sigma_i^2z_{2i}$, and then the joint pdf of $(b',Z_1',Z_2')'$ is
\[
f(b,z_1,z_2)=\prod_{i=1}^{k}|b_{in_i}|\,z_{2i}^{\,n_i-1}\prod_{j=1}^{n_i}g_i\big(z_{2i}(b_{ij}+z_{1i})\big),
\]
which completes the proof.

References

[1] Bebu, I., and Mathew, T. (2007). Comparing the means and variances of a bivariate log-normal distribution. Statist. Med., 27(14).

[2] Fairweather, W.R. (1972). A method for obtaining an exact confidence interval for the common mean of several normal populations. Appl. Statist., 21.

[3] Graybill, F.A., and Deal, R.B. (1959). Combining unbiased estimators. Biometrics, 15.

[4] Gupta, A.K., and Székely, G.J. (1994). On location and scale maximum likelihood estimators. Proceedings of the American Mathematical Society, 120(2).

[5] Jordan, S.M., and Krishnamoorthy, K. (1996). Exact confidence intervals for the common mean of several normal populations. Biometrics, 52.

[6] Krishnamoorthy, K., and Lu, Y. (2003). Inferences on the common mean of several normal populations based on the generalized variable method. Biometrics, 59.

[7] Krishnamoorthy, K., Mathew, T., and Ramachandran, G. (2007). Upper limits for exceedance probabilities under the one-way random effects model. Ann. Occup. Hyg., 51(4).

[8] Lehmann, E.L., and Casella, G. (1998). Theory of Point Estimation, 2nd ed. Springer-Verlag.

[9] Mackowiak, P.A., Wasserman, S.S., and Levine, M.M. (1992). A critical appraisal of 98.6 degrees F, the upper limit of the normal body temperature, and other legacies of Carl Reinhold August Wunderlich. Journal of the American Medical Association, 268.

[10] Maric, N., and Graybill, F.A. (1979). Small sample confidence intervals on the common mean of two normal distributions with unequal variances. Communications in Statistics - Theory and Methods, A8.

[11] Pagurova, V.I., and Gurskii, V.V. (1979). A confidence interval for the common mean of several normal distributions. Theory of Probability and Its Applications, 88.

[12] Pitman, E.J.G. (1979). Some Basic Theory for Statistical Inference. Chapman and Hall Ltd.

[13] Pitman, E.J.G. (1939). The estimation of the location and scale parameters of a continuous population of any given form. Biometrika, 30.

[14] Schervish, M.J. (1997). Theory of Statistics. Springer.

[15] Sinha, B.K. (1985). Unbiased estimation of the variance of the Graybill-Deal estimator of the common mean of several normal populations. Can. J. Statist., 13.

[16] Tsui, K., and Weerahandi, S. (1989). Generalized p-values in significance testing of hypotheses in the presence of nuisance parameters. JASA, 84(406).

[17] Weerahandi, S. (1993). Generalized confidence intervals. JASA, 88(423).

[18] Yu, P.L.H., Sun, Y., and Sinha, B.K. (1999). On exact confidence intervals for the common mean of several normal populations. JSPI, 81.


More information

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b)

LECTURE 5 NOTES. n t. t Γ(a)Γ(b) pt+a 1 (1 p) n t+b 1. The marginal density of t is. Γ(t + a)γ(n t + b) Γ(n + a + b) LECTURE 5 NOTES 1. Bayesian point estimators. In the conventional (frequentist) approach to statistical inference, the parameter θ Θ is considered a fixed quantity. In the Bayesian approach, it is considered

More information

1 Introduction to Estimation

1 Introduction to Estimation STT 430/630/ES 760 Lecture Notes: Chapter 5: Estimation 1 February 23, 2009 Chapter 5: Estimation The probability distributions such as the normal, exponential, or binomial are defined in terms of parameters

More information

Statistical methods for evaluating the linearity in assay validation y,z

Statistical methods for evaluating the linearity in assay validation y,z Research Article Received: 28 February 2008, Revised: 21 June 2008, Accepted: 20 August 2008, Published online in Wiley InterScience: 6 October 2008 (www.interscience.wiley.com) DOI: 10.1002/cem.1194 Statistical

More information

MAS223 Statistical Inference and Modelling Exercises

MAS223 Statistical Inference and Modelling Exercises MAS223 Statistical Inference and Modelling Exercises The exercises are grouped into sections, corresponding to chapters of the lecture notes Within each section exercises are divided into warm-up questions,

More information

Fundamental Probability and Statistics

Fundamental Probability and Statistics Fundamental Probability and Statistics "There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don't know. But there are

More information

Regression and Statistical Inference

Regression and Statistical Inference Regression and Statistical Inference Walid Mnif wmnif@uwo.ca Department of Applied Mathematics The University of Western Ontario, London, Canada 1 Elements of Probability 2 Elements of Probability CDF&PDF

More information

By Godase, Shirke, Kashid. Published: 26 April 2017

By Godase, Shirke, Kashid. Published: 26 April 2017 Electronic Journal of Applied Statistical Analysis EJASA, Electron. J. App. Stat. Anal. http://siba-ese.unisalento.it/index.php/ejasa/index e-issn: 2070-5948 DOI: 10.1285/i20705948v10n1p29 Tolerance intervals

More information

HANDBOOK OF APPLICABLE MATHEMATICS

HANDBOOK OF APPLICABLE MATHEMATICS HANDBOOK OF APPLICABLE MATHEMATICS Chief Editor: Walter Ledermann Volume VI: Statistics PART A Edited by Emlyn Lloyd University of Lancaster A Wiley-Interscience Publication JOHN WILEY & SONS Chichester

More information

Applications of Basu's TheorelTI. Dennis D. Boos and Jacqueline M. Hughes-Oliver I Department of Statistics, North Car-;'lina State University

Applications of Basu's TheorelTI. Dennis D. Boos and Jacqueline M. Hughes-Oliver I Department of Statistics, North Car-;'lina State University i Applications of Basu's TheorelTI by '. Dennis D. Boos and Jacqueline M. Hughes-Oliver I Department of Statistics, North Car-;'lina State University January 1997 Institute of Statistics ii-limeo Series

More information

Master s Written Examination

Master s Written Examination Master s Written Examination Option: Statistics and Probability Spring 016 Full points may be obtained for correct answers to eight questions. Each numbered question which may have several parts is worth

More information

TECHNICAL REPORT # 59 MAY Interim sample size recalculation for linear and logistic regression models: a comprehensive Monte-Carlo study

TECHNICAL REPORT # 59 MAY Interim sample size recalculation for linear and logistic regression models: a comprehensive Monte-Carlo study TECHNICAL REPORT # 59 MAY 2013 Interim sample size recalculation for linear and logistic regression models: a comprehensive Monte-Carlo study Sergey Tarima, Peng He, Tao Wang, Aniko Szabo Division of Biostatistics,

More information

Chapter 3: Maximum Likelihood Theory

Chapter 3: Maximum Likelihood Theory Chapter 3: Maximum Likelihood Theory Florian Pelgrin HEC September-December, 2010 Florian Pelgrin (HEC) Maximum Likelihood Theory September-December, 2010 1 / 40 1 Introduction Example 2 Maximum likelihood

More information

Testing Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box Durham, NC 27708, USA

Testing Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box Durham, NC 27708, USA Testing Simple Hypotheses R.L. Wolpert Institute of Statistics and Decision Sciences Duke University, Box 90251 Durham, NC 27708, USA Summary: Pre-experimental Frequentist error probabilities do not summarize

More information

SAMPLE SIZE RE-ESTIMATION FOR ADAPTIVE SEQUENTIAL DESIGN IN CLINICAL TRIALS

SAMPLE SIZE RE-ESTIMATION FOR ADAPTIVE SEQUENTIAL DESIGN IN CLINICAL TRIALS Journal of Biopharmaceutical Statistics, 18: 1184 1196, 2008 Copyright Taylor & Francis Group, LLC ISSN: 1054-3406 print/1520-5711 online DOI: 10.1080/10543400802369053 SAMPLE SIZE RE-ESTIMATION FOR ADAPTIVE

More information

Statistics 3858 : Contingency Tables

Statistics 3858 : Contingency Tables Statistics 3858 : Contingency Tables 1 Introduction Before proceeding with this topic the student should review generalized likelihood ratios ΛX) for multinomial distributions, its relation to Pearson

More information

Direction: This test is worth 250 points and each problem worth points. DO ANY SIX

Direction: This test is worth 250 points and each problem worth points. DO ANY SIX Term Test 3 December 5, 2003 Name Math 52 Student Number Direction: This test is worth 250 points and each problem worth 4 points DO ANY SIX PROBLEMS You are required to complete this test within 50 minutes

More information

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes

Hypothesis Test. The opposite of the null hypothesis, called an alternative hypothesis, becomes Neyman-Pearson paradigm. Suppose that a researcher is interested in whether the new drug works. The process of determining whether the outcome of the experiment points to yes or no is called hypothesis

More information

Lecture 13: Subsampling vs Bootstrap. Dimitris N. Politis, Joseph P. Romano, Michael Wolf

Lecture 13: Subsampling vs Bootstrap. Dimitris N. Politis, Joseph P. Romano, Michael Wolf Lecture 13: 2011 Bootstrap ) R n x n, θ P)) = τ n ˆθn θ P) Example: ˆθn = X n, τ n = n, θ = EX = µ P) ˆθ = min X n, τ n = n, θ P) = sup{x : F x) 0} ) Define: J n P), the distribution of τ n ˆθ n θ P) under

More information

A Very Brief Summary of Statistical Inference, and Examples

A Very Brief Summary of Statistical Inference, and Examples A Very Brief Summary of Statistical Inference, and Examples Trinity Term 2009 Prof. Gesine Reinert Our standard situation is that we have data x = x 1, x 2,..., x n, which we view as realisations of random

More information

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018

Mathematics Ph.D. Qualifying Examination Stat Probability, January 2018 Mathematics Ph.D. Qualifying Examination Stat 52800 Probability, January 2018 NOTE: Answers all questions completely. Justify every step. Time allowed: 3 hours. 1. Let X 1,..., X n be a random sample from

More information

Online publication date: 12 January 2010

Online publication date: 12 January 2010 This article was downloaded by: [Zhang, Lanju] On: 13 January 2010 Access details: Access Details: [subscription number 918543200] Publisher Taylor & Francis Informa Ltd Registered in England and Wales

More information

EXACT AND ASYMPTOTICALLY ROBUST PERMUTATION TESTS. Eun Yi Chung Joseph P. Romano

EXACT AND ASYMPTOTICALLY ROBUST PERMUTATION TESTS. Eun Yi Chung Joseph P. Romano EXACT AND ASYMPTOTICALLY ROBUST PERMUTATION TESTS By Eun Yi Chung Joseph P. Romano Technical Report No. 20-05 May 20 Department of Statistics STANFORD UNIVERSITY Stanford, California 94305-4065 EXACT AND

More information

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015

Part IB Statistics. Theorems with proof. Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua. Lent 2015 Part IB Statistics Theorems with proof Based on lectures by D. Spiegelhalter Notes taken by Dexter Chua Lent 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly)

More information

Master s Written Examination - Solution

Master s Written Examination - Solution Master s Written Examination - Solution Spring 204 Problem Stat 40 Suppose X and X 2 have the joint pdf f X,X 2 (x, x 2 ) = 2e (x +x 2 ), 0 < x < x 2

More information

Multiple Random Variables

Multiple Random Variables Multiple Random Variables Joint Probability Density Let X and Y be two random variables. Their joint distribution function is F ( XY x, y) P X x Y y. F XY ( ) 1, < x

More information

University of California, Berkeley

University of California, Berkeley University of California, Berkeley U.C. Berkeley Division of Biostatistics Working Paper Series Year 24 Paper 153 A Note on Empirical Likelihood Inference of Residual Life Regression Ying Qing Chen Yichuan

More information

Probabilities & Statistics Revision

Probabilities & Statistics Revision Probabilities & Statistics Revision Christopher Ting Christopher Ting http://www.mysmu.edu/faculty/christophert/ : christopherting@smu.edu.sg : 6828 0364 : LKCSB 5036 January 6, 2017 Christopher Ting QF

More information

Simple Linear Regression

Simple Linear Regression Simple Linear Regression In simple linear regression we are concerned about the relationship between two variables, X and Y. There are two components to such a relationship. 1. The strength of the relationship.

More information

Math 494: Mathematical Statistics

Math 494: Mathematical Statistics Math 494: Mathematical Statistics Instructor: Jimin Ding jmding@wustl.edu Department of Mathematics Washington University in St. Louis Class materials are available on course website (www.math.wustl.edu/

More information

Bootstrap. Director of Center for Astrostatistics. G. Jogesh Babu. Penn State University babu.

Bootstrap. Director of Center for Astrostatistics. G. Jogesh Babu. Penn State University  babu. Bootstrap G. Jogesh Babu Penn State University http://www.stat.psu.edu/ babu Director of Center for Astrostatistics http://astrostatistics.psu.edu Outline 1 Motivation 2 Simple statistical problem 3 Resampling

More information