ON COMBINING CORRELATED ESTIMATORS OF THE COMMON MEAN OF A MULTIVARIATE NORMAL DISTRIBUTION


K. KRISHNAMOORTHY¹ and YONG LU
Department of Mathematics, University of Louisiana at Lafayette, Lafayette, LA, USA

The inferential procedures based on an optimal combination of correlated estimators of the common mean of a multivariate normal distribution are considered. Exact properties of the conditional and unconditional confidence intervals due to Halperin (1961) are numerically evaluated. Our numerical studies show that the conditional confidence interval is slightly shorter than the unconditional confidence interval. A condition under which the conditional approach is advantageous over the best of the t procedures based on individual components is discussed. The methods are illustrated using an example.

Key words: Concomitant variable; expected length; maximum likelihood estimator; noncentral t distribution; multiple correlation coefficient; power

¹ Corresponding author: krishna@louisiana.edu

1. INTRODUCTION

The problem of combining independent estimators of the common mean of several normal populations is well known and has been extensively addressed in the literature. An important result in the common mean problem is due to Graybill and Deal (1959), who first showed, for the two-sample case, that the weighted average of the sample means with weights inversely proportional to their variances has smaller variance than either sample mean provided the sample sizes are greater than 10. Since then many authors have improved and extended this result to the case of more than two populations, and have developed several methods for hypothesis testing and interval estimation for the common mean. For a good exposition of the work in this area, we refer to Cohen and Sackrowitz (1984), Zhou and Mathew (1993), Yu, Sun and Sinha (1999), Krishnamoorthy and Lu (2003) and the references therein.

However, results on combining correlated estimators in the normal case are very limited. Halperin (1961) appears to be the first paper to address this problem. Halperin pointed out that the problem of estimating the common mean of a multivariate normal population arises when several alike neutron transportation problems are considered. Halperin derived the maximum likelihood estimator (MLE) and developed two interval estimates for the common mean of a multivariate normal population.

We shall now describe the setup of the problem as given in Halperin (1961). Let U ~ N_p(eµ, Σ), where e denotes the p × 1 vector of ones. Let Ū and S_u denote respectively the mean vector and covariance matrix based on a sample of n observations from N_p(eµ, Σ). The maximum likelihood estimator of µ due to Halperin (1961) is given by

    µ̂ = e′S_u^{-1}Ū / (e′S_u^{-1}e).    (1)

If Σ is known, then the best linear unbiased estimator (BLUE) of µ is e′Σ^{-1}Ū/(e′Σ^{-1}e), and it has variance (n e′Σ^{-1}e)^{-1}. If Σ is unknown, replacing Σ by its estimate S_u gives the MLE. The variance of the MLE is given by

    Var(µ̂) = (1 + (p − 1)/(n − p − 1)) (n e′Σ^{-1}e)^{-1},    (2)

which approaches the variance of the BLUE as n → ∞.

The form of the MLE in (1) is not convenient for developing a confidence interval for µ. To derive the distribution of the MLE, Halperin suggested the following transformation. Let A = (a_ij) be a p × p matrix such that a_i1 = 1 for i = 1, ..., p, a_ii = −1 for i = 2, ..., p, and a_ij = 0 elsewhere. Then AU = (y, x_1, ..., x_{p−1})′ = (y, X′)′ follows a p-variate normal distribution with mean vector (µ, 0, ..., 0)′ and covariance matrix

    AΣA′ = ( σ_yy   σ′_Xy )
           ( σ_Xy   Σ_XX  )_{p×p}.    (3)

Thus, estimation of the common mean µ is equivalent to estimation of the mean of y given that the mean of X is 0_{p−1}. Let (y_1, X′_1), ..., (y_n, X′_n) be independent observations on (y, X′).
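To make (1) concrete, the minimal numpy sketch below (not from the paper; the function name and the simulated covariance matrix are ours) computes Halperin's MLE from an n × p data matrix. Any positive multiple of S_u gives the same estimator, so the usual unbiased sample covariance can be used.

```python
import numpy as np

def halperin_mle(U):
    """Common-mean MLE in (1): e'S^{-1} Ubar / (e'S^{-1} e)."""
    n, p = U.shape
    Ubar = U.mean(axis=0)               # sample mean vector
    S = np.cov(U, rowvar=False)         # sample covariance matrix S_u
    w = np.linalg.solve(S, np.ones(p))  # S^{-1} e
    return (w @ Ubar) / w.sum()         # e'S^{-1}Ubar / e'S^{-1}e

# quick check on simulated data with common mean 4 (covariance matrix is illustrative only)
rng = np.random.default_rng(1)
Sigma = np.array([[2.0, 1.0, 0.8],
                  [1.0, 1.5, 0.6],
                  [0.8, 0.6, 1.2]])
U = rng.multivariate_normal(4.0 * np.ones(3), Sigma, size=20)
print(halperin_mle(U))
```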

Define

    (ȳ, X̄′) = (1/n) Σ_{i=1}^n (y_i, X′_i)    (4)

and

    W = ( w_yy   W′_Xy ) = ( Σ_{i=1}^n (y_i − ȳ)²            Σ_{i=1}^n (y_i − ȳ)(X_i − X̄)′ )
        ( W_Xy   W_XX  )   ( Σ_{i=1}^n (y_i − ȳ)(X_i − X̄)    Σ_{i=1}^n (X_i − X̄)(X_i − X̄)′ ),    (5)

so that W_XX is a (p − 1) × (p − 1) matrix. Let a = (a_1, ..., a_{p−1})′ be a vector of real numbers, and let β = Σ_XX^{-1} σ_Xy. Consider the class of estimators of the form ȳ(a) = ȳ − Σ_{i=1}^{p−1} a_i x̄_i. It can easily be shown that Var(ȳ(a)) is minimized when a = β. Thus, if β is known, then ȳ(β) is the best linear unbiased estimator of µ. If β is unknown, then replacing it by b = W_XX^{-1} W_Xy we get

    µ̂ = ȳ − b′X̄.    (6)

This is an alternative form of the MLE in (1). The expression for the variance of the MLE can be written as

    Var(µ̂) = (1 + (p − 1)/(n − p − 1)) σ_{yy·X}/n,  for n > p + 1,    (7)

where σ_{yy·X} = σ_yy(1 − ρ²_{y·X}), and ρ²_{y·X} = σ′_Xy Σ_XX^{-1} σ_Xy/σ_yy is the squared multiple correlation coefficient between y and X. It follows from (7) that the variance of the MLE is smaller than that of ȳ = ū_1 if and only if ρ²_{y·X} > (p − 1)/(n − 2). Recall that the transformation we used is (u_1, u_1 − u_2, ..., u_1 − u_p)′ = (y, x_1, ..., x_{p−1})′. If we instead let (u_2, u_2 − u_1, ..., u_2 − u_p)′ = (y, x_1, ..., x_{p−1})′, then the MLE has smaller variance than ū_2 if and only if ρ²_{u_2·(u_2−u_1),...,(u_2−u_p)} > (p − 1)/(n − 2). Proceeding this way, we see that the MLE has smaller variance than min{Var(ū_1), ..., Var(ū_p)} if and only if

    min{ρ²_{u_1·(u_1−u_2),...,(u_1−u_p)}, ..., ρ²_{u_p·(u_p−u_1),...,(u_p−u_{p−1})}} > (p − 1)/(n − 2).    (8)

Krishnamoorthy and Rohatgi (1990) showed that µ̂ can be improved using the fact that the mean of X is known to be zero. In particular, they suggested using W_{X0} = Σ_{i=1}^n X_i X′_i in place of W_XX to estimate β. This leads to the estimator µ̂_1 = ȳ − b′_0 X̄, where b_0 = W_{X0}^{-1} W_Xy and W_Xy is defined in (5). Krishnamoorthy and Rohatgi (1990) showed that µ̂_1 has smaller variance than µ̂ over a wide range of the parameter space.

The problem of estimating the mean of y given that the mean of X is 0_{p−1} has also been considered by Berry (1987), Tan and Gleser (1993), and Jin and Berry (1993). These authors refer to the vector X as a concomitant control vector for estimating the mean µ of y. This problem is equivalent to the common mean problem with the transformed variables. However, the main interest in the common mean problem is to develop a better inferential procedure, based on a combination of the correlated estimators, than the best of the t procedures based on the individual estimators, whereas there is no such interest in the problem of estimating µ with a concomitant control vector.

In this article, we are mainly interested in comparing three confidence intervals, including the t-interval based on the marginal distribution of y, that are given in Halperin (1961). In the following section, we describe the conditional interval and the unconditional interval due to Halperin, and present expressions for their expected lengths.
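The equivalence between (1) and the regression form (6) can be checked numerically; the sketch below (our own code, reusing halperin_mle from the previous sketch) forms the transformed variables y = u_1 and x_j = u_1 − u_{j+1} and evaluates ȳ − b′X̄.

```python
import numpy as np

def mle_regression_form(U):
    """Alternative form (6) of the MLE: mu_hat = ybar - b' Xbar with b = W_XX^{-1} W_Xy."""
    y = U[:, 0]                      # y_i = u_{i1}
    X = U[:, [0]] - U[:, 1:]         # x_{ij} = u_{i1} - u_{i,j+1}
    yc = y - y.mean()
    Xc = X - X.mean(axis=0)
    W_XX = Xc.T @ Xc                 # (p-1) x (p-1) corrected sums of squares
    W_Xy = Xc.T @ yc
    b = np.linalg.solve(W_XX, W_Xy)
    return y.mean() - b @ X.mean(axis=0)

# mle_regression_form(U) and halperin_mle(U) agree up to rounding error
```

Running both functions on the same data matrix returns the same value up to floating-point error, which is a useful sanity check when implementing either form.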

The expected lengths are compared numerically. Our comparison studies in Section 3 show that the conditional intervals are either slightly shorter than or very close to the unconditional intervals in all the cases considered. We also discuss a condition under which the expected length of the conditional confidence interval is shorter than that of the best of the t intervals based on the individual means. For the sake of completeness, we also present the test based on the conditional approach, and its power function. The methods are illustrated using a simulated data set.

2. INTERVAL ESTIMATION AND EXPECTED LENGTHS

In the following lemma, we present some basic distributional results related to the statistics defined in the previous section. These results can be found, for example, in Muirhead (1982, Chapter 3).

Lemma 2.1.
(i) The conditional distribution of b = W_XX^{-1} W_Xy given (X_1, ..., X_n) is N_{p−1}(β, σ_{yy·X} W_XX^{-1}).
(ii) n X̄′Σ_XX^{-1}X̄ = (√n X̄′Σ_XX^{-1/2})(Σ_XX^{-1/2}X̄√n) = Z′Z ~ χ²_{p−1}, where Z = Σ_XX^{-1/2}(√n X̄).
(iii) V = X̄′Σ_XX^{-1}X̄ / X̄′W_XX^{-1}X̄ ~ χ²_{n−p+1}, independently of X̄ (or Z).
(iv) Q = n X̄′W_XX^{-1}X̄ = Z′Z/V ~ ((p − 1)/(n − p + 1)) F_{p−1, n−p+1}, where F_{a,b} denotes the F random variable with numerator df = a and denominator df = b.
(v) The sample conditional variance of y given X is defined as σ̂_{yy·X} = (w_yy − W′_Xy W_XX^{-1} W_Xy)/(n − p) and is distributed as (σ_{yy·X}/(n − p)) χ²_{n−p}, independently of Q.

We shall now present the confidence intervals that will be considered for comparison.

2.1 The t-interval

The usual t-interval based on the marginal distribution of y is given by

    ȳ ± t_{n−1, 1−α/2} √(s_yy/n),    (9)

where s_yy is the sample variance of y and t_{m,α} denotes the αth quantile of the Student's t distribution with df = m. The expected length of the t-interval is given by

    EL_1 = 2 t_{n−1, 1−α/2} E(√(s_yy/n)) = 2 t_{n−1, 1−α/2} √(2/n) [Γ(n/2)/Γ((n − 1)/2)] √(σ_yy/(n − 1)),    (10)

where Γ(·) denotes the gamma function.
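A short sketch of (9) and (10) follows (helper names are ours), using scipy for the t quantile and log-gamma values to evaluate the ratio of gamma functions stably.

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

def t_interval(y, alpha=0.05):
    """Usual t-interval (9) based on the marginal y observations."""
    n = len(y)
    half = stats.t.ppf(1 - alpha / 2, n - 1) * np.sqrt(np.var(y, ddof=1) / n)
    return y.mean() - half, y.mean() + half

def expected_length_t(n, sigma_yy, alpha=0.05):
    """Expected length EL_1 in (10)."""
    t_q = stats.t.ppf(1 - alpha / 2, n - 1)
    gamma_ratio = np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))  # Gamma(n/2)/Gamma((n-1)/2)
    return 2 * t_q * np.sqrt(2 / n) * gamma_ratio * np.sqrt(sigma_yy / (n - 1))
```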

2.2 The Conditional Interval

We shall now describe the conditional confidence interval due to Halperin (1961). Using the results of Lemma 2.1, it can readily be verified that the conditional distribution of µ̂ given X_1, ..., X_n is normal with mean µ and variance σ_{yy·X}(1 + Q)/n, where Q = n X̄′W_XX^{-1}X̄ is defined in Lemma 2.1(iv). We write

    µ̂ | (X_1, ..., X_n) ~ N(µ, σ_{yy·X}(1 + Q)/n).    (11)

Notice that (n − p)σ̂_{yy·X}/σ_{yy·X} ~ χ²_{n−p}. Using this result, we see that, conditionally given Q, the pivotal quantity

    √n(µ̂ − µ)/√(σ̂_{yy·X}(1 + Q)) ~ t_{n−p}.    (12)

This leads to the conditional confidence interval

    µ̂ ± t_{n−p, 1−α/2} (1 + Q)^{1/2} √(σ̂_{yy·X}/n).    (13)

It follows from Lemma 2.1(iv) that (1 + Q) is distributed as U^{-1}, where U is a beta random variable with parameters (n − p + 1)/2 and (p − 1)/2. Using this result and the fact that X̄′W_XX^{-1}X̄ and σ̂_{yy·X} are independent, it is easy to see that the expected length of the conditional confidence interval in (13) is

    EL_2 = 2 t_{n−p, 1−α/2} √(2/n) [Γ(n/2)/Γ((n − 1)/2)] √(σ_yy(1 − ρ²_{y·X})/(n − p)).    (14)

It should be noted that the formula for EL_2 given in Halperin (1961) is incorrect.

2.3 The Unconditional Confidence Interval

It follows from (12) and Lemma 2.1(iv) that

    T = √n(ȳ(b) − µ)/√σ̂_{yy·X} ~ t_{n−p} (1 + ((p − 1)/(n − p + 1)) F_{p−1, n−p+1})^{1/2},    (15)

where t_{n−p} and F_{p−1, n−p+1} are independent. The percentiles of T can be used to form a 1 − α confidence interval for µ. Using standard methods, it can be shown that the (1 − α)th quantile k of T is the solution of the equation

    [Γ(n/2)/(Γ((p − 1)/2)Γ((n − p + 1)/2))] ∫_0^1 G(k(1 − x)^{1/2}; n − p) x^{(p−1)/2 − 1}(1 − x)^{(n−p+1)/2 − 1} dx = 1 − α,    (16)

where G(·; m) denotes the Student's t cdf with df = m. To get (16), we used the fact that F_{a,b} is distributed as bU/(a(1 − U)), where U is a beta(a/2, b/2) random variable. Noting that the Student's t distribution is symmetric about zero, it follows from (16) that the distribution of T is also symmetric about zero. Let T_α denote the αth quantile of T. Then, the unconditional 1 − α confidence interval for µ is given by

    µ̂ ± T_{1−α/2} √(σ̂_{yy·X}/n).    (17)
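A sketch of the conditional interval (13) and its expected length (14), under the same data layout as the earlier snippets (function names are ours):

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

def conditional_interval(U, alpha=0.05):
    """Halperin's conditional interval (13) computed from an (n, p) sample U."""
    n, p = U.shape
    y = U[:, 0]
    X = U[:, [0]] - U[:, 1:]                     # transformed concomitants
    yc, Xc = y - y.mean(), X - X.mean(axis=0)
    W_XX, W_Xy = Xc.T @ Xc, Xc.T @ yc
    b = np.linalg.solve(W_XX, W_Xy)
    mu_hat = y.mean() - b @ X.mean(axis=0)       # MLE, form (6)
    sig2_hat = (yc @ yc - W_Xy @ b) / (n - p)    # sigma_hat_{yy.X}
    Q = n * X.mean(axis=0) @ np.linalg.solve(W_XX, X.mean(axis=0))
    half = stats.t.ppf(1 - alpha / 2, n - p) * np.sqrt((1 + Q) * sig2_hat / n)
    return mu_hat - half, mu_hat + half

def expected_length_conditional(n, p, sigma_yy, rho2, alpha=0.05):
    """Expected length EL_2 in (14); rho2 is the squared multiple correlation."""
    t_q = stats.t.ppf(1 - alpha / 2, n - p)
    gamma_ratio = np.exp(gammaln(n / 2) - gammaln((n - 1) / 2))
    return 2 * t_q * np.sqrt(2 / n) * gamma_ratio * np.sqrt(sigma_yy * (1 - rho2) / (n - p))
```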

The expected length of the unconditional confidence interval is given by

    EL_3 = 2 T_{1−α/2} E(√(σ̂_{yy·X}/n)) = 2 T_{1−α/2} √(2/n) [Γ((n − p + 1)/2)/Γ((n − p)/2)] √(σ_yy(1 − ρ²_{y·X})/(n − p)).    (18)

Remark 1. Using (16), we computed the values of T_{1−α/2} for α = 0.05 and 0.1, p = 2, 3, 4 and 5, and values of n from 6 onward. These critical values are presented in Table I. We also found that the distribution of T in (15) can be approximated by that of c t_{n−p}, where c = √((n − 2)/(n − p − 1)). The constant c was obtained by solving the equation E(c² t²_{n−p}) = E(T²). Using this approximation, we have T_{1−α/2} ≈ t_{n−p, 1−α/2} √((n − 2)/(n − p − 1)). This approximation is satisfactory as long as n − p is not small.

3. COMPARISON OF EXPECTED LENGTHS

It is clear from the expressions for EL_1, EL_2 and EL_3 that the ratios EL_2/EL_1 and EL_3/EL_1 depend on the parameters only through ρ²_{y·X}. Using this fact, direct comparison between EL_2 and EL_1 shows that the expected length of the conditional confidence interval is shorter than the expected length of the usual t interval based on the y observations alone if and only if

    ρ²_{y·X} > 1 − (t_{n−1, 1−α/2}/t_{n−p, 1−α/2})² (n − p)/(n − 1).    (19)

The above inequality is different from the one given in Halperin (1961, p. 41) because, as we already pointed out, Halperin's formula for the expected length of the conditional interval is incorrect. For fixed p, the right-hand side of (19) approaches zero as n → ∞. This implies that, for large n, EL_2 is smaller than EL_1 for all practically meaningful values of ρ²_{y·X}. However, this does not mean that EL_2 is smaller than the expected length of the shortest of the individual t intervals. The above condition merely implies that EL_2 is smaller than the expected length of the t-interval based on ū_1 = ȳ. For EL_2 to be shorter than the t-interval based on ū_2, we should have

    ρ²_{u_2·(u_2−u_1),...,(u_2−u_p)} > 1 − (t_{n−1, 1−α/2}/t_{n−p, 1−α/2})² (n − p)/(n − 1),

where ρ²_{u_2·(u_2−u_1),...,(u_2−u_p)} is the squared multiple correlation coefficient between U_2 and (U_2 − U_1, ..., U_2 − U_p). Proceeding this way, we see that EL_2 is shorter than the shortest of the t-intervals if and only if

    min{ρ²_{u_1·(u_1−u_2),...,(u_1−u_p)}, ..., ρ²_{u_p·(u_p−u_1),...,(u_p−u_{p−1})}} > 1 − (t_{n−1, 1−α/2}/t_{n−p, 1−α/2})² (n − p)/(n − 1).    (20)

Comparison between EL_2 and EL_3 shows that EL_2/EL_3 < 1 if and only if

    t_{n−p, 1−α/2} Γ(n/2)/Γ((n − 1)/2) < T_{1−α/2} Γ((n − p + 1)/2)/Γ((n − p)/2).    (21)
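Equation (16) can be solved numerically for T_{1−α/2} and compared with the approximation of Remark 1. The sketch below (our code) writes P(T ≤ k) as E_U[G(k√U; n − p)] with U ~ beta((n − p + 1)/2, (p − 1)/2), which is the same integral as (16) after the substitution x = 1 − u.

```python
import numpy as np
from scipy import stats, integrate, optimize

def T_quantile(n, p, prob):
    """Quantile of T in (15), obtained by solving equation (16) numerically."""
    a, b = (n - p + 1) / 2, (p - 1) / 2          # 1/(1 + Q) ~ Beta(a, b)

    def cdf_T(k):
        integrand = lambda u: stats.t.cdf(k * np.sqrt(u), n - p) * stats.beta.pdf(u, a, b)
        value, _ = integrate.quad(integrand, 0.0, 1.0)
        return value

    return optimize.brentq(lambda k: cdf_T(k) - prob, 0.0, 50.0)

n, p, alpha = 20, 3, 0.05
exact = T_quantile(n, p, 1 - alpha / 2)
approx = stats.t.ppf(1 - alpha / 2, n - p) * np.sqrt((n - 2) / (n - p - 1))  # Remark 1
print(exact, approx)    # the two values should be close when n - p is not small
```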

TABLE I
Critical points T_{1−α/2} for constructing unconditional confidence intervals (columns: p = 2, 3, 4, 5 with α = 0.05, 0.10; rows indexed by n).

We numerically evaluated EL_2/EL_1 and EL_3/EL_1 and present them in Table II. It is clear from the table values that EL_2 is in general either very close to EL_3 or smaller than EL_3, and the difference between them decreases as n increases. Thus, we see that the conditional confidence interval is not only simple to construct but also narrower than the unconditional confidence interval. Furthermore, if n − p is small and ρ²_{y·X} is small, then the usual t interval is shorter than both the conditional and the unconditional intervals (see the values in Table II for n = 6, p = 3 and n = 6, p = 5). Thus, the conditional combined method is preferable to the best of the t procedures only when condition (20) holds and/or n − p is moderately large.
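The following sketch (our code) evaluates the two ratios reported in Table II. The common factor 2√(2/n)√σ_yy cancels in the ratios, and the Remark 1 approximation is used for T_{1−α/2}, so the EL_3/EL_1 values are approximate.

```python
import numpy as np
from scipy import stats
from scipy.special import gammaln

def length_ratios(n, p, rho2, alpha=0.05):
    """Ratios EL_2/EL_1 and EL_3/EL_1 of (14) and (18) to (10)."""
    g = lambda a, b: np.exp(gammaln(a) - gammaln(b))          # Gamma(a)/Gamma(b)
    # common factor 2*sqrt(2/n)*sqrt(sigma_yy) is omitted; it cancels in the ratios
    el1 = stats.t.ppf(1 - alpha / 2, n - 1) * g(n / 2, (n - 1) / 2) * np.sqrt(1.0 / (n - 1))
    el2 = stats.t.ppf(1 - alpha / 2, n - p) * g(n / 2, (n - 1) / 2) * np.sqrt((1 - rho2) / (n - p))
    T_q = stats.t.ppf(1 - alpha / 2, n - p) * np.sqrt((n - 2) / (n - p - 1))   # Remark 1
    el3 = T_q * g((n - p + 1) / 2, (n - p) / 2) * np.sqrt((1 - rho2) / (n - p))
    return el2 / el1, el3 / el1

print(length_ratios(n=20, p=3, rho2=0.3))
```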

TABLE II
Ratios EL_2/EL_1 and EL_3/EL_1 of the expected lengths of 95% confidence intervals, as functions of ρ²_{y·X}; p = 2 (n = 6, 10, 15, 20, 30) and p = 3 (n = 6, 10, 20, 30, 40).
Note: ρ_L is the lower bound given on the right-hand side of (20); EL_2 < EL_1 when (20) holds.

TABLE II (continued)
p = 5 (n = 6, 10, 20, 30, 40).
Note: ρ_L is the lower bound given on the right-hand side of (20); EL_2 < EL_1 when (20) holds.

4. POWER FUNCTION

We observed in the preceding section that the conditional method performs better than the unconditional method, and hence we consider only the power function of the conditional test based on (12). Consider the hypotheses H_0: µ ≤ µ_0 vs. H_a: µ > µ_0. The conditional non-null distribution of the test statistic, given Q, is the noncentral t distribution with df = n − p − 1 and noncentrality parameter

    δ(Q) = √n(µ − µ_0)/√(σ_{yy·X}(1 + Q)),    (22)

where µ is the true value and µ_0 is the specified value of the mean. The unconditional power of a right-tailed test can be expressed as

    E_Q[P(t_{n−p−1}(δ(Q)) > t_{n−p−1, 1−α})].    (23)

Again, using the fact that 1 + Q is distributed as the reciprocal of a beta((n − p)/2, p/2) random variable, the power can be computed using the numerical integration

    1 − [Γ(n/2)/(Γ(p/2)Γ((n − p)/2))] ∫_0^1 G(c_1; n − p − 1, δ(u^{-1} − 1)) u^{(n−p)/2 − 1}(1 − u)^{p/2 − 1} du,    (24)

where c_1 = t_{n−p−1, 1−α} and G(x; m, d) denotes the cdf of the noncentral t random variable with df = m and noncentrality parameter d. Although it is not difficult to compute (24), a simple approximate power expression can be obtained from (23), and is given by

    P(t_{n−p−1}(δ(E(Q))) > t_{n−p−1, 1−α}).    (25)
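A sketch of the power computation in (24) and the approximation (25) follows, using scipy's noncentral t cdf (our code, following the formulas as written above; here u plays the role of 1/(1 + Q)).

```python
import numpy as np
from scipy import stats, integrate

def exact_power(n, p, eta, alpha=0.05):
    """Exact power (24) of the right-tailed conditional test; eta = (mu - mu0)/sqrt(sigma_yy.X)."""
    c1 = stats.t.ppf(1 - alpha, n - p - 1)
    a, b = (n - p) / 2, p / 2                         # 1 + Q = 1/u with u ~ Beta(a, b)
    integrand = lambda u: (stats.nct.cdf(c1, n - p - 1, np.sqrt(n) * eta * np.sqrt(u))
                           * stats.beta.pdf(u, a, b))
    value, _ = integrate.quad(integrand, 0.0, 1.0)
    return 1.0 - value

def approx_power(n, p, eta, alpha=0.05):
    """Approximate power (25): delta(Q) evaluated at E(Q)."""
    c1 = stats.t.ppf(1 - alpha, n - p - 1)
    delta1 = np.sqrt(n) * eta * np.sqrt((n - p - 2) / (n - 2))
    return 1.0 - stats.nct.cdf(c1, n - p - 1, delta1)

print(exact_power(20, 3, 0.5), approx_power(20, 3, 0.5))
```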

Noting that E(1 + Q) = (n − 2)/(n − p − 2), for a given level of significance α, p and η = (µ − µ_0)/√σ_{yy·X}, an approximate sample size n required to attain a power of 1 − β satisfies

    P(t_{n−p−1}(δ_1) > t_{n−p−1, 1−α}) = 1 − β,    (26)

where

    δ_1 = (√n(µ − µ_0)/√σ_{yy·X}) ((n − p − 2)/(n − 2))^{1/2}.    (27)

In order to assess the validity of the approximation, we computed the exact power using (24) and the approximate power based on (25) for various values of n, η = (µ − µ_0)/√σ_{yy·X} and p = 2, 3 and 4. These powers are presented in Table III. We see from the table values that the approximate powers are close to the exact powers provided n is moderately large in comparison to p. Our extensive numerical studies for various values of p (not reported here) showed that the approximation is very satisfactory for values of n ≥ 5p. An advantage of this approximation is that it involves only the computation of the noncentral t cdf with a fixed noncentrality parameter (when η, n and p are given), and so the power computation can be carried out using freely available PC calculators such as StatCalc or an online calculator.
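The sample-size calculation in (26)-(27) amounts to a search over n; a minimal sketch under the same conventions as the previous snippet (our code, with hypothetical default power and significance level):

```python
import numpy as np
from scipy import stats

def required_sample_size(p, eta, power=0.90, alpha=0.05, n_max=1000):
    """Smallest n whose approximate power (26)-(27) reaches the target."""
    for n in range(p + 4, n_max + 1):             # need n - p - 2 > 0
        c1 = stats.t.ppf(1 - alpha, n - p - 1)
        delta1 = np.sqrt(n) * eta * np.sqrt((n - p - 2) / (n - 2))
        if 1.0 - stats.nct.cdf(c1, n - p - 1, delta1) >= power:
            return n
    return None

print(required_sample_size(p=3, eta=0.75))
```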

TABLE III
Exact powers (24) and approximate powers (25) of the conditional test when α = 0.05; p = 2 (n = 6, 8, 12, 16, 20), p = 3 (n = 8, 12, 16, 20, 24) and p = 4 (n = 8, 12, 16, 20, 24); rows indexed by η = (µ − µ_0)/√σ_{yy·X}.

5. AN EXAMPLE

To illustrate the methods of this paper, we generated a sample of 20 observations from N_3(eµ, Σ), where µ = 4 and Σ is a specified 3 × 3 covariance matrix. The data points are given in Table IV.

From the data we computed the summary statistics (ȳ, x̄_1, x̄_2), w_yy, W_Xy, W_XX and W_XX^{-1}, the estimates µ̂, σ̂_{yy·X} and ρ̂²_{y·X}, and the standard deviations s_{u_1} = 1.747, s_{u_2} and s_{u_3}. The critical points are t_{19, 0.975} = 2.0930 and T_{0.975} (Table I, n = 20, p = 3). Using these statistics, we computed the following confidence intervals for µ.

TABLE IV
Simulated data; n = 20, p = 3. Columns: u_1, u_2, u_3 and the transformed values y = u_1, x_1 = u_1 − u_2, x_2 = u_1 − u_3.

The 95% t-intervals are
(a) ū_1 ± t_{19, 0.975} s_{u_1}/√n, with sample squared multiple correlation ρ̂²_{u_1·(u_1−u_2, u_1−u_3)};
(b) ū_2 ± t_{19, 0.975} s_{u_2}/√n, with ρ̂²_{u_2·(u_2−u_1, u_2−u_3)};
(c) ū_3 ± t_{19, 0.975} s_{u_3}/√n, with ρ̂²_{u_3·(u_3−u_2, u_3−u_1)}.
We also computed the conditional interval in (13) and the unconditional interval in (17). The interval (b) is the shortest among all of these intervals.
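The example's workflow, computing the individual t-intervals and the conditional interval and then selecting the shortest, can be reproduced on simulated data along the following lines (our code; the covariance matrix is illustrative, not the one used in the paper, and the conditional-interval helper repeats the sketch given after (13) so the block is self-contained).

```python
import numpy as np
from scipy import stats

def conditional_interval(U, alpha=0.05):
    """Conditional interval (13); same computation as the sketch after (13)."""
    n, p = U.shape
    y, X = U[:, 0], U[:, [0]] - U[:, 1:]
    yc, Xc = y - y.mean(), X - X.mean(axis=0)
    W_XX, W_Xy = Xc.T @ Xc, Xc.T @ yc
    b = np.linalg.solve(W_XX, W_Xy)
    mu_hat = y.mean() - b @ X.mean(axis=0)
    sig2_hat = (yc @ yc - W_Xy @ b) / (n - p)
    Q = n * X.mean(axis=0) @ np.linalg.solve(W_XX, X.mean(axis=0))
    half = stats.t.ppf(1 - alpha / 2, n - p) * np.sqrt((1 + Q) * sig2_hat / n)
    return mu_hat - half, mu_hat + half

def all_intervals(U, alpha=0.05):
    """Individual t-intervals for each component mean plus the conditional interval (13)."""
    n, p = U.shape
    t_q = stats.t.ppf(1 - alpha / 2, n - 1)
    out = {}
    for j in range(p):
        m, s = U[:, j].mean(), U[:, j].std(ddof=1)
        out[f"t-interval for u{j + 1}"] = (m - t_q * s / np.sqrt(n), m + t_q * s / np.sqrt(n))
    out["conditional (13)"] = conditional_interval(U, alpha)
    return out

rng = np.random.default_rng(7)
Sigma = np.array([[2.0, 1.0, 0.8], [1.0, 1.5, 0.6], [0.8, 0.6, 1.2]])   # illustrative only
U = rng.multivariate_normal(4.0 * np.ones(3), Sigma, size=20)
for name, (lo, hi) in all_intervals(U).items():
    print(f"{name}: ({lo:.3f}, {hi:.3f}), length {hi - lo:.3f}")
```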

We also note that, for the conditional interval to be the shortest, we must have

    min{ρ̂²_{u_1·(u_1−u_2),(u_1−u_3)}, ρ̂²_{u_2·(u_2−u_1),(u_2−u_3)}, ρ̂²_{u_3·(u_3−u_1),(u_3−u_2)}} > ρ_L,    (28)

where ρ_L is the lower bound on the right-hand side of (20) (see Table II, n = 20, p = 3). Since the minimum of the sample squared multiple correlation coefficients does not exceed ρ_L, we do not have any evidence in favor of (28). Therefore, as already observed, the conditional approach did not produce the shortest interval. We also see that, among all the point estimates, the MLE is very close to the true mean µ = 4.

6. CONCLUDING REMARKS

We observed from the preceding sections that the conditional approach is not only simple to use but also better than the unconditional method for constructing a confidence interval for the common mean µ. Furthermore, if the sample size is sufficiently large, then the conditional approach may yield better results than those based on the individual t procedures. For fixed p and α = 0.05, we computed the least value of n for which

    ρ_L = 1 − (t_{n−1, 1−α/2}/t_{n−p, 1−α/2})² (n − p)/(n − 1) < 0.05.

Based on a linear fit of these pairs of (n, p), we found that ρ_L < 0.05 for any n > 20p − 15. This implies that n must be at least 20p − 15 for the conditional approach to offer an improvement over the best of the t procedures for any ρ²_{y·X} > 0.05. For moderate sample sizes, to check whether the conditional combined approach is superior to the best of the individual t methods, one should check whether the minimum of the squared sample multiple correlation coefficients is greater than ρ_L = 1 − (t_{n−1, 1−α/2}/t_{n−p, 1−α/2})² (n − p)/(n − 1). It is difficult to obtain an exact test to verify this condition. Therefore, in practice one may compute all the t intervals based on the individual components as well as the conditional confidence interval, and then choose the shortest of these intervals for applications.

Acknowledgement

The authors are thankful to a referee for reviewing this article.
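As a quick numerical aid to the check described above, the bound ρ_L and the smallest n for which it drops below 0.05 can be computed as follows (our code; the last column prints the 20p − 15 rule of thumb for comparison).

```python
import numpy as np
from scipy import stats

def rho_L(n, p, alpha=0.05):
    """Lower bound on the right-hand side of (20)."""
    ratio = stats.t.ppf(1 - alpha / 2, n - 1) / stats.t.ppf(1 - alpha / 2, n - p)
    return 1.0 - ratio**2 * (n - p) / (n - 1)

def least_n(p, target=0.05, alpha=0.05, n_max=2000):
    """Smallest n with rho_L(n, p) below `target`."""
    for n in range(p + 2, n_max + 1):
        if rho_L(n, p, alpha) < target:
            return n
    return None

for p in (2, 3, 4, 5):
    print(p, least_n(p), 20 * p - 15)
```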

References

Berry, C. J. (1987). Equivariant estimation of a normal mean using a normal concomitant variable for covariance adjustment. The Canadian Journal of Statistics, 15.

Cohen, A. and Sackrowitz, H. B. (1984). Testing hypotheses about the common mean of normal distributions. Journal of Statistical Planning and Inference, 9.

Halperin, M. (1961). Almost linearly-optimum combination of unbiased estimates. Journal of the American Statistical Association, 56.

Jin, C. and Berry, C. J. (1993). Equivariant estimation of a normal mean vector using a normal concomitant vector for covariance adjustment. Communications in Statistics - Theory and Methods.

Krishnamoorthy, K. and Rohatgi, V. K. (1990). Unbiased estimation of the common mean of a multivariate normal distribution. Communications in Statistics - Theory and Methods, 19.

Krishnamoorthy, K. and Lu, Y. (2003). Inferences on the common mean of several normal populations based on the generalized variable method. Biometrics, 59.

Muirhead, R. J. (1982). Aspects of Multivariate Statistical Theory. Wiley, New York.

Tan, M. and Gleser, L. J. (1993). Improved point and confidence interval estimators of mean response in simulation when control variates are used. Communications in Statistics - Simulation and Computation.

Yu, P. L. H., Sun, Y. and Sinha, B. K. (1999). On exact confidence intervals for the common mean of several normal populations. Journal of Statistical Planning and Inference, 81.

Zhou, L. and Mathew, T. (1993). Combining independent tests in linear models. Journal of the American Statistical Association, 88.
