A Large Sample Normality Test


Faculty Working Paper 93-0171

A Large Sample Normality Test

Anil K. Bera, Department of Economics, University of Illinois
Pin T. Ng, Department of Economics, University of Houston, TX

Bureau of Economic and Business Research
College of Commerce and Business Administration
University of Illinois at Urbana-Champaign



A LARGE SAMPLE NORMALITY TEST

ANIL K. BERA and PIN T. NG
Department of Economics, University of Illinois, Champaign, IL 61820
Department of Economics, University of Houston, TX 77204-5882

November 22, 1993

Abstract

The score function, defined as the negative logarithmic derivative of the probability density function, plays a ubiquitous role in statistics. Since the score function of the normal distribution is linear, testing normality amounts to checking the linearity of the empirical score function. Using the score function, we present a graphical alternative to the Q-Q plot for detecting departures from normality. Even though graphical approaches are informative, they lack the objectivity of formal testing procedures. We therefore supplement our graphical approach with a formal large sample chi-square test. Our graphical approach is then applied to a wide range of alternative data generating processes. The finite sample size and power performances of the chi-square test are investigated through a small scale Monte Carlo study.

KEY WORDS: Normality test; Score function; Graphical approach

1 Introduction

Since Geary's (1947) suggestion of putting the statement "Normality is a myth. There never was, and never will be, a normal distribution" in front of all statistical texts, the need to test the normality assumption in many statistical models has been widely acknowledged. As a result, a wide range of tests for normality is currently available. Most of these tests fall into the following categories: (1) tests based on probability or Q-Q plots, (2) moment tests, (3) distance tests based on the empirical distribution function, (4) goodness-of-fit tests, and (5) tests based on the empirical characteristic function. No single test statistic can reveal as much information as a graphical display.
In Section 2, we present a graphical alternative to the Q-Q plot using the score function, defined as the negative logarithmic derivative of the probability density function. Even though graphical approaches are informative, they lack the objectivity of formal testing procedures. We therefore supplement our graphical approach with a formal large sample χ² test based on the score function in Section 3. The performances of our graphical approach and our score function based χ² test depend on our ability to estimate the score function accurately. We review some score function estimators in Section 4.

In Jarque and Bera (1987), a moment test was shown to possess superior power compared to most other normality tests. Their moment test utilizes the normal distribution's skewness measure √b₁ = 0 and kurtosis measure b₂ = 3. As a result, under non-normal distributions whose skewness and kurtosis measures are identical to those of the normal distribution, moment tests based on √b₁ and b₂ will have no power. Among such distributions are Tukey's λ distributions with λ = 0.135 and 5.2 [see Joiner and Rosenblatt (1971)]. Moment based tests also have power against only certain alternatives. Our score function based χ² test, on the other hand, does not have this disadvantage. The superior power of our score function based χ² test is demonstrated in a small scale Monte Carlo study in Section 5.

2 A Graphical Approach

The score function of a random variable having probability density function f(x), defined as ψ(x) = −(log f(x))′ = −f′(x)/f(x), plays a ubiquitous role in statistics. It is related to the construction of L-, M- and R-estimators for location and scale models as well as regression models in the robustness literature [see Joiner and Hall (1983) for an excellent overview]. It is also used in constructing various adaptive L-, M- and R-estimators which achieve the Cramér-Rao efficiency bounds asymptotically [see Koenker (1982)]. It can also be used to estimate the Fisher information. In hypothesis testing, the score function plays a crucial role in robustifying conventional testing procedures [see Bickel (1978) and Bera and Ng (1992)]. Its fundamental contribution to statistics, however, can best be seen in the realm of exploratory data analysis. The plots of the density and score functions of some common distributions are presented in Figure 1 and Figure 2 respectively. While it is difficult to differentiate the tails of a Gaussian distribution from those of a Cauchy distribution through the density functions, the tails of their score functions are very distinct.
In fact, we can easily distinguish among various distributions by investigating their score functions. It is clear from Figure 1 and Figure 2 that the mode of a distribution is characterized by an upward crossing of the score function at the horizontal axis, while an anti-mode is located at a point of downward crossing. An exponential distribution has a horizontal score function. A tail thicker than the exponential has a negatively sloped score, while a tail thinner than the exponential corresponds to an upward sloping score. A Gaussian distribution has a linear score function passing through the horizontal axis at its location parameter with a slope equal to the reciprocal of its variance. This suggests an alternative to the familiar and popular probability or Q-Q plot. An estimated score function with a redescending tail towards the horizontal axis indicates departure towards distributions with thicker tails than the normal distribution, while a diverging tail suggests departure in the direction of thinner tailed distributions. We can even recover an estimate of the density function by exponentiating the negative integral of the estimated score function, although this may seem to be a roundabout approach.
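For illustration, the score functions of two of the distributions in Figure 2 can be written down in closed form. The following sketch (the function names and evaluation grid are our own) contrasts the diverging tail of the normal score with the redescending tail of the Cauchy score:

```python
import numpy as np

# Score function psi(x) = -f'(x)/f(x) for two distributions from Figure 2.

def score_normal(x, mu=0.0, sigma=1.0):
    # N(mu, sigma^2): psi is linear, crossing zero at mu with slope 1/sigma^2.
    return (x - mu) / sigma**2

def score_cauchy(x):
    # Cauchy(0,1): f(x) is proportional to 1/(1 + x^2), so psi(x) = 2x/(1 + x^2).
    return 2.0 * x / (1.0 + x**2)

x = np.linspace(-10.0, 10.0, 401)
psi_n, psi_c = score_normal(x), score_cauchy(x)
# The normal score diverges linearly in the tails; the Cauchy score
# redescends toward the horizontal axis, signalling thick tails.
```

Plotting `psi_n` and `psi_c` against `x` reproduces the qualitative contrast between the N(0,1) and Cauchy(0,1) panels of Figure 2.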

Figure 1: Probability Density Functions [panels: N(0,1), t(5), Cauchy(0,1), DouExp(0,1), Logis(0,.5); Extreme(0,1), Unif(0,1), Exp(1), Lnorm(0,1), Beta(3,2); Gamma(2,1), Weibull(5,2), Pareto(1,1), F(5,5), ChiSq(3)]

Figure 2: Score Functions [panels: same fifteen distributions as Figure 1]

3 A Formal Test

A formal "objective" test of the null hypothesis of a Gaussian distribution is equivalent to testing the linearity of the score function. Since a straight line can be viewed as a first order approximation to any polynomial, the normality test can easily be carried out through an asymptotic χ² test of regressing the estimated score function ψ̂(x_i) on some polynomial of x_i. The null hypothesis of a Gaussian distribution corresponds to a linear relationship between ψ̂(x_i) and x_i. When the null hypothesis of a Gaussian distribution cannot be rejected, we can estimate the location parameter by the point at which the ordinary least squares regression line intersects the horizontal axis, and the estimate of the scale parameter will be the square root of the reciprocal of the regression slope.

4 Estimating the Score Function

The performances of the above graphical approach and formal χ² test rely on accurate estimates of the score function. Numerous score function estimators are available, most of which are constructed from kernel density estimators [see Stone (1975), Manski (1984) and Cox and Martin (1988)]. Csörgő and Révész (1983) used a nearest-neighbor approach. Cox (1985) proposed a smoothing spline version, which is further refined and implemented in Ng (1994). It has often been argued that the choice of kernel is not crucial in kernel density estimation. The correct choice of kernel, however, becomes important in the tails, where density is low and few observations are available to help smooth things out. This sensitivity to kernel choice is further amplified in score function estimation, where higher derivatives of the density function are involved [see Portnoy and Koenker (1989), and Ng (1994)]. Ng (1994) found that the smoothing spline score estimator, which finds its theoretical justification in an explicit mean squared error minimization criterion, is more robust than the kernel estimators to distributional variations.
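To fix ideas, a basic kernel score estimator of the kind reviewed above can be sketched by estimating f and f′ with the same Gaussian kernel and forming ψ̂ = −f̂′/f̂. This is an illustrative sketch only, not the smoothing spline estimator adopted in this paper; the function name and the Silverman rule-of-thumb bandwidth are our own choices.

```python
import numpy as np

def kernel_score(x_eval, data, h=None):
    """Kernel estimate of the score psi(x) = -f'(x)/f(x).

    Both f and f' are estimated with the same Gaussian kernel and
    bandwidth h (Silverman's rule of thumb by default).
    """
    data = np.asarray(data, dtype=float)
    n = data.size
    if h is None:
        h = 1.06 * data.std() * n ** (-0.2)   # Silverman's rule of thumb
    u = (np.asarray(x_eval, dtype=float)[:, None] - data[None, :]) / h
    k = np.exp(-0.5 * u**2)                   # unnormalized Gaussian kernel
    f_hat = k.sum(axis=1)                     # proportional to f-hat(x)
    fp_hat = (-u / h * k).sum(axis=1)         # proportional to f-hat'(x)
    return -fp_hat / f_hat                    # constants cancel in the ratio

rng = np.random.default_rng(1)
sample = rng.standard_normal(500)
grid = np.linspace(-2.0, 2.0, 9)
psi_hat = kernel_score(grid, sample)
# For N(0,1) data, psi_hat should be roughly linear with slope near one;
# the estimate deteriorates in the low-density tails, as discussed above.
```

Note how the common normalizing constants of f̂ and f̂′ cancel in the ratio, so only kernel sums are needed.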
We use this smoothing spline score estimator in the paper. It is the solution to

    min_{ψ ∈ H²[a,b]}  ∫(ψ² − 2ψ′) dF_n + λ ∫ [ψ″(x)]² dx    (1)

where H²[a,b] = {ψ : ψ, ψ′ are absolutely continuous, and ∫_a^b [ψ″(x)]² dx < ∞}. The objective function (1) is the (penalized) empirical analogue of minimizing the following mean-squared error:

    ∫(ψ − ψ₀)² dF₀ = ∫(ψ² − 2ψ′) dF₀ + ∫ ψ₀² dF₀    (2)

in which ψ₀ is the unknown true score function and the equality is due to the fact that, under some mild regularity conditions [see Cox (1985)],

    ∫ ψ₀ ψ dF₀ = − ∫ f₀′(x) ψ(x) dx = ∫ ψ′ dF₀.

Since the second term on the right hand side of (2) is independent of ψ, minimizing the mean-squared error may focus exclusively on the first term. Minimizing (1) yields a

balance between "fidelity-to-data," measured by the mean-squared error term, and smoothness, represented by the second term. As in any nonparametric score function estimator, the smoothing spline score estimator has a penalty parameter λ to choose. The penalty parameter controls the trade-off between "fidelity-to-data" and smoothness of the estimated score function. We use the automated penalty parameter choice mechanism, the adaptive information criterion, suggested and implemented in Ng (1994) [see Ng (1991) for the FORTRAN source code].

Figure 3: Estimated Score Functions [panels: N(0,1), t(5), DouExp(0,1); Beta(3,2), Gamma(2,1), Lnorm(0,1); each panel shows the true score and the estimated score]

5 Some Examples and Simulation Results

In Figure 3, we present the smoothing spline estimated score functions, each based on 100 random observations drawn from some of the distributions in Figure 2. The random number generator was Marsaglia's Super-Duper generator available in "S" [Becker, Chambers and Wilks (1988)] installed on a Sun SPARCstation 10. The smoothing spline score estimator was Ng's (1991) FORTRAN version adapted for "S". It is obvious from Figure 3 that departures from the Gaussian distribution can be easily detected from the plots. To study the finite sample properties of our score function based χ² test and the moment based LM test of Jarque and Bera (1987), we perform a small scale Monte Carlo study. The LM test was shown in Jarque and Bera (1987) to possess very good power as compared to the skewness measure test √b₁, the kurtosis measure

test b₂, D'Agostino's (1971) D* test, Pearson, D'Agostino and Bowman's (1977) R test, Shapiro and Wilk's (1965) W test, and Shapiro and Francia's (1972) W test against the various alternative distributions investigated. As a result, we use it as our benchmark to evaluate the performance of our χ² test. The null distribution here is the standard normal distribution and the alternatives are Gamma(2,1), Beta(3,2), Student's t(5) and Tukey's λ distribution with λ = 5.2. All distributions are standardized to have zero mean and variance twenty-five. Our χ² test is obtained by running the regression

    ψ̂(x_i) = γ₀ + γ₁ x_i + γ₂ x_i² + γ₃ x_i³ + γ₄ x_i⁴ + γ₅ x_i⁵ + γ₆ x_i⁶ + ε_i

and testing H₀ : γ₂ = γ₃ = γ₄ = γ₅ = γ₆ = 0. The χ² test statistic is then given by

    (RSS_r − RSS) / [RSS/(N − 7)],

which is asymptotically distributed as χ²₅ under H₀, where RSS_r is the restricted residual sum of squares obtained from regressing ψ̂(x_i) on the intercept and x_i alone, RSS is the residual sum of squares of the whole regression, and N is the sample size. The LM test is given by

    LM = N [ (√b₁)²/6 + (b₂ − 3)²/24 ].

Under the null hypothesis of normality, LM is asymptotically distributed as a χ²₂. The estimated sizes and powers of the LM and χ² tests in 1000 Monte Carlo replications are reported in Table 1. The standard errors of the estimated probabilities are no greater than √(0.25/1000) ≈ .016. The sample sizes considered are 25, 50, 100 and 250. The performances of the χ² test from regressing ψ̂(x_i) on higher order polynomials of x_i were also investigated; the results are similar, so we choose not to report them here. Under the Gaussian distribution, the estimated probabilities of Type I error are computed from the true χ² critical points. From Table 1, we can see that the estimated Type I errors of the χ² test are much closer than those of the LM test to the nominal value of .10 for all sample sizes. The LM test underestimated the size of the test at all the sample sizes we investigated.
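The regression step of the χ² test above can be sketched as follows. For concreteness we feed it a stand-in for ψ̂: the true N(0,1) score plus noise (a real application would use the smoothing spline estimate); the 10% χ²₅ critical value of 9.24 is taken from standard tables.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 250
x = rng.standard_normal(N)
# Stand-in for the estimated score: true N(0,1) score plus estimation noise.
psi_hat = x + 0.1 * rng.standard_normal(N)

def rss(X, y):
    """Residual sum of squares from an OLS fit of y on the columns of X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return resid @ resid

X_full = np.vander(x, 7, increasing=True)   # columns 1, x, ..., x^6
RSS = rss(X_full, psi_hat)                  # unrestricted regression
RSS_r = rss(X_full[:, :2], psi_hat)         # restricted: intercept and x only

# Test statistic (RSS_r - RSS) / (RSS / (N - 7)), asymptotically chi^2_5.
chi2_stat = (RSS_r - RSS) / (RSS / (N - 7))
reject = chi2_stat > 9.24   # 10% critical value of chi^2_5 (from tables)
```

Because the stand-in score is linear in x apart from noise, the statistic should typically be small here; under a non-normal alternative the polynomial terms pick up the curvature of ψ̂ and the statistic grows.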
To make a valid power comparison, we size-adjust the power under all alternative distributions. The empirical significance level we use is 10%. At the smaller sample sizes of N = 25 and 50, the LM test has higher power than the χ² test under Gamma(2,1), Log(0,1) and t(5). The discrepancies, however, become less prominent as the sample size increases. This is due to the fact that the score functions of both Log(0,1) and Gamma(2,1) are approximately linear in the high density regions, as can be seen from Figure 2. More observations in the tails are needed to facilitate estimation of score functions that are distinguishable from the linear Gaussian score. The situation is similar for the Student's t(5). However, an even bigger sample size will probably be needed for some realizations in the tails to discern the estimated score function of the Student's t from that of the Gaussian. As expected, the χ² test has some power for λ(5.2), and this power increases rapidly with the sample size. The χ² test also performs better for the Beta(3,2) alternative. The LM test has power even lower than its size for the Tukey's λ alternative at all sample sizes.
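For reference, the moment based LM statistic of Jarque and Bera (1987) used as the benchmark above can be computed directly from sample moments (a minimal sketch; the function name is ours):

```python
import numpy as np

def jarque_bera_lm(x):
    """LM = N * [ b1/6 + (b2 - 3)^2 / 24 ], asymptotically chi^2_2 under
    normality, where sqrt(b1) is the sample skewness and b2 the kurtosis."""
    x = np.asarray(x, dtype=float)
    n = x.size
    z = x - x.mean()
    m2 = (z**2).mean()
    b1 = (z**3).mean() ** 2 / m2**3          # squared skewness
    b2 = (z**4).mean() / m2**2               # kurtosis
    return n * (b1 / 6.0 + (b2 - 3.0) ** 2 / 24.0)

rng = np.random.default_rng(2)
lm_gauss = jarque_bera_lm(rng.standard_normal(1000))        # modest under H0
lm_heavy = jarque_bera_lm(rng.standard_t(df=5, size=1000))  # heavy tails
```

Comparing either value to the 10% χ²₂ critical value (about 4.61) gives the accept/reject decision used in the Monte Carlo study.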

Table 1: Estimated Powers for 1000 Replications (Empirical size = .10)

    Sample Size   Distribution    LM      χ²
    N = 25        Gaussian       .044    .126
                  Beta(3,2)      .063    .213
                  Gamma(2,1)     .700    .625
                  t(5)           .352    .163
                  Log(0,1)       .962    .795
                  λ(5.2)         .09     .152
    N = 50        Gaussian       .059    .111
                  Beta(3,2)      .154    .370
                  Gamma(2,1)     .944    .872
                  t(5)           .517    .260
                  Log(0,1)      1.00     .927
                  λ(5.2)         .061    .334
    N = 100       Gaussian       .067    .099
                  Beta(3,2)      .433    .548
                  Gamma(2,1)     .998    .990
                  t(5)           .699    .347
                  Log(0,1)      1.00    1.00
                  λ(5.2)         .019    .641
    N = 250       Gaussian       .085    .091
                  Beta(3,2)      .980    .901
                  Gamma(2,1)    1.00    1.00
                  t(5)           .933    .701
                  Log(0,1)      1.00    1.00
                  λ(5.2)         .011    .980

Based on our examples and simulation results, we conclude that the estimated score function is informative in performing exploratory data analysis. It also allows us to formulate a formal large sample test for normality that possesses reasonable size and good power properties in finite sample situations.

Acknowledgement

The authors would like to thank Roger Koenker and Robin Sickles for their helpful suggestions and incisive comments. The computations were performed on computing facilities supported by National Science Foundation Grant SES 89-22472.

References

[1] Bera, A.K. and Ng, P.T. (1992), "Robust Tests for Heteroskedasticity and Autocorrelation Using Score Function," Tilburg University, CentER for Economic Research, Discussion Paper No. 9245.

[2] Bickel, P.J. (1978), "Using Residuals Robustly I: Tests for Heteroscedasticity, Nonlinearity," The Annals of Statistics, 6, 266-291.

[3] Cox, D.D. (1985), "A Penalty Method for Nonparametric Estimation of the Logarithmic Derivative of a Density Function," Annals of the Institute of Statistical Mathematics, 37, 271-288.

[4] Cox, D.D. and Martin, D.R. (1988), "Estimation of Score Functions," Technical Report, University of Washington.

[5] Csörgő, M. and Révész, P. (1983), "An NN-estimator for the Score Function," Seminarbericht, Sektion Mathematik.

[6] D'Agostino, R.B. (1971), "An Omnibus Test for Normality for Moderate and Large Size Samples," Biometrika, 58, 341-348.

[7] Geary, R.C. (1947), "Testing for Normality," Biometrika, 34, 209-242.

[8] Jarque, C.M. and Bera, A.K. (1987), "A Test for Normality of Observations and Regression Residuals," International Statistical Review, 55, 163-172.

[9] Joiner, B.L. and Hall, D.L. (1983), "The Ubiquitous Role of f′/f in Efficient Estimation of Location," The American Statistician, 37, 128-133.

[10] Joiner, B.L. and Rosenblatt, J.R.
(1971), "Some Properties of the Range in Samples from Tukey's Symmetric Lambda Distributions," Journal of the American Statistical Association, 66, 394-399.

[11] Koenker, R. (1982), "Robust Methods in Econometrics," Econometric Reviews, 1, 214-255.

[12] Manski, C.F. (1984), "Adaptive Estimation of Non-linear Regression Models," Econometric Reviews, 3, 145-194.

[13] Ng, Pin T. (1991), "Computing Smoothing Spline Score Estimator," working paper.

[14] Ng, Pin T. (1994), "Smoothing Spline Score Estimation," SIAM Journal on Scientific Computing, forthcoming.

[15] Pearson, E.S., D'Agostino, R.B. and Bowman, K.O. (1977), "Tests for Departure from Normality: Comparison of Powers," Biometrika, 64, 231-246.

[16] Portnoy, S. and Koenker, R. (1989), "Adaptive L-Estimation of Linear Models," The Annals of Statistics, 17, 362-381.

[17] Shapiro, S.S. and Wilk, M.B. (1965), "An Analysis of Variance Test for Normality (Complete Samples)," Biometrika, 52, 591-611.

[18] Shapiro, S.S. and Francia, R.S. (1972), "An Approximate Analysis of Variance Test for Normality," Journal of the American Statistical Association, 67, 215-216.

[19] Stone, C.J. (1975), "Adaptive Maximum Likelihood Estimators of a Location Parameter," The Annals of Statistics, 3, 267-284.