(REFEREED RESEARCH)

COMPARISON OF THE ESTIMATORS OF THE LOCATION AND SCALE PARAMETERS UNDER THE MIXTURE AND OUTLIER MODELS VIA SIMULATION

Hakan S. Sazak 1,*, Hülya Yılmaz 2

1 Ege University, Department of Statistics, İzmir, Turkey
2 Eskişehir Osmangazi University, Department of Biostatistics and Medical Informatics, Eskişehir, Turkey

Received: 08.12.2014  Accepted: 04.05.2015

Abstract: Since the well-known estimators of the location and scale parameters, the sample mean and the sample standard deviation, are not robust to deviations from normality, quite a few robust estimators have been proposed by many authors. There are some studies in the literature investigating the robustness of various methods through simulation, but they generally focus on the performance of estimators of the location parameter. In this study, we compare the performance of two types of Huber's M estimators (w24 and BS82), the modified maximum likelihood (MML) estimators, and the sample median and the scaled median absolute deviation (MAD*) with the sample mean and the sample standard deviation via simulation under the mixture and outlier models. Based on the simulation results, for the estimation of the location parameter we can suggest the general usage of Huber's M estimators. For the estimation of the scale parameter, the MML estimator can be used unless the sample size and the extremity of contamination (k) are large; in such situations the sample standard deviation should be preferred.

Keywords: Modified Maximum Likelihood; Robustness; M Estimators; Mixture Model; Outlier Model

1. INTRODUCTION

The most well-known estimators of the location and scale parameters are the sample mean and the sample standard deviation, respectively. They have optimal properties under normality, but they are not robust: they lose a considerable amount of efficiency in the case of deviations from normality or in the presence of outliers [1-3]. The assumption of normality may not be realistic for various real life data sets [3]. Ignoring a violation of the normality assumption can end up with inefficient estimation of the parameters, which may lead to a wrong analysis and interpretation of the situation. Quite a few robust estimators have been proposed to

* Corresponding Author: Tel: +90 232 3111725  Fax: +90 232 3881890  E-mail: hakan.savas.sazak@ege.edu.tr

alleviate this problem. Wilcox [3] gave the definitions and properties of a variety of estimators in detail. Wilcox [3], Özdemir [4], and Wilcox and Özdemir [5] performed simulation studies to compare the efficiencies of several estimators of the location parameter for different distributions and models. As a common result, they found that no estimator of the location parameter is the best in all situations, but the sample mean is clearly the least efficient estimator unless the distribution is normal. This is an expected result, since the sample mean is known to be very sensitive to deviations from normality [1].

In this study, we first introduce the most popular estimators of the location and scale parameters and then compare their performance through a Monte Carlo simulation study under several situations. In detail, two types of Huber's M estimators (w24 and BS82), the modified maximum likelihood (MML) estimators, and the sample median and the scaled median absolute deviation (MAD*) are compared with the sample mean and the sample standard deviation under normal and non-normally distributed data sets for various sample sizes. Non-normal conditions are provided by different mixture and outlier models.

The paper is organized as follows. We give the descriptions of the mentioned estimators of the location and scale parameters, and of the mixture and outlier models, in Section 2. Section 3 contains the results of the simulations performed to compare the efficiencies of the introduced methods. The final section includes some concluding remarks and suggestions.

2. METHODOLOGY

The usual estimators of the location and scale parameters are the sample mean and the sample standard deviation, which are, respectively,

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i \quad \text{and} \quad s = \sqrt{\frac{1}{n-1}\sum_{i=1}^{n}\left(x_i - \bar{x}\right)^2}.$$
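As a point of reference for the estimators introduced below, these two classical estimators take only a few lines of code. This is a minimal sketch in Python/NumPy (the authors used MATLAB); `ddof=1` gives the n−1 divisor of the formula above.

```python
import numpy as np

x = np.array([4.1, 5.0, 3.8, 4.6, 4.4])  # illustrative data
xbar = np.mean(x)        # sample mean
s = np.std(x, ddof=1)    # sample standard deviation with the n-1 divisor
```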

A. Huber's M Estimators

Let $y_1, y_2, \ldots, y_n$ be a random sample from a distribution of the type $(1/\sigma)f((y-\mu)/\sigma)$. Huber [6] assumed that $f$ is unknown but a long-tailed symmetric distribution (kurtosis > 3), and proposed a new method to estimate the location and scale parameters. Gross [7] investigated 25 estimators of $\mu$ and $\sigma$ out of the 65 estimators discussed by Andrews et al. [8] and recommended three of them, namely, the wave estimators w24, the bisquare estimators BS82 and the Hampel estimators H22 [1]. In this study, the w24 and BS82 estimators were used for comparison. The pairs of equations for w24 and BS82, respectively, are shown below. Let

$$T_0 = \text{median}(y_i), \quad S_0 = \text{median}\left|y_i - T_0\right|, \quad z_i = \frac{y_i - T_0}{hS_0}, \quad 1 \le i \le n.$$

For w24,

$$\hat{\mu}_{w24} = T_0 + (hS_0)\tan^{-1}\left(\frac{\sum_i \sin z_i}{\sum_i \cos z_i}\right) \quad \text{and} \quad \hat{\sigma}_{w24} = (hS_0)\,\frac{\left[n\sum_i \sin^2(z_i)\right]^{1/2}}{\sum_i \cos(z_i)},$$

where h = 2.4. For BS82,

$$\hat{\mu}_{BS82} = T_0 + (hS_0)\,\frac{\sum_i \psi(z_i)}{\sum_i \psi'(z_i)} \quad \text{and} \quad \hat{\sigma}_{BS82} = (hS_0)\,\frac{\left[n\sum_i \psi^2(z_i)\right]^{1/2}}{\sum_i \psi'(z_i)},$$

where

$$\psi(z) = \begin{cases} z(1-z^2)^2, & |z| \le 1 \\ 0, & |z| > 1 \end{cases} \qquad \text{and} \qquad \psi'(z) = 1 - 6z^2 + 5z^4 \ \ (|z| \le 1),$$

and h = 8.2.

Remark: Gross [7] tried various h coefficients for the wave and bisquare estimators and, based on Monte Carlo simulations, recommended h = 2.4 for w24 and h = 8.2 for BS82, since the estimators possess both high efficiency and robustness for these coefficients.
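The formulas above translate directly into code. The following is a minimal Python sketch of the one-step form given in the text (Gross's original procedures may iterate these steps; the function name and interface here are our own, not the authors' MATLAB code):

```python
import numpy as np

def huber_m(y, kind="w24"):
    """One-step wave (w24) or bisquare (BS82) estimates of location and
    scale, computed as in the formulas of Section 2.A."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    T0 = np.median(y)
    S0 = np.median(np.abs(y - T0))       # unscaled MAD
    h = 2.4 if kind == "w24" else 8.2
    z = (y - T0) / (h * S0)
    if kind == "w24":
        mu = T0 + h * S0 * np.arctan(np.sin(z).sum() / np.cos(z).sum())
        sigma = h * S0 * np.sqrt(n * (np.sin(z) ** 2).sum()) / np.cos(z).sum()
    else:                                 # BS82
        inside = np.abs(z) <= 1
        psi = np.where(inside, z * (1 - z ** 2) ** 2, 0.0)
        dpsi = np.where(inside, 1 - 6 * z ** 2 + 5 * z ** 4, 0.0)
        mu = T0 + h * S0 * psi.sum() / dpsi.sum()
        sigma = h * S0 * np.sqrt(n * (psi ** 2).sum()) / dpsi.sum()
    return mu, sigma
```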

B. Modified Maximum Likelihood (MML) Estimators

The normality assumption is too restrictive from an applications point of view; see, for example, Huber [9] and Tiku et al. [10]. Hampel et al. [11] pointed out that many real life data can be approximated by Student's t-distribution, and assuming a Student's t distribution also provides more robust estimators [12]. Because of these facts, we assume an underlying long-tailed symmetric (LTS) distribution, which is a Student's t distribution with 2p−1 degrees of freedom scaled so that its variance is $\sigma^2$. Another advantage of the LTS distribution is that it covers the normal distribution, to which it reduces as p → ∞. Let X be a random variable following the LTS distribution

$$f(x; p) = \frac{1}{\sigma\sqrt{k}\,\beta\!\left(\tfrac{1}{2},\, p-\tfrac{1}{2}\right)}\left(1 + \frac{(x-\mu)^2}{k\sigma^2}\right)^{-p}, \quad -\infty < x < \infty,$$

where $k = 2p-3$ ($p \ge 2$) and $\beta(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b)$.

In order to obtain the MML estimators of the location and scale parameters, which originated with Tiku [13], the maximum likelihood (ML) equations are first expressed in terms of the ordered variates $z_{(i)} = (x_{(i)} - \mu)/\sigma$, simply by replacing $z_i = (x_i - \mu)/\sigma$ by $z_{(i)}$ ($1 \le i \le n$). The intractable terms in the likelihood equations are linearized by using the first two terms of a Taylor series expansion, and the following estimators are obtained for a given value of p:

$$\hat{\mu}_{MML} = \frac{\sum_{i=1}^{n}\beta_i x_{(i)}}{\sum_{i=1}^{n}\beta_i} \quad \text{and} \quad \hat{\sigma}_{MML} = \frac{B + \sqrt{B^2 + 4nC}}{2\sqrt{n(n-1)}},$$

where

$$B = \frac{2p}{k}\sum_{i=1}^{n}\alpha_i\left(x_{(i)} - \hat{\mu}_{MML}\right), \quad C = \frac{2p}{k}\sum_{i=1}^{n}\beta_i\left(x_{(i)} - \hat{\mu}_{MML}\right)^2,$$

$$\alpha_i = \frac{(1/k)\,t_{(i)}^3}{\left[1 + (1/k)\,t_{(i)}^2\right]^2} \quad \text{and} \quad \beta_i = \frac{1}{\left[1 + (1/k)\,t_{(i)}^2\right]^2}.$$

Note that $t_{(i)} = E(z_{(i)})$, where $z_{(i)} = (x_{(i)} - \mu)/\sigma$. For $1 \le i \le n$, $t_{(i)}$ can be obtained from the equation

$$\int_{-\infty}^{t_{(i)}} f(z)\,dz = \frac{i}{n+1}.$$

In real life, the parameter p of the LTS distribution is not known. In our study, we used a calibration technique [14] to estimate p: the likelihood function of the LTS distribution is computed for several values of p with the corresponding MML estimates of μ and σ, and the value of p that maximizes the likelihood function is taken as the estimate of p.
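For a given p, the MML estimates are noniterative. The sketch below (Python with NumPy/SciPy; our own naming) approximates $t_{(i)}$ through the $i/(n+1)$ quantiles of the LTS distribution, using the fact that $Z = \sqrt{k/\nu}\,T$ with $T \sim t_\nu$, $\nu = 2p-1$, has the LTS density above:

```python
import numpy as np
from scipy import stats

def mml_lts(x, p):
    """MML location/scale estimates for the LTS distribution with shape p,
    following Section 2.B (a sketch, not the authors' MATLAB code)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    k = 2 * p - 3                    # scaling constant, so Var(X) = sigma^2
    nu = 2 * p - 1                   # df of the underlying Student's t
    # t_(i) = E(z_(i)), approximated via the i/(n+1) quantiles of the LTS law
    q = np.arange(1, n + 1) / (n + 1)
    t = np.sqrt(k / nu) * stats.t.ppf(q, df=nu)
    beta = 1.0 / (1.0 + t ** 2 / k) ** 2
    alpha = (t ** 3 / k) / (1.0 + t ** 2 / k) ** 2
    mu = np.sum(beta * x) / np.sum(beta)
    B = (2 * p / k) * np.sum(alpha * (x - mu))
    C = (2 * p / k) * np.sum(beta * (x - mu) ** 2)
    sigma = (B + np.sqrt(B ** 2 + 4 * n * C)) / (2 * np.sqrt(n * (n - 1)))
    return mu, sigma
```

In practice p would then be chosen by the calibration described above, i.e. by evaluating the LTS likelihood at the MML estimates over a grid of p values and keeping the maximizer.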

C. Median and Median Absolute Deviation (MAD)

The median ($\tilde{\mu}$) is one of the most widely known robust estimators of the location parameter. Let $y_1, y_2, \ldots, y_n$ be a random sample. The median is the middle order statistic when n is odd; when n is even, it is the average of the order statistics with ranks n/2 and (n/2)+1.

The median absolute deviation, $\text{MAD} = \text{median}\left|y_i - \text{median}(y_i)\right|$, is a simple measure of the variation of a data set, and it was first used to estimate the unknown scale parameter directly. MAD is then scaled by dividing it by 0.6745 to make it an unbiased estimator of σ for the normal distribution:

$$\text{MAD}^* = \frac{\text{median}\left|y_i - \text{median}(y_i)\right|}{0.6745}.$$

D. Mixture and Outlier Models of the Normal Distribution

In the mixture model, a sample contains subsamples, and each of these subsamples comes from a different population with a specified probability.

In this study, for the mixture model, we assume that the sample contains two subsamples that come from normal distributions with mean zero but with different scale parameters, with probabilities π and (1−π), respectively. The mixture model is

an observation ~ N(0, k²) with probability π, and ~ N(0, 1) with probability (1−π).

This model has mean 0 and variance $1 - \pi + \pi k^2$.

Consider a sample containing n observations in total, some of which are outliers. If we want to model this sample with a theoretical distribution, the outliers and the regular observations must be modeled separately; the model combining the distributions of the regular and outlying observations is called an outlier model. In this study, for the outlier model, it is assumed that both the regular and the outlying observations come from normal distributions with mean zero but with different scale parameters. The outlier model is

a observations ~ N(0, k²) and (n−a) observations ~ N(0, 1).

This model has mean 0 and variance $1 - (a/n) + (a/n)k^2$. For both the mixture and outlier models, k can be considered as the extremity of contamination.

Remark: Under regularity conditions, the distributions of the sample mean and the sample standard deviation are approximately normal for large n (see Bain and Engelhardt [15] and Kenney and Keeping [16]). In the same way, under regularity conditions, M estimators have an asymptotic normal distribution (see Huber [2]). The MML estimators also have an asymptotic normal distribution (like the ML estimators) under very general regularity conditions, since they are asymptotically equivalent to the ML estimators (see Tiku and Akkaya [1] for details). Since the sample median is a central order statistic (or the average of two central order statistics for an even sample size), it also has an asymptotic normal distribution under certain conditions (see Bain and Engelhardt [15]). Hall and Welsh [17] showed that MAD is asymptotically normal under only very mild smoothness conditions on the underlying distribution. It is possible to work out the exact distributions of the mentioned estimators under several situations by using approximation methods such as the Edgeworth expansion or saddlepoint techniques, but this can be very cumbersome [2].
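Both models are straightforward to simulate. The sketch below (Python/NumPy; function names are our own) draws one sample from each model and standardizes it by the model standard deviation, as is done before estimation in Section 3:

```python
import numpy as np

rng = np.random.default_rng(12345)

def mixture_sample(n, pi, k):
    """Mixture model: each observation is N(0, k^2) with probability pi,
    N(0, 1) otherwise; standardized by the model sd."""
    contaminated = rng.random(n) < pi
    x = np.where(contaminated, rng.normal(0.0, k, n), rng.normal(0.0, 1.0, n))
    return x / np.sqrt(1 - pi + pi * k ** 2)

def outlier_sample(n, a, k):
    """Outlier model: exactly a of the n observations come from N(0, k^2),
    the remaining n - a from N(0, 1); standardized by the model sd."""
    x = np.concatenate([rng.normal(0.0, k, a), rng.normal(0.0, 1.0, n - a)])
    rng.shuffle(x)
    return x / np.sqrt(1 - a / n + (a / n) * k ** 2)
```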

3. SIMULATION STUDY

In this study, the performance of the various estimators of the location and scale parameters is investigated under the standard normal distribution and under different cases of the mixture and outlier models of the normal distribution through simulation. (100,000/n) Monte Carlo runs are performed in MATLAB. The simulations are done for the sample sizes n = 20, 50 and 100. For the mixture model, the probability that an observation comes from N(0, k²) is taken as π = 0.05 and 0.1. For the outlier model, the proportion of outliers in a sample is taken to be p = 0.05 and 0.1. For both models, the extremity of contamination, k, is taken as 5, 10 and 20. After the data sets have been generated, they are standardized by the square root of the variance of the model; thus, for all the data sets, the expected mean is 0 and the expected standard deviation is 1. Then, the simulated means, biases, variances, mean square errors (mse) and relative efficiencies (eff) are calculated to investigate the efficiency of the mentioned estimators.

The estimators of the location parameter, $\hat{\mu}_{w24}$, $\hat{\mu}_{BS82}$, $\hat{\mu}_{MML}$ and $\tilde{\mu}$, are compared according to their relative efficiency w.r.t. the sample mean $\bar{x}$:

$$\text{eff}(\hat{\theta}_i \mid \bar{x}) = 100 \times \frac{\text{mse}(\bar{x})}{\text{mse}(\hat{\theta}_i)}, \quad i = 1, \ldots, 4; \quad \hat{\theta}_1 = \hat{\mu}_{w24},\ \hat{\theta}_2 = \hat{\mu}_{BS82},\ \hat{\theta}_3 = \hat{\mu}_{MML},\ \hat{\theta}_4 = \tilde{\mu}.$$

The estimators of the scale parameter, $\hat{\sigma}_{w24}$, $\hat{\sigma}_{BS82}$, $\hat{\sigma}_{MML}$ and MAD*, are compared according to their relative efficiency w.r.t. the sample standard deviation s:

$$\text{eff}(\hat{\theta}_i \mid s) = 100 \times \frac{\text{mse}(s)}{\text{mse}(\hat{\theta}_i)}, \quad i = 1, \ldots, 4; \quad \hat{\theta}_1 = \hat{\sigma}_{w24},\ \hat{\theta}_2 = \hat{\sigma}_{BS82},\ \hat{\theta}_3 = \hat{\sigma}_{MML},\ \hat{\theta}_4 = \text{MAD}^*.$$
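One cell of such a simulation can be sketched as follows (Python, reusing mixture_sample from the earlier sketch; the settings shown are one illustrative case, not the full design):

```python
import numpy as np

def relative_efficiency(estimates, baseline, true_value):
    """eff = 100 * mse(baseline) / mse(estimator), as defined above."""
    mse = lambda e: np.mean((np.asarray(e) - true_value) ** 2)
    return 100.0 * mse(baseline) / mse(estimates)

n, pi, k = 20, 0.05, 5
reps = 100_000 // n                   # (100,000 / n) Monte Carlo runs
medians, means = [], []
for _ in range(reps):
    x = mixture_sample(n, pi, k)      # defined in the earlier sketch
    medians.append(np.median(x))
    means.append(np.mean(x))
# the true location is 0 after standardization
print(relative_efficiency(medians, means, true_value=0.0))
```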

The simulation results are given in Tables 1-7. The tables include the simulated means, biases, variances, mse's and efficiency values; in Tables 1-6 the values are grouped by π (or p) and k, and Table 7 contains the results for data sets from the standard normal distribution. In general, the two Huber's M estimators, w24 and BS82, produce very similar results, so we comment on Huber's M estimators without differentiating between them.

Table 1 shows the simulation results for the mixture model when the sample size is 20. It is observed that as π or k increases, the efficiencies of the robust estimators of the location parameter increase. Huber's M estimators of the location parameter are the most efficient, and the sample mean is the worst estimator of the location parameter in this situation. The MML estimator of the location parameter is more efficient than the median for k = 5, whereas it is worse than the median for k = 10 and 20.

For the estimation of the scale parameter, when π = 0.05 and k = 5, Huber's M estimators of the scale parameter are the best. The MML estimator of the scale parameter takes the second place, although there is only a marginal difference between the MML estimator and Huber's M estimators; MAD* takes the third place and the sample standard deviation is the worst. In the other situations the MML estimator is the best. When π = 0.05 and k = 10, Huber's M estimators are the second best and MAD* takes the third place, being only marginally better than the sample standard deviation. In all other situations, the sample standard deviation is the second best after the MML estimator of the scale parameter. It is seen that as π or k increases, the bias of Huber's M estimators of the scale parameter and of MAD* gets larger, which makes them extremely inefficient in estimating the scale parameter.

The simulation results of the mixture model for the sample size n = 50 are given in Table 2. All the estimators of the location parameter give higher efficiencies than the sample mean and have the same ordering as in Table 1. The MML estimator of the scale parameter is the best for k = 5 and for π = 0.05 and k = 10. As π or k increases, the MML estimator of the scale parameter produces some bias and becomes inefficient w.r.t. the sample standard deviation. For high k values, the sample standard deviation is the best among all the scale parameter estimators.

We should especially note that Huber's M estimators of the scale parameter and MAD* produce huge biases as π or k increases. For example, for π = 0.1 and k = 20, their means are around 0.17, giving an approximate bias of −0.83, which leads to an efficiency of around 21%.

The last simulation for the mixture model is done for the sample size n = 100 and is shown in Table 3. The results are very similar to those of Tables 1 and 2 for the estimation of the location parameter. In the estimation of the scale parameter, the sample standard deviation is the best estimator except for k = 5, where the MML scale estimator is the best. Huber's M estimators of the scale parameter and MAD* cannot be used in this situation because of their huge bias and extreme inefficiency.

In Table 4, the simulation results of the outlier model for the sample size n = 20 are given. Again, Huber's M estimators produce the most efficient estimates of the location parameter. They are followed by the MML estimator for low p and k values. For k = 20, the sample median is better than the MML estimator of the location parameter; it is also better than the MML estimator when p = 0.1 and k = 10, whereas it is worse than the MML estimator when p = 0.05 and k = 10. The sample mean is the worst estimator of the location parameter, which is not a surprising result. The MML estimator of the scale parameter dominates this table. The sample standard deviation takes the second place except for p = 0.05 and k = 5, where Huber's M estimators are better. Again, Huber's M estimators of the scale parameter and MAD* produce huge biases as p and k increase.

Table 5 shows the simulation results of the outlier model for the sample size n = 50. The results are very similar to those of Table 4 for the location parameter. In the estimation of the scale parameter, the MML estimator is the best for k = 5 and for p = 0.05 and k = 10; in the other situations it takes the second place behind the sample standard deviation. For high k values, the MML estimator of the scale parameter produces some bias, but the bias produced by Huber's M estimators of the scale parameter and MAD* is huge. For example, for p = 0.1 and k = 20, their means are around 0.17 and their biases are approximately −0.83. This inevitably results in an extremely inefficient estimation of the scale parameter.

Table 6 contains the simulation results of the outlier model for the sample size n = 100. The results are again very similar to those of Tables 4 and 5 for the location parameter. The MML estimator of the scale parameter is the best for k = 5. In all other situations the sample standard deviation is the best estimator of the scale parameter and the MML estimator takes the second place. Huber's M estimators of the scale parameter and MAD* have huge biases in all situations of Table 6, and the bias grows as p and k get larger. It is very obvious that they cannot be used in the estimation of the scale parameter in this case.

Finally, Table 7 gives the simulation results for the sample sizes n = 20, 50 and 100 when the underlying distribution is standard normal. It is very natural to see that the sample mean and the sample standard deviation are the most efficient estimators in this situation. The MML estimators of the location and scale parameters take the second place, although there is only a marginal difference between them and the sample mean and the sample standard deviation. Huber's M estimators of the location and scale parameters take the next place. There is no big difference between Huber's M estimators of the location parameter and the MML estimator of the location parameter, but there is a significant difference between Huber's M estimators and the MML estimator of the scale parameter. The sample median and MAD* have very poor efficiencies in this situation.

             -------------- LOCATION PARAMETERS --------------   ---------------- SCALE PARAMETERS ---------------
Methods:     μ_w24     μ_BS82    μ_MML     median    mean        σ_w24     σ_BS82    σ_MML     MAD*      s

π = 0.05, k = 5
mean        -0.0005   -0.0005   -0.0002    0         0           0.7012    0.7002    0.8840    0.6801    0.9201
bias        -0.0005   -0.0005   -0.0002    0         0          -0.2988   -0.2998   -0.1160   -0.3199   -0.0799
variance     0.0260    0.0260    0.0292    0.0354    0.0488      0.0215    0.0212    0.1002    0.0324    0.1508
mse          0.0260    0.0260    0.0292    0.0354    0.0488      0.1108    0.1111    0.1137    0.1348    0.1572
eff.       187.74    187.92    167.36    137.90    100         141.83    141.49    138.27    116.61    100

π = 0.05, k = 10
mean         0.0012    0.0012    0.0034    0.0019    0.0070      0.4211    0.4207    0.7290    0.4162    0.8324
bias         0.0012    0.0012    0.0034    0.0019    0.0070     -0.5789   -0.5793   -0.2710   -0.5838   -0.1676
variance     0.0098    0.0098    0.0147    0.0137    0.0520      0.0072    0.0071    0.1815    0.0127    0.3261
mse          0.0098    0.0098    0.0148    0.0137    0.0520      0.3423    0.3427    0.2549    0.3536    0.3542
eff.       532.17    532.99    352.49    380.67    100         103.46    103.33    138.93    100.17    100

π = 0.05, k = 20
mean         0.0004    0.0004    0.0010   -0.0003   -0.0004      0.2227    0.2227    0.6093    0.2247    0.7595
bias         0.0004    0.0004    0.0010   -0.0003   -0.0004     -0.7773   -0.7773   -0.3907   -0.7753   -0.2405
variance     0.0027    0.0027    0.0074    0.0039    0.0516      0.0018    0.0018    0.2407    0.0036    0.4502
mse          0.0027    0.0027    0.0074    0.0039    0.0516      0.6060    0.6060    0.3933    0.6048    0.5081
eff.      1884.6     1882.8     697.87   1335.9     100          83.85     83.46    129.19     84.02    100

π = 0.1, k = 5
mean         0.0003    0.0003    0.0009   -0.0014    0.0029      0.6071    0.6052    0.8647    0.5807    0.9147
bias         0.0003    0.0003    0.0009   -0.0014    0.0029     -0.3929   -0.3948   -0.1353   -0.4193   -0.0853
variance     0.0205    0.0204    0.0255    0.0258    0.0496      0.0218    0.0213    0.1030    0.0247    0.1464
mse          0.0205    0.0204    0.0255    0.0258    0.0496      0.1762    0.1772    0.1213    0.2005    0.1537
eff.       242.30    243.56    194.50    192.34    100          87.20     86.70    126.70     76.64    100

π = 0.1, k = 10
mean         0.0017    0.0017    0.0017    0.0020    0.0008      0.3322    0.3313    0.7394    0.3288    0.8564
bias         0.0017    0.0017    0.0017    0.0020    0.0008     -0.6678   -0.6687   -0.2606   -0.6712   -0.1436
variance     0.0058    0.0057    0.0125    0.0082    0.0480      0.0063    0.0061    0.1575    0.0083    0.2470
mse          0.0058    0.0057    0.0125    0.0082    0.0480      0.4522    0.4532    0.2254    0.4588    0.2676
eff.       833.82    836.90    384.05    585.70    100          59.19     59.05    118.72     58.34    100

π = 0.1, k = 20
mean         0.0003    0.0003    0.0002    0.0003    0           0.1672    0.1669    0.6761    0.1721    0.8338
bias         0.0003    0.0003    0.0002    0.0003    0          -0.8328   -0.8331   -0.3239   -0.8279   -0.1662
variance     0.0015    0.0015    0.0080    0.0022    0.0487      0.0013    0.0013    0.2001    0.0024    0.3134
mse          0.0015    0.0015    0.0080    0.0022    0.0487      0.6950    0.6953    0.3051    0.6877    0.3410
eff.      3280.7     3286.6     610.4    2203.9     100          49.07     49.04    111.78     49.58    100

Table 1. Simulation results of the mixture model for n = 20

             -------------- LOCATION PARAMETERS --------------   ---------------- SCALE PARAMETERS ---------------
Methods:     μ_w24     μ_BS82    μ_MML     median    mean        σ_w24     σ_BS82    σ_MML     MAD*      s

π = 0.05, k = 5
mean        -0.0003   -0.0003    0         0.0003    0           0.7134    0.7128    0.8665    0.6963    0.9571
bias        -0.0003   -0.0003    0         0.0003    0          -0.2866   -0.2872   -0.1335   -0.3037   -0.0429
variance     0.0098    0.0098    0.0109    0.0147    0.0194      0.0080    0.0079    0.0341    0.0135    0.0808
mse          0.0098    0.0098    0.0109    0.0147    0.0194      0.0901    0.0904    0.0519    0.1057    0.0826
eff.       197.20    197.28    178.15    132.21    100          91.69     91.46    159.13     78.17    100

π = 0.05, k = 10
mean         0.0012    0.0012    0.0022    0.0008    0.0070      0.4287    0.4286    0.6764    0.4257    0.9191
bias         0.0012    0.0012    0.0022    0.0008    0.0070     -0.5713   -0.5714   -0.3236   -0.5743   -0.0809
variance     0.0039    0.0039    0.0050    0.0056    0.0216      0.0026    0.0026    0.0497    0.0051    0.1740
mse          0.0039    0.0039    0.0050    0.0056    0.0216      0.3289    0.3291    0.1544    0.3349    0.1806
eff.       561.71    561.38    431.04    387.00    100          54.89     54.87    116.94     53.90    100

π = 0.05, k = 20
mean         0.0005    0.0005    0.0009    0.0005   -0.0004      0.2272    0.2272    0.5399    0.2291    0.8836
bias         0.0005    0.0005    0.0009    0.0005   -0.0004     -0.7728   -0.7728   -0.4601   -0.7709   -0.1164
variance     0.0011    0.0011    0.0020    0.0016    0.0205      0.0007    0.0007    0.0712    0.0015    0.2467
mse          0.0011    0.0011    0.0020    0.0016    0.0205      0.5980    0.5979    0.2828    0.5957    0.2602
eff.      1824.9     1824.0    1049.3    1250.1     100          43.52     43.53     92.00     43.68    100

π = 0.1, k = 5
mean        -0.0005   -0.0004    0.0007   -0.0007    0.0029      0.6148    0.6132    0.8400    0.5906    0.9554
bias        -0.0005   -0.0004    0.0007   -0.0007    0.0029     -0.3852   -0.3868   -0.1600   -0.4094   -0.0446
variance     0.0077    0.0076    0.0095    0.0111    0.0201      0.0080    0.0078    0.0351    0.0100    0.0703
mse          0.0077    0.0076    0.0095    0.0111    0.0202      0.1564    0.1574    0.0607    0.1776    0.0723
eff.       263.10    264.10    212.28    182.09    100          46.23     45.92    119.14     40.70    100

π = 0.1, k = 10
mean         0.0020    0.0020    0.0017    0.0015    0.0008      0.3348    0.3341    0.6791    0.3348    0.9291
bias         0.0020    0.0020    0.0017    0.0015    0.0008     -0.6652   -0.6659   -0.3209   -0.6652   -0.0709
variance     0.0024    0.0024    0.0041    0.0034    0.0187      0.0020    0.0020    0.0529    0.0032    0.1171
mse          0.0024    0.0024    0.0041    0.0034    0.0187      0.4445    0.4453    0.1559    0.4457    0.1221
eff.       790.22    792.13    460.01    549.65    100          27.47     27.42     78.33     27.40    100

π = 0.1, k = 20
mean         0.0003    0.0003   -0.0002    0.0006    0           0.1694    0.1692    0.6080    0.1745    0.9304
bias         0.0003    0.0003   -0.0002    0.0006    0          -0.8306   -0.8308   -0.3920   -0.8255   -0.0696
variance     0.0006    0.0006    0.0023    0.0009    0.0202      0.0005    0.0004    0.0686    0.0009    0.1412
mse          0.0006    0.0006    0.0023    0.0009    0.0202      0.6904    0.6906    0.2223    0.6824    0.1461
eff.      3615.3     3621.7     881.58   2294       100          21.16     21.15     65.71     21.41    100

Table 2. Simulation results of the mixture model for n = 50

             -------------- LOCATION PARAMETERS --------------   ---------------- SCALE PARAMETERS ---------------
Methods:     μ_w24     μ_BS82    μ_MML     median    mean        σ_w24     σ_BS82    σ_MML     MAD*      s

π = 0.05, k = 5
mean        -0.0038   -0.0037   -0.0042   -0.0026   -0.0051      0.7201    0.7195    0.9097    0.7044    0.9832
bias        -0.0038   -0.0037   -0.0042   -0.0026   -0.0051     -0.2799   -0.2805   -0.0903   -0.2956   -0.0168
variance     0.0052    0.0052    0.0059    0.0072    0.0096      0.0038    0.0038    0.0203    0.0069    0.0434
mse          0.0052    0.0052    0.0059    0.0072    0.0096      0.0822    0.0825    0.0284    0.0943    0.0437
eff.       184.60    184.80    163.67    132.57    100          53.16     53.00    153.71     46.34    100

π = 0.05, k = 10
mean         0.0011    0.0011   -0.0012   -0.0006   -0.0046      0.4322    0.4321    0.7386    0.4305    0.9337
bias         0.0011    0.0011   -0.0012   -0.0006   -0.0046     -0.5678   -0.5679   -0.2614   -0.5695   -0.0663
variance     0.0020    0.0020    0.0030    0.0028    0.0094      0.0014    0.0014    0.0377    0.0026    0.0915
mse          0.0020    0.0020    0.0030    0.0028    0.0094      0.3238    0.3239    0.1060    0.3269    0.0959
eff.       470.75    470.77    316.72    337.38    100          29.60     29.60     90.43     29.32    100

π = 0.05, k = 20
mean         0.0007    0.0007    0.0012    0.0009    0.0029      0.2267    0.2268    0.6160    0.2284    0.9399
bias         0.0007    0.0007    0.0012    0.0009    0.0029     -0.7733   -0.7732   -0.3840   -0.7716   -0.0601
variance     0.0005    0.0005    0.0014    0.0008    0.0096      0.0003    0.0003    0.0574    0.0008    0.1300
mse          0.0005    0.0005    0.0014    0.0008    0.0096      0.5984    0.5982    0.2048    0.5961    0.1336
eff.      1861.6     1860.9     692.11   1202.1     100          22.33     22.33     65.22     22.41    100

π = 0.1, k = 5
mean         0        -0.0001   -0.0002   -0.0006   -0.0006      0.6187    0.6171    0.8850    0.5967    0.9818
bias         0        -0.0001   -0.0002   -0.0006   -0.0006     -0.3813   -0.3829   -0.1150   -0.4033   -0.0182
variance     0.0037    0.0037    0.0052    0.0051    0.0101      0.0043    0.0042    0.0221    0.0056    0.0394
mse          0.0037    0.0037    0.0052    0.0051    0.0101      0.1497    0.1508    0.0353    0.1683    0.0397
eff.       271.28    272.79    196.41    197.60    100          26.52     26.33    112.41     23.60    100

π = 0.1, k = 10
mean        -0.0013   -0.0012   -0.0003   -0.0006    0           0.3363    0.3357    0.7731    0.3352    0.9707
bias        -0.0013   -0.0012   -0.0003   -0.0006    0          -0.6637   -0.6643   -0.2269   -0.6648   -0.0293
variance     0.0011    0.0011    0.0028    0.0017    0.0101      0.0010    0.0010    0.0338    0.0017    0.0591
mse          0.0011    0.0011    0.0028    0.0017    0.0101      0.4415    0.4423    0.0852    0.4437    0.0600
eff.       932.40    936.86    355.24    597.11    100          13.58     13.55     70.33     13.51    100

π = 0.1, k = 20
mean         0         0        -0.0011    0.0002   -0.0035      0.1695    0.1693    0.7076    0.1745    0.9619
bias         0         0        -0.0011    0.0002   -0.0035     -0.8305   -0.8307   -0.2924   -0.8255   -0.0381
variance     0.0003    0.0003    0.0018    0.0004    0.0096      0.0003    0.0003    0.0451    0.0005    0.0666
mse          0.0003    0.0003    0.0018    0.0004    0.0096      0.6900    0.6902    0.1306    0.6820    0.0680
eff.      3261       3263.7     527.69   2219.6     100           9.86      9.85     52.10      9.97    100

Table 3. Simulation results of the mixture model for n = 100

             -------------- LOCATION PARAMETERS --------------   ---------------- SCALE PARAMETERS ---------------
Methods:     μ_w24     μ_BS82    μ_MML     median    mean        σ_w24     σ_BS82    σ_MML     MAD*      s

p = 0.05, k = 5
mean        -0.0022   -0.0023   -0.0030   -0.0006   -0.0046      0.6958    0.6952    0.8986    0.6812    0.9497
bias        -0.0022   -0.0023   -0.0030   -0.0006   -0.0046     -0.3042   -0.3048   -0.1014   -0.3188   -0.0503
variance     0.0269    0.0269    0.0292    0.0363    0.0502      0.0176    0.0176    0.0643    0.0308    0.1190
mse          0.0269    0.0269    0.0292    0.0363    0.0503      0.1101    0.1104    0.0746    0.1324    0.1215
eff.       186.84    186.97    172.11    138.28    100         110.37    110.04    162.99     91.80    100

p = 0.05, k = 10
mean         0.0035    0.0035    0.0032    0.0039    0.0026      0.4184    0.4181    0.7355    0.4147    0.8673
bias         0.0035    0.0035    0.0032    0.0039    0.0026     -0.5816   -0.5819   -0.2645   -0.5853   -0.1327
variance     0.0098    0.0098    0.0129    0.0138    0.0492      0.0063    0.0063    0.0973    0.0118    0.2259
mse          0.0098    0.0098    0.0129    0.0138    0.0492      0.3446    0.3449    0.1672    0.3544    0.2435
eff.       501.73    501.54    379.89    355.68    100          70.67     70.60    145.61     68.71    100

p = 0.05, k = 20
mean        -0.0001   -0.0001   -0.0008   -0.0004   -0.0031      0.2204    0.2204    0.6262    0.2219    0.8250
bias        -0.0001   -0.0001   -0.0008   -0.0004   -0.0031     -0.7796   -0.7796   -0.3748   -0.7781   -0.1750
variance     0.0027    0.0027    0.0054    0.0039    0.0496      0.0016    0.0016    0.1276    0.0032    0.3049
mse          0.0027    0.0027    0.0054    0.0039    0.0496      0.6094    0.6094    0.2681    0.6087    0.3355
eff.      1838.2     1836.8     925.38   1278.9     100          55.06     55.07    125.18     55.13    100

p = 0.1, k = 5
mean        -0.0028   -0.0028   -0.0036   -0.0044   -0.0037      0.5987    0.5970    0.8822    0.5760    0.9454
bias        -0.0028   -0.0028   -0.0036   -0.0044   -0.0037     -0.4013   -0.4030   -0.1178   -0.4240   -0.0546
variance     0.0194    0.0193    0.0238    0.0253    0.0503      0.0151    0.0149    0.0658    0.0221    0.1114
mse          0.0194    0.0193    0.0238    0.0254    0.0503      0.1762    0.1773    0.0796    0.2019    0.1144
eff.       259.44    260.37    211.33    198.60    100          64.95     64.55    143.69     56.68    100

p = 0.1, k = 10
mean        -0.0009   -0.0009   -0.0028   -0.0015   -0.0059      0.3270    0.3263    0.7609    0.3274    0.9063
bias        -0.0009   -0.0009   -0.0028   -0.0015   -0.0059     -0.6730   -0.6737   -0.2391   -0.6726   -0.0937
variance     0.0057    0.0057    0.0108    0.0080    0.0486      0.0046    0.0045    0.0897    0.0073    0.1733
mse          0.0057    0.0057    0.0108    0.0080    0.0486      0.4575    0.4584    0.1469    0.4597    0.1821
eff.       845.16    847.61    451.17    605.13    100          39.80     39.72    123.99     39.62    100

p = 0.1, k = 20
mean         0.0002    0.0002    0.0007    0.0003    0.0005      0.1656    0.1654    0.7011    0.1705    0.8956
bias         0.0002    0.0002    0.0007    0.0003    0.0005     -0.8344   -0.8346   -0.2989   -0.8295   -0.1044
variance     0.0014    0.0014    0.0062    0.0022    0.0503      0.0010    0.0010    0.1071    0.0019    0.2012
mse          0.0014    0.0014    0.0062    0.0022    0.0503      0.6973    0.6976    0.1964    0.6900    0.2121
eff.      3487.5     3492.7     810.31   2319.3     100          30.41     30.40    107.97     30.73    100

Table 4. Simulation results of the outlier model for n = 20

             -------------- LOCATION PARAMETERS --------------   ---------------- SCALE PARAMETERS ---------------
Methods:     μ_w24     μ_BS82    μ_MML     median    mean        σ_w24     σ_BS82    σ_MML     MAD*      s

p = 0.05, k = 5
mean         0.0001    0        -0.0005    0.0004   -0.0029      0.7178    0.7171    0.8993    0.6999    1.0200
bias         0.0001    0        -0.0005    0.0004   -0.0029     -0.2822   -0.2829   -0.1007   -0.3001    0.0200
variance     0.0109    0.0109    0.0118    0.0159    0.0223      0.0071    0.0070    0.0193    0.0126    0.0642
mse          0.0109    0.0109    0.0118    0.0159    0.0223      0.0867    0.0871    0.0294    0.1026    0.0646
eff.       203.97    204.21    188.62    140.00    100          74.44     74.15    219.36     62.89    100

p = 0.05, k = 10
mean         0.0015    0.0015    0.0014   -0.0003   -0.0012      0.4318    0.4316    0.7197    0.4308    1.0121
bias         0.0015    0.0015    0.0014   -0.0003   -0.0012     -0.5682   -0.5684   -0.2803   -0.5692    0.0121
variance     0.0039    0.0039    0.0050    0.0060    0.0226      0.0026    0.0025    0.0264    0.0048    0.1243
mse          0.0039    0.0039    0.0050    0.0060    0.0226      0.3254    0.3256    0.1050    0.3289    0.1245
eff.       575.45    575.64    453.86    376.17    100          38.25     38.22    118.56     37.84    100

p = 0.05, k = 20
mean        -0.0001   -0.0001   -0.0006   -0.0008   -0.0026      0.2290    0.2290    0.5921    0.2325    1.0038
bias        -0.0001   -0.0001   -0.0006   -0.0008   -0.0026     -0.7710   -0.7710   -0.4079   -0.7675    0.0038
variance     0.0011    0.0011    0.0018    0.0016    0.0234      0.0007    0.0007    0.0375    0.0014    0.1560
mse          0.0011    0.0011    0.0018    0.0016    0.0234      0.5952    0.5952    0.2039    0.5905    0.1560
eff.      2190.9     2191.5    1284.0    1478.1     100          26.22     26.22     76.53     26.43    100

p = 0.1, k = 5
mean         0.0006    0.0006    0.0009   -0.0008    0.0029      0.6116    0.6101    0.8454    0.5911    0.9786
bias         0.0006    0.0006    0.0009   -0.0008    0.0029     -0.3884   -0.3899   -0.1546   -0.4089   -0.0214
variance     0.0082    0.0082    0.0097    0.0110    0.0210      0.0063    0.0062    0.0189    0.0098    0.0504
mse          0.0082    0.0082    0.0097    0.0110    0.0210      0.1572    0.1582    0.0428    0.1770    0.0508
eff.       256.40    257.65    217.41    190.19    100          32.33     32.11    118.64     28.70    100

p = 0.1, k = 10
mean        -0.0011   -0.0011   -0.0027   -0.0011   -0.0074      0.3340    0.3333    0.6900    0.3351    0.9613
bias        -0.0011   -0.0011   -0.0027   -0.0011   -0.0074     -0.6660   -0.6667   -0.3100   -0.6649   -0.0387
variance     0.0023    0.0022    0.0037    0.0033    0.0197      0.0018    0.0017    0.0263    0.0030    0.0803
mse          0.0023    0.0022    0.0037    0.0033    0.0198      0.4453    0.4462    0.1224    0.4452    0.0818
eff.       876.63    879.35    535.03    597.06    100          18.36     18.33     66.78     18.37    100

p = 0.1, k = 20
mean        -0.0003   -0.0003    0.0004   -0.0004    0.0035      0.1688    0.1687    0.6129    0.1740    0.9562
bias        -0.0002   -0.0002    0.0004   -0.0004    0.0035     -0.8312   -0.8313   -0.3871   -0.8260   -0.0438
variance     0.0006    0.0006    0.0019    0.0009    0.0194      0.0004    0.0004    0.0314    0.0008    0.0865
mse          0.0006    0.0006    0.0019    0.0009    0.0194      0.6912    0.6914    0.1813    0.6831    0.0884
eff.      3193.2     3196.1    1017.3    2124.0     100          12.79     12.78     48.75     12.94    100

Table 5. Simulation results of the outlier model for n = 50

             -------------- LOCATION PARAMETERS --------------   ---------------- SCALE PARAMETERS ---------------
Methods:     μ_w24     μ_BS82    μ_MML     median    mean        σ_w24     σ_BS82    σ_MML     MAD*      s

p = 0.05, k = 5
mean         0.0030    0.0030    0.0029    0.0027    0.0025      0.7196    0.7191    0.9129    0.7021    0.9898
bias         0.0030    0.0030    0.0029    0.0027    0.0025     -0.2804   -0.2809   -0.0871   -0.2979   -0.0102
variance     0.0049    0.0049    0.0057    0.0078    0.0098      0.0033    0.0033    0.0117    0.0063    0.0298
mse          0.0050    0.0049    0.0057    0.0078    0.0098      0.0819    0.0822    0.0193    0.0951    0.0299
eff.       198.32    198.44    172.53    126.45    100          36.46     36.35    154.97     31.41    100

p = 0.05, k = 10
mean        -0.0007   -0.0007    0.0002    0.0002    0.0010      0.4298    0.4298    0.7516    0.4299    0.9740
bias        -0.0007   -0.0007    0.0002    0.0002    0.0010     -0.5702   -0.5702   -0.2484   -0.5701   -0.0260
variance     0.0018    0.0018    0.0028    0.0028    0.0097      0.0011    0.0011    0.0196    0.0023    0.0642
mse          0.0018    0.0018    0.0028    0.0028    0.0097      0.3262    0.3263    0.0812    0.3273    0.0649
eff.       531.86    531.73    350.71    347.15    100          19.89     19.89     79.87     19.82    100

p = 0.05, k = 20
mean        -0.0004   -0.0004    0.0007   -0.0002    0.0049      0.2284    0.2284    0.6214    0.2315    0.9809
bias        -0.0004   -0.0004    0.0007   -0.0002    0.0049     -0.7716   -0.7716   -0.3786   -0.7685   -0.0191
variance     0.0005    0.0005    0.0014    0.0008    0.0113      0.0003    0.0003    0.0294    0.0007    0.0896
mse          0.0005    0.0005    0.0014    0.0008    0.0113      0.5957    0.5957    0.1727    0.5913    0.0900
eff.      2341.9     2339.5     833.27   1465.5     100          15.10     15.10     52.09     15.21    100

p = 0.1, k = 5
mean         0.0034    0.0034    0.0021    0.0028    0.0009      0.6135    0.6118    0.8850    0.5927    0.9879
bias         0.0034    0.0034    0.0021    0.0028    0.0009     -0.3865   -0.3882   -0.1150   -0.4073   -0.0121
variance     0.0035    0.0035    0.0046    0.0048    0.0090      0.0029    0.0028    0.0123    0.0047    0.0265
mse          0.0035    0.0035    0.0046    0.0048    0.0090      0.1523    0.1535    0.0255    0.1706    0.0266
eff.       252.07    253.07    194.92    186.82    100          17.49     17.35    104.32     15.61    100

p = 0.1, k = 10
mean         0         0        -0.0003    0.0004   -0.0009      0.3354    0.3349    0.7662    0.3340    0.9665
bias         0         0        -0.0003    0.0004   -0.0009     -0.6646   -0.6651   -0.2338   -0.6660   -0.0335
variance     0.0012    0.0012    0.0029    0.0018    0.0095      0.0009    0.0009    0.0189    0.0015    0.0393
mse          0.0012    0.0012    0.0029    0.0018    0.0095      0.4425    0.4433    0.0736    0.4450    0.0405
eff.       815.38    818.15    331.29    539.53    100           9.14      9.13     55.00      9.09    100

p = 0.1, k = 20
mean         0.0005    0.0005   -0.0004    0.0006   -0.0023      0.1700    0.1698    0.7248    0.1755    0.9775
bias         0.0005    0.0005   -0.0004    0.0006   -0.0023     -0.8300   -0.8302   -0.2752   -0.8245   -0.0225
variance     0.0003    0.0003    0.0018    0.0005    0.0097      0.0002    0.0002    0.0246    0.0004    0.0474
mse          0.0003    0.0003    0.0018    0.0005    0.0097      0.6891    0.6894    0.1004    0.6802    0.0479
eff.      3435.6     3436.8     549.28   2075.0     100           6.96      6.95     47.76      7.05    100

Table 6. Simulation results of the outlier model for n = 100

             -------------- LOCATION PARAMETERS --------------   ---------------- SCALE PARAMETERS ---------------
Methods:     μ_w24     μ_BS82    μ_MML     median    mean        σ_w24     σ_BS82    σ_MML     MAD*      s

n = 20
mean         0.0016    0.0017    0.0012    0.0007    0.0013      0.9737    0.9742    0.9958    0.9550    0.9855
bias         0.0016    0.0017    0.0012    0.0007    0.0013     -0.0263   -0.0258   -0.0042   -0.0450   -0.0145
variance     0.0539    0.0541    0.0521    0.0747    0.0511      0.0301    0.0305    0.0271    0.0639    0.0264
mse          0.0539    0.0541    0.0521    0.0747    0.0511      0.0308    0.0311    0.0272    0.0659    0.0266
eff.        94.80     94.51     98.11     68.38    100          86.39     85.36     97.86     40.31    100

n = 50
mean         0.0012    0.0011    0.0013    0.0011    0.0015      1.0008    1.0019    1.0022    0.9864    0.9968
bias         0.0012    0.0011    0.0013    0.0011    0.0015      0.0008    0.0019    0.0022   -0.0136   -0.0032
variance     0.0198    0.0199    0.0194    0.0300    0.0191      0.0118    0.0119    0.0106    0.0276    0.0104
mse          0.0198    0.0199    0.0194    0.0300    0.0191      0.0118    0.0119    0.0106    0.0277    0.0105
eff.        96.33     96.06     98.35     63.71    100          88.52     87.71     98.61     37.67    100

n = 100
mean        -0.0005   -0.0005    0.0003   -0.0017    0.0003      1.0009    1.0021    0.9968    0.9858    0.9931
bias        -0.0005   -0.0005    0.0003   -0.0017    0.0003      0.0009    0.0021   -0.0032   -0.0142   -0.0069
variance     0.0107    0.0107    0.0105    0.0157    0.0104      0.0052    0.0052    0.0047    0.0119    0.0046
mse          0.0107    0.0107    0.0105    0.0157    0.0104      0.0052    0.0052    0.0047    0.0121    0.0047
eff.        97.05     96.82     99.05     66.20    100          90.69     89.87     99.43     38.73    100

Table 7. Simulation results for the standard normal distribution

4. CONCLUSION

In this paper we have carried out a simulation study to compare the performance of some well-known estimation methods for the location and scale parameters under several conditions, including the mixture and outlier models and the normal distribution.

For the mixture and outlier models, similar results are observed. In the estimation of the location parameter, Huber's M estimators give the best results. For low k values the MML estimator takes the second place, whereas for high k values the sample median is the second best estimator of the location parameter. The worst estimator of the location parameter is the sample mean.

In the estimation of the scale parameter, for the sample size n = 20, the MML estimator of the scale parameter is always the best estimator except the case when p or π = 0.05 and k = 5, where Huber's M estimators are the best. In other situations the efficiencies of Huber's M estimators and MAD* are very close to each other, but both are worse than the MML estimator and the sample standard deviation. It is easily observed that as π or p and k get higher, Huber's M estimators of the scale parameter and MAD* produce great bias and become very inefficient. For the sample size n = 50, the MML estimators are still the best for k = 5 and 10, except the case when p or π = 0.1 and k = 10, where the sample standard deviation is the

best. In other situations the MML estimator takes the second place after the sample standard deviation. Thus, the MML estimator of the scale parameter and the sample standard deviation take the first two places in estimating the scale parameter. Again, as π or p and k get higher, Huber's M estimators of the scale parameter and MAD* produce great bias and become very inefficient. For the sample size n = 100, the MML estimator of the scale parameter is the best estimator just in the case when k = 5, with the sample standard deviation second best. In all other cases the sample standard deviation is the best and the MML estimator of the scale parameter is the second best estimator of the scale parameter. In this case, both Huber's M estimators of the scale parameter and MAD* are extremely inefficient w.r.t. the sample standard deviation, because they produce huge bias and the bias gets larger as π or p and k get higher.

Finally, in the simulation for the standard normal distribution, as expected, the best results are produced by the sample mean and the sample standard deviation. The MML estimators of the location and scale parameters take the second place, but there is just a marginal difference between them and the sample mean and the sample standard deviation. Huber's M estimators take the third place. The sample median and MAD* are the worst estimators of the location and the scale parameter, respectively, in this situation; they are extremely inefficient and cannot be used.

If we have to give a suggestion for the estimation of the location parameter, we can suggest the usage of Huber's M estimators. In the estimation of the scale parameter, the MML estimator of the scale parameter can be used unless the sample size and the extremity of contamination (k) are large; in such situations the sample standard deviation should be preferred.

REFERENCES

[1] M.L. Tiku and A.D. Akkaya, Robust Estimation and Hypothesis Testing, 2004, New Delhi.

[2] P.J. Huber, Robust Statistics, Wiley, New York, 1981.

[3] R.R. Wilcox, Introduction to Robust Estimation and Hypothesis Testing, Second Edition, 2005, Elsevier Academic Press.

[4] A.F. Özdemir, Comparing measures of location when the underlying distribution has heavier tails than normal, İstatistikçiler Dergisi, 2010, 3, pp. 8-16.

[5] A.F. Özdemir and R. Wilcox, New results on the small-sample properties of some robust univariate estimators of location, Communications in Statistics - Simulation and Computation, 2012, 41(9), pp. 1544-1556.

[6] P.J. Huber, Robust estimation of a location parameter, Annals of Mathematical Statistics, 1964, 35, pp. 73-101.

[7] A.M. Gross, Confidence interval robustness with long-tailed symmetric distributions, Journal of the American Statistical Association, 1976, 71, pp. 409-416.

[8] D.F. Andrews, P.J. Bickel, F.R. Hampel, P.J. Huber, W.H. Rogers and J.W. Tukey, Robust Estimates of Location: Survey and Advances, 1972, Princeton University Press, Princeton, NJ.

[9] P.J. Huber, Robust Statistics, Wiley, New York, 1981.

[10] M.L. Tiku, W.Y. Tan and N. Balakrishnan, Robust Inference, 1986, Marcel Dekker, New York.

[11] F.R. Hampel, E.M. Ronchetti and P.J. Rousseeuw, Robust Statistics, 1986, Wiley, New York.

[12] K.L. Lange, R.J.A. Little and J.M.G. Taylor, Robust statistical modeling using the t-distribution, Journal of the American Statistical Association, 1989, 84(408), pp. 881-896.

[13] M.L. Tiku, Estimating the mean and standard deviation from a censored normal sample, Biometrika, 1967, 54, pp. 155-165.

[14] H. Yilmaz and H.S. Sazak, Double-looped maximum likelihood estimation for the parameters of the generalized gamma distribution, Mathematics and Computers in Simulation, 2014, 98, pp. 18-30.

[15] L.J. Bain and M. Engelhardt, Introduction to Probability and Mathematical Statistics, Second edition, PWS-Kent, Boston.

[16] J.F. Kenney and E.S. Keeping, Mathematics of Statistics, Part 2, Second edition, Princeton, 1951.

[17] P. Hall and A.H. Welsh, Limit theorems for the median deviation, Annals of the Institute of Statistical Mathematics, 1985, 37(1), pp. 27-36.