CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR


J. Japan Statist. Soc. Vol. 31, No. 2, 2001, pp. 39-51

CALCULATION METHOD FOR NONLINEAR DYNAMIC LEAST-ABSOLUTE DEVIATIONS ESTIMATOR

Kohtaro Hitomi* and Masato Kagihara**

In a nonlinear dynamic model, the consistency and asymptotic normality of the Nonlinear Least-Absolute Deviations (NLAD) estimator were proved by Weiss (1991), but the estimator is difficult to compute in practice. Overcoming this difficulty is critical if the NLAD estimator is to become practical. We propose an approximated NLAD estimator that has the same asymptotic properties as the original but is much easier to compute.

Key words and phrases: Calculation method, least-absolute deviations estimator, nonlinear dynamic model.

1. Introduction

In parametric regression models, the Least-Squares (LS) estimator is usually used for parameter estimation. If the error term is normally distributed, the LS estimator is the Maximum Likelihood Estimator (MLE) and attains minimum variance among unbiased estimators. At the same time, however, it is well known that a single outlier can cause a large error in the LS estimate; this happens when the error term has a fat-tailed distribution. In such cases, more robust estimators are desirable. One of them is the Least-Absolute Deviations (LAD) estimator.

In a linear regression model, linear programming provides a computational method for the LAD estimator. In the nonlinear dynamic model, by contrast, no computational method has been proposed, although Weiss (1991) showed theoretically that the Nonlinear LAD (NLAD) estimator is consistent and asymptotically normal. We therefore seek an approximation to the NLAD estimator that is practically computable. In the linear case, Hitomi (1997) proposed an estimator of this kind, called the Smoothed LAD (SLAD) estimator.(1) To obtain the SLAD estimator, he approximated the non-differentiable original objective function by a smoothed function which is differentiable.
That is, he replaced |x| with sqrt(x^2 + d^2), where d > 0 is the distance from the origin

Received August 29, 2000. Revised February 9, 2001. Accepted March 7, 2001.
*Department of Architecture and Design, Kyoto Institute of Technology; E-mail: hitomi@hiei.kit.ac.jp
**Graduate School of Economics, Kyoto University; E-mail: e60x0032@ip.media.kyoto-u.ac.jp
(1) Horowitz (1998) also proposed another SLAD estimator, but with a different purpose from ours: the main target of his article is not a calculation method for the LAD estimator. In practice, his SLAD estimator seems difficult to compute, as he implied, because it involves an integral of a kernel for nonparametric density estimation.
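The smoothing device is easy to state in code. The following sketch (our illustration in NumPy, not from the paper) evaluates the approximation sqrt(x^2 + d^2) and checks that the gap to |x| is largest, and equal to d, at the origin, and never exceeds d:

```python
import numpy as np

def smooth_abs(x, d):
    """Smoothed absolute value sqrt(x^2 + d^2): differentiable everywhere."""
    return np.sqrt(x ** 2 + d ** 2)

d = 0.1
x = np.linspace(-5.0, 5.0, 10001)      # x[5000] == 0.0
gap = smooth_abs(x, d) - np.abs(x)

# The smoothed function lies above |x|; the gap equals d at x = 0
# and shrinks toward zero as |x| grows.
print(gap.min() >= 0.0)                 # True
print(abs(gap[5000] - d) < 1e-12)       # True
print(gap.max() <= d + 1e-12)           # True
```

The choice d = 0.1 here is arbitrary; in the paper d is tied to the sample size.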

Figure 1. The absolute value function y = |x| and its smoothed approximation y = sqrt(x^2 + 1).

Figure 2. Densities of the standard normal distribution and the Laplace distribution.

(see Figure 1). Then the distance between the original and smoothed objective functions is at most d for all x. Hitomi (1997) let the parameter d shrink with the sample size so that the SLAD estimator has the same asymptotic properties as the original LAD estimator.

In this article, we extend this method to the nonlinear dynamic model. As a result, we obtain a general calculation method for the NLAD estimator. We call the resulting estimator the Nonlinear SLAD (NSLAD) estimator.

We introduce the nonlinear dynamic model, investigated by Weiss (1991), in Section 2. In Section 3, we prove that the NSLAD estimator has the same asymptotic properties as the original NLAD estimator under the nonlinear dynamic model, and Section 4 presents the results of a Monte Carlo study. Concluding remarks are given in Section 5. Finally, our assumptions are described in Appendix A, and a sketch of the proof that the model of Section 4 satisfies these assumptions is given in Appendix B.

2. Model

Weiss (1991) considered the following nonlinear dynamic model:

(2.1)  y_t = g(x_t, β0) + u_t,

where
  g: known function,
  x_t = (y_{t-1}, ..., y_{t-p}, z_t')',
  z_t: vector of exogenous variables,
  β0: (k x 1) vector of unknown parameters,
  u_t: unobserved error term, which satisfies Median(u_t | I_t) = 0,
  I_t: σ-algebra (the information set at period t) generated by {x_{t-i}} (i ≥ 0) and {u_{t-i}} (i ≥ 1).

Then the NLAD estimator is defined as the solution of the following problem:

(2.2)  min_β Q_T(β) ≡ min_β (1/T) Σ_{t=1}^T |y_t − g(x_t, β)|.

In this basic setting, Weiss (1991) proved that the NLAD estimator β̂ is consistent and asymptotically normal under the set of assumptions in Appendix A.

Theorem (Weiss (1991)). Under the assumptions described in Appendix A,

(2.3)  β̂ →p β0 and √T (β̂ − β0) →d N(0, V),

where

(2.4)  V = D^{-1} A D^{-1},

(2.5)  A = lim_{T→∞} (1/T) Σ_{t=1}^T E[(∂g_t/∂β)(β0) (∂g_t/∂β')(β0)],
       D = 2 lim_{T→∞} (1/T) Σ_{t=1}^T E[f_t(0 | I_t) (∂g_t/∂β)(β0) (∂g_t/∂β')(β0)],

and f_t(· | I_t) is the density function of u_t conditional on I_t.

However, no computational method was proposed by Weiss (1991). This is the critical problem that we solve here. Following Hitomi (1997), |y − g(x, β)| is approximated by sqrt((y − g(x, β))^2 + d^2) (d > 0). The NSLAD estimator is defined as the solution of the following problem, which minimizes the smoothed objective function:(2)

(2.6)  min_β Q_T^s(β) ≡ min_β (1/T) Σ_{t=1}^T sqrt((y_t − g(x_t, β))^2 + d^2).

The first and second derivatives are (writing g_t(β) := g(x_t, β))

(2.7)  ∂Q_T^s/∂β = −(1/T) Σ_{t=1}^T [(y_t − g_t(β)) / sqrt((y_t − g_t(β))^2 + d^2)] (∂g_t/∂β)(β)

and

(2.8)  ∂²Q_T^s/∂β∂β' = (1/T) Σ_{t=1}^T { [d^2 / ((y_t − g_t(β))^2 + d^2)^{3/2}] (∂g_t/∂β)(β) (∂g_t/∂β')(β)
           − [(y_t − g_t(β)) / sqrt((y_t − g_t(β))^2 + d^2)] (∂²g_t/∂β∂β')(β) }.

3. Asymptotic properties

3.1. Consistency

The difference between the original and smoothed objective functions, defined in (2.2) and (2.6) respectively, satisfies

(3.1)  0 ≤ Q_T^s(β) − Q_T(β) ≤ d for all x, y and β.

Therefore, if we control the parameter d so that d → 0 as T → ∞,

(3.2)  sup_β {Q_T^s(β) − Q_T(β)} → 0 as T → ∞.

(2) Under appropriate conditions, Q_T converges in probability to a smooth function, to which Q_T^s also converges in probability if d → 0 as T → ∞. It is therefore intuitive that the original NLAD and NSLAD estimators share the same asymptotic properties.
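Expression (2.7) is exactly what a derivative-based optimizer needs. As an illustration (ours, with a made-up linear-in-parameters g(x, b) = b0 + b1*x for brevity), the analytic gradient of the smoothed objective can be checked against central finite differences:

```python
import numpy as np

def Qs(beta, y, x, d):
    """Smoothed objective (2.6) for the toy model g(x, b) = b0 + b1 * x."""
    r = y - (beta[0] + beta[1] * x)
    return np.mean(np.sqrt(r ** 2 + d ** 2))

def Qs_grad(beta, y, x, d):
    """Analytic gradient (2.7): -(1/T) sum_t r_t / sqrt(r_t^2 + d^2) * dg_t/dbeta."""
    r = y - (beta[0] + beta[1] * x)
    w = r / np.sqrt(r ** 2 + d ** 2)
    dg = np.vstack([np.ones_like(x), x])   # dg/dbeta for the toy model
    return -(dg * w).mean(axis=1)

rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 200)
y = 1.0 + 2.0 * x + rng.standard_normal(200)
beta, d, eps = np.array([0.5, 1.5]), 0.05, 1e-6

num = np.array([(Qs(beta + eps * e, y, x, d) - Qs(beta - eps * e, y, x, d)) / (2 * eps)
                for e in np.eye(2)])
diff = np.max(np.abs(num - Qs_grad(beta, y, x, d)))
print(diff)   # tiny, e.g. well below 1e-6
```

All data and parameter values here are invented for the check; only the formulas come from the text.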

This implies the following theorem.

Theorem 1. Assume that the original objective function Q_T(β) converges in probability, uniformly in β, to a function that is uniquely minimized at β0.(3) If d → 0 as T → ∞, then the NSLAD estimator is consistent.

3.2. Asymptotic normality

The asymptotic normality of the NLAD estimator β̂ is based mainly on the following asymptotic first-order condition:

(3.3)  (1/√T) Σ_{t=1}^T sgn(y_t − g_t(β̂)) (∂g_t/∂β)(β̂) = o_p(1).

Thus, if the NSLAD estimator also satisfies (3.3), it is asymptotically normal with the same asymptotic covariance matrix as the original NLAD estimator, by Weiss's (1991) results. We therefore show below that the NSLAD estimator β̂_s satisfies condition (3.3).

Note that the NSLAD estimator β̂_s satisfies the exact first-order condition

(3.4)  (1/√T) Σ_{t=1}^T [(y_t − g_t(β̂_s)) / sqrt((y_t − g_t(β̂_s))^2 + d^2)] (∂g_t/∂β)(β̂_s) = 0.

Hence, all we have to do is show that

(3.5)  A := (1/√T) Σ_{t=1}^T [sgn(y_t − g_t(β̂_s)) − (y_t − g_t(β̂_s)) / sqrt((y_t − g_t(β̂_s))^2 + d^2)] (∂g_t/∂β)(β̂_s) = o_p(1).

Let δ_T be a sequence of positive numbers. Divide the data set into the two groups

D1 = {t : |y_t − g_t(β̂_s)| > δ_T} and D2 = {t : |y_t − g_t(β̂_s)| ≤ δ_T}.

Then the following inequality holds:

(3.6)  ‖A‖ ≤ (1/√T) Σ_{t∈D1} |sgn(y_t − g_t(β̂_s)) − (y_t − g_t(β̂_s))/sqrt((y_t − g_t(β̂_s))^2 + d^2)| ‖(∂g_t/∂β)(β̂_s)‖
             + (1/√T) Σ_{t∈D2} |sgn(y_t − g_t(β̂_s)) − (y_t − g_t(β̂_s))/sqrt((y_t − g_t(β̂_s))^2 + d^2)| ‖(∂g_t/∂β)(β̂_s)‖

(3.7)       < (1 − 1/sqrt(1 + d^2/δ_T^2)) (1/√T) Σ_{t∈D1} ‖(∂g_t/∂β)(β̂_s)‖ + (1/√T) Σ_{t∈D2} ‖(∂g_t/∂β)(β̂_s)‖ =: A1 + A2.

(3) Weiss (1991) demonstrated this fact for his nonlinear dynamic model described in Appendix A.
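The elementary bound used in (3.7) to split A into A1 and A2 can be spot-checked numerically: |sgn(r) − r/sqrt(r^2 + d^2)| is at most 1 − 1/sqrt(1 + d^2/δ^2) when |r| > δ, and at most 1 always. A small sketch (our illustration; the values of d and δ are arbitrary):

```python
import numpy as np

d, delta = 0.01, 0.1
# grid of nonzero residuals r
r = np.concatenate([np.linspace(-5, -1e-6, 100001), np.linspace(1e-6, 5, 100001)])
gap = np.abs(np.sign(r) - r / np.sqrt(r ** 2 + d ** 2))

far = np.abs(r) > delta
bound_far = 1.0 - 1.0 / np.sqrt(1.0 + (d / delta) ** 2)

print(np.all(gap[far] <= bound_far + 1e-15))   # True: the bound used on D1
print(np.all(gap <= 1.0))                      # True: the crude bound used on D2
```

The D1 bound is tight exactly at |r| = δ and shrinks as |r| grows, which is what drives the Taylor-expansion step that follows.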

First, we focus on the term A1. By a Taylor expansion,

(3.8)  1 − 1/sqrt(1 + d^2/δ_T^2) = (1/2)(d/δ_T)^2 + o((d/δ_T)^2).

Hence

(3.9)  A1 < [(1/2)(d/δ_T)^2 + o((d/δ_T)^2)] (1/√T) Σ_{t=1}^T ‖(∂g_t/∂β)(β̂_s)‖.

Thus A1 = o_p(1) when

(3.10)  √T (d/δ_T)^2 → 0 (T → ∞) and (1/T) Σ_{t=1}^T ‖(∂g_t/∂β)(β̂_s)‖ →p c, a finite constant.

Next, consider the term A2, which we regard as a function of β:

(3.11)  A2 = A2(β̂_s) = (1/√T) Σ_{t=1}^T 1(|y_t − g_t(β̂_s)| ≤ δ_T) ‖(∂g_t/∂β)(β̂_s)‖ = (1/√T) Σ_{t=1}^T w_t(β̂_s),

where

w_t(β) := 1(−δ_T + {g_t(β) − g_t(β0)} ≤ u_t ≤ δ_T + {g_t(β) − g_t(β0)}) ‖(∂g_t/∂β)(β)‖

and 1(·) denotes the indicator function. Here, assume that the conditional density function of u_t given I_t is bounded from above,

(3.12)  f_t(u | I_t) < M1 for all t, u, I_t,

and that the following condition is satisfied in an open neighborhood B0 of β0,

(3.13)  lim_{T→∞} (1/T) Σ_{t=1}^T E ‖(∂g_t/∂β)(β)‖ < M2 for all β ∈ B0,

for some real values 0 < M1, M2 < ∞. Then, for β ∈ B0, we get

(3.14)  E(w_t(β)) = E[E(w_t(β) | I_t)]
             = E[ ‖(∂g_t/∂β)(β)‖ ∫ 1(−δ_T + {g_t(β) − g_t(β0)} ≤ u_t ≤ δ_T + {g_t(β) − g_t(β0)}) f_t(u_t | I_t) du_t ]
             < 2 M1 δ_T E ‖(∂g_t/∂β)(β)‖.

Thus, by the Markov inequality,

(3.15)  P(|A2(β)| ≥ ε) ≤ E|A2(β)| / ε < (2/ε) M1 √T δ_T (1/T) Σ_{t=1}^T E ‖(∂g_t/∂β)(β)‖.

Therefore, when √T δ_T → 0 as T → ∞, sup_{β∈B0} |A2(β)| →p 0. Hence, invoking the consistency of β̂_s (Theorem 1), we can conclude that A2(β̂_s) →p 0.

From the discussion presented above, A = o_p(1) when the convergence rate of d satisfies

(3.16)  √T (d/δ_T)^2 → 0 and √T δ_T → 0 (T → ∞)

and (3.10), (3.12) and (3.13) are satisfied.

Theorem 2. Suppose the following conditions are satisfied:
(i) y_t = g(x_t, β0) + u_t, and the conditional median of u_t given I_t is zero.
(ii) The conditional density of u_t given I_t is bounded from above for all t, u and I_t; that is, there exists M1 such that f_t(u | I_t) < M1 for all t, u, I_t.
(iii) In an open neighborhood B0 of β0, for a finite constant M2,

lim_{T→∞} (1/T) Σ_{t=1}^T E ‖(∂g_t/∂β)(β)‖ < M2 for all β ∈ B0,

and

(1/T) Σ_{t=1}^T ‖(∂g_t/∂β)(β̂_s)‖ − (1/T) Σ_{t=1}^T E ‖(∂g_t/∂β)(β0)‖ →p 0.

Furthermore, assume the conditions described in Appendix A, so that √T (β̂ − β0) →d N(0, V) for the NLAD estimator β̂ (see Section 2). If the smoothing parameter d satisfies T^{3/4} d → 0, then the NSLAD estimator β̂_s is asymptotically normal:

√T (β̂_s − β0) →d N(0, V).

Proof. By Weiss's (1991) proof of the asymptotic normality of the NLAD estimator,(4) all we have to do is show (3.3) for the NSLAD estimator, because the other conditions are implied by the nonlinear dynamic model described in Appendix A. This was established in the discussion above.

4. Monte Carlo experiments

In this section, we present some examples of the NSLAD estimator and compare the performance of the NSLAD and Nonlinear Least-Squares (NLS)

(4) See Weiss (1991), pp. 61-63, proof of Theorem 3, especially p. 62.

Table 1. T = 500

u ~ Normal                      λ (= 0.5)   β1 (= 1)   β2 (= 1)   φ (= 0.5)
Mean                 (SLAD)     0.5053      0.9939     0.9969     0.502
                     (NLS)      0.5020      1.0069     0.9969     0.4985
Standard Deviation   (SLAD)     0.1059      0.1540     0.0622     0.0340
                     (NLS)      0.0860      0.1251     0.0496     0.0278
Median               (SLAD)     0.502       0.9952     0.9989     0.5029
                     (NLS)      0.5029      1.0028     0.9990     0.4985
1st Quartile         (SLAD)     0.4307      0.8829     0.9587     0.4788
3rd Quartile         (SLAD)     0.572       1.0878     1.0367     0.5242
1st Quartile         (NLS)      0.447       0.998      0.963      0.4783
3rd Quartile         (NLS)      0.5563      1.0898     1.0292     0.517

u ~ Laplace
Mean                 (SLAD)     0.508       0.9930     1.0009     0.5006
                     (NLS)      0.5005      1.0052     0.9995     0.4983
Standard Deviation   (SLAD)     0.069       0.0965     0.0394     0.023
                     (NLS)      0.0884      0.1253     0.05       0.027
Median               (SLAD)     0.5002      0.9923     1.007      0.5008
                     (NLS)      0.4977      1.0032     1.0022     0.499
1st Quartile         (SLAD)     0.4543      0.93       0.9776     0.4874
3rd Quartile         (SLAD)     0.5457      1.0536     1.0262     0.5154
1st Quartile         (NLS)      0.4376      0.98       0.9652     0.4808
3rd Quartile         (NLS)      0.560       1.0865     1.0345     0.5175

estimators. These experiments are conducted with the following nonlinear dynamic model, which satisfies the assumptions given in Appendix A.(5) We set β1 = β2 = 1, λ = φ = 0.5 and y_0 = 0:

(4.1)  y_t = φ y_{t-1} + β1 + β2 (z_t^λ − 1)/λ + u_t.

We generated u_t from two distributions. One is the standard normal distribution, under which the NLS estimator is the MLE, and the other is the Laplace distribution, under which the NLAD estimator is the MLE. The density of the Laplace distribution, with its variance adjusted to 1, is

f_u(u) = exp(−√2 |u|)/√2,  Median(u) = 0,  V(u) = 1,

where V(·) denotes the variance. Next, z_t is distributed uniformly: z_t ~ U(0, 9/2). Note that this makes V((z_t^λ − 1)/λ) = 1 = V(u). The experiments are conducted under the following conditions: (1) the number of replications is 1,000; (2) the sample sizes (T) are 50, 100, 200,

(5) See Appendix B.

Table 2. T = 200

u ~ Normal                      λ           β1         β2         φ
Mean                 (SLAD)     0.529       0.9849     0.9905     0.505
                     (NLS)      0.533       1.014      0.9908     0.4955
Standard Deviation   (SLAD)     0.1834      0.228      0.1053     0.055
                     (NLS)      0.1429      0.1884     0.0824     0.0430
Median               (SLAD)     0.5060      0.9756     0.9995     0.500
                     (NLS)      0.5063      1.0159     0.9952     0.4945
1st Quartile         (SLAD)     0.402       0.8226     0.923      0.4683
3rd Quartile         (SLAD)     0.6397      1.1380     1.063      0.5369
1st Quartile         (NLS)      0.426       0.893      0.9364     0.4655
3rd Quartile         (NLS)      0.627       1.1408     1.0466     0.5262

u ~ Laplace
Mean                 (SLAD)     0.5079      0.9845     1.0008     0.509
                     (NLS)      0.5077      1.0049     0.9972     0.4977
Standard Deviation   (SLAD)     0.1144      0.158      0.0653     0.0347
                     (NLS)      0.1459      0.1895     0.083      0.0418
Median               (SLAD)     0.507       0.9768     1.0033     0.5029
                     (NLS)      0.4994      1.0002     0.9990     0.4973
1st Quartile         (SLAD)     0.4323      0.8748     0.9589     0.4783
3rd Quartile         (SLAD)     0.5837      1.0808     1.0434     0.5248
1st Quartile         (NLS)      0.422       0.8792     0.9429     0.4704
3rd Quartile         (NLS)      0.5983      1.1238     1.056      0.5270

and 500. (3) The smoothing parameter is d = 1/T, which satisfies the conditions of Theorems 1 and 2.

The results are reported in the following tables, all in the same format. The upper block of each table shows the case where the error-term distribution is standard normal, and the lower block the case of the Laplace distribution. In each block, the sample mean, standard deviation, median, and 1st and 3rd quartiles are reported for the NLS and NSLAD estimators.

The bias becomes negligible as the sample size increases, and on this point there are no differences between the NSLAD and NLS estimators. In terms of standard deviations, the NLS estimator has the smaller standard deviation when the error-term distribution is standard normal, and the NSLAD estimator has the smaller standard deviation when it is the Laplace distribution. When the error term is distributed as standard normal, the standard deviations of the NSLAD estimates are about 20% larger than those of the NLS estimates, and at most 28.3% larger, for the estimate of λ at T = 200. When

Table 3. T = 100

u ~ Normal                      λ           β1         β2         φ
Mean                 (SLAD)     0.540       0.9847     0.9834     0.4984
                     (NLS)      0.5237      1.0274     0.9839     0.4899
Standard Deviation   (SLAD)     0.2565      0.3252     0.1427     0.0742
                     (NLS)      0.2069      0.2640     0.1158     0.0596
Median               (SLAD)     0.552       0.947      0.994      0.5040
                     (NLS)      0.5098      1.0269     0.9882     0.4907
1st Quartile         (SLAD)     0.3756      0.7428     0.8982     0.4557
3rd Quartile         (SLAD)     0.687       1.1952     1.0854     0.5484
1st Quartile         (NLS)      0.3835      0.8395     0.900      0.4496
3rd Quartile         (NLS)      0.637       1.20       1.0622     0.5300

u ~ Laplace
Mean                 (SLAD)     0.5235      0.9760     0.9930     0.5028
                     (NLS)      0.589       1.087      0.9865     0.4938
Standard Deviation   (SLAD)     0.1730      0.297      0.107      0.050
                     (NLS)      0.202       0.262      0.148      0.059
Median               (SLAD)     0.560       0.9640     0.998      0.504
                     (NLS)      0.5055      1.035      0.9923     0.4944
1st Quartile         (SLAD)     0.4206      0.8257     0.9369     0.4699
3rd Quartile         (SLAD)     0.690       1.27       1.0572     0.5387
1st Quartile         (NLS)      0.3899      0.8478     0.956      0.4520
3rd Quartile         (NLS)      0.6302      1.1828     1.0673     0.5329

the error term is distributed as the Laplace distribution, the standard deviations of the NLS estimates are 10%-30% larger than those of the NSLAD estimates; the largest difference is 65.7%, for the estimate of λ at T = 50, and the second largest is 29.8%, for the estimate of β1 at T = 500. With respect to the medians and quartiles, there are no differences between the NLS and NSLAD estimators in terms of performance.

Finally, we compare the computing times of the NLS and NSLAD estimators. The average computing times of the NLS and NSLAD estimators are 0.35 and 0.47 seconds, respectively, at T = 500 with the standard normal error, and 0.35 and 0.42 seconds at T = 500 with the Laplace error. In the same manner, the average computing times of the NLS and NSLAD estimators are 0.28 and 0.37 seconds at T = 200 with the standard normal error, and 0.28 and 0.33 seconds at T = 200 with the Laplace error. Similarly, at T = 100 and 50, the NSLAD estimate takes about 130% as much time as the NLS estimate with the standard normal error, and about 120% as much time with the Laplace distribution. We used GAUSS for Windows (32 bit), Version 3.2.38, and GAUSS

Table 4. T = 50

u ~ Normal                      λ           β1         β2         φ
Mean                 (SLAD)     0.5520      0.9905     0.9705     0.5002
                     (NLS)      0.528       1.0380     0.9808     0.4887
Standard Deviation   (SLAD)     0.4773      0.4344     0.2496     0.1064
                     (NLS)      0.393       0.377      0.2094     0.0874
Median               (SLAD)     0.4895      0.934      0.9956     0.502
                     (NLS)      0.4759      1.0267     0.9895     0.4939
1st Quartile         (SLAD)     0.2550      0.6938     0.8074     0.4308
3rd Quartile         (SLAD)     0.7687      1.248      1.146      0.5777
1st Quartile         (NLS)      0.2828      0.7783     0.863      0.437
3rd Quartile         (NLS)      0.7274      1.2903     1.226      0.550

u ~ Laplace
Mean                 (SLAD)     0.533       0.9624     1.0026     0.5053
                     (NLS)      0.5455      1.0264     0.984      0.499
Standard Deviation   (SLAD)     0.3300      0.2952     0.195      0.073
                     (NLS)      0.5468      0.3445     0.2029     0.0826
Median               (SLAD)     0.4993      0.959      1.054      0.504
                     (NLS)      0.5000      1.03       0.9880     0.4943
1st Quartile         (SLAD)     0.2964      0.7483     0.8967     0.4608
3rd Quartile         (SLAD)     0.6853      1.3        1.273      0.5574
1st Quartile         (NLS)      0.2747      0.782      0.8560     0.4369
3rd Quartile         (NLS)      0.7255      1.2624     1.193      0.5485

Applications Optimization to conduct these experiments, in an environment with a Pentium II 400 MHz CPU and 64 MB of memory. From these results, we can conclude that the NSLAD estimator performs very well, on a par with the NLS estimator.

5. Concluding remarks

The aim of this article is the practical use of the NLAD estimator, which has attractive properties such as robustness but whose problem lies in computational difficulty. We therefore proposed the NSLAD estimator, a generalization of Hitomi's (1997) SLAD estimator, which is practically computable and has the same asymptotic properties as the NLAD estimator in Weiss's (1991) nonlinear dynamic model. The Monte Carlo experiments imply a good performance of the NSLAD estimator, at least equal to that of the NLS estimator. Consequently, we obtained a computable approximation to the NLAD estimator, which we call the NSLAD estimator; the NLAD and NSLAD estimators have the same asymptotic properties.
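For readers who want to reproduce the flavor of the Section 4 experiments, the following is a minimal single-replication sketch (ours, in Python with SciPy's Nelder-Mead in place of the GAUSS optimization code used in the paper; the starting values, tolerances and random seed are our arbitrary choices):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T = 500
lam0, b1, b2, phi0 = 0.5, 1.0, 1.0, 0.5     # true values, as in Section 4
d = 1.0 / T                                 # smoothing parameter d = 1/T

z = rng.uniform(0.0, 4.5, T)                # z ~ U(0, 9/2): Var((z^0.5 - 1)/0.5) = 1
u = rng.standard_normal(T)                  # or rng.laplace(0, 1/np.sqrt(2), T)
y = np.empty(T)
y_prev = 0.0                                # y_0 = 0
for t in range(T):                          # model (4.1)
    y[t] = phi0 * y_prev + b1 + b2 * (z[t] ** lam0 - 1.0) / lam0 + u[t]
    y_prev = y[t]
ylag = np.concatenate(([0.0], y[:-1]))

def resid(p):
    lam, beta1, beta2, phi = p
    lam = max(lam, 1e-3)                    # guard: keep the Box-Cox exponent positive
    return y - (phi * ylag + beta1 + beta2 * (z ** lam - 1.0) / lam)

x0 = [0.4, 0.8, 0.8, 0.4]
opts = {"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-12}
nslad = minimize(lambda p: np.mean(np.sqrt(resid(p) ** 2 + d ** 2)), x0,
                 method="Nelder-Mead", options=opts)   # smoothed objective (2.6)
nls = minimize(lambda p: np.mean(resid(p) ** 2), x0,
               method="Nelder-Mead", options=opts)     # least-squares objective
print(np.round(nslad.x, 3))   # one draw; roughly (0.5, 1, 1, 0.5)
print(np.round(nls.x, 3))
```

A full replication of the tables would wrap this in a loop over 1,000 replications and the four sample sizes; a derivative-based optimizer fed with (2.7) and (2.8) would be faster than Nelder-Mead.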

Appendix A: Set of Assumptions

In this section, we describe the set of assumptions under which Weiss (1991) demonstrated the consistency and asymptotic normality of the NLAD estimator. The basic model has already been introduced in Section 2.

Assumption. In the model (2.1),
(i) (Probability space) (Υ, F, P) is a complete probability space and {u_t, x_t} are random vectors on this space.
(ii) (Parameter space) B is a compact subset of the Euclidean space R^k, and β0 is interior to B.
(iii) (Regression function) g(x, β) is measurable in x for each β ∈ B and is A-smooth in β with variables A_{0t} and function ρ0.(6) ∇_i g_t(β) is A-smooth with variables A_{it} and functions ρ_i, i = 1, ..., k. In addition, max_i ρ_i(s) ≤ s for s > 0 small enough.(7)
(iv) (Conditional distribution of u_t) f_t(· | I_t) is Lipschitz continuous uniformly in t, and for each t and u, f_t(u | I_t) is continuous in its parameters.(8) Furthermore, there exists h > 0 such that for all t, and some constant p1 > 0, P(f_t(0 | I_t) ≥ h) > p1.
(v) (Unconditional density) The unconditional density f_t of (u_t, x_t) is continuous in its parameters at every point, for each t.
(vi) There exist Δ < ∞ and r > 2 such that α(m) ≤ Δ m^{−λ} for some λ > 2r/(r − 2), where α(·) stands for the usual measure of dependence for α-mixing random variables on (Υ, F, P) with respect to the σ-algebras generated by {u_t, x_t}.
(vii) (Dominance conditions)(9) For all t and i, E sup_β |A_{it}|^r < ∞. There exist measurable functions a_{1t}, a_{2t} such that |∇_i g_t(β)| ≤ a_{1t}, i = 1, ..., k, f_t ≤ a_{2t}, and for all t, ∫ a_{2t} du_t dx_t < ∞ and ∫ a_{1t}^3 a_{2t} du_t dx_t < ∞.
(viii) (Asymptotic stationarity) There exists a matrix A such that (1/T) Σ_{t=a+1}^{a+T} E[∇g_t(β0)' ∇g_t(β0)] → A as T → ∞, uniformly in a.
(ix) (Identification condition) There exists δ > 0 such that for all β ∈ B and T sufficiently large, det((1/T) Σ_{t=1}^T E[∇g_t(β)' ∇g_t(β) 1(f_t(0 | I_t) > h)]) > δ.
Appendix B: Confirming the Assumptions in Our Model

In this section we briefly sketch a proof that our model in Section 4 meets the assumptions described in Appendix A.

(6) g(x_t, β) is A-smooth with variables A_t and function ρ if, for each β ∈ B, there is a constant τ > 0 such that ‖β̄ − β‖ ≤ τ implies |g_t(β̄) − g_t(β)| ≤ A_t(x_t) ρ(‖β̄ − β‖) for all t a.s., where A_t and ρ are nonrandom functions such that lim sup_{T→∞} (1/T) Σ_{t=1}^T E[A_t(x_t)] < ∞, ρ(y) > 0 for y > 0, and ρ(y) → 0 as y → 0.
(7) ∇_i := ∂/∂β_i (i = 1, ..., k), ∇ := ∂/∂β'.
(8) f_t(· | I_t) is the density function of u_t conditional on I_t.
(9) For the consistency result, these conditions are replaced by the following: there exist measurable functions a_{0t}, a_{1t} such that |g_t(β)| ≤ a_{0t}, |∇_i g_t(β)| ≤ a_{1t}, i = 1, ..., k, where E|a_{jt}|^{r_j} < ∞, j = 0, 1, for some r_0 > 1, r_1 > 2. The other conditions can be relaxed.

First, we begin with Assumption (ii). We set −1 + c ≤ φ ≤ 1 − c, c ≤ λ ≤ 1/c (0 < c < 1) and −C ≤ β1, β2 ≤ C (0 < C < ∞). Note that the parameter space and the support of z are bounded. For Assumption (iii), we use Taylor expansions, and for Assumption (iv), we can show |f(u2) − f(u1)| < |u2 − u1|, while f(u) > h is trivial. As to Assumption (vi), we transform our model as follows:

y*_t = (1/2) y*_{t−1} + η_t = Σ_{i=0}^∞ (1/2)^i η_{t−i},  where y*_t = y_t − 2(2√2 − 1) and η_t = u_t + 2(√z_t − √2) is i.i.d.

To this transformed model, we apply Theorem 14.9 in Davidson (1994). For Assumption (vii), note that the unconditional density satisfies f_t = f because y_t is stationary. We can see that (1/T) Σ E[∇g_t(β)' ∇g_t(β)] < ∞ for Assumption (viii), and that E[∇g_t(β)' ∇g_t(β)] is nonsingular for Assumption (ix).

References

Buchinsky, M. (1995). Quantile regression, Box-Cox transformation model, and the U.S. wage structure, 1963-1987. Journal of Econometrics, 65, 109-154.
Davidson, J. (1994). Stochastic Limit Theory. Oxford University Press.
Hitomi, K. (1997). Smoothed least absolute deviations estimator and smoothed censored least absolute deviations estimator. mimeo.
Horowitz, J. L. (1998). Bootstrap methods for median regression models. Econometrica, 66, 1327-1351.
Newey, W. K. and McFadden, D. (1994). Large sample estimation and hypothesis testing. Handbook of Econometrics (eds. Engle, R. F. and McFadden, D. L.), Volume 4. Elsevier.
Pollard, D. (1985). New ways to prove central limit theorems. Econometric Theory, 1, 295-314.
Pollard, D. (1991). Asymptotics for least absolute deviation regression estimators. Econometric Theory, 7, 186-199.
Portnoy, S. and Koenker, R. (1997). The Gaussian hare and the Laplacian tortoise: Computability of squared-error versus absolute-error estimators. Statistical Science, 12, 279-300.
Sposito, V. A. (1990). Some properties of L_p estimation. Robust Regression: Analysis and Applications (eds. Lawrence, K. D. and Arthur, J. L.). Marcel Dekker.
Weiss, A. A. (1991). Estimating nonlinear dynamic models using least absolute error estimation.
Econometric Theory, 7, 46-68.