Evaluating Latent and Observed Factors in Macroeconomics and Finance

Jushan Bai    Serena Ng

September 15, 2003
Preliminary: Comments Welcome

Abstract

Common factors play an important role in many disciplines of social science. In economics, the factors are the common shocks that underlie the co-movements of a large number of economic time series. The question of interest is whether some observable economic variables are in fact the underlying unobserved factors. We consider statistics to determine if the observed and the latent factors are exactly the same. We also provide simple-to-construct statistics that indicate the extent to which the two sets of factors differ. The key to the analysis is that the space spanned by the latent factors can be consistently estimated when the sample size is large in both the cross-section and the time-series dimensions. The tests are used to assess how well the Fama and French factors, as well as several business cycle indicators, approximate the factors in portfolio and individual stock returns and in a large panel of macroeconomic data.

Department of Economics, 269 Mercer St, NYU, New York, NY 10022. Email: Jushan.Bai@nyu.edu.
Department of Economics, University of Michigan, Ann Arbor, MI 48109. Email: Serena.Ng@umich.edu.
The authors acknowledge financial support from the NSF (grants SBR-9896329, SES-0137084, SES-0136923).

1 Introduction

Many economic theories have found it useful to explain the behavior of observed data by means of a small number of fundamental factors. For example, the arbitrage pricing theory (APT) of Ross (1976) is built upon the existence of a set of common factors underlying all asset returns. In the capital asset pricing model (CAPM) of Sharpe (1964), Lintner (1965), and Merton (1973), the market return is the common risk factor that has pervasive effects on all assets. In the consumption-based CCAPM models of Breeden (1989) and Lucas (1978), aggregate consumption is the source of systematic risk in one-good exchange economies. Because interest rates of different maturities are highly correlated, most models of interest rates have a factor structure. Indeed, Stambaugh (1988) showed the conditions under which an affine-yield model implies a latent-variable structure for bond returns. A fundamental characteristic of business cycles is the comovement of a large number of series, which is possible only if economic fluctuations are driven by common sources.

While the development of theory can proceed without a complete specification of how many factors there are and what they are, empirical testing does not have this luxury. To test the multifactor APT, one has to specify the empirical counterpart of the theoretical risk factors. To test the CAPM, one must specify what the market is, and to test the CCAPM, one must specifically define consumption. To test term structure models, the common practice is to identify instruments that are correlated with the underlying state variables, which amounts to finding proxies for the latent factors. To be able to assess the sources of business cycle fluctuations, one has to identify candidates for the primitive shocks.

A small number of applications have proceeded by replacing the unobserved factors with statistically estimated ones. For example, Lehmann and Modest (1988) used factor analysis, while Connor and Korajczyk (1998) adopted the method of principal components. The drawback is that the statistical factors do not have an immediate economic interpretation. A more popular approach is to rely on intuition and theory as guides to come up with a list of observed variables to serve as proxy or mimicking factors in place of the theoretical factors that are unobserved. Variables such as the unemployment rate in excess of the natural rate and the deviation of output from its potential are popular candidates for the state of the economy. In CAPM analysis, the equal-weighted and value-weighted market returns are often used in place of the theoretical market return. In CCAPM analysis, non-durable consumption is frequently used as the systematic risk factor. By regressing asset returns on a large number of financial and macroeconomic variables and analyzing their explanatory power, Chen, Roll

and Ross (1986) found that the factors in the APT are related to expected and unexpected inflation, interest rate risk, term structure risk, and industrial production. Perhaps the best known observable risk factors are the three Fama and French factors: the market excess return, the small-minus-big factor, and the high-minus-low factor.

There is a certain appeal in associating the latent factors with observed variables, as this facilitates economic interpretation. But empirical testing using the observed factors is valid only if the observed and the fundamental factors span the same space. To date, there does not exist a formal test that provides information about the adequacy of the observed variables as proxies for the unobserved factors. The problem is not so much that the fundamental factors are unobserved, but that consistent estimates of them cannot be obtained under the traditional assumption that T is large and N is fixed, or vice versa.

In this paper, we develop several procedures to compare the observed factors (individually or as a set) with the unobserved factors. The point of departure is that we work with large dimensional panels, that is, datasets with a large number of cross-section units (N) and time series observations (T). By allowing both N and T to tend to infinity, the space spanned by the common factors can be estimated consistently. Our analysis thus combines the statistical approach of Lehmann and Modest (1988) and Connor and Korajczyk (1998) with the economic approach of using observed variables as proxies.

We begin in Section 2 by considering estimation of the factors. Because the estimated factors can be treated as though they are known, Section 3 presents tests to compare the observed variables with the estimated factors, and Section 4 assesses their finite sample properties by simulations. In Section 5, we use the procedures to compare observed variables with factors estimated from portfolio returns, individual stock returns, and a large set of macroeconomic time series. Proofs are given in the Appendix.

2 Preliminaries

Consider the factor representation for a panel of data x_it (i = 1, ..., N; t = 1, ..., T):

x_it = λ_i'F_t + e_it,

where F_t (r × 1) is the factor process and λ_i (r × 1) is the factor loading for unit i. In classical factor analysis, the number of units, N, is fixed and the number of time series observations, T, tends to infinity. For macroeconomic and financial applications, this assumption is not fully satisfactory because data for a large number of cross-section units are often available over a long time span, and in some cases N can be much larger than T (and vice versa). For example, daily data on returns for well over one hundred stocks are available since 1960, and

a large number of price and interest rate series are available for over forty years on a monthly basis. Classical analysis also assumes that e_it is iid over t and independent over i. This implies a diagonal variance-covariance matrix for e_t = (e_1t, ..., e_Nt)', an assumption that is rather restrictive. To overcome these two limitations, we work with high dimensional approximate factor models that allow both N and T to tend to infinity, and in which e_it may be serially and cross-sectionally correlated, so that the covariance matrix of e_t does not have to be diagonal. In fact, all that is needed is that the largest eigenvalue of the covariance matrix of e_t is bounded as N tends to infinity.

Suppose we observe G_t, an (m × 1) vector of economic variables. We are ultimately interested in the relationship between G_t and F_t, but we do not observe F_t. It would seem natural to proceed by regressing x_it on G_t and then using some metric to assess the explanatory power of G_t.[1] The idea is that if G_t is a good proxy for F_t, it should explain x_it. This, however, is not a satisfactory test because even if G_t equals F_t exactly, G_t might still be only weakly correlated with x_it if the variance of the idiosyncratic error e_it is large. In other words, low explanatory power of G_t by itself may not be the proper criterion for deciding whether G_t corresponds to the true factors.

[1] Such an approach was adopted in Chen et al. (1986), for example.

In CAPM analysis, several approaches have been used to check whether inference is sensitive to the use of a proxy in place of the market portfolio. Stambaugh (1982) considered returns from a number of broadly defined markets and concluded that inference is not sensitive to the choice of proxies. This does not, however, imply that the unobservability of the market portfolio has no implication for inference, an issue raised by Roll (1977). Another approach, used in Kandel and Stambaugh (1987) and Shanken (1987), is to obtain the cut-off correlation between the market proxy return and the true market return that would change the conclusion on the hypothesis being tested. They find that if the correlation between G_t and F_t is at or above 0.7, inference will remain intact. However, this begs the question of whether the correlation between the unobserved market return and the proxy variable is in fact that high. While these approaches provide for more cautious inference, the basic problem remains that we do not know how correlated F_t and G_t are.

To be able to test F_t against G_t directly, we must first confront the problem that F_t is not observed. We use the method of principal components to estimate the factors. Let X be the T by N matrix of observations whose ith column is the time series of the ith cross-section unit. Let V_NT be the r × r diagonal matrix consisting of the r largest eigenvalues of XX'/(NT). Let F̃ = (F̃_1, ..., F̃_T)' be the principal component estimates of F under the normalization

F̃'F̃/T = I_r. Then F̃ is comprised of the r eigenvectors (multiplied by √T) associated with the r largest eigenvalues of the matrix XX'/(NT), in decreasing order. Let Λ = (λ_1, ..., λ_N)' be the matrix of factor loadings. The principal components estimator of Λ is Λ̃ = X'F̃/T. Define ẽ_it = x_it − λ̃_i'F̃_t. Denote the norm of a matrix A by ‖A‖ = [tr(A'A)]^{1/2}. The notation M stands for a finite positive constant that does not depend on N and T. The following assumptions are needed for consistent estimation of the factors by the method of principal components.

Assumption A (common factors): E‖F_t‖^4 ≤ M, and T^{-1} Σ_{t=1}^T F_t F_t' →_p Σ_F for some r × r positive definite matrix Σ_F.

Assumption B (heterogeneous factor loadings): The loading λ_i is either deterministic with ‖λ_i‖ ≤ M, or stochastic with E‖λ_i‖^4 ≤ M. In either case, Λ'Λ/N →_p Σ_Λ as N → ∞ for some r × r positive definite non-random matrix Σ_Λ.

Assumption C (time and cross-section dependence and heteroskedasticity):
1. E(e_it) = 0 and E|e_it|^8 ≤ M;
2. E(e_it e_js) = τ_ij,ts, with |τ_ij,ts| ≤ |τ_ij| for all (t, s) and |τ_ij,ts| ≤ |γ_ts| for all (i, j), such that (1/N) Σ_{i,j=1}^N |τ_ij| ≤ M, (1/T) Σ_{t,s=1}^T |γ_ts| ≤ M, and (1/(NT)) Σ_{i,j,t,s} |τ_ij,ts| ≤ M;
3. For every (t, s), E|N^{-1/2} Σ_{i=1}^N [e_is e_it − E(e_is e_it)]|^4 ≤ M.

Assumption D: {λ_i}, {F_t}, and {e_it} are three groups of mutually independent stochastic variables.

Assumptions A and B together imply r common factors. Assumption C allows for limited time series and cross-section dependence in the idiosyncratic component. Heteroskedasticity in both the time and the cross-section dimensions is also allowed. Under stationarity in the time dimension, γ_N(s, t) = γ_N(s − t), though the condition is not necessary. Given Assumption C1, the remaining conditions in C are easily satisfied if the e_it are independent across i and t. The allowance for weak cross-section correlation in the idiosyncratic components leads to the approximate factor structure of Chamberlain and Rothschild (1983), which is more general than a strict factor model in which e_it is assumed uncorrelated across i. Assumption D is standard in factor analysis.
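To fix ideas, here is a minimal numpy sketch of the principal components estimator just described, under the normalization F̃'F̃/T = I_r. The function name and the T × N array layout are our own choices, not the paper's.

```python
import numpy as np

def pc_factors(X, r):
    """Principal components estimates for a T x N panel X.

    Returns F_tilde (T x r, normalized so F'F/T = I_r), the loadings
    Lambda_tilde = X'F/T (N x r), the residuals e_tilde (T x N), and V_NT,
    the diagonal matrix of the r largest eigenvalues of XX'/(NT)."""
    T, N = X.shape
    eigval, eigvec = np.linalg.eigh(X @ X.T / (N * T))  # ascending order
    idx = np.argsort(eigval)[::-1][:r]                  # r largest, decreasing
    F = np.sqrt(T) * eigvec[:, idx]                     # so that F'F/T = I_r
    V = np.diag(eigval[idx])
    Lam = X.T @ F / T
    e = X - F @ Lam.T
    return F, Lam, e, V
```

By Lemma 1 below, F̃ estimates the factor space only up to the rotation H, which is all that the subsequent tests require.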

As is well known, the factor model is fundamentally unidentified, because (λ_i'H)(H^{-1}F_t) = λ_i'F_t for any invertible matrix H. In economics, exact identification of the factors F_t may not always be necessary. If the estimated F_t is used for forecasting, as in Stock and Watson (2002), the distinction between F_t and HF_t is immaterial because they give the same forecast. When stationarity or the cointegrating rank of F_t is of interest, knowing HF_t is sufficient, as F_t has the same cointegrating rank as HF_t. In these situations, as well as for the question we are interested in, namely determining whether F_t is close to G_t, what matters is consistent estimation of the space spanned by the factors.

Lemma 1. Let H = V_NT^{-1}(F̃'F/T)(Λ'Λ/N). Under Assumptions A-D, and as N, T → ∞,

(i) min[N, T] · (1/T) Σ_{t=1}^T ‖F̃_t − HF_t‖^2 = O_p(1);

(ii) √N(F̃_t − HF_t) →_d V^{-1} Q N(0, Γ_t),

where F̃'F/T →_p Q, V_NT →_p V, and Γ_t = lim_{N→∞} (1/N) Σ_{i=1}^N Σ_{j=1}^N E(λ_i λ_j' e_it e_jt).

Part (i), shown in Bai and Ng (2002), establishes that the average squared difference between the estimated factors and the scaled true factors vanishes as N and T tend to infinity. As shown in Bai (2003), √N(F̃_t − HF_t) = V_NT^{-1}(F̃'F/T) N^{-1/2} Σ_{i=1}^N λ_i e_it + o_p(1), from which the sampling distribution of the factor estimates given in (ii) follows. These limiting distributions are asymptotically independent across t if the e_it are serially uncorrelated. The tests to be developed are built upon Lemma 1 and the fact that Γ_t is consistently estimable.

3 Comparing the Estimated and the Observed Factors

We observe G_t and want to know whether its m elements are generated by (or are linear combinations of) the r latent factors F_t. In general, r is an unknown parameter. Consider estimating r using one of the two panel information criteria

r̂ = argmin_{k=0,...,kmax} PCP(k), where PCP(k) = σ̃^2(k) + σ̃^2(kmax) · k · g(N, T),
r̂ = argmin_{k=0,...,kmax} ICP(k), where ICP(k) = log σ̃^2(k) + k · g(N, T),

where σ̃^2(k) = (1/(NT)) Σ_{i=1}^N Σ_{t=1}^T ẽ_it^2, with ẽ_it = x_it − λ̃_i'F̃_t, and λ̃_i and F̃_t estimated by the method of principal components using k factors. In Bai and Ng (2002), we showed that prob(r̂ = r) → 1 as N, T → ∞ if g(N, T) is chosen such that g(N, T) → 0 and min[N, T] · g(N, T) → ∞.
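As a concrete illustration, a sketch of the ICP criterion follows (the PCP version simply replaces the objective with σ̃^2(k) + σ̃^2(kmax)·k·g(N,T)). The penalty coded here is the g_1 used later in Section 5, and the function name is ours.

```python
import numpy as np

def select_r_icp(X, kmax):
    """r_hat = argmin_{k=0..kmax} log(sigma2(k)) + k*g(N,T), with sigma2(k)
    the average squared residual from a k-factor principal components fit
    and penalty g(N,T) = ((N+T)/(NT)) * log(NT/(N+T))."""
    T, N = X.shape
    g = (N + T) / (N * T) * np.log(N * T / (N + T))
    eigval, eigvec = np.linalg.eigh(X @ X.T / (N * T))
    crit = []
    for k in range(kmax + 1):
        if k == 0:
            resid = X                      # no common component
        else:
            F = np.sqrt(T) * eigvec[:, np.argsort(eigval)[::-1][:k]]
            resid = X - F @ (X.T @ F / T).T
        crit.append(np.log(np.mean(resid ** 2)) + k * g)
    return int(np.argmin(crit))
```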

if g(n, ) is chosen such that g(n, ) 0 and min[n, ]g(n, ). Because r can be consistently estimated, in what follows, we simply treat it as though it is known. Obviously, if m < r, the m observed variables cannot span the space of the r latent factors. esting for the adequacy of G as a set is meaningful only if m r. Nonetheless, regardless of the dimension of G t and F t, it is still of interest to know if a particular G t is in fact a fundamental factor. Section 3.1 therefore begins with testing the observed variables one by one. Section 3.2 considers test of the observed variables as a whole. 3.1 esting G t One at a ime Let G jt be an element of the m vector G t. he null hypothesis is that G jt is an exact factor, or more precisely, that there exists a δ j such that G jt = δ jf t for all t. Consider the regression G jt = γ j F t + error. Let γ j be the least squares estimate of γ j and let Ĝjt = γ j F t. Consider the t-statistic τ t (j) = N( Ĝ jt G jt ) ( ) 1/2. var(ĝjt) Let Φ τ α be the α percentage point of the limiting distribution of τ t (j). hen A(j) = 1 t=1 1( τ t(j) > Φ τ α) is the frequency that τ t (j) exceeds the α percent critical value in a sample of size. he A(j) statistic allows G jt to deviate from Ĝjt for a pre-specified number of time points as specified by α. A stronger test is to require G jt not to deviate from Ĝjt by more than sampling error at every t. o this end, we also consider the statistic M(j) = max 1 t τ t (j). he M(j) statistic is a test of how far is the Ĝjt curve from G jt. It is a stronger test than A(j) which tests G jt point by point. Proposition 1 (exact tests) Let G t be a vector of m observed factors, and F t be a vector of r latent factors. Let τ t (j) be obtained with var(ĝjt) replaced by its consistent estimate, var(ĝjt). Consider the statistics: A(j) = 1 1( τ t (j) > Φ α ) (1) t=1 M(j) = max 1 t τ t(j). (2) Under the null hypothesis that G jt = δ F t and as N,, (i) A(j) p 2α, and (ii) P (M(j) x) [2Φ(x) 1], where Φ(x) is the cdf of a standard normal random variable. 6

If F_t were observed, τ_t(j) would be the usual t-statistic, which is approximately normal for large N. Although our tests are based on F̃_t, it is close to HF_t by Lemma 1. Thus Ĝ_jt = γ̂_j'F̃_t is close to G_jt = δ_j'F_t with γ_j' = δ_j'H^{-1}. For this reason, τ̂_t(j) is asymptotically normal, which allows us to use critical values from the standard normal distribution for inference. Since the probability that a standard normal random variable z_t exceeds Φ_α in absolute value equals 2α, we have A(j) →_p 2α as stated. Furthermore, the standardized deviation between G_jt and Ĝ_jt should not exceed the maximum of a sample of T normal random variables. The extreme value distribution can also be used to approximate the distribution of M(j) after proper centering and scaling.

As shown in the Appendix, the asymptotic variance of Ĝ_jt is

var(Ĝ_jt) = γ_j' V_NT^{-1} (F̃'F/T) Γ_t (F'F̃/T) V_NT^{-1} γ_j.

An estimate can be obtained by substituting F̃ for F and noting that F̃'F̃/T is an r-dimensional identity matrix by construction. Thus

var̂(Ĝ_jt) = γ̂_j' V_NT^{-1} Γ̂_t V_NT^{-1} γ̂_j,

where Γ̂_t is a consistent estimate of Γ_t. The natural candidate, (1/N) Σ_{i=1}^N Σ_{j=1}^N λ̃_i λ̃_j' ẽ_it ẽ_jt, is not consistent for fixed T. The problem is akin to the familiar time series problem that summing all the sample autocovariances does not yield a consistent estimate of the spectrum. With cross-section data, the observations have no natural ordering, and the time series solution of truncation is neither intuitive nor possible without a precise definition of distance between observations. However, as shown in Bai and Ng (2003), the following estimator is consistent for Γ_t:

Γ̂_t = (1/N) Σ_{i=1}^N Σ_{j=1}^N λ̃_i λ̃_j' (1/T) Σ_{s=1}^T ẽ_is ẽ_js.   (3)

This covariance estimator is robust to cross-correlation, and accordingly we refer to it as CS-HAC. It makes use of repeated observations over time, so that under covariance stationarity the time series observations can be used to estimate the cross-section correlation matrix. When e_it is cross-sectionally uncorrelated, an estimate of Γ_t is

Γ̂_t = (1/(NT)) Σ_{s=1}^T Σ_{i=1}^N ẽ_is^2 λ̃_i λ̃_i'.   (4)

If, in addition, time series homoskedasticity is assumed, then Γ_t = Γ does not depend on t, and a consistent estimate of Γ is

Γ̂ = σ̂_e^2 (Λ̃'Λ̃/N),   (5)

where σ̂_e^2 = (1/(NT)) Σ_{i=1}^N Σ_{t=1}^T ẽ_it^2 and Λ̃ is the N × r matrix of estimated factor loadings.
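All three estimates are simple functions of the principal components output. Below is a numpy sketch under the reconstruction above; the `kind` labels and the function name are ours, and note that, as written in (3)-(5), each variant averages over time, so the resulting var̂(Ĝ_jt) is constant across t.

```python
import numpy as np

def var_G_hat(gamma, Lam, e, V, kind="cs-hac"):
    """var_hat(G_hat_jt) = gamma' V^{-1} Gamma_hat V^{-1} gamma, with
    Gamma_hat computed as in (3) ('cs-hac'), (4) ('hetero'), or (5) ('homo').
    Shapes follow the earlier sketches: Lam is N x r, e is T x N."""
    T, N = e.shape
    if kind == "cs-hac":                    # (3): robust to cross-correlation
        Sigma = e.T @ e / T                 # N x N matrix of (1/T) sum_s e_is e_js
        Gamma = Lam.T @ Sigma @ Lam / N
    elif kind == "hetero":                  # (4): cross-sectionally uncorrelated
        w = np.mean(e ** 2, axis=0)         # (1/T) sum_s e_is^2, one per unit i
        Gamma = (Lam * w[:, None]).T @ Lam / N
    else:                                   # (5): homoskedastic errors
        Gamma = np.mean(e ** 2) * (Lam.T @ Lam) / N
    Vinv = np.linalg.inv(V)
    return float(gamma @ Vinv @ Gamma @ Vinv @ gamma)
```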

Testing whether G_jt is an exact factor is then quite simple. For example, if α = 0.025, the fraction of the τ̂_t(j) that exceed 1.96 in absolute value should be close to 5% for large N and T; thus A(j) should be close to 0.05. Furthermore, M(j) should not exceed the maximum (in absolute value) of a vector of N(0, 1) random variables of length T. This maximum value increases with T, but can easily be tabulated by simulation or theoretical computation. The 5% critical values for the M(j) test when T = 50, 100, 200, 400 are 3.283, 3.474, 3.656, and 3.830, respectively.

Requiring G_jt to be an exact linear combination of the latent factors is rather strong. An observed series might match the variations of the latent factors very closely and yet not be an exact factor in a statistical sense. Measurement error, for example, could be responsible for deviations between the observed variables and the latent factors. In such instances, it would be useful to gauge how large the departures are.

Proposition 2 (approximate tests). Suppose G_jt = δ_j'F_t + ε_jt, with ε_jt ~ (0, σ_ε^2(j)). Let Ĝ_jt = γ̂_j'F̃_t, where γ̂_j is obtained by least squares from a regression of G_jt on F̃_t. Let ε̂_jt = G_jt − Ĝ_jt. Then (i) σ̂_ε^2(j) = (1/T) Σ_{t=1}^T ε̂_jt^2 →_p σ_ε^2(j), and (ii)

√T(ε̂_jt − ε_jt) / s_jt →_d N(0, 1),

where s_jt^2 = F_t' Σ_F^{-1} F_t σ_ε^2(j) + (T/N) var(Ĝ_jt).

We generically refer to ε_jt as a measurement error, even though it might be due to systematic differences between F_t and G_jt. If F_t were observed, the error in fitting G_jt would be confined to the first term of s_jt^2, as is standard in regression analysis. But because we regress G_jt on F̃_t, ε̂_jt also contains the error from the estimation of F_t. Proposition 2 is implied by results in Bai and Ng (2003), where prediction intervals for models augmented with factors estimated by the method of principal components are derived. A consistent estimator of s_jt^2 is

ŝ_jt^2 = F̃_t'F̃_t σ̂_ε^2(j) + (T/N) var̂(Ĝ_jt),

where var̂(Ĝ_jt) can be constructed as discussed above. Notably, the error from having to estimate F_t is decreasing in N.

Explicit consideration of measurement error allows us to construct, for each t, a confidence interval for ε_jt. For example, at the 95% level, the confidence interval is

(ε_jt^-, ε_jt^+) = (ε̂_jt − 1.96 T^{-1/2} ŝ_jt, ε̂_jt + 1.96 T^{-1/2} ŝ_jt).   (6)
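Under the same conventions as the earlier sketches, the bands in (6) can be formed as follows; `var_Ghat` is the scalar from the previous snippet, and all names are ours.

```python
import numpy as np

def measurement_error_bands(F, G_j, var_Ghat, N, crit=1.96):
    """Pointwise 95% confidence intervals (6) for the measurement error,
    using s2_jt = F_t'F_t * sigma2_eps + (T/N) * var(G_hat_jt), with the
    normalization F'F/T = I_r (so Sigma_F is estimated by I_r)."""
    T = G_j.shape[0]
    eps = G_j - F @ (F.T @ G_j / T)              # eps_hat_jt, t = 1..T
    sigma2_eps = np.mean(eps ** 2)               # consistent by Prop. 2(i)
    s2 = np.sum(F ** 2, axis=1) * sigma2_eps + (T / N) * var_Ghat
    half = crit * np.sqrt(s2 / T)                # 1.96 * T^{-1/2} * s_jt
    return eps - half, eps + half
```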

If G_jt is an exact factor, zero should lie in the confidence interval for each t. It could be of economic interest to know at which time points G_jt deviates from Ĝ_jt. Instead of comparing G_jt with Ĝ_jt at each t, two overall statistics can also be constructed:

NS(j) = var(ε̂(j)) / var(Ĝ(j)),   (7)
R^2(j) = var(Ĝ(j)) / var(G(j)).   (8)

The NS(j) statistic is simply the noise-to-signal ratio. If G_jt is an exact factor, the population value of NS(j) is zero. A large NS(j) thus indicates important departures of G_jt from the latent factors. A limitation of this statistic is that it leaves open the question of what is small and what is large. For this reason, we also consider R^2(j), which should be unity if G_jt is an exact factor and zero if the observed variable is irrelevant. The practitioner can draw inference on how good G_jt is as a proxy factor by picking cut-off points for NS(j) and R^2(j), in the same way we select the size of a test. These statistics are useful because instead of asking how large a measurement error in the proxy variable would overturn the hypothesis, we now have consistent estimates of the size of the measurement error.

3.2 Testing G_t as a Set

Suppose there are r latent and m observed factors. Whether r exceeds m or vice versa, it is useful to gauge the general coherence between F_t and G_t. To this end, we consider the canonical correlations between F̃_t and G_t. Let S_F̃F̃ and S_GG be the sample variance-covariance matrices of F̃ and G, respectively. The sample squared canonical correlations, denoted ρ̂_k^2, k = 1, ..., min[m, r], are the largest eigenvalues of the r × r matrix S_F̃F̃^{-1} S_F̃G S_GG^{-1} S_GF̃. It is well known that if F and G are observed and are normally distributed,

z_k = √T(ρ̂_k^2 − ρ_k^2) / [2ρ_k(1 − ρ_k^2)] →_d N(0, 1), k = 1, ..., min[m, r],

see Anderson (1984). Muirhead and Waternaux (1980) provide results for non-normal distributions. For elliptical distributions, they showed that

z_k = [1/(1 + κ/3)] √T(ρ̂_k^2 − ρ_k^2) / [2ρ_k(1 − ρ_k^2)] →_d N(0, 1),

where κ is the excess kurtosis of the distribution.[2] Their results cover the multivariate normal, some contaminated (mixture) normals, and the multivariate t, which are all elliptical distributions. Our analysis is complicated by the fact that F is not observed but estimated. Nevertheless, the following holds.

[2] A random vector Y is said to have an elliptical distribution if its density is of the form c|Ω|^{-1/2} g(y'Ω^{-1}y) for some constant c, positive-definite matrix Ω, and nonnegative function g.

Proposition 3. Let ρ̂_1^2, ..., ρ̂_p^2 be the largest p = min[m, r] sample squared canonical correlations between F̃ and G, where F̃_t is the principal components estimate of F_t. Let N, T → ∞ with √T/N → 0.

(i) If (F_t, G_t) are iid normally distributed, then

z̃_k = √T(ρ̂_k^2 − ρ_k^2) / [2ρ_k(1 − ρ_k^2)] →_d N(0, 1), k = 1, ..., min[m, r].   (9)

(ii) If (F_t, G_t) are iid and elliptically distributed, then

z̃_k = [1/(1 + κ/3)] √T(ρ̂_k^2 − ρ_k^2) / [2ρ_k(1 − ρ_k^2)] →_d N(0, 1), k = 1, ..., min[m, r],   (10)

where κ is the excess kurtosis.

Proposition 3 establishes that z̃_k is asymptotically the same as z_k, so that having to estimate F has no effect on the sampling distribution of the canonical correlations. This allows us to construct (1 − α) percent confidence intervals for the population canonical correlations as follows. For k = 1, ..., min[m, r],

(ρ_k^-, ρ_k^+) = (ρ̂_k^2 − Φ_α · 2ρ̂_k(1 − ρ̂_k^2)/√T, ρ̂_k^2 + Φ_α · 2ρ̂_k(1 − ρ̂_k^2)/√T).   (11)

If every element of G_t is an exact factor, all the non-zero population canonical correlations should be unity. The confidence interval for the smallest non-zero canonical correlation is thus a bound on the weakest correlation between F_t and G_t. The only non-zero canonical correlation between a single series, say G_jt, and F_t is ρ_1^2. But this is simply the coefficient of determination from a projection of G_jt onto F_t, and it thus coincides with R^2(j) as defined in (8). The formula in (11) can therefore be used to obtain a confidence interval for R^2(j) as well.
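The squared canonical correlations and the intervals in (11) can be computed as in the sketch below. We use a generalized symmetric eigenvalue solver to avoid explicit matrix inversion; the function name is ours, and scipy is assumed available.

```python
import numpy as np
from scipy.linalg import eigh

def canonical_corr_ci(F, G, crit=1.96):
    """Squared canonical correlations between F (T x r) and G (T x m),
    in decreasing order, with the confidence intervals of equation (11)."""
    T = F.shape[0]
    Fc, Gc = F - F.mean(0), G - G.mean(0)
    S_ff, S_gg, S_fg = Fc.T @ Fc / T, Gc.T @ Gc / T, Fc.T @ Gc / T
    # eigenvalues of S_ff^{-1} S_fg S_gg^{-1} S_gf via A v = rho2 * S_ff v
    A = S_fg @ np.linalg.solve(S_gg, S_fg.T)
    rho2 = np.sort(eigh(A, S_ff, eigvals_only=True))[::-1]
    rho2 = np.clip(rho2[: min(F.shape[1], G.shape[1])], 0.0, 1.0)
    rho = np.sqrt(rho2)
    half = crit * 2 * rho * (1 - rho2) / np.sqrt(T)
    return rho2, rho2 - half, rho2 + half
```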

4 Simulations

We use simulations to assess the finite sample properties of the tests. Throughout, we assume F_kt ~ N(0, 1), k = 1, ..., r, and e_it ~ N(0, σ_e^2(i)), where e_it is uncorrelated with e_jt for i ≠ j, i, j = 1, ..., N. When σ_e^2(i) = σ_e^2 for all i, we have the case of homogeneous data. The factor loadings are standard normal, i.e. λ_ij ~ N(0, 1), j = 1, ..., r, i = 1, ..., N. The data are generated as x_it = λ_i'F_t + e_it. In the experiments, we assume that there are r = 2 factors. The data are standardized to have mean zero and unit variance prior to estimation of the factors by the method of principal components.

The observed factors are generated as G_jt = δ_j'F_t + ε_jt, where δ_j is an r × 1 vector of weights, and ε_jt ~ σ_ε(j) · N(0, var(δ_j'F_t)). We test m = 7 observed variables, parameterized as follows:

Parameters for G_jt

  j      1    2    3    4    5    6    7
  δ_j1   1    1    1    1    1    1    0
  δ_j2   1    0    1    0    1    0    0
  σ_ε    0    0    .2   .2   2    2    1

The first two variables, G_1t and G_2t, are exact factors since σ_ε = 0. Factors three to six are linear combinations of the two latent factors contaminated by errors. The variance of this error is small relative to the variation of the factors for G_3t and G_4t, but it is large for G_5t and G_6t. Finally, G_7t is an irrelevant factor, as it is simply an N(0, 1) random variable. Prior to testing, the G_t's are also standardized to have mean zero and unit variance.
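For completeness, a sketch of this Monte Carlo design for the homogeneous case follows; the seed handling and function name are our own choices.

```python
import numpy as np

def simulate_design(N, T, r=2, sigma_e=1.0, seed=0):
    """Generate x_it = lambda_i'F_t + e_it with F_kt, lambda_ij ~ N(0,1) and
    e_it ~ N(0, sigma_e^2), cross-sectionally uncorrelated, plus the seven
    observed proxies of the table: G_jt = delta_j'F_t + eps_jt, where
    sd(eps_jt) = sigma_eps(j) * sd(delta_j'F_t) and G_7t is pure N(0,1) noise."""
    rng = np.random.default_rng(seed)
    F = rng.standard_normal((T, r))
    Lam = rng.standard_normal((N, r))
    X = F @ Lam.T + sigma_e * rng.standard_normal((T, N))
    delta = np.array([[1, 1], [1, 0], [1, 1], [1, 0], [1, 1], [1, 0], [0, 0]], float)
    sigma_eps = np.array([0.0, 0.0, 0.2, 0.2, 2.0, 2.0, 1.0])
    signal = F @ delta.T                       # T x 7
    sd = signal.std(axis=0)
    sd[sd == 0] = 1.0                          # column 7 carries no signal
    G = signal + sigma_eps * sd * rng.standard_normal((T, 7))
    # in the experiments, X and the G's are then standardized to mean 0, variance 1
    return X, F, G
```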

We conduct 1000 replications using Matlab 6.5 and report results with α set to 0.025. Results for homoskedastic, heteroskedastic, and cross-correlated errors, with var̂(Ĝ_t) defined as in (5), (4), and (3), are given in Tables 1a, 1b, and 1c, respectively.

According to theory, A(j) should be 2α if G_jt is a true factor, and unity if the factor is irrelevant. Furthermore, M(j) should exceed the critical value at percentage point α with probability 2α. Columns four and five report the properties of these tests averaged over the 1000 replications. Indeed, for G_1t and G_2t, the rejection rates are close to the nominal size of 5%, even in small samples. For the irrelevant factor G_7t, the tests reject the null hypothesis with high probability, showing that the tests have power. The power of the M(j) test is especially impressive: even when heteroskedasticity in the errors has to be accounted for, the rejection rate is 100 percent for all sample sizes considered. This means that even with (N, T) = (50, 50), the test can very precisely determine whether an observed variable is an exact factor. The NS(j) statistic reinforces the conclusion that G_1t and G_2t are exact factors and that G_7t has no coherence with the latent factors. Table 1 also reports the point estimate of R^2(j). The two exact factors have estimates of R^2(j) well above 0.95, while uninformative factors have R^2(j) well below 0.05, with tight confidence intervals.[3]

[3] Since R^2(j) is bounded between zero and one, the estimated lower bound should be interpreted in this light.

Because we know the data generating process, we can assess whether or not the confidence intervals around ε̂_jt give correct inference. In the column labelled CI_ε(j) in Table 1, we report the probability that the true ε_jt lies inside the two-tailed 95% confidence interval defined by (6). Evidently, the coverage is excellent for ε_1t, ε_2t, and ε_7t. The result for ε_7t might seem surprising at first, but it in fact shows that the measurement error can be precisely estimated even when F_t and G_jt are totally unrelated.

In theory, A(j) and M(j) should always reject the null hypothesis when G_3t, G_4t, G_5t, and G_6t are being tested, since none of these are exact factors. Table 1 shows that this is the case when N and T both exceed 100. For smaller N and/or T, the power of the tests depends on how large the measurement errors are. For G_5t and G_6t, which have a high noise-to-signal ratio of 4, M(j) still rejects with probability one when N and T are small, while A(j) has a respectable rejection rate of 0.85. However, when the noise-to-signal ratio is only .04, as for G_3t and G_4t, both A(j) and M(j) under-reject the null hypothesis, and the problem is much more severe for A(j). Notice that when the noise-to-signal ratio is .04, R^2(j) remains at around .95. This would be judged high in a typical regression analysis, and yet we would want to reject the null hypothesis that G_jt is an exact factor. It is for cases such as this that having a sense of how big the measurement error is becomes useful. In our experience, an NS(j) above .5 and/or an R^2(j) below 0.95 is symptomatic of non-negligible measurement errors. By these guides, G_3t and G_4t are strong proxies for the latent factors. It is worth remarking that whether the measurement error has large or small variance, the confidence intervals constructed according to (6) bracket the true error quite precisely. This is useful in empirical work, since we can learn whether the discrepancies between G_jt and the latent factors are systematic or occasional.

Table 2 reports results for testing four sets of observed factors using canonical correlations. These are formed from G_1t to G_6t as defined above, plus four N(0, 1) random variables unrelated to F_t, labelled G_7t to G_10t. The four sets are defined as follows:

Set 1: G_3t, G_4t, and G_7t
Set 2: G_5t, G_6t, and G_7t
Set 3: G_1t, G_2t, and G_7t
Set 4: G_7t, G_8t, G_9t, and G_10t

Because min[m, r] is 2 in each of the four cases, there are two sample canonical correlations; both should be unity if every element of G_t is an exact factor, and zero if the set is unrelated to F_t. We use (11) to construct confidence intervals for ρ̂_2^2, i.e. the smaller of the two non-zero canonical correlations. Table 2 shows that Set 3 has maximal correlation with F_t, as should be the case since G_1t and G_2t are exact factors, and a weight of zero on G_7t would indeed maximize the correlation between G and F. When Set 4 is being tested, zero is in the confidence interval, as should be the case since this is a set of irrelevant factors. For Set 2, which has factors contaminated by large measurement errors, the test also correctly detects a very

small canonical correlation. When measurement errors are small but non-zero, the sample correlations are non-zero but also not unity. The practitioner again has to take a stand on whether a set of factors is useful. The values of ρ̂_2^2 for Set 1 are around .9, below our cut-off point of .95. We would thus be more concerned about accepting Set 1 as a valid set than about accepting its elements (i.e. G_3t and G_4t) as individually valid factors.

Finally, Figures 1 and 2 depict the performance of exact factor testing and measurement error testing. Data are generated with two factors. Figure 1 displays the confidence intervals for the factor processes, along with the true factor processes, under exact factor testing. The confidence intervals are

[Ĝ_jt − 1.96 N^{-1/2} var̂(Ĝ_jt)^{1/2}, Ĝ_jt + 1.96 N^{-1/2} var̂(Ĝ_jt)^{1/2}]

for t = 1, ..., T and j = 1, 2. The left panel is for the first factor with N = 50, 100, and the right panel is for the second factor with the different N's. The confidence intervals become narrower for larger N. Next, we do not assume exact factors are observable; instead, G_jt = δ_j'F_t + ε_jt for j = 1, 2 are observable. The measurement errors ε_jt are estimated for t = 1, ..., T and j = 1, 2. Confidence intervals are constructed according to Proposition 2 as

[ε̂_jt − 1.96 T^{-1/2} ŝ_jt, ε̂_jt + 1.96 T^{-1/2} ŝ_jt].

These confidence intervals are plotted in Figure 2, along with the true error processes ε_jt, which are known in simulations. It is clear that the confidence intervals cover the true processes. In contrast with exact factor testing, these confidence intervals do not become narrower as N increases. This is due to parameter uncertainty about δ, which is estimated with T observations. When T increases, the confidence bands become narrower.

5 Empirical Applications

In this section, we take our tests to the data. Factors estimated from portfolios, stock returns, and a large set of economic variables will be tested against various G_jt's. The base factors are the three Fama and French (FF) factors, denoted Market, SMB, and HML.[4] For annual data, we also consider aggregate consumption growth (DC) and the CAY variable suggested by Lettau and Ludvigson (2001). For monthly data, we include, in addition to the FF factors,

variables considered in Chen et al. (1986). These are the first lag of annual consumption growth (DC), inflation (DP), the growth rate of industrial production (DIP), a term premium (TERM), and a risk premium (RISK).[5] In each case, we analyze the data from 1960-1996. We also split the sample at various points to look for changes in the relations between the observed and the latent factors over the forty years. The data are standardized to mean zero and unit variance prior to estimation by the method of principal components. The G_t's are likewise standardized prior to implementation of the tests. In view of the properties of the data, we only report results for heteroskedastic errors, with var̂(Ĝ_t) defined as in (4). We determine the number of factors with g(N, T) specified as

g_1(N, T) = ((N + T)/(NT)) log(NT/(N + T)),
g_2(N, T) = ((N + T)/(NT)) log(min[N, T]).

5.1 Portfolios

In this application, x_it are monthly or annual observations on 100 portfolios available from Kenneth French's web site.[6] These are the intersections of 10 portfolios formed on size (market equity) and 10 portfolios formed on the ratio of book equity to market equity. A total of 89 portfolios are continuously available for the full sample. Depending on the sample period, PCP and ICP select between 4 and 6 factors. We set r = 6 in all subsequent tests.

The results are reported in Table 3. For annual data, the A(j) test rejects the null hypothesis of exact factors in more than 5% of the sample. The critical value for the M(j) test is 3.28. It cannot reject the null hypothesis that SMB is an exact factor at the 5% level, nor HML and Market at around the 10% level. However, the evidence does not support CAY and aggregate consumption growth as exact factors, as NS(j) and R^2(j) indicate the presence of non-trivial measurement errors. The canonical correlations suggest only three well-defined relations between F̃_t and G_t; the remaining relations are extremely weak. The canonical correlations between the three FF factors alone and F̃_t are .970, .962, and .950, respectively; compared with the results when

[4] Small minus big (SMB) is the difference between the average return of three small portfolios and three big portfolios. High minus low (HML) is the average return on two value portfolios minus that on two growth portfolios. See Fama and French (1993). Market = R_m − R_f is the value-weighted return on all NYSE, AMEX, and NASDAQ stocks minus the one-month treasury bill rate from Ibbotson Associates.
[5] The data are taken from Citibase. DC is the growth rate of PCEND, DIP is the growth rate of IP, and DP is the growth rate of PUNEW. The risk premium, RISK, is the BAA-rated bond rate (FYBAAC) minus the 10-year government bond rate (FYG10). The term premium, TERM, is the 10-year government bond rate (FYG10) minus the three-month treasury bill rate (FYGM3).
[6] The url is mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library/det_100_port_sz.html.

CAY and DC are also included, little is gained by adding CAY and DC. This suggests that the FF factors underlie the three non-zero canonical correlations in the five-variable set.

Results for monthly data are given in Table 4. Several features are noteworthy. First, the FF factors continue to be strong proxies for systematic risk, and there is some weak evidence that SMB is an exact factor in the 1988-1996 sub-sample. Of the three FF factors, Market has the highest R^2(j) with the unobserved factors. Second, the macroeconomic variables have unstable relations with the unobserved factors over time, but the relations have become stronger since the 1980s. Inflation was a good proxy factor between 1960 and 1982, but over the entire sample RISK has the highest R^2(j) and the lowest NS(j). In contrast, the relation between the FF factors and the latent factors appears to be quite stable, displaying little variation in either R^2(j) or NS(j) over time. Third, three of the eight sample canonical correlations between F̃_t and G_t are practically zero, from which we can conclude that the eight observed factors considered cannot span the true factor space. Because the five non-zero canonical correlations are in fact far from unity, we can also conclude that the measurement errors are significant enough that the eight observed variables cannot even span a five-dimensional subspace of the true factor space.

5.2 Stock Returns

We next apply our tests to monthly stock returns. Because portfolios are aggregated from stocks, individual stock returns should have larger idiosyncratic variances than portfolios, and the common components in individual returns can be expected to be smaller than those in portfolios. It is thus of interest to see whether good proxy variables can be found when the data have larger idiosyncratic noise. Data for 190 firms are available from CRSP over the entire sample. The two PCP criteria select 6 and 5 factors in this panel, while the ICP always selects 4. We set r to 6 in the analysis.

The results in Table 5 reveal that Market continues to be a strong proxy factor with a low noise-to-signal ratio. However, SMB, HML, and especially the macroeconomic variables now appear to be poor proxies for the factors in the returns data. The best of the macroeconomic proxy variables, RISK, still has a noise-to-signal ratio that is ten times larger than that of the Market factor.

The above analysis indicates that the factors in annual portfolios are better approximated by observed variables than the factors in monthly portfolios, and finding proxies for the factors in monthly portfolios is in turn a less challenging task than finding observed variables to proxy the factors in individual returns. This is because high frequency and/or

disaggregated data are more likely to be contaminated by noise. Thus, even though more data are available at high frequencies and at disaggregated levels, they are less reliable proxies for the systematic variations in the data, and inference using observed variables as proxies for the common factors could be inaccurate. Of all the variables considered, the most satisfactory proxy for the latent factors in both portfolios and individual stock returns appears to be the Market factor as described in Fama and French (1993). Its signal-to-noise ratio is systematically high, and its coherence with the latent factors is robust across sample periods.

5.3 Macroeconomic Factors

A fundamental characteristic of business cycles is the comovement of a large number of economic series. Such a phenomenon can be rationalized by common factors being the driving force of economic fluctuations. This has been used as a justification for representing the unobserved state of the economy by variables thought to be dominated by variations arising from common sources. Industrial production, the unemployment rate, and various interest rate spreads have been used for this purpose. However, there has been no formal analysis of how good these proxy variables are.

We estimate the latent factors from the 150 monthly series considered in Stock and Watson (2002). This dataset consists of series on output, consumption, employment, investment, prices, wages, interest rates, and other financial variables such as exchange rates. In addition to the five macroeconomic variables considered throughout, we also tested whether the unemployment rate (LHUR) is a common factor. For this application, we also consider a post-Bretton Woods sub-sample.

The results are reported in Table 6. Given the noise in monthly data, rejecting the null hypothesis that the variables are exact factors is hardly surprising. Although industrial production is a widely used indicator of economic activity, it is systematically dominated by DC, DP, RISK, and the unemployment rate. As in the results for asset returns, the relations between the macroeconomic variables and the factors appear to be unstable. While inflation was the best proxy factor before 1983, the unemployment rate has done extremely well since. The non-zero canonical correlations also reveal instability in the relations, as they are higher in the later sub-samples than in the earlier ones. One interpretation of this result is that idiosyncratic shocks were more important in the sixties and seventies, but common shocks have become more important in recent years as sources of economic fluctuations.

To the extent that the FF factors are good proxies for the factors in portfolios, one might

wonder whether the common shocks to economic activity are also the common shocks to portfolio returns. To shed some light on this issue, we also tested whether the FF factors are related to the factors in the macroeconomic data. As seen from Table 6, there is hardly any evidence of a relation between the FF factors and the factors in the panel of macroeconomic variables.

6 Conclusion

It is common practice in empirical work to proxy unobserved common factors by observed variables, yet hardly any formal procedures exist to assess whether the observed variables equal, or are close to, the factors. This paper exploits the fact that the space spanned by the common factors can be consistently estimated from large dimensional panels. We develop several tests that can serve as guides to which variables are close to the factors. The tests have good properties in simulations. We estimate the common factors in portfolios, in individual returns, and in a large set of macroeconomic data. The Fama and French factors approximate the factors in portfolios and individual stock returns much better than any single macroeconomic variable. Although industrial production is a widely used indicator of economic activity, inflation, the unemployment rate, and the risk premium have stronger coherence with the macroeconomic factors. Inflation was a good proxy for the factors in portfolios and in macroeconomic data prior to the 1980s, but its importance has diminished since.

Appendix

Proof of Proposition 2. Rewrite G_t = δ'F_t + ε_t as G_t = δ'H^{-1}F̃_t + ε_t + δ'H^{-1}(HF_t − F̃_t), or

G_t = γ'F̃_t + ε_t + γ'(HF_t − F̃_t),

where γ' = δ'H^{-1}. In matrix notation,

G = F̃γ + ε + (FH' − F̃)γ.   (12)

The least squares estimator of γ is

γ̂ = (F̃'F̃/T)^{-1}(F̃'G/T) = T^{-1}F̃'G.

Substituting G of (12) into γ̂, we have

γ̂ = γ + T^{-1}F̃'ε + T^{-1}F̃'(FH' − F̃)γ,

so that

√T(γ̂ − γ) = T^{-1/2}F̃'ε + T^{-1/2}F̃'(FH' − F̃)γ.

From Lemma B.3 of Bai (2003), the last term is O_p(√T · min[N, T]^{-1}) → 0 provided √T/N → 0. Thus √T(γ̂ − γ) = T^{-1/2}F̃'ε + o_p(1). Furthermore,

T^{-1/2}F̃'ε = H · T^{-1/2}F'ε + T^{-1/2}(F̃ − FH')'ε = H · T^{-1/2}F'ε + o_p(1),

where we use the fact that T^{-1/2}(F̃ − FH')'ε = o_p(1) if √T/N → 0. By the CLT, T^{-1/2}F'ε = T^{-1/2} Σ_{t=1}^T F_t ε_t is asymptotically normal, with asymptotic variance the probability limit of σ_ε^2 F'F/T. So the asymptotic variance of H · T^{-1/2}F'ε is the limit of σ_ε^2 H(F'F/T)H'. But H(F'F/T)H' = I + o_p(1); see Bai (2003). Thus √T(γ̂ − γ) →_d N(0, σ_ε^2 I).

Now G_t = γ̂'F̃_t + ε̂_t, and we also have G_t = γ'F̃_t + ε_t + γ'(HF_t − F̃_t). Equating, we have

γ̂'F̃_t + ε̂_t = γ'F̃_t + ε_t + γ'(HF_t − F̃_t),

which implies

ε̂_t − ε_t = −(γ̂ − γ)'F̃_t + γ'(HF_t − F̃_t).

Thus

√T(ε̂_t − ε_t) = −F̃_t' √T(γ̂ − γ) − (√T/√N) γ' √N(F̃_t − HF_t).

Since √T(γ̂ − γ) →_d N(0, σ_ε^2 I), γ'√N(HF_t − F̃_t) →_d N(0, γ'V^{-1}QΓ_tQ'V^{-1}γ), and F̃_t'F̃_t = F_t'H'HF_t + o_p(1) = F_t'(F'F/T)^{-1}F_t + o_p(1), the stated result follows. ∎

Proof of Proposition 3. Let B̂ = S_F̃F̃^{-1} S_F̃G S_GG^{-1} S_GF̃ and B = S_FF^{-1} S_FG S_GG^{-1} S_GF. Let ρ̂_k and ρ_k be the canonical correlations obtained from B̂ and B, respectively. Because eigenvalues are continuous functions, we will have √T(ρ̂_k^2 − ρ_k^2) →_p 0 provided that √T(B̂ − B) →_p 0; that is, the canonical correlations of B̂ have the same limiting distributions as those of B when B̂ and B are asymptotically equivalent.

We next establish √T(B̂ − B) →_p 0. First note that because H is of full rank, the canonical correlations of B are the same as those of B* = S_{HF,HF}^{-1} S_{HF,G} S_GG^{-1} S_{G,HF}. Thus, it suffices to show that √T(B̂ − B*) →_p 0. This is implied by

√T(S_F̃F̃^{-1} − S_{HF,HF}^{-1}) →_p 0,   (13)
√T(S_F̃G − S_{HF,G}) →_p 0.   (14)

Consider (13). Now √T(S_F̃F̃^{-1} − S_{HF,HF}^{-1}) = S_F̃F̃^{-1} · √T(S_{HF,HF} − S_F̃F̃) · S_{HF,HF}^{-1}. Thus, (13) is implied by √T(S_{HF,HF} − S_F̃F̃) →_p 0, or

√T(HF'FH'/T − F̃'F̃/T) →_p 0.   (15)

By Lemmas B.2 and B.3 of Bai (2003),

√T · F'(F̃ − FH')/T = O_p(√T · min[N, T]^{-1}),
√T · (F̃ − FH')'(F̃ − FH')/T = O_p(√T · min[N, T]^{-1}).

Adding and subtracting terms, (15) is a combination of the two quantities above and is therefore O_p(√T · min[N, T]^{-1}) → 0 if √T/N → 0. For (14),

√T(F̃'G/T − HF'G/T) = √T · (F̃ − FH')'G/T.

But (F̃ − FH')'G/T = O_p(min[N, T]^{-1}); see Lemma B.2 of Bai (2003). Thus (14) is O_p(√T/min[N, T]) →_p 0, establishing √T(B̂ − B*) = o_p(1) and thus the proposition. ∎

Table 1a: Tests for G_jt: Homoskedastic Variance

  N    T    j   A(j)  M(j)    NS(j)  CI_ε(j)  R^2(j)  R^2-(j)  R^2+(j)
  50   50   1   0.03  0.02     0.03     0.97    0.97     0.96     0.98
  50   50   2   0.03  0.03     0.03     0.97    0.97     0.96     0.98
  50   50   3   0.17  0.64     0.07     0.97    0.94     0.92     0.95
  50   50   4   0.16  0.60     0.07     0.97    0.94     0.92     0.95
  50   50   5   0.85  1.00     5.05     0.95    0.22     0.13     0.32
  50   50   6   0.85  1.00     5.08     0.95    0.22     0.12     0.32
  50   50   7   0.95  1.00   339.24     0.96    0.04    -0.01     0.09
  100  50   1   0.03  0.01     0.01     0.97    0.99     0.98     0.99
  100  50   2   0.03  0.01     0.01     0.97    0.99     0.98     0.99
  100  50   3   0.27  0.94     0.05     0.97    0.95     0.94     0.96
  100  50   4   0.27  0.94     0.05     0.97    0.95     0.94     0.96
  100  50   5   0.89  1.00     4.79     0.95    0.23     0.13     0.33
  100  50   6   0.89  1.00     4.77     0.95    0.22     0.13     0.32
  100  50   7   0.96  1.00   290.41     0.94    0.04    -0.00     0.09
  50   100  1   0.03  0.01     0.03     0.97    0.97     0.97     0.98
  50   100  2   0.03  0.01     0.03     0.97    0.97     0.97     0.98
  50   100  3   0.17  0.74     0.07     0.97    0.94     0.92     0.95
  50   100  4   0.17  0.73     0.07     0.97    0.94     0.92     0.95
  50   100  5   0.85  1.00     4.52     0.96    0.21     0.14     0.28
  50   100  6   0.86  1.00     4.49     0.96    0.21     0.14     0.28
  50   100  7   0.96  1.00  1780.38     0.97    0.02    -0.00     0.04
  200  100  1   0.03  0.01     0.01     0.97    0.99     0.99     0.99
  200  100  2   0.03  0.01     0.01     0.97    0.99     0.99     0.99
  200  100  3   0.40  1.00     0.05     0.97    0.96     0.95     0.96
  200  100  4   0.40  1.00     0.05     0.97    0.96     0.95     0.96
  200  100  5   0.93  1.00     4.32     0.95    0.21     0.14     0.28
  200  100  6   0.92  1.00     4.27     0.95    0.21     0.14     0.28
  200  100  7   0.98  1.00   431.39     0.96    0.02    -0.00     0.04
  100  200  1   0.03  0.01     0.01     0.97    0.99     0.98     0.99
  100  200  2   0.03  0.00     0.01     0.97    0.99     0.98     0.99
  100  200  3   0.27  1.00     0.05     0.97    0.95     0.94     0.96
  100  200  4   0.27  1.00     0.05     0.97    0.95     0.94     0.96
  100  200  5   0.90  1.00     4.23     0.96    0.20     0.15     0.25
  100  200  6   0.90  1.00     4.15     0.96    0.21     0.16     0.25
  100  200  7   0.98  1.00   946.21     0.96    0.01    -0.00     0.02

The 5% critical values for the M(j) test when T = 50, 100, 200, 400 are 3.283, 3.474, 3.656, and 3.830, respectively. The A(j) statistic and the confidence intervals are based on the critical value 1.96. The tests are based on var̂(Ĝ_t) defined in (5).

Table 1b: Tests for G_jt: Heteroskedastic Variance

  N    T    j   A(j)  M(j)    NS(j)  CI_ε(j)  R^2(j)  R^2-(j)  R^2+(j)
  50   50   1   0.05  0.07     0.01     0.95    0.99     0.99     0.99
  50   50   2   0.05  0.07     0.01     0.95    0.99     0.99     0.99
  50   50   3   0.41  1.00     0.05     0.95    0.96     0.94     0.97
  50   50   4   0.42  1.00     0.05     0.95    0.95     0.94     0.97
  50   50   5   0.92  1.00     4.92     0.94    0.23     0.13     0.32
  50   50   6   0.92  1.00     4.89     0.94    0.23     0.13     0.32
  50   50   7   0.97  1.00   193.91     0.94    0.04    -0.01     0.09
  100  50   1   0.05  0.05     0.00     0.95    1.00     0.99     1.00
  100  50   2   0.05  0.05     0.00     0.96    1.00     0.99     1.00
  100  50   3   0.55  1.00     0.04     0.95    0.96     0.95     0.97
  100  50   4   0.55  1.00     0.04     0.95    0.96     0.95     0.97
  100  50   5   0.95  1.00     4.75     0.94    0.23     0.13     0.33
  100  50   6   0.95  1.00     4.69     0.95    0.23     0.13     0.32
  100  50   7   0.98  1.00   177.17     0.93    0.04    -0.00     0.09
  50   100  1   0.05  0.07     0.01     0.95    0.99     0.99     0.99
  50   100  2   0.05  0.06     0.01     0.95    0.99     0.99     0.99
  50   100  3   0.42  1.00     0.05     0.95    0.95     0.95     0.96
  50   100  4   0.42  1.00     0.05     0.95    0.95     0.95     0.96
  50   100  5   0.93  1.00     4.41     0.95    0.21     0.14     0.28
  50   100  6   0.93  1.00     4.39     0.95    0.21     0.14     0.28
  50   100  7   0.98  1.00   408.56     0.95    0.02    -0.00     0.04
  200  100  1   0.05  0.05     0.00     0.95    1.00     1.00     1.00
  200  100  2   0.05  0.04     0.00     0.95    1.00     1.00     1.00
  200  100  3   0.67  1.00     0.04     0.95    0.96     0.95     0.97
  200  100  4   0.67  1.00     0.04     0.95    0.96     0.95     0.97
  200  100  5   0.96  1.00     4.29     0.95    0.21     0.14     0.28
  200  100  6   0.96  1.00     4.24     0.94    0.22     0.15     0.28
  200  100  7   0.99  1.00   465.28     0.95    0.02    -0.00     0.04
  100  200  1   0.05  0.07     0.00     0.95    1.00     1.00     1.00
  100  200  2   0.05  0.07     0.00     0.95    1.00     1.00     1.00
  100  200  3   0.55  1.00     0.04     0.95    0.96     0.95     0.96
  100  200  4   0.56  1.00     0.04     0.95    0.96     0.95     0.96
  100  200  5   0.95  1.00     4.17     0.95    0.21     0.16     0.25
  100  200  6   0.95  1.00     4.09     0.95    0.21     0.16     0.26
  100  200  7   0.99  1.00   751.77     0.95    0.01    -0.00     0.02

The 5% critical values for the M(j) test when T = 50, 100, 200, 400 are 3.283, 3.474, 3.656, and 3.830, respectively. The A(j) statistic and the confidence intervals are based on the critical value 1.96. The tests are based on var̂(Ĝ_t) defined in (4).

Table 1c: Tests for G_jt: Cross-Correlated Errors

  N    T    j   A(j)  M(j)    NS(j)  CI_ε(j)  R^2(j)  R^2-(j)  R^2+(j)
  50   50   1   0.01  0.02     0.03     0.99    0.97     0.96     0.98
  50   50   2   0.01  0.02     0.03     0.99    0.97     0.96     0.98
  50   50   3   0.08  0.25     0.07     0.99    0.94     0.92     0.95
  50   50   4   0.08  0.25     0.07     0.99    0.94     0.92     0.95
  50   50   5   0.66  1.00     5.05     0.98    0.22     0.13     0.32
  50   50   6   0.66  1.00     5.08     0.98    0.22     0.12     0.32
  50   50   7   0.72  1.00   339.24     0.98    0.04    -0.01     0.09
  100  50   1   0.04  0.10     0.01     0.96    0.99     0.98     0.99
  100  50   2   0.04  0.09     0.01     0.96    0.99     0.98     0.99
  100  50   3   0.22  0.63     0.05     0.96    0.95     0.94     0.96
  100  50   4   0.23  0.63     0.05     0.97    0.95     0.94     0.96
  100  50   5   0.77  1.00     4.79     0.98    0.23     0.13     0.33
  100  50   6   0.77  1.00     4.77     0.98    0.22     0.13     0.32
  100  50   7   0.80  1.00   290.41     0.97    0.04    -0.00     0.09
  50   100  1   0.00  0.00     0.03     1.00    0.97     0.97     0.98
  50   100  2   0.00  0.00     0.03     1.00    0.97     0.97     0.98
  50   100  3   0.04  0.05     0.07     1.00    0.94     0.92     0.95
  50   100  4   0.04  0.05     0.07     1.00    0.94     0.92     0.95
  50   100  5   0.64  1.00     4.52     0.99    0.21     0.14     0.28
  50   100  6   0.64  1.00     4.49     0.99    0.21     0.14     0.28
  50   100  7   0.72  1.00  1780.38     0.99    0.02    -0.00     0.04
  200  100  1   0.04  0.10     0.01     0.96    0.99     0.99     0.99
  200  100  2   0.04  0.10     0.01     0.96    0.99     0.99     0.99
  200  100  3   0.34  0.89     0.05     0.96    0.96     0.95     0.96
  200  100  4   0.35  0.89     0.05     0.96    0.96     0.95     0.96
  200  100  5   0.84  1.00     4.32     0.98    0.21     0.14     0.28
  200  100  6   0.83  1.00     4.27     0.97    0.21     0.14     0.28
  200  100  7   0.86  1.00   431.39     0.98    0.02    -0.00     0.04
  100  200  1   0.00  0.00     0.01     1.00    0.99     0.98     0.99
  100  200  2   0.00  0.00     0.01     1.00    0.99     0.98     0.99
  100  200  3   0.09  0.35     0.05     1.00    0.95     0.94     0.96
  100  200  4   0.09  0.33     0.05     1.00    0.95     0.94     0.96
  100  200  5   0.75  1.00     4.23     0.99    0.20     0.15     0.25
  100  200  6   0.74  1.00     4.15     1.00    0.21     0.16     0.25
  100  200  7   0.81  1.00   946.21     0.99    0.01    -0.00     0.02

The 5% critical values for the M(j) test when T = 50, 100, 200, 400 are 3.283, 3.474, 3.656, and 3.830, respectively. The A(j) statistic and the confidence intervals are based on the critical value 1.96. The tests are based on var̂(Ĝ_t) defined in (3).