A distance measure between cointegration spaces
Economics Letters 70 (2001) 1-7
www.elsevier.com/locate/econbase

A distance measure between cointegration spaces

Rolf Larsson, Mattias Villani*
Department of Statistics, Stockholm University, Stockholm, Sweden

Received 1 October 1999; accepted 15 July 2000

Abstract

A distinguishing feature of cointegration models, and many other multivariate models, is that only the spaces spanned by parameter vectors are identified. We point out that traditional distance measures, such as the Euclidean measure, are not reasonable to use when measuring distances between spaces. This point has been either missed or ignored in many simulation studies where inappropriate distance measures have been used. We propose a simple measure based on the idea that the space spanned by the orthogonal complement of a matrix lies as far away as possible from the space spanned by the matrix itself. Several properties of this measure are derived. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: Cointegration; Distance measure; Simulation studies

JEL classification: C15; C

1. Introduction

The analysis of cointegration in multivariate time series has been the object of an impressively large body of both theoretical and empirical research in econometrics during the last decade. The possibility to empirically estimate long run equilibrium relationships, the so-called cointegration vectors, is one of the most important developments in modern macroeconometrics. Many estimators of the space spanned by the cointegration vectors (the cointegration space), which is what we can estimate uniquely, have been suggested, and the maximum likelihood estimator of Johansen (1995) is the most widely used. Only large sample results are available for these estimators, however, and many simulation studies have therefore been conducted to compare their properties in smaller samples, see e.g. Ahn and Reinsel (1990), Gonzalo (1994) and Jacobson (1995).
To measure the performance of an estimator of the unknown cointegration space, a measure of the distance between two cointegration

*Corresponding author. E-mail address: mattias.villani@stat.su.se (M. Villani)
spaces is clearly needed. Our purpose here is twofold: first, we point out that traditionally used distance measures, like the Euclidean metric, are not appropriate for this problem. Second, an alternative measure with desirable properties is proposed.

2. Motivation

Consider the error correction model (ECM)

$$\Delta X_t = \alpha\beta' X_{t-1} + \sum_{i=1}^{k-1} \Gamma_i \Delta X_{t-i} + \Phi D_t + \varepsilon_t, \qquad (1)$$

where $t = 1, \ldots, T$, $\{X_t\}$ is a $p$-dimensional process, $\{\varepsilon_t\}$ are independent $p$-dimensional normal errors with expectation zero and covariance matrix $\Omega$, the parameter matrices $\alpha$ and $\beta$ are $p \times r$, $\Gamma_1, \ldots, \Gamma_{k-1}$ are $p \times p$, $\Phi$ is $p \times m$ and the dummy variables $\{D_t\}$ are $m \times 1$. It is well known that $\alpha$ and $\beta$ are unidentified without restrictions and that only $\mathrm{sp}(\alpha)$ and $\mathrm{sp}(\beta)$ are estimable. Estimation of $\mathrm{sp}(\beta)$, the cointegration space, is the central part in the statistical analysis of cointegration models. Analytical expressions for the (small sample) bias and standard error of the estimators of $\mathrm{sp}(\beta)$ are very rare and we are therefore referred to simulation studies if such estimators are to be compared.

A typical simulation study proceeds as follows. All parameters of the ECM are given fixed values by the investigator and a sequence of processes is then generated a specified number of times. For each generated process, an estimate of $\beta$, denoted by $\hat{\beta}$ in general, is computed by each estimation method. A distance measure, $m(\beta, \hat{\beta})$, which measures the closeness of an estimate to the true, and known, $\beta$ is computed for each method and then averaged over all generated processes. The estimation method which produces the smallest average distance is preferred over its competitors, everything else equal. There is always some controversy about the correct distance measure to use in simulation studies, but, as we will argue, the fact that only spaces spanned by vectors are estimable poses a new problem that has not received its deserved attention.
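The Monte Carlo loop just described can be sketched as follows. This is a schematic illustration, not code from any of the cited studies: the bivariate data-generating process, the OLS stand-in estimator and the `distance` function are hypothetical choices made here purely to show the structure of such a study (NumPy assumed).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(T=200, b=1.0):
    """Hypothetical DGP: X2 is a random walk and X1 = b*X2 + noise,
    so the true cointegration vector is beta = (1, -b)'."""
    x2 = np.cumsum(rng.standard_normal(T))
    x1 = b * x2 + rng.standard_normal(T)
    return x1, x2

def estimate_beta(x1, x2):
    """Stand-in estimator: OLS of X1 on X2, giving beta_hat = (1, -b_hat)'."""
    b_hat = (x1 @ x2) / (x2 @ x2)
    return np.array([1.0, -b_hat])

def distance(beta, beta_hat):
    """Placeholder for a measure m(beta, beta_hat): here the sine of the
    angle between the two spanned lines, zero iff the spans coincide."""
    u = beta / np.linalg.norm(beta)
    v = beta_hat / np.linalg.norm(beta_hat)
    return np.sqrt(max(0.0, 1.0 - float(u @ v) ** 2))

# Average the distance over replications; the estimation method with the
# smallest average distance is preferred, everything else equal.
beta_true = np.array([1.0, -1.0])
avg = np.mean([distance(beta_true, estimate_beta(*simulate()))
               for _ in range(100)])
```

Any competing estimator would be dropped into the same loop, and the averages compared; the point of the paper is that the choice of `distance` here is far from innocuous.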
To see why the Euclidean metric is inappropriate for measuring the distance between spaces, let $\beta_1$ and $\beta_2$ be two $p$-dimensional vectors of unit length with an angle $\theta$ between them, where $0 \le \theta \le \pi$. The squared Euclidean distance between $\beta_1$ and $\beta_2$ can be written

$$\|\beta_1 - \beta_2\|^2 = (\beta_1 - \beta_2)'(\beta_1 - \beta_2) = 2(1 - \cos\theta),$$

since $\beta_1'\beta_1 = \beta_2'\beta_2 = 1$ and $\beta_1'\beta_2 = \cos\theta$. Thus, the Euclidean distance is strictly increasing as $\theta$ varies between 0 and $\pi$ and therefore has the awkward consequence that as $\theta$ approaches $\pi$, and thus $\mathrm{sp}(\beta_2)$ approaches $\mathrm{sp}(\beta_1)$, $\|\beta_1 - \beta_2\|$ does not approach zero but instead approaches its maximal value. Since most simulation studies have been based on the Euclidean metric (or other distance measures with the same deficiency), it is likely that their results have been distorted by focusing directly on estimates of the elements in $\beta$ instead of what we actually can estimate, or even interpret, which is $\mathrm{sp}(\beta)$. The importance of a distance measure between cointegration spaces goes far beyond its use in simulation studies, see e.g. Villani (2000), where the metric presented in the next section is used to derive posterior location and variation measures for a Bayesian analysis of cointegration.
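A two-dimensional illustration of the point above (a sketch assuming NumPy): $\beta_2 = -\beta_1$ spans exactly the same line as $\beta_1$, yet the Euclidean distance between the two vectors is at its maximum, while a measure based on the orthogonal complement of $\beta_1$ correctly reports zero.

```python
import numpy as np

beta1 = np.array([1.0, 0.0])   # unit vector
beta2 = -beta1                 # theta = pi: sp(beta2) = sp(beta1)

# Euclidean distance: sqrt(2(1 - cos(pi))) = 2, the maximal value.
eucl = np.linalg.norm(beta1 - beta2)   # -> 2.0

# Orthogonal complement of beta1 in R^2 is spanned by (0, 1)'.
beta1_perp = np.array([0.0, 1.0])
d = abs(beta1_perp @ beta2)            # -> 0.0: the spans coincide
```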
3. An alternative distance measure

Let $\beta_1$ and $\beta_2$ be two arbitrary orthonormal $p \times r$ matrices of full rank. Since any arbitrary full rank matrix, $\beta$, can be made orthonormal and still span $\mathrm{sp}(\beta)$, the restriction to orthonormal matrices causes no reduction in generality. Our distance measure between $\beta_1$ and $\beta_2$ is based on the following decomposition of $\beta_2$:

$$\beta_2 = \beta_1\gamma_1 + \beta_{1\perp}\gamma_2, \qquad (2)$$

where $\gamma_1$ is $r \times r$ and $\gamma_2$ is $(p - r) \times r$. Explicitly,

$$\gamma_1 = (\beta_1'\beta_1)^{-1}\beta_1'\beta_2 = \beta_1'\beta_2, \qquad \gamma_2 = (\beta_{1\perp}'\beta_{1\perp})^{-1}\beta_{1\perp}'\beta_2 = \beta_{1\perp}'\beta_2,$$

where $\beta_{1\perp}$ is the $p \times (p - r)$ orthogonal complement of $\beta_1$, which is also normalized by $\beta_{1\perp}'\beta_{1\perp} = I_{p-r}$. In some sense, $\beta_{1\perp}$ is as far as we can get from $\beta_1$, and it therefore seems reasonable to base the distance measure between $\beta_1$ and $\beta_2$ on some measure of the size of $\gamma_2$, see Golub and van Loan (1996, p. 76) for a similar idea. Note that, because of the normalizations, $I_r = \gamma_1'\gamma_1 + \gamma_2'\gamma_2$, and so there is no need to take $\gamma_1$ into account. The most common size measure for a matrix is the (Frobenius) matrix norm (Harville, 1997, chapter 6)

$$\|A\| \equiv [\mathrm{tr}(A'A)]^{1/2},$$

and a natural distance measure between two cointegration spaces is therefore

$$d(\beta_1, \beta_2) \equiv \|\gamma_2\| = [\mathrm{tr}(\gamma_2'\gamma_2)]^{1/2}.$$

Thus, we suggest the following definition.

Definition 1. The distance between $\mathrm{sp}(\beta_1)$ and $\mathrm{sp}(\beta_2)$ is

$$d(\beta_1, \beta_2) \equiv [\mathrm{tr}(\beta_2'\beta_{1\perp}\beta_{1\perp}'\beta_2)]^{1/2},$$

where all matrices involved have been made orthonormal.

Note that the definition of $d(\beta_1, \beta_2)$ is based on the decomposition of $\beta_2$ in (2) when it could equally well have been based on the decomposition of $\beta_1$. It is therefore essential to prove that this choice is without consequence for $d(\beta_1, \beta_2)$, or, in other words, that the proposed distance is symmetric in its two arguments. This and other properties are proved in the next theorem.

Theorem. $d(\beta_1, \beta_2)$ has the following properties:

(i) $d(\beta_1, \beta_2)$ is invariant under the choice of different orthonormal versions of $\beta_1$ and $\beta_2$.
(ii) $d(\beta_1, \beta_2) = 0$ if and only if $\beta_2 \in \mathrm{sp}(\beta_1)$.
(iii) $d(\beta_1, \beta_2) = d(\beta_2, \beta_1)$.
(iv) $d(\beta_1, \beta_3) \le d(\beta_1, \beta_2) + d(\beta_2, \beta_3)$.
(v) $d(\beta_1, \beta_2) = d(\beta_{1\perp}, \beta_{2\perp})$.
(vi) $0 \le d^2(\beta_1, \beta_2) \le \min(r, p - r)$.
(vii) For $r \le p - r$, the maximum of $d(\beta_1, \beta_2)$ is attained if and only if $\beta_2 \in \mathrm{sp}(\beta_{1\perp})$. For $r > p - r$, the maximum is reached if and only if $\beta_{1\perp} \in \mathrm{sp}(\beta_2)$. Thus, the upper bound in (vi) is always attainable.
(viii) $d^2(\beta_1, \beta_2) = r - \sum_{i=1}^{r}\sum_{j=1}^{r} \cos^2\theta_{ij}$, where $\theta_{ij}$ is the angle between the $i$th column of $\beta_1$ and the $j$th column of $\beta_2$.

Proof. (i) Any other orthonormal version of $\beta_2$ may be written $\beta_2^* = \beta_2 A$, where $A$ is $r \times r$ and of full rank. Further, because $\beta_2$ and $\beta_2^*$ are orthonormal, we have

$$I_r = \beta_2^{*\prime}\beta_2^* = A'\beta_2'\beta_2 A = A'A,$$

and so $A$ is orthogonal. In other words, $\gamma_2^* \equiv \beta_{1\perp}'\beta_2^*$ fulfills

$$\gamma_2^{*\prime}\gamma_2^* = \beta_2^{*\prime}\beta_{1\perp}\beta_{1\perp}'\beta_2^* = A'\beta_2'\beta_{1\perp}\beta_{1\perp}'\beta_2 A = A'\gamma_2'\gamma_2 A,$$

which has the same trace as $\gamma_2'\gamma_2$, and this proves the result.

(ii) Assume that $\beta_2 \in \mathrm{sp}(\beta_1)$. Then $\beta_2 = \beta_1\gamma_1$ and thus $\gamma_2 = 0$, which implies that $d(\beta_1, \beta_2) = 0$. Conversely, assume that $d(\beta_1, \beta_2) = 0$; then $\gamma_2 = 0$ and therefore $\beta_2 = \beta_1\gamma_1$. Thus, $\beta_2 \in \mathrm{sp}(\beta_1)$.

(iii) From the definition we have $d(\beta_2, \beta_1) = [\mathrm{tr}(\beta_1'\beta_{2\perp}\beta_{2\perp}'\beta_1)]^{1/2}$. Using the relation $\beta_1\beta_1' + \beta_{1\perp}\beta_{1\perp}' = I_p$ and the cyclic property of the trace, we can write

$$d(\beta_1, \beta_2) = \{\mathrm{tr}[\beta_2'(I_p - \beta_1\beta_1')\beta_2]\}^{1/2} = [\mathrm{tr}(I_r - \beta_2'\beta_1\beta_1'\beta_2)]^{1/2} = [\mathrm{tr}(I_r - \beta_1'\beta_2\beta_2'\beta_1)]^{1/2} = d(\beta_2, \beta_1).$$

(iv) We have to show the inequality

$$\|\beta_{1\perp}'\beta_3\| \le \|\beta_{1\perp}'\beta_2\| + \|\beta_{2\perp}'\beta_3\|. \qquad (3)$$

The idea is to square both sides of (3) to get

$$\|\beta_{1\perp}'\beta_3\|^2 \le \|\beta_{1\perp}'\beta_2\|^2 + 2\|\beta_{1\perp}'\beta_2\|\,\|\beta_{2\perp}'\beta_3\| + \|\beta_{2\perp}'\beta_3\|^2, \qquad (4)$$

and then to show this inequality. Now, using $I_p = \beta_2\beta_2' + \beta_{2\perp}\beta_{2\perp}'$, we get

$$\beta_{1\perp}'\beta_3 = \beta_{1\perp}'\beta_2\beta_2'\beta_3 + \beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3,$$

implying
$$\|\beta_{1\perp}'\beta_3\|^2 = \mathrm{tr}\{\beta_3'\beta_2\beta_2'\beta_{1\perp}\beta_{1\perp}'\beta_2\beta_2'\beta_3\} + 2\,\mathrm{tr}\{\beta_3'\beta_2\beta_2'\beta_{1\perp}\beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3\} + \mathrm{tr}\{\beta_3'\beta_{2\perp}\beta_{2\perp}'\beta_{1\perp}\beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3\}. \qquad (5)$$

Here, because $\beta_3\beta_3' = I_p - \beta_{3\perp}\beta_{3\perp}'$, the first term on the r.h.s. may be expressed as

$$\mathrm{tr}\{\beta_{1\perp}'\beta_2\beta_2'(I_p - \beta_{3\perp}\beta_{3\perp}')\beta_2\beta_2'\beta_{1\perp}\} = \mathrm{tr}\{\beta_{1\perp}'\beta_2\beta_2'\beta_{1\perp}\} - \mathrm{tr}\{\beta_{1\perp}'\beta_2\beta_2'\beta_{3\perp}\beta_{3\perp}'\beta_2\beta_2'\beta_{1\perp}\} \le \mathrm{tr}\{\beta_{1\perp}'\beta_2\beta_2'\beta_{1\perp}\} = \|\beta_{1\perp}'\beta_2\|^2,$$

and it can similarly be shown that the third term on the r.h.s. of (5) is bounded by $\|\beta_{2\perp}'\beta_3\|^2$. Thus, it remains to show that

$$\mathrm{tr}\{\beta_3'\beta_2\beta_2'\beta_{1\perp}\beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3\} \le \|\beta_{1\perp}'\beta_2\|\,\|\beta_{2\perp}'\beta_3\|. \qquad (6)$$

To this end, the Cauchy-Schwarz inequality $\mathrm{tr}(A'B) \le \|A\|\,\|B\|$ (Harville, 1997, chapter 6) implies

$$\mathrm{tr}\{(\beta_3'\beta_2\beta_2'\beta_{1\perp})(\beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3)\} \le \|\beta_{1\perp}'\beta_2\beta_2'\beta_3\|\,\|\beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3\|. \qquad (7)$$

But, by definition, $\|\beta_{1\perp}'\beta_2\beta_2'\beta_3\|^2$ equals the first term on the r.h.s. of (5), which we have shown to be bounded by $\|\beta_{1\perp}'\beta_2\|^2$. Similarly, $\|\beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3\|^2$ equals the third term on the r.h.s. of (5), and so it is bounded by $\|\beta_{2\perp}'\beta_3\|^2$. Thus, by (7),

$$\mathrm{tr}\{(\beta_3'\beta_2\beta_2'\beta_{1\perp})(\beta_{1\perp}'\beta_{2\perp}\beta_{2\perp}'\beta_3)\} \le \|\beta_{1\perp}'\beta_2\|\,\|\beta_{2\perp}'\beta_3\|,$$

and the proof of (6), and hence of (4), is completed.

(v) Since $(\beta_{1\perp})_\perp = \beta_1$,

$$d(\beta_{1\perp}, \beta_{2\perp}) = [\mathrm{tr}(\beta_{2\perp}'\beta_1\beta_1'\beta_{2\perp})]^{1/2} = [\mathrm{tr}(\beta_1'\beta_{2\perp}\beta_{2\perp}'\beta_1)]^{1/2} = d(\beta_2, \beta_1) = d(\beta_1, \beta_2).$$

(vi) As above, we have the representation $\beta_2 = \beta_1\gamma_1 + \beta_{1\perp}\gamma_2$, where $\gamma_2 = \beta_{1\perp}'\beta_2$. As usual, $\beta_1$, $\beta_2$ and their orthogonal complements are assumed to be of full rank and orthonormal. It follows that

$$r = \mathrm{tr}(\beta_2'\beta_2) = \mathrm{tr}(\gamma_1'\gamma_1) + \mathrm{tr}(\gamma_2'\gamma_2),$$

and so $d^2(\beta_1, \beta_2) \equiv \mathrm{tr}(\gamma_2'\gamma_2) \le r$ with equality if and only if $\gamma_1 = 0$, which is possible if and only if $r \le p - r$ (otherwise, $\beta_2 = \beta_{1\perp}\gamma_2$ is not of full rank). On the other hand, we may write $\beta_{1\perp} = \beta_{2\perp}h_1 + \beta_2 h_2$, where $h_2 = \beta_2'\beta_{1\perp} = \gamma_2'$. Hence,

$$p - r = \mathrm{tr}(\beta_{1\perp}'\beta_{1\perp}) = \mathrm{tr}(h_1'h_1) + \mathrm{tr}(h_2'h_2),$$

implying $d^2(\beta_1, \beta_2) \equiv \mathrm{tr}(\gamma_2'\gamma_2) = \mathrm{tr}(h_2'h_2) \le p - r$ with equality if and only if $h_1 = 0$, which is possible if and only if $p - r \le r$.
(vii) From (vi), if $r \le p - r$, the maximum is attained if and only if $\gamma_1 = 0$, which implies that $\beta_2 \in \mathrm{sp}(\beta_{1\perp})$. For $r > p - r$, the maximum is reached if and only if $h_1 = 0$, which implies that $\beta_{1\perp} \in \mathrm{sp}(\beta_2)$.

(viii) From the definition,

$$d^2(\beta_1, \beta_2) = \mathrm{tr}(\beta_2'\beta_{1\perp}\beta_{1\perp}'\beta_2) = \mathrm{tr}[\beta_2'(I_p - \beta_1\beta_1')\beta_2] = r - \mathrm{tr}(\beta_2'\beta_1\beta_1'\beta_2). \qquad (8)$$

Define $\beta_{ij}$ as the $j$th column of $\beta_i$, $i = 1, 2$. Then

$$\beta_1'\beta_2 = \begin{pmatrix} \beta_{11}'\beta_{21} & \beta_{11}'\beta_{22} & \cdots & \beta_{11}'\beta_{2r} \\ \beta_{12}'\beta_{21} & \beta_{12}'\beta_{22} & \cdots & \beta_{12}'\beta_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ \beta_{1r}'\beta_{21} & \beta_{1r}'\beta_{22} & \cdots & \beta_{1r}'\beta_{2r} \end{pmatrix} = \begin{pmatrix} \cos\theta_{11} & \cos\theta_{12} & \cdots & \cos\theta_{1r} \\ \cos\theta_{21} & \cos\theta_{22} & \cdots & \cos\theta_{2r} \\ \vdots & \vdots & \ddots & \vdots \\ \cos\theta_{r1} & \cos\theta_{r2} & \cdots & \cos\theta_{rr} \end{pmatrix}, \qquad (9)$$

since $a'b = \|a\|\,\|b\|\cos\theta$ for any vectors $a$ and $b$ with angle $\theta$, and $\|\beta_{ij}\| = 1$ for $i = 1, 2$ and $j = 1, \ldots, r$. Thus, from (8),

$$d^2(\beta_1, \beta_2) = r - \sum_{i=1}^{r}\sum_{j=1}^{r} \cos^2\theta_{ij}. \qquad \square$$

Note that properties (ii), (iii) and (iv) in the Theorem together imply that $d(\beta_1, \beta_2)$ is a metric. It is interesting to compare property (viii) to the Euclidean distance between $\beta_1$ and $\beta_2$:

$$\|\beta_1 - \beta_2\|^2 = \mathrm{tr}[(\beta_1 - \beta_2)'(\beta_1 - \beta_2)] = 2r - 2\,\mathrm{tr}(\beta_1'\beta_2),$$

since $\mathrm{tr}(\beta_1'\beta_1) = \mathrm{tr}(\beta_2'\beta_2) = r$. Using (9) we obtain

$$\|\beta_1 - \beta_2\|^2 = 2\Big(r - \sum_{i=1}^{r} \cos\theta_{ii}\Big).$$

Thus, the Euclidean measure only takes the angles between $\beta_{1i}$ and $\beta_{2i}$, $i = 1, 2, \ldots, r$, into account, whereas our measure considers the angles between all pairs of columns of $\beta_1$ and $\beta_2$. In addition, the ways the angles enter the two measures differ considerably: $\|\beta_1 - \beta_2\|$ is strictly increasing in each of the $\theta_{ii}$, whereas $d(\beta_1, \beta_2)$ decreases as any of the $\theta_{ij}$ approaches either 0 or $\pi$, given that all other angles remain the same (which is not always possible since the $\theta_{ij}$ are sometimes linked to each other). To see why the latter behavior is more appropriate, let us return to the case of a single cointegration vector discussed in Section 2, where $\theta$ was the angle between $\beta_1$ and $\beta_2$. From Theorem (viii), we know that $d(\beta_1, \beta_2) = \sqrt{1 - \cos^2\theta} = \sin\theta$. Thus, $d(\beta_1, \beta_2)$ approaches zero as $\theta$ approaches either 0 or $\pi$, which makes sense since in both cases $\mathrm{sp}(\beta_2)$ approaches $\mathrm{sp}(\beta_1)$.
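Definition 1 and several of the Theorem's properties are easy to check numerically. The sketch below (assuming NumPy; the helper names are ours) orthonormalizes each basis with a QR factorization, takes $\beta_{1\perp}$ from the remaining columns of the full orthogonal factor, and computes $d$ as the Frobenius norm of $\beta_{1\perp}'\beta_2$:

```python
import numpy as np

def orth(b):
    """Orthonormal version of a full-rank p x r basis matrix."""
    return np.linalg.qr(b)[0]

def perp(b):
    """Orthonormal p x (p - r) complement: trailing columns of the full Q."""
    q = np.linalg.qr(b, mode='complete')[0]
    return q[:, b.shape[1]:]

def d(b1, b2):
    """d(b1, b2) = ||b1_perp' b2||_F (Definition 1)."""
    b1, b2 = orth(b1), orth(b2)
    return np.linalg.norm(perp(b1).T @ b2)   # Frobenius norm by default

rng = np.random.default_rng(0)
p, r = 5, 2
b1 = rng.standard_normal((p, r))
b2 = rng.standard_normal((p, r))

assert d(b1, b1) < 1e-12                                # (ii): same span
assert abs(d(b1, b2) - d(b2, b1)) < 1e-10               # (iii): symmetry
assert abs(d(b1, b2) - d(perp(orth(b1)), perp(orth(b2)))) < 1e-10  # (v)
assert d(b1, b2) ** 2 <= min(r, p - r) + 1e-12          # (vi): upper bound
# (viii): d^2 = r - sum of squared cosines of all column-pair angles
cos2 = (orth(b1).T @ orth(b2)) ** 2
assert abs(d(b1, b2) ** 2 - (r - cos2.sum())) < 1e-10
```

By property (i), the result does not depend on which particular orthonormal bases the QR factorizations happen to return.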
Note also that $d(\beta_1, \beta_2)$ attains its maximum for $\theta = \pi/2$, which is the angle that makes $\beta_1$ and $\beta_2$ orthogonal.

4. Discussion

We have proposed a way of measuring distances between cointegration spaces, and shown that this measure fulfills many desired properties. If the proposed distance measure is applied to a system of time series with widely differing scales,
then a normalization of the time series, for example by the transformation $Y_t = \Omega^{-1/2} X_t$, may be needed. If $\Omega$ is unavailable then an estimate can be used in its place.

No properties of the cointegration model have been used in the derivation of our measure, and so it is, of course, applicable to many other multivariate models where only the space spanned by a set of vectors is estimable, e.g. the common factor model (Anderson, 1984). However, in certain situations, for example if the cointegrating space is restricted, our measure will need to be modified. To work out such modifications, as well as to apply our measure empirically and in simulation studies, are interesting topics for future research.

Acknowledgements

The authors would like to thank Daniel Thorburn for valuable comments. Mattias Villani was financially supported by the Swedish Council of Research in Humanities and Social Sciences (HSFR).

References

Ahn, S.K., Reinsel, G.C., 1990. Estimation for partially nonstationary multivariate autoregressive models. J. Am. Statist. Assoc. 85.
Anderson, T.W., 1984. An Introduction to Multivariate Statistical Analysis. Wiley, New York.
Golub, G.H., van Loan, C.F., 1996. Matrix Computations, 3rd Edition. Johns Hopkins University Press, Baltimore.
Gonzalo, J., 1994. Five alternative methods of estimating long-run equilibrium relationships. Journal of Econometrics 60.
Harville, D.A., 1997. Matrix Algebra From a Statistician's Perspective. Springer-Verlag, New York.
Jacobson, T., 1995. Simulating small sample properties of the maximum likelihood cointegration model: estimation and testing. Finnish Economic Papers 8.
Johansen, S., 1995. Likelihood-Based Inference in Cointegrated Vector Autoregressive Models. Oxford University Press, Oxford.
Villani, M., 2000. Aspects of Bayesian Cointegration. Ph.D. Thesis, Department of Statistics, Stockholm University.
More information7 Principal Component Analysis
7 Principal Component Analysis This topic will build a series of techniques to deal with high-dimensional data. Unlike regression problems, our goal is not to predict a value (the y-coordinate), it is
More informationUNIVERSITY OF CALIFORNIA, SAN DIEGO DEPARTMENT OF ECONOMICS
2-7 UNIVERSITY OF LIFORNI, SN DIEGO DEPRTMENT OF EONOMIS THE JOHNSEN-GRNGER REPRESENTTION THEOREM: N EXPLIIT EXPRESSION FOR I() PROESSES Y PETER REINHRD HNSEN DISUSSION PPER 2-7 JULY 2 The Johansen-Granger
More informationThe profit function system with output- and input- specific technical efficiency
The profit function system with output- and input- specific technical efficiency Mike G. Tsionas December 19, 2016 Abstract In a recent paper Kumbhakar and Lai (2016) proposed an output-oriented non-radial
More informationAccounting for Missing Values in Score- Driven Time-Varying Parameter Models
TI 2016-067/IV Tinbergen Institute Discussion Paper Accounting for Missing Values in Score- Driven Time-Varying Parameter Models André Lucas Anne Opschoor Julia Schaumburg Faculty of Economics and Business
More informationNORMS ON SPACE OF MATRICES
NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system
More informationAbsolute value equations
Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West
More informationJournal of Multivariate Analysis. Sphericity test in a GMANOVA MANOVA model with normal error
Journal of Multivariate Analysis 00 (009) 305 3 Contents lists available at ScienceDirect Journal of Multivariate Analysis journal homepage: www.elsevier.com/locate/jmva Sphericity test in a GMANOVA MANOVA
More informationGeneralized Impulse Response Analysis: General or Extreme?
MPRA Munich Personal RePEc Archive Generalized Impulse Response Analysis: General or Extreme? Kim Hyeongwoo Auburn University April 2009 Online at http://mpra.ub.uni-muenchen.de/17014/ MPRA Paper No. 17014,
More informationCorrigendum to Inference on impulse. response functions in structural VAR models. [J. Econometrics 177 (2013), 1-13]
Corrigendum to Inference on impulse response functions in structural VAR models [J. Econometrics 177 (2013), 1-13] Atsushi Inoue a Lutz Kilian b a Department of Economics, Vanderbilt University, Nashville
More informationThe relationship between treatment parameters within a latent variable framework
Economics Letters 66 (2000) 33 39 www.elsevier.com/ locate/ econbase The relationship between treatment parameters within a latent variable framework James J. Heckman *,1, Edward J. Vytlacil 2 Department
More informationThe Hilbert Space of Random Variables
The Hilbert Space of Random Variables Electrical Engineering 126 (UC Berkeley) Spring 2018 1 Outline Fix a probability space and consider the set H := {X : X is a real-valued random variable with E[X 2
More informationThe reflexive and anti-reflexive solutions of the matrix equation A H XB =C
Journal of Computational and Applied Mathematics 200 (2007) 749 760 www.elsevier.com/locate/cam The reflexive and anti-reflexive solutions of the matrix equation A H XB =C Xiang-yang Peng a,b,, Xi-yan
More informationAuerbach bases and minimal volume sufficient enlargements
Auerbach bases and minimal volume sufficient enlargements M. I. Ostrovskii January, 2009 Abstract. Let B Y denote the unit ball of a normed linear space Y. A symmetric, bounded, closed, convex set A in
More informationThe Bayesian Approach to Multi-equation Econometric Model Estimation
Journal of Statistical and Econometric Methods, vol.3, no.1, 2014, 85-96 ISSN: 2241-0384 (print), 2241-0376 (online) Scienpress Ltd, 2014 The Bayesian Approach to Multi-equation Econometric Model Estimation
More informationLecture 5: Unit Roots, Cointegration and Error Correction Models The Spurious Regression Problem
Lecture 5: Unit Roots, Cointegration and Error Correction Models The Spurious Regression Problem Prof. Massimo Guidolin 20192 Financial Econometrics Winter/Spring 2018 Overview Defining cointegration Vector
More informationHomework 2. Solutions T =
Homework. s Let {e x, e y, e z } be an orthonormal basis in E. Consider the following ordered triples: a) {e x, e x + e y, 5e z }, b) {e y, e x, 5e z }, c) {e y, e x, e z }, d) {e y, e x, 5e z }, e) {
More informationCS 143 Linear Algebra Review
CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see
More informationCointegrated VAR s. Eduardo Rossi University of Pavia. November Rossi Cointegrated VAR s Financial Econometrics / 56
Cointegrated VAR s Eduardo Rossi University of Pavia November 2013 Rossi Cointegrated VAR s Financial Econometrics - 2013 1 / 56 VAR y t = (y 1t,..., y nt ) is (n 1) vector. y t VAR(p): Φ(L)y t = ɛ t The
More informationPrincipal Component Analysis (PCA) Our starting point consists of T observations from N variables, which will be arranged in an T N matrix R,
Principal Component Analysis (PCA) PCA is a widely used statistical tool for dimension reduction. The objective of PCA is to find common factors, the so called principal components, in form of linear combinations
More informationRoss (1976) introduced the Arbitrage Pricing Theory (APT) as an alternative to the CAPM.
4.2 Arbitrage Pricing Model, APM Empirical evidence indicates that the CAPM beta does not completely explain the cross section of expected asset returns. This suggests that additional factors may be required.
More informationj=1 [We will show that the triangle inequality holds for each p-norm in Chapter 3 Section 6.] The 1-norm is A F = tr(a H A).
Math 344 Lecture #19 3.5 Normed Linear Spaces Definition 3.5.1. A seminorm on a vector space V over F is a map : V R that for all x, y V and for all α F satisfies (i) x 0 (positivity), (ii) αx = α x (scale
More informationarxiv: v1 [math.na] 1 Sep 2018
On the perturbation of an L -orthogonal projection Xuefeng Xu arxiv:18090000v1 [mathna] 1 Sep 018 September 5 018 Abstract The L -orthogonal projection is an important mathematical tool in scientific computing
More informationMathematical foundations - linear algebra
Mathematical foundations - linear algebra Andrea Passerini passerini@disi.unitn.it Machine Learning Vector space Definition (over reals) A set X is called a vector space over IR if addition and scalar
More information4 Linear Algebra Review
4 Linear Algebra Review For this topic we quickly review many key aspects of linear algebra that will be necessary for the remainder of the course 41 Vectors and Matrices For the context of data analysis,
More informationPart 1a: Inner product, Orthogonality, Vector/Matrix norm
Part 1a: Inner product, Orthogonality, Vector/Matrix norm September 19, 2018 Numerical Linear Algebra Part 1a September 19, 2018 1 / 16 1. Inner product on a linear space V over the number field F A map,
More informationA Test of Cointegration Rank Based Title Component Analysis.
A Test of Cointegration Rank Based Title Component Analysis Author(s) Chigira, Hiroaki Citation Issue 2006-01 Date Type Technical Report Text Version publisher URL http://hdl.handle.net/10086/13683 Right
More informationSubset selection for matrices
Linear Algebra its Applications 422 (2007) 349 359 www.elsevier.com/locate/laa Subset selection for matrices F.R. de Hoog a, R.M.M. Mattheij b, a CSIRO Mathematical Information Sciences, P.O. ox 664, Canberra,
More informationMacroeconometric Modelling
Macroeconometric Modelling 4 General modelling and some examples Gunnar Bårdsen CREATES 16-17 November 2009 From system... I Standard procedure see (?), Hendry (1995), Johansen (1995), Johansen (2006),
More informationLecture 8: Linear Algebra Background
CSE 521: Design and Analysis of Algorithms I Winter 2017 Lecture 8: Linear Algebra Background Lecturer: Shayan Oveis Gharan 2/1/2017 Scribe: Swati Padmanabhan Disclaimer: These notes have not been subjected
More informationA revisit to a reverse-order law for generalized inverses of a matrix product and its variations
A revisit to a reverse-order law for generalized inverses of a matrix product and its variations Yongge Tian CEMA, Central University of Finance and Economics, Beijing 100081, China Abstract. For a pair
More informationAlgebra II. Paulius Drungilas and Jonas Jankauskas
Algebra II Paulius Drungilas and Jonas Jankauskas Contents 1. Quadratic forms 3 What is quadratic form? 3 Change of variables. 3 Equivalence of quadratic forms. 4 Canonical form. 4 Normal form. 7 Positive
More information