VAR Model. (k-variate) VAR(p) model (in the Reduced Form):

$$Y_t = A + B_1 Y_{t-1} + B_2 Y_{t-2} + \cdots + B_p Y_{t-p} + \varepsilon_t$$

where:

$Y_t = (y_{1t}, y_{2t}, \dots, y_{kt})'$: a (k x 1) vector of time series variables
$A$: a (k x 1) vector of intercepts
$B_i$ (i = 1, 2, ..., p): (k x k) coefficient matrices
$\varepsilon_t$: a (k x 1) vector of unobservable i.i.d. zero-mean error terms (vector white noise)

(Note: the components of $\varepsilon_t$ are correlated, for the same t, in this reduced-form representation of the VAR model.)
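As a quick illustration of the reduced form, here is a minimal numpy sketch that simulates a bivariate VAR(1). The specific values of A, B_1 and Sigma are made up for illustration and are not taken from these notes.

```python
# Minimal simulation sketch of a reduced-form VAR(1) (k = 2). The intercept A,
# coefficient matrix B1 and error covariance Sigma are illustrative values only.
import numpy as np

rng = np.random.default_rng(0)

A = np.array([0.5, -0.2])                  # (k x 1) intercept
B1 = np.array([[0.6, 0.2],
               [0.1, 0.4]])                # (k x k) coefficient matrix
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])             # error covariance, off-diagonal != 0

T = 5000
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=T)   # vector white noise
Y = np.zeros((T, 2))
for t in range(1, T):
    Y[t] = A + B1 @ Y[t - 1] + eps[t]      # Y_t = A + B1 Y_{t-1} + eps_t

# The components of eps_t are contemporaneously correlated in the reduced form:
print(np.corrcoef(eps.T))
```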

On the simplest VAR model: Bivariate VAR(1) without intercept

In the Reduced Form, this model is:

$$Y_t = \begin{bmatrix} y_{1t} \\ y_{2t} \end{bmatrix} = \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{bmatrix} = \Phi\, Y_{t-1} + \varepsilon_t$$

where

$$\begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{bmatrix} \sim WN\!\left( \begin{bmatrix} 0 \\ 0 \end{bmatrix},\ \Sigma \right), \qquad \Sigma = \begin{bmatrix} \sigma_1^2 & C \\ C & \sigma_2^2 \end{bmatrix}$$

1. Derive the marginal sequences $\{y_{1t}\}$ and $\{y_{2t}\}$

Method 1. Use the backshift operator B. Write $(I - \Phi B)Y_t = \varepsilon_t$, or in component form:

$$(1 - \phi_{11}B)\, y_{1t} - \phi_{12}B\, y_{2t} = \varepsilon_{1t}$$
$$-\phi_{21}B\, y_{1t} + (1 - \phi_{22}B)\, y_{2t} = \varepsilon_{2t}$$

Multiplying both sides by the adjugate of $(I - \Phi B)$ leaves the determinant $\det(I - \Phi B)$ acting on each component:

$$\left[1 - (\phi_{11}+\phi_{22})B + (\phi_{11}\phi_{22}-\phi_{12}\phi_{21})B^2\right] y_{1t} = (1-\phi_{22}B)\,\varepsilon_{1t} + \phi_{12}B\,\varepsilon_{2t}$$
$$\left[1 - (\phi_{11}+\phi_{22})B + (\phi_{11}\phi_{22}-\phi_{12}\phi_{21})B^2\right] y_{2t} = \phi_{21}B\,\varepsilon_{1t} + (1-\phi_{11}B)\,\varepsilon_{2t}$$

So each marginal sequence satisfies an AR(2)-type equation driven by a combination of current and once-lagged white-noise terms.
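A small numeric check of Method 1, using made-up coefficient values (not from the notes): the marginal AR polynomial $\det(I-\Phi B) = 1 - (\phi_{11}+\phi_{22})B + (\phi_{11}\phi_{22}-\phi_{12}\phi_{21})B^2$ has roots equal to the reciprocals of the eigenvalues of $\Phi$.

```python
# Numeric check (illustrative Phi, not from the notes): the roots of
# 1 - tr(Phi) B + det(Phi) B^2 are the reciprocals of Phi's eigenvalues.
import numpy as np

Phi = np.array([[0.6, 0.2],
                [0.1, 0.4]])

ar_poly = np.array([np.linalg.det(Phi), -np.trace(Phi), 1.0])  # det*B^2 - tr*B + 1
poly_roots = np.roots(ar_poly)
eig_recip = 1.0 / np.linalg.eigvals(Phi)

print(np.sort(poly_roots))   # roots of the marginal AR(2) polynomial
print(np.sort(eig_recip))    # reciprocals of the eigenvalues of Phi (should match)
```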

Method 2. Do not use the backshift operator. Write the model in component form:

$$y_{1t} = \phi_{11}\, y_{1,t-1} + \phi_{12}\, y_{2,t-1} + \varepsilon_{1t} \quad (1)$$
$$y_{2t} = \phi_{21}\, y_{1,t-1} + \phi_{22}\, y_{2,t-1} + \varepsilon_{2t} \quad (2)$$

Here, we assume $\phi_{12} \neq 0$ and $\phi_{21} \neq 0$. Based on equations (1) and (2), we can also derive the expressions for $y_{2,t-1}$ and $y_{1,t-1}$ as shown below:

$$y_{2,t-1} = \frac{1}{\phi_{12}}\left(y_{1t} - \phi_{11}\, y_{1,t-1} - \varepsilon_{1t}\right) \quad (3)$$
$$y_{1,t-1} = \frac{1}{\phi_{21}}\left(y_{2t} - \phi_{22}\, y_{2,t-1} - \varepsilon_{2t}\right) \quad (4)$$

Lagging (3) and (4) by one period gives:

$$y_{2,t-2} = \frac{1}{\phi_{12}}\left(y_{1,t-1} - \phi_{11}\, y_{1,t-2} - \varepsilon_{1,t-1}\right) \quad (5)$$
$$y_{1,t-2} = \frac{1}{\phi_{21}}\left(y_{2,t-1} - \phi_{22}\, y_{2,t-2} - \varepsilon_{2,t-1}\right) \quad (6)$$

After substituting for $y_{2,t-1}$ in (1) (using (2) lagged once and then (5)) and for $y_{1,t-1}$ in (2) (using (1) lagged once and then (6)), we can derive the following expressions:

$$y_{1t} = (\phi_{11}+\phi_{22})\, y_{1,t-1} - (\phi_{11}\phi_{22}-\phi_{12}\phi_{21})\, y_{1,t-2} + \varepsilon_{1t} - \phi_{22}\,\varepsilon_{1,t-1} + \phi_{12}\,\varepsilon_{2,t-1} \quad (7)$$
$$y_{2t} = (\phi_{11}+\phi_{22})\, y_{2,t-1} - (\phi_{11}\phi_{22}-\phi_{12}\phi_{21})\, y_{2,t-2} + \varepsilon_{2t} - \phi_{11}\,\varepsilon_{2,t-1} + \phi_{21}\,\varepsilon_{1,t-1} \quad (8)$$

Finally, according to (7) and (8), the marginal models for $\{y_{1t}\}$ and $\{y_{2t}\}$ should be AR(2) equations driven by the combined error terms $\varepsilon_{1t} - \phi_{22}\varepsilon_{1,t-1} + \phi_{12}\varepsilon_{2,t-1}$ and $\varepsilon_{2t} - \phi_{11}\varepsilon_{2,t-1} + \phi_{21}\varepsilon_{1,t-1}$, the same result as Method 1.
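The substitution in equation (7) can be verified symbolically. This is a sketch using sympy, with the $\phi_{ij}$ notation used above; it is an added check, not part of the original notes.

```python
# Symbolic sanity check of equation (7) under Method 2.
import sympy as sp

p11, p12, p21, p22 = sp.symbols('phi11 phi12 phi21 phi22')
y1, y2 = sp.Function('y1'), sp.Function('y2')
e1, e2 = sp.Function('e1'), sp.Function('e2')
t = sp.symbols('t', integer=True)

# Equation (1):
rhs1 = p11*y1(t-1) + p12*y2(t-1) + e1(t)
# y2(t-1) from equation (2) lagged once; y2(t-2) from equation (5):
y2_tm1 = p21*y1(t-2) + p22*y2(t-2) + e2(t-1)
y2_tm2 = (y1(t-1) - p11*y1(t-2) - e1(t-1)) / p12

expr = rhs1.subs(y2(t-1), y2_tm1).subs(y2(t-2), y2_tm2)
expected = ((p11 + p22)*y1(t-1) - (p11*p22 - p12*p21)*y1(t-2)
            + e1(t) - p22*e1(t-1) + p12*e2(t-1))
print(sp.simplify(sp.expand(expr - expected)))   # prints 0 if equation (7) holds
```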

2. Derive whether the new error terms for the marginal sequences $\{y_{1t}\}$ and $\{y_{2t}\}$ are white noise, MA(1), or something else.

Now, let's use $\{a_{1t}\}$ and $\{a_{2t}\}$ to represent the corresponding error terms of $\{y_{1t}\}$ and $\{y_{2t}\}$ as shown below, and compute their autocovariances $\gamma_{a_1}(h)$ and $\gamma_{a_2}(h)$:

$$a_{1t} = \varepsilon_{1t} - \phi_{22}\,\varepsilon_{1,t-1} + \phi_{12}\,\varepsilon_{2,t-1}, \qquad a_{2t} = \varepsilon_{2t} - \phi_{11}\,\varepsilon_{2,t-1} + \phi_{21}\,\varepsilon_{1,t-1}$$

(1) If we assume $Cov(\varepsilon_{1t}, \varepsilon_{2t}) = 0$, then we can derive the autocovariances of $\{a_{1t}\}$ as:

$$\gamma_{a_1}(0) = (1+\phi_{22}^2)\,\sigma_1^2 + \phi_{12}^2\,\sigma_2^2, \qquad \gamma_{a_1}(1) = -\phi_{22}\,\sigma_1^2, \qquad \gamma_{a_1}(h) = 0 \ \text{for } h \ge 2$$

In this case $\{a_{1t}\}$ is MA(1). Similarly $\{a_{2t}\}$ is MA(1). Thus, both $\{y_{1t}\}$ and $\{y_{2t}\}$ should follow the ARMA(2,1) model.

(2) If we assume $Cov(\varepsilon_{1t}, \varepsilon_{2t}) = C$, where C is a constant, then we can derive the autocovariances of $\{a_{1t}\}$ as:

$$\gamma_{a_1}(0) = (1+\phi_{22}^2)\,\sigma_1^2 + \phi_{12}^2\,\sigma_2^2 - 2\phi_{12}\phi_{22}\,C, \qquad \gamma_{a_1}(1) = -\phi_{22}\,\sigma_1^2 + \phi_{12}\,C, \qquad \gamma_{a_1}(h) = 0 \ \text{for } h \ge 2$$

Thus, $\{a_{1t}\}$ is still MA(1). Same as in (1), $\{a_{2t}\}$ is MA(1), and both $\{y_{1t}\}$ and $\{y_{2t}\}$ should follow the ARMA(2,1) model.
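A Monte Carlo sketch of this claim, with illustrative values of $\phi_{22}$, $\phi_{12}$ and of the error covariance (these numbers are not from the notes): the sample autocorrelation of $a_{1t}$ should be nonzero at lag 1 and close to zero beyond lag 1, consistent with an MA(1).

```python
# Monte Carlo check that a_{1t} = eps_{1t} - phi22*eps_{1,t-1} + phi12*eps_{2,t-1}
# has autocovariance that cuts off after lag 1 (illustrative parameter values).
import numpy as np

rng = np.random.default_rng(1)
phi22, phi12 = 0.4, 0.2
Sigma = np.array([[1.0, 0.5],    # C = 0.5 (case (2)); set the 0.5 to 0.0 for case (1)
                  [0.5, 2.0]])

T = 200_000
eps = rng.multivariate_normal(np.zeros(2), Sigma, size=T)
a1 = eps[1:, 0] - phi22 * eps[:-1, 0] + phi12 * eps[:-1, 1]

def sample_autocorr(x, lag):
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

for lag in (1, 2, 3):
    print(lag, round(sample_autocorr(a1, lag), 4))   # lag 1 nonzero; lags 2, 3 near 0
```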

The general theorem is that (see page 427 of our textbook), for a k-dimensional ARMA(p,q) model, the marginal models are ARMA[kp, (k-1)p + q].

3. Derive the h-step forecast and forecast error for VAR(1), and the mean and covariance of the forecast error.

For the VAR(1) without intercept, $Y_t = \Phi\, Y_{t-1} + \varepsilon_t$ with $\Phi = \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix}$, repeated substitution gives

$$Y_{t+h} = \Phi^h\, Y_t + \sum_{j=0}^{h-1} \Phi^j\, \varepsilon_{t+h-j}$$

Hence the h-step forecast made at time t is the conditional expectation

$$\hat{Y}_t(h) = E[\,Y_{t+h} \mid Y_t, Y_{t-1}, \dots\,] = \Phi^h\, Y_t$$

and the h-step forecast error is

$$e_t(h) = Y_{t+h} - \hat{Y}_t(h) = \sum_{j=0}^{h-1} \Phi^j\, \varepsilon_{t+h-j}$$

The mean and covariance of the forecast error are therefore

$$E[e_t(h)] = 0, \qquad Var[e_t(h)] = \sum_{j=0}^{h-1} \Phi^j\, \Sigma\, (\Phi^j)'$$
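A short numpy sketch of these two formulas, with illustrative $\Phi$ and $\Sigma$ values (not from the notes):

```python
# h-step forecast and forecast-error covariance for a VAR(1) without intercept:
# Yhat_t(h) = Phi^h Y_t and Var[e_t(h)] = sum_{j=0}^{h-1} Phi^j Sigma (Phi^j)'.
import numpy as np

Phi = np.array([[0.6, 0.2],
                [0.1, 0.4]])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
h = 3

def var1_forecast(y_t, Phi, h):
    return np.linalg.matrix_power(Phi, h) @ y_t            # Phi^h Y_t

def var1_forecast_mse(Phi, Sigma, h):
    mse = np.zeros_like(Sigma)
    for j in range(h):
        Pj = np.linalg.matrix_power(Phi, j)
        mse += Pj @ Sigma @ Pj.T                            # Phi^j Sigma (Phi^j)'
    return mse

y_t = np.array([1.0, -0.5])
print(var1_forecast(y_t, Phi, h))
print(var1_forecast_mse(Phi, Sigma, h))
```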

4. Eigenvalues and Eigenvectors

The equation Ax = y is a linear transformation that maps a given vector x onto a new vector y. Special vectors that map onto multiples of themselves are very important in many applications, because those vectors tend to correspond to preferred modes of behavior represented by the vectors. Such vectors are called eigenvectors (from the German "eigen," meaning proper or own), and the multiple for a given eigenvector is called its eigenvalue.

To find eigenvalues and eigenvectors, we start with the definition:

$$Ax = \lambda x,$$

which can be written as

$$(A - \lambda I)\,x = 0,$$

which has nontrivial solutions iff

$$\det(A - \lambda I) = 0.$$

The values of $\lambda$ that satisfy the above determinant equation are the eigenvalues, and those eigenvalues can then be plugged back into the defining equation $Ax = \lambda x$ to find the eigenvectors.
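A minimal numpy illustration of these definitions, using a small illustrative matrix (not one from the notes): np.linalg.eig returns the eigenvalues (roots of $\det(A - \lambda I) = 0$) together with unit-length eigenvectors.

```python
# Eigenvalues and eigenvectors with numpy (illustrative matrix).
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigvals, eigvecs = np.linalg.eig(A)          # columns of eigvecs are eigenvectors
for lam, v in zip(eigvals, eigvecs.T):
    print(lam, np.allclose(A @ v, lam * v))                      # A x = lambda x
    print(np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0))   # det(A - lambda I) = 0
```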

If $\lambda_1$ and $\lambda_2$ are distinct eigenvalues of a symmetric matrix A, then their corresponding eigenvectors are linearly independent (orthogonal to each other). You will see that eigenvectors are only determined up to an arbitrary factor; choosing the factor is called normalizing the vector. The most common choice is the factor that results in the eigenvector having a length of 1.

Practice: (1) Find the eigenvalues and eigenvectors of $A = \begin{bmatrix} 3 & 1 \\ 4 & 2 \end{bmatrix}$.

5. Derivation of $\Sigma^{-1/2}$. Let's use $\Sigma$ to represent the variance-covariance matrix of $(\varepsilon_{1t}, \varepsilon_{2t})'$ as shown below:

$$\Sigma = Var\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix} = \begin{bmatrix} \sigma_1^2 & C \\ C & \sigma_2^2 \end{bmatrix}$$

Here, we assume $\Sigma$ is diagonalizable. If we transform $(\varepsilon_{1t}, \varepsilon_{2t})'$ into new error terms $(\eta_{1t}, \eta_{2t})'$ as shown below, we can prove that the variance-covariance matrix $\Sigma_\eta$ of $\eta_t$ is an identity matrix, which means $\eta_{1t}$ and $\eta_{2t}$ are uncorrelated:

$$\eta_t = \begin{pmatrix} \eta_{1t} \\ \eta_{2t} \end{pmatrix} = \Sigma^{-1/2}\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix}$$

$$\Sigma_\eta = Var(\Sigma^{-1/2}\varepsilon_t) = \Sigma^{-1/2}\, Var(\varepsilon_t)\, (\Sigma^{-1/2})' = \Sigma^{-1/2}\,\Sigma\,(\Sigma^{-1/2})' = I$$

Thus, in order to let $\eta_{1t}$ and $\eta_{2t}$ be uncorrelated, we just need to find the transform $\Sigma^{-1/2}$.

First, we need to find the eigenvalues and eigenvectors of $\Sigma$. By solving the following equation, we can find the eigenvalues of $\Sigma$:

$$\det(\Sigma - \lambda I) = (\sigma_1^2 - \lambda)(\sigma_2^2 - \lambda) - C^2 = 0$$

$$\lambda_{1,2} = \frac{(\sigma_1^2 + \sigma_2^2) \pm \sqrt{(\sigma_1^2 - \sigma_2^2)^2 + 4C^2}}{2}$$

Thus, one set of eigenvectors can be derived by solving the following equation for each eigenvalue $\lambda_i$:

$$(\Sigma - \lambda_i I)\, x_i = 0, \qquad i = 1, 2$$

For each $\lambda_i$ (with $C \neq 0$), the first row of $(\Sigma - \lambda_i I)x_i = 0$ gives $(\sigma_1^2 - \lambda_i)x_{i1} + C\,x_{i2} = 0$, so one choice of eigenvector is

$$x_i = \begin{pmatrix} C \\ \lambda_i - \sigma_1^2 \end{pmatrix}, \qquad i = 1, 2$$

If we let $e_1$ and $e_2$ be these eigenvectors normalized to unit length, and set

$$P = \begin{bmatrix} e_1 & e_2 \end{bmatrix}, \qquad \Lambda = \begin{bmatrix} \lambda_1 & 0 \\ 0 & \lambda_2 \end{bmatrix},$$

then $P'P = I$ and $\Sigma = P\,\Lambda\,P'$. Finally, we can derive the expression for $\Sigma^{-1/2}$, which is

$$\Sigma^{-1/2} = P\,\Lambda^{-1/2}\,P' = P \begin{bmatrix} 1/\sqrt{\lambda_1} & 0 \\ 0 & 1/\sqrt{\lambda_2} \end{bmatrix} P'$$

We can verify directly that this choice of $\Sigma^{-1/2}$ works:

$$\Sigma^{-1/2}\,\Sigma\,(\Sigma^{-1/2})' = P\Lambda^{-1/2}P'\;P\Lambda P'\;P\Lambda^{-1/2}P' = P\,\Lambda^{-1/2}\Lambda\,\Lambda^{-1/2}\,P' = P\,P' = I$$

so $\Sigma_\eta = Var(\Sigma^{-1/2}\varepsilon_t) = I$, as claimed.
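A numeric sketch of this construction, with an illustrative covariance matrix (not from the notes): eigendecompose $\Sigma = P\Lambda P'$, form $\Sigma^{-1/2} = P\Lambda^{-1/2}P'$, and check that it whitens the errors.

```python
# Whitening via the eigendecomposition of Sigma (illustrative values).
import numpy as np

rng = np.random.default_rng(2)
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

lam, P = np.linalg.eigh(Sigma)               # Sigma is symmetric: use eigh
Sigma_inv_sqrt = P @ np.diag(lam ** -0.5) @ P.T

print(np.allclose(Sigma_inv_sqrt @ Sigma @ Sigma_inv_sqrt.T, np.eye(2)))  # = I

eps = rng.multivariate_normal(np.zeros(2), Sigma, size=100_000)
eta = eps @ Sigma_inv_sqrt.T                 # eta_t = Sigma^{-1/2} eps_t (row-wise)
print(np.cov(eta.T))                         # approximately the identity matrix
```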

6. VAR(1) -- The Reduced Form

$$\begin{bmatrix} y_{1t} \\ y_{2t} \end{bmatrix} = \begin{bmatrix} \phi_{11} & \phi_{12} \\ \phi_{21} & \phi_{22} \end{bmatrix} \begin{bmatrix} y_{1,t-1} \\ y_{2,t-1} \end{bmatrix} + \begin{bmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{bmatrix}$$

Here $(\varepsilon_{1t}, \varepsilon_{2t})'$ is a sequence of white noise vectors whose two elements can be correlated at the same time point. That is, the covariance C may not be zero in:

$$\Sigma = Var\begin{pmatrix} \varepsilon_{1t} \\ \varepsilon_{2t} \end{pmatrix} = \begin{bmatrix} \sigma_1^2 & C \\ C & \sigma_2^2 \end{bmatrix}$$

Note: Here we use $\Sigma$ to represent the variance-covariance matrix of $(\varepsilon_{1t}, \varepsilon_{2t})'$ as shown above and below.

7. VAR(1) -- The Structural Form

From the reduced form:

$$Y_t = \Phi\, Y_{t-1} + \varepsilon_t$$

where the covariance C may not be zero in:

$$\Sigma = Var(\varepsilon_t) = \begin{bmatrix} \sigma_1^2 & C \\ C & \sigma_2^2 \end{bmatrix}$$

By multiplying $\Sigma^{-1/2}$ on both sides, we have the structural form:

$$\Sigma^{-1/2}\, Y_t = \Sigma^{-1/2}\Phi\, Y_{t-1} + \Sigma^{-1/2}\varepsilon_t = \Sigma^{-1/2}\Phi\, Y_{t-1} + \eta_t$$

where $\eta_t = \Sigma^{-1/2}\varepsilon_t$ satisfies

$$\Sigma_\eta = Var(\eta_t) = \Sigma^{-1/2}\,\Sigma\,(\Sigma^{-1/2})' = I$$
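A sketch of this $\Sigma^{-1/2}$ structural transformation, reusing the illustrative $\Phi$ and $\Sigma$ values from the earlier code blocks (not from the notes): it prints the coefficient matrices of the transformed system and confirms that the new error covariance is the identity.

```python
# Structural form obtained by premultiplying the reduced form by Sigma^{-1/2}
# (illustrative values): Sigma^{-1/2} Y_t = (Sigma^{-1/2} Phi) Y_{t-1} + eta_t.
import numpy as np

Phi = np.array([[0.6, 0.2],
                [0.1, 0.4]])
Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

lam, P = np.linalg.eigh(Sigma)
S_inv_sqrt = P @ np.diag(lam ** -0.5) @ P.T

print(S_inv_sqrt)              # coefficient matrix on Y_t: not diagonal, so each
                               # structural equation mixes y_1t and y_2t contemporaneously
print(S_inv_sqrt @ Phi)        # coefficient matrix on Y_{t-1}
print(np.allclose(S_inv_sqrt @ Sigma @ S_inv_sqrt.T, np.eye(2)))  # Var(eta_t) = I
```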

That is, the error terms are entirely uncorrelated, even for the two elements at the same time point. However, this also means that, in the non-degenerate cases, we have the contemporaneous term of the other variable ($y_{2t}$ and $y_{1t}$, respectively) as a regressor in the model as well, since $\Sigma^{-1/2}$ is in general not diagonal.

The way to convert the VAR from the reduced form to the structural form is not unique. For example, one can also employ the Cholesky Decomposition. See Chapter 8 of our textbook.

Briefly, the Cholesky Decomposition (or Cholesky Factorization) says that for any positive definite matrix $\Sigma$, we can find a lower triangular matrix $L$ with ones on the diagonal:

$$L = \begin{bmatrix} 1 & 0 \\ \ell_{21} & 1 \end{bmatrix}$$

such that

$$\Sigma = L\, D\, L'$$

where $D$ is a diagonal matrix.

Definition: In linear algebra, a symmetric n x n real matrix $M$ is said to be positive definite if the scalar $x'Mx$ is positive for every non-zero column vector $x$ of real numbers.

The variance-covariance matrix $\Sigma$ of full rank qualifies as a positive definite matrix because:

$$x'\Sigma x = Var(x'\varepsilon_t) > 0 \quad \text{for every non-zero } x$$

Therefore we can transform the reduced-form VAR(1) to a structural form as follows:
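A sketch of this LDL' (Cholesky) factorization with the same illustrative $\Sigma$: numpy's cholesky returns $\Sigma = CC'$, which is rescaled into a unit lower triangular $L$ and diagonal $D$, and premultiplying the reduced form by $L^{-1}$ gives errors with diagonal covariance $D$.

```python
# LDL' factorization of Sigma via numpy's Cholesky routine (illustrative Sigma).
import numpy as np

Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

C = np.linalg.cholesky(Sigma)          # Sigma = C C', C lower triangular
d = np.diag(C)
L = C / d                              # unit lower triangular factor
D = np.diag(d ** 2)                    # diagonal factor

print(np.allclose(L @ D @ L.T, Sigma))         # Sigma = L D L'
L_inv = np.linalg.inv(L)
print(np.allclose(L_inv @ Sigma @ L_inv.T, D)) # Var(L^{-1} eps_t) = D (diagonal)
```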

$$L^{-1}\, Y_t = L^{-1}\Phi\, Y_{t-1} + L^{-1}\varepsilon_t$$

where

$$Var(L^{-1}\varepsilon_t) = L^{-1}\,\Sigma\,(L^{-1})' = L^{-1}\,L D L'\,(L^{-1})' = D$$

is diagonal, so the transformed error terms are uncorrelated.

Note that, equivalently, we can also apply the Cholesky Decomposition in the other order (reversing the roles of $y_{1t}$ and $y_{2t}$) to obtain the following structural form:

$$\tilde{L}^{-1}\, Y_t = \tilde{L}^{-1}\Phi\, Y_{t-1} + \tilde{L}^{-1}\varepsilon_t, \qquad Var(\tilde{L}^{-1}\varepsilon_t) = \tilde{D} \ \text{(diagonal)}$$

where the two transformed error terms are entirely uncorrelated.

Note: There can be various equivalent expressions for the structural forms.

Note: The same applies to VAR(p) of any dimension.
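A small sketch of the "other order" remark, again with the illustrative $\Sigma$: reversing the variable ordering before the Cholesky/LDL' step yields a different unit-triangular factor, and hence a different (but equivalent) structural form.

```python
# The Cholesky-based structural form depends on the variable ordering
# (illustrative Sigma; both orderings reproduce the same Sigma).
import numpy as np

Sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])

def unit_ldl(S):
    C = np.linalg.cholesky(S)
    d = np.diag(C)
    return C / d, np.diag(d ** 2)      # unit lower triangular L, diagonal D

L1, D1 = unit_ldl(Sigma)               # ordering (y1, y2)

rev = np.array([[0, 1], [1, 0]])       # permutation that reverses the ordering
L2r, D2r = unit_ldl(rev @ Sigma @ rev) # ordering (y2, y1)
L2, D2 = rev @ L2r @ rev, rev @ D2r @ rev   # map the factors back to (y1, y2) order

print(L1, D1, sep="\n")                # lower triangular factor
print(L2, D2, sep="\n")                # upper triangular factor: a different structural form
print(np.allclose(L2 @ D2 @ L2.T, Sigma))    # both factorizations reproduce Sigma
```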