Second-order approximation of dynamic models without the use of tensors


Paul Klein (University of Western Ontario)

First draft: May 17, 2005. This version: January 24, 2006.

Abstract

Several approaches to finding the second-order approximation to a dynamic model have been proposed recently. This paper differs from the existing literature in that it makes use of the Magnus and Neudecker (1999) definition of the Hessian matrix. The key result is a linear system of equations that characterizes the second-order coefficients. No use is made of multi-dimensional arrays or tensors.

Keywords: Solving dynamic models; second-order approximation
JEL classification: E0; C63

1 Introduction

Several approaches to finding the second-order approximation to a dynamic model have been proposed recently. Examples include Schmitt-Grohé and Uribe (2004) and Kim, Kim, Schaumburg, and Sims (2005). This paper differs from the existing literature, including Lombardo and Sutherland (2005), which also avoids the use of tensors, in that it makes use of the Magnus and Neudecker (1999) definition of the Hessian matrix. The key result is a linear system of equations that characterizes the second-order coefficients. No use is made of multi-dimensional arrays or tensors. Matlab code is available from my website.

I thank Audra Bowlus, Elizabeth Caucutt, Martin Gervais, Paul Gomme, Lance Lochner and Igor Livshits.

2 The model

As pointed out in Schmitt-Grohé and Uribe (2004), the coefficients of a second-order approximation to the solution of a dynamic model around its non-stochastic steady state are invariant with respect to the scale of the shocks, with the notable exception of additive constants in the decision rules. In what follows I will take this result for granted. The model is

E_t f(x_{t+1}, y_{t+1}, x_t, y_t) = 0    (1)

where f maps R^{2n_x + 2n_y} into R^{n_x + n_y}. The solution is given by two functions g and h defined via

y_t = g(x_t, \sigma)    (2)

and

x_{t+1} = h(x_t, \sigma) + \sigma \varepsilon_{t+1}    (3)

where \varepsilon_t is exogenous white noise with variance matrix \Sigma and \sigma is a scaling variable. The approximation is computed around the non-stochastic steady state, where \sigma = 0. Without loss of generality, we will assume that f(0, 0, 0, 0) = 0.

3 Second-order Taylor expansions

As stated in Magnus and Neudecker (1999), the second-order Taylor expansion of a twice differentiable function f : R^n \to R^m is given by

f(x) \approx f(x_0) + Df(x_0)(x - x_0) + \frac{1}{2}\,(I_m \otimes (x - x_0)')\,Hf(x_0)\,(x - x_0)    (4)

where we define

Df(x) = \frac{\partial f(x)}{\partial x'} = \begin{bmatrix} \frac{\partial f_1(x)}{\partial x_1} & \frac{\partial f_1(x)}{\partial x_2} & \cdots & \frac{\partial f_1(x)}{\partial x_n} \\ \vdots & & & \vdots \\ \frac{\partial f_m(x)}{\partial x_1} & \frac{\partial f_m(x)}{\partial x_2} & \cdots & \frac{\partial f_m(x)}{\partial x_n} \end{bmatrix}

and

Hf(x) = \frac{\partial^2 f(x)}{\partial x \, \partial x'} = D\,\operatorname{vec}\bigl((Df(x))'\bigr).

Notice that

Df(x) = \begin{bmatrix} Df_1(x) \\ Df_2(x) \\ \vdots \\ Df_m(x) \end{bmatrix}

and that

Hf(x) = \begin{bmatrix} Hf_1(x) \\ Hf_2(x) \\ \vdots \\ Hf_m(x) \end{bmatrix}.

Thus the Hessian Hf(x) is of dimension mn \times n and consists of m vertically concatenated symmetric n \times n matrices.
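To make this stacking convention concrete, here is a small numerical check. It is my own sketch, not part of the paper (whose companion code is in Matlab): it approximates Df and the Magnus and Neudecker Hessian Hf by central differences for a hypothetical function with m = 2 and n = 2, and confirms that Hf comes out mn \times n with (approximately) symmetric n \times n blocks.

```python
import numpy as np

def jacobian_fd(f, x, h=1e-6):
    """Central-difference Jacobian Df(x): an (m, n) matrix, one row per component of f."""
    m, n = np.atleast_1d(f(x)).size, x.size
    J = np.zeros((m, n))
    for j in range(n):
        e = np.zeros(n)
        e[j] = h
        J[:, j] = (f(x + e) - f(x - e)) / (2.0 * h)
    return J

def hessian_mn(f, x, h=1e-5):
    """Magnus-Neudecker Hessian Hf(x) = D vec((Df(x))'): the blocks Hf_1, ..., Hf_m
    stacked vertically, so the result has dimension (m*n, n)."""
    g = lambda z: jacobian_fd(f, z, h).T.reshape(-1, order="F")  # vec((Df(z))')
    return jacobian_fd(g, x, h)

# Hypothetical example with m = 2 equations in n = 2 variables.
f = lambda x: np.array([x[0] ** 2 * x[1], np.sin(x[0]) + x[1] ** 3])
H = hessian_mn(f, np.array([0.5, 2.0]))
print(H.shape)                                   # (4, 2): two stacked 2-by-2 blocks
print(np.allclose(H[:2], H[:2].T, atol=1e-4))    # first block is (numerically) symmetric
```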

An important property of the quadratic term in the Taylor expansion is the following. If f(x) = \frac{1}{2}(I_m \otimes x')Ax and g(x) = \frac{1}{2}(I_m \otimes x')Bx, then f(x) \equiv g(x) if and only if

\frac{1}{2}\bigl[A + (A')^{\nu}\bigr] = \frac{1}{2}\bigl[B + (B')^{\nu}\bigr]    (5)

where we define A^{\nu} via the following recipe, taken from Magnus and Neudecker (1999).

Definition. Let A have the following structure

A = \begin{bmatrix} A_1 & A_2 & \cdots & A_m \end{bmatrix}

where A_i is an n \times n matrix for each i = 1, 2, \ldots, m. Then

A^{\nu} = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{bmatrix}.

It follows that if

B = \begin{bmatrix} B_1 \\ B_2 \\ \vdots \\ B_m \end{bmatrix}

then

(B')^{\nu} = \begin{bmatrix} B_1' \\ B_2' \\ \vdots \\ B_m' \end{bmatrix}.

In what follows, therefore, we will regard as equivalent two matrices of second derivatives A and B if (5) holds. Strictly speaking, a Hessian matrix H is column-symmetric, i.e. H = (H')^{\nu}, but we will regard any matrix G as a perfectly good Hessian if H is the Hessian and H = \frac{1}{2}\bigl[G + (G')^{\nu}\bigr].

4 Representation of the second-order approximation of the solution

The second-order approximations of the functions g and h are written

y_t \approx \hat{g}(x_t) = \frac{1}{2}k_y + Fx_t + \frac{1}{2}(I_{n_y} \otimes x_t')Ex_t    (6)

and

x_{t+1} \approx \hat{h}(x_t) = \frac{1}{2}k_x + Px_t + \frac{1}{2}(I_{n_x} \otimes x_t')Gx_t    (7)

Evidently F is n_y \times n_x, E is n_x n_y \times n_x, P is n_x \times n_x and G is n_x^2 \times n_x.
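The \nu operation and the block-wise symmetrization it induces are straightforward to implement for stacked matrices. The sketch below is my own and relies on the reading of the definition given above (each n \times n block of a vertically stacked matrix is transposed in place); it also checks numerically that the quadratic form (I_m \otimes x')Ax depends only on the symmetrized matrix, which is the content of (5).

```python
import numpy as np

def nu_of_transpose(A, n):
    """For a vertically stacked A = [A_1; ...; A_m] with n x n blocks,
    return (A')^nu = [A_1'; ...; A_m']."""
    m = A.shape[0] // n
    return np.vstack([A[i * n:(i + 1) * n, :].T for i in range(m)])

def sym(A, n):
    """Block-wise symmetrization (1/2)[A + (A')^nu]; two matrices of second
    derivatives are regarded as equivalent when their sym(.) coincide, cf. (5)."""
    return 0.5 * (A + nu_of_transpose(A, n))

# Check on a hypothetical stacked matrix: (I_m kron x') A x depends only on sym(A).
rng = np.random.default_rng(1)
n, m = 3, 2
A = rng.standard_normal((m * n, n))
x = rng.standard_normal(n)
quad = lambda M: np.kron(np.eye(m), x) @ M @ x   # the quadratic form (I_m kron x') M x
print(np.allclose(quad(A), quad(sym(A, n))))     # True
```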

5 Finding the first-order approximation

Klein (2000), King and Watson (2002) and others show how to find F and P in terms of D. Following Klein (2000) one may proceed as follows, keeping in mind that we are after a non-explosive solution only. Suppose the linear approximation of the equilibrium conditions can be written as

A \begin{bmatrix} k_{t+1} \\ E_t \lambda_{t+1} \end{bmatrix} = B \begin{bmatrix} k_t \\ \lambda_t \end{bmatrix} + \begin{bmatrix} \xi_{t+1} \\ 0 \end{bmatrix}    (8)

where k_0 \in R^{n_k} is a given deterministic vector, (\xi_t) is white noise and the conditional expectation is taken with respect to the natural filtration of (\xi_t). The matrices A and B are both n \times n. The key theorem required here is stated as Theorem 7.7.1 in Golub and van Loan (1996). It says that if there is a z \in C such that \det(B - zA) \neq 0, then there exist matrices Q, Z, T and S such that

1. Q and Z are unitary, i.e. Q^H Q = QQ^H = I_n and similarly for Z, where ^H denotes the Hermitian transpose (transpose followed by complex conjugation or vice versa).

2. T and S are upper triangular (all entries below the main diagonal are zero).

3. QA = SZ^H and QB = TZ^H.

4. There is no i such that s_{ii} = t_{ii} = 0.¹

Moreover, the matrices Q, Z, S and T can be chosen in such a way as to make the diagonal entries s_{ii} and t_{ii} appear in any desired order. We will choose the following ordering: the pairs (s_{ii}, t_{ii}) satisfying |s_{ii}| > |t_{ii}| appear first. We will call these pairs the stable generalized eigenvalues.

¹ Here we denote the row i, column j element of any matrix M by m_{ij}.
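The ordered generalized Schur (QZ) decomposition with exactly this ordering convention is available in standard numerical libraries. Below is a minimal illustration using scipy (an assumed tooling choice on my part; the paper's own code is in Matlab). Note that scipy returns Q with A = Q S Z^H, i.e. the Hermitian transpose of the Q in the theorem as stated here.

```python
import numpy as np
from scipy.linalg import ordqz

# A hypothetical regular pencil (A, B); any pair of equally sized square matrices works.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Pairs with |s_ii| > |t_ii| (the "stable" pairs in the ordering above) are moved
# to the top-left corner of (S, T).
S, T, alpha, beta, Q, Z = ordqz(A, B, sort=lambda a, b: np.abs(a) > np.abs(b))

print(np.allclose(Q @ S @ Z.conj().T, A))   # True: A = Q S Z^H
print(np.allclose(Q @ T @ Z.conj().T, B))   # True: B = Q T Z^H
print(np.abs(alpha) > np.abs(beta))         # stable pairs come first
```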

We now introduce an auxiliary sequence (y_t) that will help us in finding the solution. Define x_t via

x_t = \begin{bmatrix} k_t \\ \lambda_t \end{bmatrix}

and y_t via y_t = Z^H x_t. Partition y_t in the same way as x_t, introducing the following notation:

y_t = \begin{bmatrix} s_t \\ u_t \end{bmatrix}.

Now premultiply (8) by Q. This yields an equivalent system since Q is non-singular. The result is

S y_{t+1} = T y_t.

This is a triangular system. More explicitly, we have

\begin{bmatrix} S_{11} & S_{12} \\ 0 & S_{22} \end{bmatrix} \begin{bmatrix} s_{t+1} \\ u_{t+1} \end{bmatrix} = \begin{bmatrix} T_{11} & T_{12} \\ 0 & T_{22} \end{bmatrix} \begin{bmatrix} s_t \\ u_t \end{bmatrix}.

If there are no more stable generalized eigenvalues than there are state variables, then the second block of these equations implies that any solution that does not blow up (so that the mean is unbounded unless k_0 = 0) or have a unit root (so that the variance is unbounded unless \varepsilon_t = 0 for all t) satisfies u_t = 0 for t = 0, 1, 2, \ldots. But then the first block says that S_{11} s_{t+1} = T_{11} s_t. If there are no fewer stable generalized eigenvalues than there are state variables, then S_{11} is invertible and the generalized eigenvalues of (S_{11}, T_{11}) are stable. Hence

s_{t+1} = S_{11}^{-1} T_{11} s_t.    (9)

We have now reached the final step, which is to move back to x_t from y_t. By definition, we have

\begin{bmatrix} k_t \\ \lambda_t \end{bmatrix} = \begin{bmatrix} Z_{11} & Z_{12} \\ Z_{21} & Z_{22} \end{bmatrix} \begin{bmatrix} s_t \\ u_t \end{bmatrix}.

Apparently \lambda_t = Z_{21} s_t. Moreover, if Z_{11} is invertible, then s_t = Z_{11}^{-1} k_t and consequently \lambda_t = Z_{21} Z_{11}^{-1} k_t. We also have

k_{t+1} = Z_{11} s_{t+1} = Z_{11} S_{11}^{-1} T_{11} s_t = Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1} k_t.

Notice that Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1} is similar to S_{11}^{-1} T_{11}, so that the two matrices have the same eigenvalues. We conclude that if there are exactly as many state variables as there are stable generalized eigenvalues and Z_{11} is invertible, then (unless k_0 = 0 or \varepsilon_t = 0 for all t = 0, 1, 2, \ldots)

\lambda_t = Z_{21} Z_{11}^{-1} k_t    (10)

k_{t+1} = Z_{11} S_{11}^{-1} T_{11} Z_{11}^{-1} k_t + \xi_{t+1}.    (11)

The upshot of this is that, from now on, we can treat F and P as known matrices.

6 Finding the second-order approximation by solving a linear system of equations

6.1 Rules of differentiation

6.2 The equations characterizing the second-order coefficients

By definition of the functions g and h (defined in (2) and (3)) we have

E_t f\bigl(h(x, \sigma) + \sigma\varepsilon_{t+1},\; g(h(x, \sigma) + \sigma\varepsilon_{t+1}, \sigma),\; x,\; g(x, \sigma)\bigr) \equiv 0

where f is the function defined in (1). Define z(x, \sigma) via

z(x, \sigma) = E_t f\bigl(\hat{h}(x, \sigma) + \sigma\varepsilon_{t+1},\; \hat{g}(\hat{h}(x, \sigma) + \sigma\varepsilon_{t+1}, \sigma),\; x,\; \hat{g}(x, \sigma)\bigr).

The second-order approximation is characterized by (i) D_x z(0, 0) = 0, (ii) H_{xx} z(0, 0) = 0 and (iii) H_{\sigma\sigma} z(0, 0) = 0. Property (i) is taken care of by choosing P and F properly, as briefly described in Section 5.
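Since property (i) involves only the first-order coefficients, this is a natural point to collect the steps of Section 5 into code. The function below is my own Python/scipy sketch, assuming the linearized system matrices A and B of (8) and the number of predetermined variables n_k are already in hand; the author's posted code is in Matlab and may differ in detail.

```python
import numpy as np
from scipy.linalg import ordqz

def first_order(A, B, n_k):
    """Non-explosive first-order solution of A [k_{t+1}; E_t lambda_{t+1}] =
    B [k_t; lambda_t] + [xi_{t+1}; 0].  Returns (F, P) with lambda_t = F k_t
    and k_{t+1} = P k_t + xi_{t+1}, as in (10)-(11)."""
    # scipy's convention is A = Q S Z^H and B = Q T Z^H; only S, T and Z are used.
    S, T, alpha, beta, Q, Z = ordqz(A, B, sort=lambda a, b: np.abs(a) > np.abs(b))
    if int(np.sum(np.abs(alpha) > np.abs(beta))) != n_k:
        raise ValueError("need exactly n_k stable generalized eigenvalues")
    S11, T11 = S[:n_k, :n_k], T[:n_k, :n_k]
    Z11, Z21 = Z[:n_k, :n_k], Z[n_k:, :n_k]
    Z11_inv = np.linalg.inv(Z11)                      # Z11 assumed invertible
    F = Z21 @ Z11_inv                                 # equation (10)
    P = Z11 @ np.linalg.solve(S11, T11) @ Z11_inv     # equation (11)
    return F, P
```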

Property (ii) is taken care of by choosing E and G properly (given P and F), as described in Section 6.3. Finally, property (iii) is taken care of by choosing k_x and k_y appropriately, as described in Section 6.4.

6.3 Hessians

We will adopt the following notation. Denoting the arguments of f by x_1, x_2, x_3, x_4 (in that order), we define

f_i = \frac{\partial f(0, 0, 0, 0)}{\partial x_i'}

and

f_{ij} = \frac{\partial^2 f(0, 0, 0, 0)}{\partial x_i \, \partial x_j'}.

Defining m = n_x + n_y, the equation H_{xx} z(0, 0) = 0 becomes, using Theorem 9 in chapter 6 of Magnus and Neudecker (1999),

(f_1 \otimes I_{n_x})G + (f_2 \otimes I_{n_x})\bigl\{(I_{n_y} \otimes P')EP + (F \otimes I_{n_x})G\bigr\} + (f_4 \otimes I_{n_x})E
+ (I_m \otimes P')f_{11}P + (I_m \otimes (P'F'))f_{22}FP + f_{33} + (I_m \otimes F')f_{44}F
+ 2(I_m \otimes P')f_{12}FP + 2(I_m \otimes P')f_{13} + 2(I_m \otimes P')f_{14}F
+ 2(I_m \otimes (P'F'))f_{23} + 2(I_m \otimes (P'F'))f_{24}F + 2f_{34}F = 0.

In shorthand notation, we can summarize these equations via

A_1 + A_2 E + A_3 E P + A_4 G = 0.

Taking vecs, we get

\operatorname{vec}(A_1) + (I_{n_x} \otimes A_2)\operatorname{vec}(E) + (P' \otimes A_3)\operatorname{vec}(E) + (I_{n_x} \otimes A_4)\operatorname{vec}(G) = 0.

Thus the linear system we have to solve is given by

\begin{bmatrix} I_{n_x} \otimes A_2 + P' \otimes A_3 & I_{n_x} \otimes A_4 \end{bmatrix} \begin{bmatrix} \operatorname{vec}(E) \\ \operatorname{vec}(G) \end{bmatrix} = -\operatorname{vec}(A_1).
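Once the shorthand matrices A_1, A_2, A_3 and A_4 have been assembled from the f_i and f_{ij} blocks together with F and P, recovering E and G is a single dense solve. Below is a sketch of that last step (my own; the assembly of A_1, ..., A_4 is assumed to have been done already, and vec denotes column-stacking, i.e. Fortran order).

```python
import numpy as np

def solve_E_G(A1, A2, A3, A4, P, n_x, n_y):
    """Solve A1 + A2 E + A3 E P + A4 G = 0 for E (of size (n_x*n_y, n_x)) and
    G (of size (n_x**2, n_x)) by vectorizing, as in the display above."""
    I = np.eye(n_x)
    # vec(A2 E) = (I kron A2) vec E, vec(A3 E P) = (P' kron A3) vec E,
    # vec(A4 G) = (I kron A4) vec G.
    lhs = np.hstack([np.kron(I, A2) + np.kron(P.T, A3), np.kron(I, A4)])
    rhs = -A1.reshape(-1, order="F")                  # -vec(A1)
    sol = np.linalg.solve(lhs, rhs)
    n_E = n_x * n_y * n_x                             # number of entries of E
    E = sol[:n_E].reshape((n_x * n_y, n_x), order="F")
    G = sol[n_E:].reshape((n_x ** 2, n_x), order="F")
    return E, G
```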

6.4 Constants

The constants k_x and k_y are proportional to \sigma^2. Setting (without loss of generality) \sigma = 1, we have

f_1 k_x + f_2 k_y + f_4 k_y + f_2 F k_x + f_2 \operatorname{tr}\bigl((I_{n_y} \otimes \Sigma)E\bigr) + \operatorname{tr}\bigl((I_m \otimes (\Sigma F'))f_{22}F\bigr) + \operatorname{tr}\bigl((I_m \otimes \Sigma)f_{11}\bigr) + 2\operatorname{tr}\bigl((I_m \otimes \Sigma)f_{12}F\bigr) = 0

where we define the trace of an nm \times n matrix

A = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\ A_m \end{bmatrix}

as the m \times 1 vector

\operatorname{tr}(A) = \begin{bmatrix} \operatorname{tr}(A_1) \\ \operatorname{tr}(A_2) \\ \vdots \\ \operatorname{tr}(A_m) \end{bmatrix}.

This is a linear system of equations in k_x and k_y, given the first- and second-order coefficients already found.
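The block-wise trace just defined is a one-line helper in code; for completeness, a small sketch (again mine rather than the paper's Matlab code):

```python
import numpy as np

def block_trace(A, n):
    """Trace, in the sense just defined, of an (m*n, n) vertically stacked matrix:
    the length-m vector whose i-th entry is tr(A_i)."""
    m = A.shape[0] // n
    return np.array([np.trace(A[i * n:(i + 1) * n, :]) for i in range(m)])
```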

References

Golub, G., & van Loan, C. (1996). Matrix Computations (Third Edition). Baltimore and London: The Johns Hopkins University Press.

Kim, J., Kim, S., Schaumburg, E., & Sims, C. (2005). Calculating and Using Second Order Accurate Solutions of Discrete Time Dynamic Equilibrium Models. Manuscript.

King, R. G., & Watson, M. (2002). System Reduction and Solution Algorithms for Singular Linear Difference Systems under Rational Expectations. Computational Economics, 20(1-2).

Klein, P. (2000). Using the Generalized Schur Form to Solve a Multivariate Linear Rational Expectations Model. Journal of Economic Dynamics and Control, 24(10).

Lombardo, G., & Sutherland, A. (2005). Computing Second-Order-Accurate Solutions for Rational Expectations Models Using Linear Solution Methods. European Central Bank Working Paper Series No. 487.

Magnus, J., & Neudecker, H. (1999). Matrix Differential Calculus with Applications in Statistics and Econometrics. John Wiley and Sons.

Schmitt-Grohé, S., & Uribe, M. (2004). Solving dynamic general equilibrium models using a second-order approximation to the policy function. Journal of Economic Dynamics and Control, 28.
