Econ Slides from Lecture 8


Econ 205 - Slides from Lecture 8. Joel Sobel. September 1, 2010.

Computational Facts

1. det(AB) = det(BA) = det(A) det(B).
2. If D is a diagonal matrix, then det(D) is equal to the product of its diagonal elements.
3. det(A) is equal to the product of the eigenvalues of A.
4. The trace of a square matrix A is equal to the sum of its diagonal elements: tr(A) = sum_{i=1}^n a_ii.

Fact: tr(A) = sum_{i=1}^n lambda_i, where lambda_i is the ith eigenvalue of A (eigenvalues counted with multiplicity).
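These identities are easy to verify numerically. A minimal sketch with NumPy (the matrices here are illustrative choices, not from the slides):

```python
import numpy as np

# Two small matrices chosen for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 4.0],
              [0.0, 2.0]])

# Fact 1: det(AB) = det(BA) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))

# Fact 3: det(A) equals the product of the eigenvalues of A.
eigvals = np.linalg.eigvals(A)
assert np.isclose(np.linalg.det(A), np.prod(eigvals))

# Trace fact: tr(A) equals the sum of the eigenvalues.
assert np.isclose(np.trace(A), np.sum(eigvals))
```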

Symmetric Matrices Have Orthonormal Eigenvectors

Theorem. If A is symmetric, then we can take the eigenvectors of A to be orthonormal. In this case, the P in the previous theorem has the property that P^{-1} = P^t.

Why Eigenvalues?

1. They play a role in the study of stability of difference and differential equations.
2. They make certain computations easy.
3. They make it possible to define a sense in which matrices can be positive or negative, which allows us to generalize the one-variable second-order conditions.

Quadratic Forms

Definition. A quadratic form in n variables is any function Q : R^n -> R that can be written Q(x) = x^t A x, where A is a symmetric n x n matrix.

When n = 1, a quadratic form is a function of the form a x^2. When n = 2, it is a function of the form a_11 x_1^2 + 2 a_12 x_1 x_2 + a_22 x_2^2 (remember a_12 = a_21). When n = 3, it is a function of the form a_11 x_1^2 + a_22 x_2^2 + a_33 x_3^2 + 2 a_12 x_1 x_2 + 2 a_13 x_1 x_3 + 2 a_23 x_2 x_3.

A quadratic form is a second-degree polynomial that has no constant term.
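The matrix form and the expanded form agree term by term; a quick numerical check for the n = 2 case (the entries of A are illustrative):

```python
import numpy as np

# Symmetric matrix for a two-variable quadratic form: a11=1, a12=a21=2, a22=5.
A = np.array([[1.0, 2.0],
              [2.0, 5.0]])

x = np.array([3.0, -1.0])

# Matrix form: Q(x) = x^t A x.
Q_matrix = x @ A @ x

# Expanded form: a11*x1^2 + 2*a12*x1*x2 + a22*x2^2.
Q_expanded = 1.0 * x[0]**2 + 2 * 2.0 * x[0] * x[1] + 5.0 * x[1]**2

assert np.isclose(Q_matrix, Q_expanded)   # both equal 2.0 here
```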

Classification

Definition. A quadratic form Q(x) is
1. positive definite if Q(x) > 0 for all x ≠ 0;
2. positive semidefinite if Q(x) ≥ 0 for all x;
3. negative definite if Q(x) < 0 for all x ≠ 0;
4. negative semidefinite if Q(x) ≤ 0 for all x;
5. indefinite if there exist x and y such that Q(x) > 0 > Q(y).

Interpretation

Definiteness generalizes positivity: check the one-variable case. There are lots of indefinite matrices when n > 1. Think about diagonal matrices for intuition: if A is diagonal, then Q(x) = x^t A x = sum_{i=1}^n a_ii x_i^2. This quadratic form is positive definite if and only if a_ii > 0 for all i, negative definite if and only if a_ii < 0 for all i, positive semidefinite if and only if a_ii ≥ 0 for all i, negative semidefinite if and only if a_ii ≤ 0 for all i, and indefinite if A has both negative and positive diagonal entries.

Quadratic Forms and Diagonalization

Assume A is a symmetric matrix. It can be written A = R^t D R, where D is a diagonal matrix with the (real) eigenvalues of A down the diagonal and R is an orthogonal matrix. Then Q(x) = x^t A x = x^t R^t D R x = (Rx)^t D (Rx). The definiteness of A is therefore equivalent to the definiteness of its diagonal matrix of eigenvalues, D.
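This decomposition can be computed directly. A sketch using NumPy's `eigh` (the matrix is an illustrative choice; note `eigh` returns an orthogonal matrix V of eigenvector columns with A = V D V^t, so R = V^t matches the slide's convention):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])   # illustrative symmetric matrix

# eigh returns eigenvalues w and orthonormal eigenvector columns V,
# so A = V diag(w) V^t; setting R = V^t gives A = R^t D R.
w, V = np.linalg.eigh(A)
R = V.T
D = np.diag(w)

assert np.allclose(A, R.T @ D @ R)
assert np.allclose(R @ R.T, np.eye(2))   # R is orthogonal

# In rotated coordinates y = Rx, the quadratic form is sum_i w_i * y_i^2.
x = np.array([1.0, -2.0])
y = R @ x
assert np.isclose(x @ A @ x, np.sum(w * y**2))
```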

Theorem on Definiteness

Theorem. The quadratic form Q(x) = x^t A x, where A has eigenvalues lambda_1, ..., lambda_n, is
1. positive definite if lambda_i > 0 for all i;
2. positive semidefinite if lambda_i ≥ 0 for all i;
3. negative definite if lambda_i < 0 for all i;
4. negative semidefinite if lambda_i ≤ 0 for all i;
5. indefinite if there exist j and k such that lambda_j > 0 > lambda_k.
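The theorem translates directly into a classifier over the eigenvalue signs. A minimal sketch (the function name and tolerance are my own; the tolerance guards against floating-point rounding of eigenvalues that are exactly zero):

```python
import numpy as np

def classify(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues,
    following the theorem; tol absorbs floating-point noise near zero."""
    w = np.linalg.eigvalsh(A)          # real eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

# Diagonal examples, matching the intuition from the earlier slide.
assert classify(np.diag([2.0, 3.0])) == "positive definite"
assert classify(np.diag([-1.0, -4.0])) == "negative definite"
assert classify(np.diag([1.0, 0.0])) == "positive semidefinite"
assert classify(np.diag([1.0, -1.0])) == "indefinite"
```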

Computational Trick

Definition. A principal submatrix of a square matrix A is a matrix obtained by deleting any k rows and the corresponding k columns. The determinant of a principal submatrix is called a principal minor of A. The leading principal submatrix of order k of an n x n matrix is obtained by deleting the last n - k rows and columns of the matrix. The determinant of a leading principal submatrix is called a leading principal minor of A.

Definiteness Tests

Theorem. A matrix is
1. positive definite if and only if all of its leading principal minors are positive;
2. negative definite if and only if its odd leading principal minors are negative and its even leading principal minors are positive;
3. indefinite if one of its kth-order leading principal minors is negative for an even k, or if there are two odd leading principal minors that have different signs.

The theorem permits you to classify the definiteness of matrices without finding eigenvalues. The conditions make sense for diagonal matrices.
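The leading-minor test is a few lines of code. A sketch (function names are my own; the sign pattern for negative definiteness is that the kth leading minor has sign (-1)^k):

```python
import numpy as np

def leading_principal_minors(A):
    """Determinants of the upper-left k x k submatrices, k = 1, ..., n."""
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

def is_positive_definite(A):
    # All leading principal minors strictly positive.
    return all(m > 0 for m in leading_principal_minors(A))

def is_negative_definite(A):
    # Odd-order minors negative, even-order minors positive,
    # i.e. the kth minor has the sign of (-1)^k.
    return all((-1) ** k * m > 0
               for k, m in enumerate(leading_principal_minors(A), start=1))

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # illustrative matrix; minors are 2 and 5
assert is_positive_definite(A)
assert is_negative_definite(-A)
assert not is_positive_definite(np.diag([1.0, -1.0]))   # indefinite
```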

Multivariable Calculus

Goal: extend the calculus from real-valued functions of a real variable to general functions from R^n to R^m. Raising the dimension of the range space is easy. Raising the dimension of the domain requires some new ideas.

Linear Structures

In R^2 there are three kinds of linear subspaces: a point (the origin), lines through the origin, and the entire space. In general, we'll care about 1- and (n - 1)-dimensional subsets of R^n.

Analytic Geometry

Definition. A line is described by a point x and a direction v. It can be represented as {z : there exists t in R such that z = x + tv}. If we constrain t to [0, 1] in the definition, then the set is the line segment connecting x to x + v.

Two points still determine a line: the line connecting x to y can be viewed as the line containing x in the direction v = y - x. You should check that this is the same as the line through y in the direction v.

Definition. A hyperplane is described by a point x_0 and a normal direction p in R^n, p ≠ 0. It can be represented as {z : p . (z - x_0) = 0}; p is called the normal direction of the plane. A hyperplane consists of all of the z with the property that the direction z - x_0 is normal to p.

In R^2, lines are hyperplanes. In R^3, hyperplanes are ordinary planes. Lines and hyperplanes are two kinds of flat subsets of R^n: lines are subsets of dimension one; hyperplanes are subsets of dimension n - 1, or co-dimension one. You can have flat subsets of any dimension less than n. Lines and hyperplanes are in general not subspaces (because they need not contain the origin); you obtain these sets by translating a subspace, that is, by adding the same constant to all of its elements.

Definition. A linear manifold of R^n is a set S such that there is a subspace V of R^n and x_0 in R^n with S = V + {x_0}. In the above definition, V + {x_0} = {y : y = v + x_0 for some v in V}. Officially, lines and hyperplanes are linear manifolds and not linear subspaces.

Review

Given two points x and y, you can construct the line that passes through them. Two-dimensional version: {z : z = x + t(y - x) for some t}, that is, z_1 = x_1 + t(y_1 - x_1) and z_2 = x_2 + t(y_2 - x_2). If you use the equation for z_1 to solve for t and substitute out, you get z_2 = x_2 + (y_2 - x_2)(z_1 - x_1)/(y_1 - x_1), or z_2 - x_2 = [(y_2 - x_2)/(y_1 - x_1)](z_1 - x_1), which is the standard way to represent the equation of a line (in the plane) through the point (x_1, x_2) with slope (y_2 - x_2)/(y_1 - x_1).

Conclusion

The parametric representation is essentially equivalent to the standard representation in R^2. (Why only essentially?) You need two pieces of information to describe a line: a point and a direction, or two points (obtain the direction by subtracting the points).

Hyperplane

A hyperplane is determined by a point and a (normal) direction, or by three points in R^3 (n points in R^n), provided that they are in general position. Going from points to a normal: solve a linear system (or remember the computational trick).

Example

Given (1, 2, -3), (0, 1, 1), (2, 1, 1), find A, B, C, D such that Ax_1 + Bx_2 + Cx_3 = D holds at each point:

A + 2B - 3C = D
B + C = D
2A + B + C = D

Doing so yields (A, B, C, D) = (0, .8D, .2D, D). (If you find one set of coefficients that works, any nonzero multiple will also work.) Hence an equation for the plane is 4x_2 + x_3 = 5; you can check that the three points actually satisfy this equation.
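The same computation done numerically: stack the three points as rows, fix a nonzero value of D (any choice works, since the coefficients scale with D), and solve the resulting linear system.

```python
import numpy as np

# The three given points, stacked as rows.
P = np.array([[1.0, 2.0, -3.0],
              [0.0, 1.0,  1.0],
              [2.0, 1.0,  1.0]])

# Fix D = 5 and solve P @ (A, B, C) = (D, D, D).
coeffs = np.linalg.solve(P, np.full(3, 5.0))
assert np.allclose(coeffs, [0.0, 4.0, 1.0])   # the plane 4*x2 + x3 = 5

# Each of the three points satisfies the equation.
assert np.allclose(P @ coeffs, 5.0)
```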

Alternatively

Look for a normal direction: a direction that is orthogonal to all directions in the plane. A direction in the plane is a direction of a line in the plane; you can get such a direction by subtracting any two points in the plane. A two-dimensional hyperplane will have two independent directions. One direction can come from the difference between the first two points: (1, 1, -4). The other can come from the difference between the second and third points: (-2, 0, 0). Now find a normal to both of them, that is, a p such that p ≠ 0 and p . (1, 1, -4) = p . (-2, 0, 0) = 0. This is a system of two equations in three variables. All multiples of (0, 4, 1) solve the equations.
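In R^3 specifically, the cross product produces a vector orthogonal to both in-plane directions in one step, which is a convenient way to check the computation:

```python
import numpy as np

p1 = np.array([1.0, 2.0, -3.0])
p2 = np.array([0.0, 1.0,  1.0])
p3 = np.array([2.0, 1.0,  1.0])

# Two independent directions in the plane.
d1 = p1 - p2          # (1, 1, -4)
d2 = p2 - p3          # (-2, 0, 0)

# The cross product is orthogonal to both directions.
n = np.cross(d1, d2)
assert np.isclose(n @ d1, 0) and np.isclose(n @ d2, 0)

# n is a multiple of (0, 4, 1), matching the slide's answer.
assert np.allclose(np.cross(n, [0.0, 4.0, 1.0]), 0)
```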

Linear Functions

Definition. A function L : R^n -> R^m is linear if and only if
1. for all x and y, L(x + y) = L(x) + L(y), and
2. for all scalars lambda, L(lambda x) = lambda L(x).

Discussion

1. Linear functions are additive.
2. Linear functions exhibit constant returns to scale.
3. If L is a linear function, then L(0) = 0 and, more generally, L(-x) = -L(x).
4. Any linear function can be represented by matrix multiplication. Given a linear function, compute L(e_i), where e_i is the ith standard basis element. Call this a_i and let A be the matrix with ith column equal to a_i. A has n columns and m rows, and L(x) = Ax for all x.
5. Multiplication of matrices is composition of functions.
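The construction in item 4 can be sketched directly: evaluate L at the standard basis vectors and stack the images as columns. The particular map L below is an illustrative choice, not from the slides.

```python
import numpy as np

# An arbitrary linear map L : R^3 -> R^2, chosen for illustration.
def L(x):
    return np.array([x[0] + 2 * x[1], 3 * x[2] - x[1]])

n, m = 3, 2

# Columns of A are the images of the standard basis vectors e_1, ..., e_n.
A = np.column_stack([L(e) for e in np.eye(n)])
assert A.shape == (m, n)   # m rows, n columns, as in the slide

# A reproduces L on any input.
x = np.array([1.0, -1.0, 2.0])
assert np.allclose(A @ x, L(x))
```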