Economics 204 Fall 2013 Problem Set 5 Suggested Solutions


1. Let $A$ and $B$ be $n \times n$ matrices such that $A^2 = A$ and $B^2 = B$. Suppose that $A$ and $B$ have the same rank. Prove that $A$ and $B$ are similar.

Solution. To begin, we note from the last problem set that if $\lambda$ is an eigenvalue of $A$, then $\lambda^2$ is an eigenvalue of $A^2$. Since $A = A^2$, we must have $\lambda = \lambda^2$ for every eigenvalue $\lambda$ of $A$. Therefore, the eigenvalues of $A$ (and, by the same argument, of $B$) can only be $0$ or $1$.

Now, observe that the eigenspace of the eigenvalue $\lambda = 0$ is just $\ker(A)$. We claim that the eigenspace corresponding to the eigenvalue $\lambda = 1$ is $\mathrm{Im}(A)$. Pick any $x \in \mathrm{Im}(A)$, i.e., $x = Ay$ for some $y$. Then
\[ Ax = A(Ay) = A^2 y = Ay = x, \]
so every nonzero $x \in \mathrm{Im}(A)$ is an eigenvector associated with the eigenvalue $\lambda = 1$. Conversely, any eigenvector $x$ with eigenvalue $1$ satisfies $x = Ax \in \mathrm{Im}(A)$, so the eigenspace for $\lambda = 1$ is exactly $\mathrm{Im}(A)$.

Let $k$ be the common rank of $A$ and $B$, which, given what we said above, is the dimension of the eigenspace corresponding to the eigenvalue $\lambda = 1$. By the Rank-Nullity Theorem we know that $\operatorname{rank} A + \dim \ker(A) = n$; thus the eigenspace corresponding to the eigenvalue $\lambda = 0$ has dimension $n - k$, and there are $n$ linearly independent eigenvectors that span the whole space. Therefore, $A$ and $B$ are both diagonalizable, implying that there are non-singular matrices $C_1$ and $C_2$ such that
\[ C_1 A C_1^{-1} = \begin{bmatrix} I_k & 0 \\ 0 & 0 \end{bmatrix} = C_2 B C_2^{-1}. \]
Thus, we have $C_1 A C_1^{-1} = C_2 B C_2^{-1}$, and therefore
\[ A = (C_1^{-1} C_2)\, B\, (C_1^{-1} C_2)^{-1}, \]
i.e., $A$ and $B$ are similar.
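As a numerical sanity check of this result, the following sketch (assuming NumPy is available; the two rank-one idempotent matrices are our own illustrative choices, not from the problem) builds two idempotent matrices of equal rank and exhibits an explicit similarity between them, exactly as in the proof:

```python
import numpy as np

# Two idempotent (M @ M = M) 3x3 matrices of equal rank 1: an orthogonal
# coordinate projection and an oblique rank-1 projection.
A = np.diag([1.0, 0.0, 0.0])
v = np.array([[1.0], [2.0], [3.0]])
w = np.array([[1.0], [0.0], [1.0]])
B = (v @ w.T) / (w.T @ v).item()      # B @ B = B since w'v normalizes the product
assert np.allclose(A @ A, A) and np.allclose(B @ B, B)
assert np.linalg.matrix_rank(A) == np.linalg.matrix_rank(B) == 1

def eigvec_basis(M):
    """Columns are eigenvectors of M, ordered so inv(V) @ M @ V = diag(1, 0, 0)."""
    vals, vecs = np.linalg.eig(M)
    return vecs[:, np.argsort(-vals.real)]

V1, V2 = eigvec_basis(A), eigvec_basis(B)
S = V1 @ np.linalg.inv(V2)            # conjugates B into A, as in the proof
assert np.allclose(S @ B @ np.linalg.inv(S), A)
print("A and B are similar")
```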

2. Identify which of the following matrices can be diagonalized and provide the diagonalization. If you claim that a diagonalization does not exist, prove it.
\[
A = \begin{bmatrix} 2 & -3 \\ 2 & -5 \end{bmatrix}, \quad
B = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad
C = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 3 \\ 1 & 1 & -1 \end{bmatrix}, \quad
D = \begin{bmatrix} 3 & -1 & -2 \\ 2 & 0 & -2 \\ 2 & -1 & -1 \end{bmatrix}
\]

Solution. Note that matrix $A$ has two distinct eigenvalues, $1$ and $-4$, and is therefore diagonalizable (these can be found either by inspection, using the fact that the sum of the eigenvalues equals the trace of the matrix, or by solving the characteristic polynomial). Eigenvectors associated with these eigenvalues are $[3, 1]^T$ and $[1, 2]^T$ respectively. A diagonalization therefore is
\[
\begin{bmatrix} 1 & 0 \\ 0 & -4 \end{bmatrix}
= \begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}^{-1}
\begin{bmatrix} 2 & -3 \\ 2 & -5 \end{bmatrix}
\begin{bmatrix} 3 & 1 \\ 1 & 2 \end{bmatrix}.
\]
Note that the columns of the matrix on the far right are the eigenvectors.

Matrix $B$ has only one eigenvalue, $1$, repeated twice. This eigenvalue does not generate two linearly independent eigenvectors: the single eigenvector $[1, 0]^T$ spans the eigenspace. Since the dimension of the eigenspace is therefore less than $2$, matrix $B$ is not diagonalizable.

Matrix $C$ has three distinct eigenvalues, $1$, $2$ and $-2$, and is therefore diagonalizable. Eigenvectors associated with these eigenvalues are $[3, 1, 2]^T$, $[0, 3, 1]^T$ and $[0, 1, -1]^T$ respectively. A diagonalization therefore is
\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & -2 \end{bmatrix}
= \begin{bmatrix} 3 & 0 & 0 \\ 1 & 3 & 1 \\ 2 & 1 & -1 \end{bmatrix}^{-1}
\begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 3 \\ 1 & 1 & -1 \end{bmatrix}
\begin{bmatrix} 3 & 0 & 0 \\ 1 & 3 & 1 \\ 2 & 1 & -1 \end{bmatrix}.
\]

Matrix $D$ has only two eigenvalues, $1$ and $0$, with $1$ repeated. However, we are in luck, as the eigenvalue $1$ is associated with two linearly independent eigenvectors, $[1, 0, 1]^T$ and $[1, 2, 0]^T$, while the eigenvalue $0$ is associated with the eigenvector $[1, 1, 1]^T$. Thus the matrix $D$ is diagonalizable and a diagonalization is
\[
\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{bmatrix}
= \begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 1 & 0 & 1 \end{bmatrix}^{-1}
\begin{bmatrix} 3 & -1 & -2 \\ 2 & 0 & -2 \\ 2 & -1 & -1 \end{bmatrix}
\begin{bmatrix} 1 & 1 & 1 \\ 0 & 2 & 1 \\ 1 & 0 & 1 \end{bmatrix}.
\]
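Each of these claims is straightforward to verify numerically; a minimal sketch, assuming NumPy is available:

```python
import numpy as np

A = np.array([[2, -3], [2, -5]], dtype=float)
B = np.array([[1,  1], [0,  1]], dtype=float)
C = np.array([[1, 0, 0], [-2, 1, 3], [1, 1, -1]], dtype=float)
D = np.array([[3, -1, -2], [2, 0, -2], [2, -1, -1]], dtype=float)

def check_diagonalization(M, P, eigenvalues):
    """Verify that P^{-1} M P equals diag(eigenvalues)."""
    assert np.allclose(np.linalg.inv(P) @ M @ P, np.diag(eigenvalues))

check_diagonalization(A, np.array([[3.0, 1], [1, 2]]), [1, -4])
check_diagonalization(C, np.array([[3.0, 0, 0], [1, 3, 1], [2, 1, -1]]), [1, 2, -2])
check_diagonalization(D, np.array([[1.0, 1, 1], [0, 2, 1], [1, 0, 1]]), [1, 1, 0])

# B is defective: the eigenvalue 1 has algebraic multiplicity 2, but
# ker(B - I) is one-dimensional, spanned by [1, 0]^T.
print(np.linalg.eigvals(B))                       # [1., 1.]
print(2 - np.linalg.matrix_rank(B - np.eye(2)))   # 1 < 2: not diagonalizable
```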

3. Let $A$ and $B$ be $n \times n$ matrices. Prove or disprove each of the following statements:

(a) If $A$ and $B$ are diagonalizable, so is $A + B$.

Solution. False. Take
\[ A = \begin{bmatrix} 1 & 0 \\ 1 & -1 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} -1 & -1 \\ 0 & 1 \end{bmatrix}; \]
they are both diagonalizable (they are lower- and upper-triangular matrices with distinct diagonal terms, so each has two distinct eigenvalues), but
\[ A + B = \begin{bmatrix} 0 & -1 \\ 1 & 0 \end{bmatrix} \]
has characteristic polynomial $\lambda^2 + 1$, which has no real roots, and is therefore NOT diagonalizable over the reals.

(b) If $A$ and $B$ are diagonalizable, so is $A B$.

Solution. Also false. Take
\[ A = \begin{bmatrix} 1 & 2 \\ 0 & 2 \end{bmatrix} \quad\text{and}\quad B = \begin{bmatrix} 1 & 0 \\ 0 & \tfrac{1}{2} \end{bmatrix}; \]
then $A$ and $B$ are clearly diagonalizable ($B$ is already in diagonal form, and $A$ is upper-triangular with distinct diagonal terms). However,
\[ A B = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \]
and as you have seen for matrix $B$ in the previous exercise, this matrix is NOT diagonalizable.

(c) If $A^2 = A$ then $A$ is diagonalizable.

Solution. True: we have just proved this in Problem 1. In case you wondered: no, that was not intended. But you are welcome anyway.
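Both counterexamples are easy to confirm numerically; a minimal sketch, assuming NumPy is available (a real matrix with complex eigenvalues, or with a repeated eigenvalue whose eigenspace is too small, cannot be diagonalized over the reals):

```python
import numpy as np

# (a) The sum of two diagonalizable matrices need not be diagonalizable.
A = np.array([[1.0, 0.0], [1.0, -1.0]])    # triangular, eigenvalues 1, -1
B = np.array([[-1.0, -1.0], [0.0, 1.0]])   # triangular, eigenvalues -1, 1
print(np.linalg.eigvals(A + B))            # [0.+1.j, 0.-1.j]: no real eigenvalues

# (b) The product of two diagonalizable matrices need not be diagonalizable.
A = np.array([[1.0, 2.0], [0.0, 2.0]])     # eigenvalues 1, 2
B = np.diag([1.0, 0.5])                    # already diagonal
AB = A @ B                                 # equals [[1, 1], [0, 1]]
print(np.linalg.eigvals(AB))                        # [1., 1.] repeated
print(2 - np.linalg.matrix_rank(AB - np.eye(2)))    # dim ker(AB - I) = 1 < 2
```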

4. Prove that if $f : \mathbb{R} \to \mathbb{R}$ is differentiable, then $\mathrm{Im}(f')$ is an interval (possibly a singleton).

Solution. Let's start by observing that $\mathrm{Im}(f')$ has to be at least a singleton (i.e., it can't be the empty set), because by assumption $f$ is differentiable on all of $\mathbb{R}$. Consider the function $F : \Omega \to \mathbb{R}$ defined by
\[ F(x, y) = \frac{f(x) - f(y)}{x - y} \]
on the open half-plane $\Omega = \{(x, y) \in \mathbb{R}^2 : x > y\}$, and let $J = \mathrm{Im}(F)$. Note that $F$ is continuous and $\Omega$ is a connected set (we will prove that in a bit!), so $J$ must be a connected subset of $\mathbb{R}$, i.e., an interval.

By the Mean Value Theorem, for any point $(x, y) \in \Omega$ there is a point $z \in (y, x)$ such that $f(x) - f(y) = f'(z)(x - y)$. By the construction of $F$, we must have $F(x, y) = f'(z)$. Therefore, it must be the case that $J \subseteq \mathrm{Im}(f')$. Moreover, for a given $x \in \mathbb{R}$ we have $f'(x) = \lim_{y \to x} F(x, y)$; therefore $\mathrm{Im}(f')$ must be contained in the closure of $J$. Thus we have
\[ J \subseteq \mathrm{Im}(f') \subseteq \bar{J}, \]
and since any set sandwiched between an interval and its closure is itself an interval, $\mathrm{Im}(f')$ is an interval. It remains to show that $\Omega$ is a connected set.

It is easy to see that $\Omega$ is a convex set, where we call a set $S$ convex if for any $x, x' \in S$ and $\alpha \in [0, 1]$, the point $(1 - \alpha)x + \alpha x' \in S$. So let's prove the more general result that any convex set $\Omega \subseteq \mathbb{R}^k$ is connected.

As with most proofs of connectedness, we proceed by contradiction. Suppose that there is a separation of $\Omega$: that is, $\Omega = A \cup B$, where $A$ and $B$ are nonempty separated sets (meaning $\bar{A} \cap B = A \cap \bar{B} = \emptyset$). We will show that a separation of $\Omega$ implies that a line segment connecting any two points in $A$ and $B$ is also separated. But that would be impossible, as it would contradict the definition of $\Omega$ being convex.

So, let's pick any $a \in A$ and $b \in B$, and define the following function:
\[ p(t) = (1 - t)a + t b \quad\text{for } t \in \mathbb{R}. \]
Let's also define the two sets $A_0 = p^{-1}(A)$ and $B_0 = p^{-1}(B)$. We want to demonstrate that $\bar{A}_0 \cap B_0 = \emptyset$ and $A_0 \cap \bar{B}_0 = \emptyset$.

Assume to the contrary that $A_0 \cap \bar{B}_0 \neq \emptyset$, and pick $t_0 \in A_0 \cap \bar{B}_0$. If $t_0 \in B_0$, then we have $p(t_0) \in A \cap B$, which contradicts the fact that $A$ and $B$ are separated (in particular, disjoint). So it must be the case that $t_0 \in \bar{B}_0 \setminus B_0$, i.e., $t_0$ is a limit point of $B_0$. Now, we want to show that this implies $p(t_0) \in \bar{B}$, because again this will lead us to a contradiction with our assumption that $A$ and $B$ are separated. Note that for any $\varepsilon > 0$ there exists a $\delta > 0$ such that for all $t \in B_\delta(t_0)$ we have $p(t) \in B_\varepsilon(p(t_0))$ (that is just continuity of $p(t)$, and, for instance, $\delta = \varepsilon / \lVert b - a \rVert$ will work). Now, since $t_0$ is a limit point of $B_0$, there is some $t_1 \in B_\delta(t_0) \cap B_0$ such that $t_1 \neq t_0$, and thus we have $p(t_1) \in p(B_\delta(t_0))$, which implies $p(t_1) \in B_\varepsilon(p(t_0))$. This means that every $\varepsilon$-ball around $p(t_0)$ contains a point $p(t_1) \in B$. Thus $p(t_0) \in \bar{B}$. We arrive at a contradiction, because this implies $p(t_0) \in A \cap \bar{B}$. Thus $A_0 \cap \bar{B}_0 = \emptyset$. Similar reasoning can be employed to show that $\bar{A}_0 \cap B_0 = \emptyset$.

Now, let's demonstrate that there is a $t_0 \in (0, 1)$ such that $p(t_0) \notin A \cup B$. Again, let's reason by contradiction, and suppose that there is no such $t_0$. Then for all $t_0 \in (0, 1)$ we have $p(t_0) \in A$ or $p(t_0) \in B$, but not both, because $A$ and $B$ are separated. So this means that $(0, 1) \subseteq p^{-1}(A \cup B) = A_0 \cup B_0$. Since $(0, 1)$ is connected, it must be completely contained in either $A_0$ or $B_0$; the proof of that, by contradiction, is short and left as an exercise. So assume that $(0, 1) \subseteq A_0$. This means $[0, 1] \subseteq \bar{A}_0$; but $p(1) = b \in B$, so $1 \in B_0$, and that is a contradiction, because we have just shown that $\bar{A}_0$ and $B_0$ are disjoint. Therefore, there must be a $t_0 \in (0, 1)$ such that $p(t_0) \notin A \cup B$. But this is an immediate contradiction to the fact that $\Omega$ is a convex set, since $p(t_0)$ lies on the segment from $a$ to $b$ and so must belong to $\Omega = A \cup B$! Therefore, $\Omega$ must be connected. We are done.
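The interval property can also be illustrated numerically, even when $f'$ is discontinuous. A small sketch, assuming NumPy is available and using the illustrative function $f(t) = t^2 \sin(1/t)$ with $f(0) = 0$ (differentiable everywhere, derivative discontinuous at $0$); by the Mean Value Theorem, every sampled value of the difference quotient $F$ is a value of $f'$:

```python
import numpy as np

def f(t):
    """f(t) = t^2 sin(1/t), extended by f(0) = 0."""
    t = np.asarray(t, dtype=float)
    out = np.zeros_like(t)
    nz = t != 0
    out[nz] = t[nz] ** 2 * np.sin(1.0 / t[nz])
    return out

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 200_000)
y = rng.uniform(-1.0, 1.0, 200_000)
x, y = x[x > y], y[x > y]       # restrict to the half-plane Omega = {x > y}

F = (f(x) - f(y)) / (x - y)     # each value equals f'(z) for some z, by the MVT
print(F.min(), F.max())         # the sampled f' values span an interval around 0
```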

5. Some practice with the implicit function theorem.

(a) Define $f : \mathbb{R}^3 \to \mathbb{R}$ by $f(x, y_1, y_2) = x^2 y_1 + e^x + y_2$. Show that $f(0, 1, -1) = 0$ and $D_1 f(0, 1, -1) \neq 0$. Thus, there exists a differentiable function $g$ in some neighborhood of $(1, -1)$ in $\mathbb{R}^2$ such that
\[ g(1, -1) = 0 \quad\text{and}\quad f(g(y_1, y_2), y_1, y_2) = 0. \]
Compute $Dg(1, -1)$.

Solution. Direct computations immediately verify that $f(0, 1, -1) = 0 + e^0 - 1 = 0$ and $D_1 f(0, 1, -1) = (2xy_1 + e^x)\big|_{(0,1,-1)} = e^0 = 1$, as well as $D_2 f(0, 1, -1) = x^2\big|_{x=0} = 0$ and $D_3 f(0, 1, -1) = 1$. Since $D_1 f(0, 1, -1)$ is non-zero, the Implicit Function Theorem guarantees the existence of a $C^1$ mapping $g$, defined in a neighborhood of $(y_1, y_2) = (1, -1)$, such that $g(1, -1) = 0$ and $f(g(y_1, y_2), y_1, y_2) = 0$, as required. It also allows us to compute
\[ D_1 g(1, -1) = -\frac{D_2 f}{D_1 f} = 0 \quad\text{and}\quad D_2 g(1, -1) = -\frac{D_3 f}{D_1 f} = -1. \]

(b) Let $x = x(y, z)$, $y = y(x, z)$ and $z = z(x, y)$ be functions that are implicitly defined by the equation $F(x, y, z) = 0$. Prove that
\[ \frac{\partial x}{\partial y}\,\frac{\partial y}{\partial z}\,\frac{\partial z}{\partial x} = -1. \]

Solution. Since we are not given a particular point $(x_0, y_0, z_0)$, and since the problem already presumes the existence of the implicit functions $x = x(y, z)$, $y = y(x, z)$ and $z = z(x, y)$, we just appeal directly to the Implicit Function Theorem to compute the partial derivatives that we need:
\[ \frac{\partial x}{\partial y} = -\frac{F_y}{F_x}, \qquad \frac{\partial y}{\partial z} = -\frac{F_z}{F_y}, \qquad \frac{\partial z}{\partial x} = -\frac{F_x}{F_z}, \]
which immediately give us the result we need:
\[ \frac{\partial x}{\partial y}\,\frac{\partial y}{\partial z}\,\frac{\partial z}{\partial x} = \left(-\frac{F_y}{F_x}\right)\left(-\frac{F_z}{F_y}\right)\left(-\frac{F_x}{F_z}\right) = -1, \]
with the understanding that all partial derivatives above are evaluated at the same point $(x_0, y_0, z_0)$.
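Both computations are easy to replicate symbolically; a sketch assuming SymPy is available (for part (b) we check the identity on a concrete, illustrative choice of $F$, since the general statement is an algebraic identity in the partials):

```python
import sympy as sp

# (a) f(x, y1, y2) = x^2 y1 + e^x + y2 around the point (0, 1, -1).
x, y1, y2 = sp.symbols('x y1 y2')
f = x**2 * y1 + sp.exp(x) + y2
point = {x: 0, y1: 1, y2: -1}
assert f.subs(point) == 0
D1, D2, D3 = sp.diff(f, x), sp.diff(f, y1), sp.diff(f, y2)
print((-D2 / D1).subs(point), (-D3 / D1).subs(point))   # 0  -1

# (b) The triple product rule, checked on F(X, Y, Z) = X + 2*Y + 3*Z - 1.
X, Y, Z = sp.symbols('X Y Z')
F = X + 2*Y + 3*Z - 1
Fx, Fy, Fz = sp.diff(F, X), sp.diff(F, Y), sp.diff(F, Z)
print(sp.simplify((-Fy/Fx) * (-Fz/Fy) * (-Fx/Fz)))      # -1
```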

6. Let $f : \mathbb{R} \to \mathbb{R}$ be a $C^1$ function and let
\[ u = f(x), \qquad v = y + x f(x). \]
If $f'(x_0) \neq 0$, show that this transformation is locally invertible in a neighborhood of $(x_0, y_0)$ and that the inverse has the form
\[ x = g(u), \qquad y = v - u g(u). \]

Solution. Consider $F : \mathbb{R}^2 \to \mathbb{R}^2$ given by $F(x, y) = (f(x),\, y + x f(x))$. Its Jacobian is
\[ DF(x, y) = \begin{bmatrix} f'(x) & 0 \\ f(x) + x f'(x) & 1 \end{bmatrix}, \]
so $\det DF(x_0, y_0) = f'(x_0) \neq 0$. Thus, by the Inverse Function Theorem, $F$ is invertible in an open neighborhood of $(x_0, y_0)$. Applying the Implicit Function Theorem to the equation $u - f(x) = 0$ (again because $f'(x_0) \neq 0$), $f$ has a local inverse $g$ in an open neighborhood of $x_0$. Thus, we can solve for each component of $F^{-1}$ explicitly: $x = g(u)$, since $g(u) = g(f(x)) = x$, and
\[ y = v - x f(x) = v - g(u) f(g(u)) = v - u g(u). \]
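A quick numerical spot-check of the inverse formula, assuming NumPy and the illustrative choice $f(x) = e^x$ (so that $f'(x) = e^x \neq 0$ everywhere and $g = \log$):

```python
import numpy as np

f, g = np.exp, np.log                   # f and its local (here: global) inverse

x0, y0 = 0.7, -1.3
u, v = f(x0), y0 + x0 * f(x0)           # forward map: (u, v) = F(x0, y0)
x_back = g(u)                           # x = g(u)
y_back = v - u * g(u)                   # y = v - u g(u)
assert np.isclose(x_back, x0) and np.isclose(y_back, y0)
print(x_back, y_back)                   # recovers (0.7, -1.3)
```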

7. Using Taylor's formula, approximate the following two functions,
\[ f(x, y) = (1 + x)^m (1 + y)^n \quad\text{and}\quad f(x, y) = \frac{\cos x}{\cos y}, \]
up to the second-order terms, assuming $x$ and $y$ are small in absolute value. Estimate the approximation error.

Solution. First, let's consider $f(x, y) = (1 + x)^m (1 + y)^n$. The second-order Taylor expansion of $f(x, y)$ around $(x_0, y_0)$ is given by
\[
f(x, y) \approx f(x_0, y_0) + Df(x_0, y_0) \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix} + \frac{1}{2!} \begin{bmatrix} x - x_0 & y - y_0 \end{bmatrix} D^2 f(x_0, y_0) \begin{bmatrix} x - x_0 \\ y - y_0 \end{bmatrix}.
\]
The derivatives we need to evaluate this expression are given by
\[
\begin{aligned}
f_x(x, y) &= m(1+x)^{m-1}(1+y)^n, & f_y(x, y) &= n(1+x)^m(1+y)^{n-1},\\
f_{xx}(x, y) &= m(m-1)(1+x)^{m-2}(1+y)^n, & f_{yy}(x, y) &= n(n-1)(1+x)^m(1+y)^{n-2},\\
f_{xy}(x, y) &= mn(1+x)^{m-1}(1+y)^{n-1}, &&
\end{aligned}
\]
which, when evaluated at $x_0 = 0$ and $y_0 = 0$ and plugged in, yield
\[ f(x, y) \approx 1 + mx + ny + \tfrac{1}{2}\left(m(m-1)x^2 + 2mnxy + n(n-1)y^2\right). \]
The error estimate is the difference between the true value of $f(x, y)$ and the Taylor expansion. We write this as
\[ E_3 = (1+x)^m(1+y)^n - 1 - mx - ny - \tfrac{1}{2}\left(m(m-1)x^2 + 2mnxy + n(n-1)y^2\right), \]
where the subscript $3$ on $E_3$ indicates that this error term is a third-order error term. It is also $O\!\left(\lVert (x, y) \rVert^3\right)$.

Now, consider $f(x, y) = \dfrac{\cos x}{\cos y}$. Using Taylor's formula we have the following one-variable approximations,
\[ \cos x = 1 - \frac{x^2}{2} + o(x^2) \quad\text{and}\quad \frac{1}{1 - z} = 1 + z + z^2 + o(z^2), \]
which are valid when $x$ and $z$ are small. Thus, applying the second expansion with $z = \frac{y^2}{2} + o(y^2)$, we have
\[
\frac{\cos x}{\cos y} = \left(1 - \frac{x^2}{2} + o(x^2)\right)\left(1 + \frac{y^2}{2} + o(y^2)\right) = 1 - \frac{x^2}{2} + \frac{y^2}{2} + o(x^2 + y^2) \approx 1 - \frac{x^2 - y^2}{2}.
\]
The estimate of the error term is analogous to that for the function $f(x, y) = (1 + x)^m (1 + y)^n$.
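Both expansions can be verified symbolically. A minimal sketch, assuming SymPy is available, with $m$ and $n$ kept as symbols:

```python
import sympy as sp

x, y, m, n = sp.symbols('x y m n')

# First function: second-order expansion of (1+x)^m (1+y)^n around (0, 0).
f = (1 + x)**m * (1 + y)**n
taylor2 = sum(
    sp.diff(f, (x, i), (y, j)).subs({x: 0, y: 0}) * x**i * y**j
    / (sp.factorial(i) * sp.factorial(j))
    for i in range(3) for j in range(3) if i + j <= 2
)
print(sp.expand(taylor2))
# 1 + m*x + n*y + m*(m-1)*x**2/2 + m*n*x*y + n*(n-1)*y**2/2

# Second function: cos(x)/cos(y), expanded to second order in x, then in y.
g = sp.cos(x) / sp.cos(y)
g2 = g.series(x, 0, 3).removeO().series(y, 0, 3).removeO()
print(sp.expand(g2))
# 1 - x**2/2 + y**2/2 - x**2*y**2/4 (the cross term is higher order)
```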