
Math 4242 Fall 2016 (Darij Grinberg): homework set 8
due: Wed, 14 Dec 2016

[Thanks to Hannah Brand for parts of the solutions.]

Exercise 1. Recall that we defined the multiplication of complex numbers by the rule

$(a_1, b_1)(a_2, b_2) = (a_1 a_2 - b_1 b_2,\ a_1 b_2 + a_2 b_1)$.

(a) Prove that this multiplication is associative: i.e., that $z_1(z_2 z_3) = (z_1 z_2) z_3$ for every three complex numbers $z_1, z_2, z_3$. (Begin by writing $z_1$ in the form $(a_1, b_1)$, etc.) [5 points]

(b) For any complex number $z = (a, b) = a + bi$, define a real $2 \times 2$-matrix $W_z$ by

$W_z = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$.

Given two complex numbers $z_1$ and $z_2$, prove that $W_{z_1 z_2} = W_{z_1} W_{z_2}$. [5 points]

Solution to Exercise 1. We begin by proving part (b).

(b) Let $z_1$ and $z_2$ be two complex numbers. Write $z_1$ in the form $z_1 = (a_1, b_1)$. Write $z_2$ in the form $z_2 = (a_2, b_2)$. Thus,

$z_1 z_2 = (a_1, b_1)(a_2, b_2) = (a_1 a_2 - b_1 b_2,\ a_1 b_2 + b_1 a_2)$

(by the definition of the product of two complex numbers). Hence, the definition of $W_{z_1 z_2}$ yields

$W_{z_1 z_2} = \begin{pmatrix} a_1 a_2 - b_1 b_2 & -(a_1 b_2 + b_1 a_2) \\ a_1 b_2 + b_1 a_2 & a_1 a_2 - b_1 b_2 \end{pmatrix}$.   (1)

On the other hand, we have $z_1 = (a_1, b_1)$. Thus, $W_{z_1} = \begin{pmatrix} a_1 & -b_1 \\ b_1 & a_1 \end{pmatrix}$. Similarly, $W_{z_2} = \begin{pmatrix} a_2 & -b_2 \\ b_2 & a_2 \end{pmatrix}$. Multiplying these two equalities, we obtain

$W_{z_1} W_{z_2} = \begin{pmatrix} a_1 & -b_1 \\ b_1 & a_1 \end{pmatrix} \begin{pmatrix} a_2 & -b_2 \\ b_2 & a_2 \end{pmatrix} = \begin{pmatrix} a_1 a_2 - b_1 b_2 & -a_1 b_2 - b_1 a_2 \\ b_1 a_2 + a_1 b_2 & -b_1 b_2 + a_1 a_2 \end{pmatrix}$.

Comparing this with (1), we obtain $W_{z_1 z_2} = W_{z_1} W_{z_2}$. This solves Exercise 1 (b).

[Remark: Of course, we also have $W_{z_1 + z_2} = W_{z_1} + W_{z_2}$ and $W_{z_1 - z_2} = W_{z_1} - W_{z_2}$ for any two complex numbers $z_1$ and $z_2$. These facts, combined, show that the addition,

subtraction and multiplication of complex numbers are mirrored by the addition, subtraction and multiplication of their corresponding W-matrices (where the "W-matrix" of a complex number $z$ means the $2 \times 2$-matrix $W_z$). In more abstract terms, this says that the map $\mathbb{C} \to \mathbb{R}^{2 \times 2}$ sending each complex number $z$ to its W-matrix $W_z$ is a ring homomorphism.¹ This fact is behind our second solution of part (a) given below.]

(a) First solution of part (a): Here is the straightforward approach. We defined the multiplication of complex numbers by the rule

$(a_1, b_1)(a_2, b_2) = (a_1 a_2 - b_1 b_2,\ a_1 b_2 + a_2 b_1)$.

Given three complex numbers $z_1 = (a_1, b_1)$, $z_2 = (a_2, b_2)$ and $z_3 = (a_3, b_3)$, we can see that $z_1(z_2 z_3) = (z_1 z_2) z_3$ by explicitly computing both sides of this equation:

$z_1(z_2 z_3) = (a_1, b_1)\left((a_2, b_2)(a_3, b_3)\right) = (a_1, b_1)(a_2 a_3 - b_2 b_3,\ a_2 b_3 + a_3 b_2)$
$= \left(a_1(a_2 a_3 - b_2 b_3) - b_1(a_2 b_3 + a_3 b_2),\ a_1(a_2 b_3 + a_3 b_2) + (a_2 a_3 - b_2 b_3) b_1\right)$
$= \left(a_1 a_2 a_3 - a_1 b_2 b_3 - a_2 b_1 b_3 - a_3 b_1 b_2,\ a_1 a_2 b_3 + a_1 a_3 b_2 + a_2 a_3 b_1 - b_1 b_2 b_3\right)$

and

$(z_1 z_2) z_3 = \left((a_1, b_1)(a_2, b_2)\right)(a_3, b_3) = (a_1 a_2 - b_1 b_2,\ a_1 b_2 + a_2 b_1)(a_3, b_3)$
$= \left((a_1 a_2 - b_1 b_2) a_3 - (a_1 b_2 + a_2 b_1) b_3,\ (a_1 a_2 - b_1 b_2) b_3 + a_3 (a_1 b_2 + a_2 b_1)\right)$
$= \left(a_1 a_2 a_3 - a_3 b_1 b_2 - a_1 b_2 b_3 - a_2 b_1 b_3,\ a_1 a_2 b_3 - b_1 b_2 b_3 + a_1 a_3 b_2 + a_2 a_3 b_1\right)$.

The right hand sides of these two equations are equal (even though the terms appear in slightly different orders in them). Thus, the left hand sides are also equal. In other words, $z_1(z_2 z_3) = (z_1 z_2) z_3$. This solves part (a).

Second solution of part (a): Here is a more elegant proof, using part (b). Part (b) says that $W_{z_1 z_2} = W_{z_1} W_{z_2}$ for any two complex numbers $z_1$ and $z_2$. Renaming $z_1$ and $z_2$ as $u$ and $v$, we can rewrite this as follows:

$W_{uv} = W_u W_v$ for any two complex numbers $u$ and $v$.   (2)

Furthermore, any complex number $z$ can be reconstructed from the $2 \times 2$-matrix $W_z$.² Therefore, if $u$ and $v$ are two complex numbers satisfying $W_u = W_v$, then $u = v$.

¹Strictly speaking, this statement also includes the facts that $W_0 = 0$ and $W_1 = I_2$.

²Proof. Let $z$ be a complex number. Write $z$ in the form $(a, b)$. Then, $W_z = \begin{pmatrix} a & -b \\ b & a \end{pmatrix}$ by the definition of $W_z$. Hence, $a$ and $-b$ are the two entries of the first row of the matrix $W_z$. Therefore, we can reconstruct $a$ and $b$ from $W_z$. Therefore, we can reconstruct $z$ from $W_z$ (since $z = (a, b)$). Qed.
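Both parts of Exercise 1 are easy to sanity-check numerically. The following sketch (the helper names `mul` and `W` are ours, not part of the graded solution) tests the associativity of the pair rule and the identity $W_{z_1 z_2} = W_{z_1} W_{z_2}$ on sample values:

```python
# Numerical check of Exercise 1: the pair rule for complex multiplication is
# associative, and z -> W_z turns multiplication of complex numbers into
# multiplication of 2x2 matrices. Helper names `mul` and `W` are ours.
import numpy as np

def mul(z, w):
    """The pair rule (a1,b1)(a2,b2) = (a1 a2 - b1 b2, a1 b2 + a2 b1)."""
    a1, b1 = z
    a2, b2 = w
    return (a1 * a2 - b1 * b2, a1 * b2 + a2 * b1)

def W(z):
    """The 2x2 real matrix ((a, -b), (b, a)) attached to z = (a, b)."""
    a, b = z
    return np.array([[a, -b], [b, a]])

z1, z2, z3 = (1, 2), (3, -1), (-2, 5)
assert mul(z1, mul(z2, z3)) == mul(mul(z1, z2), z3)   # part (a)
assert np.array_equal(W(mul(z1, z2)), W(z1) @ W(z2))  # part (b)
```

Of course, a check on sample values is not a proof; the proofs above cover all complex numbers.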

Now, let $z_1$, $z_2$ and $z_3$ be three complex numbers. Applying (2) to $u = z_1$ and $v = z_2 z_3$, we obtain

$W_{z_1(z_2 z_3)} = W_{z_1} W_{z_2 z_3} = W_{z_1} \left(W_{z_2} W_{z_3}\right)$   (3)

(since $W_{z_2 z_3} = W_{z_2} W_{z_3}$, by (2) applied to $u = z_2$ and $v = z_3$). But applying (2) to $u = z_1 z_2$ and $v = z_3$, we obtain

$W_{(z_1 z_2) z_3} = W_{z_1 z_2} W_{z_3} = \left(W_{z_1} W_{z_2}\right) W_{z_3} = W_{z_1} \left(W_{z_2} W_{z_3}\right)$

(since we know that multiplication of matrices is associative). Comparing this with (3), we obtain $W_{z_1(z_2 z_3)} = W_{(z_1 z_2) z_3}$. But recall that if $u$ and $v$ are two complex numbers satisfying $W_u = W_v$, then $u = v$. Applying this to $u = z_1(z_2 z_3)$ and $v = (z_1 z_2) z_3$, we obtain $z_1(z_2 z_3) = (z_1 z_2) z_3$ (since $W_{z_1(z_2 z_3)} = W_{(z_1 z_2) z_3}$). This solves part (a) again.

[Remark: This second solution illustrates an idea frequently used in algebra. We want to prove that a structure (in our case, the ring $\mathbb{C}$ of complex numbers) satisfies a certain property (in this case, associativity of multiplication). Instead of doing this directly (as was done in the first solution of part (a)), we embed the structure in a bigger structure (in our case, the bigger structure is the ring $\mathbb{R}^{2 \times 2}$ of $2 \times 2$-matrices, and the embedding is the map sending each $z \in \mathbb{C}$ to the matrix $W_z$) which is already known to possess this property (after all, we know that matrix multiplication is associative); then, we get the property on the smaller structure for free.]

Here is the algorithm for diagonalizing a matrix we did in class:

Algorithm 0.1. Let $A \in \mathbb{C}^{n \times n}$ be an $n \times n$-matrix. We want to diagonalize $A$; that is, we want to find an invertible $n \times n$-matrix $S$ and a diagonal $n \times n$-matrix $\Lambda$ such that $A = S \Lambda S^{-1}$. We proceed as follows:

Step 1: We compute the polynomial $\det(A - x I_n)$ (where $x$ is the indeterminate). This polynomial, or the closely related polynomial $\det(x I_n - A)$, is often called the characteristic polynomial of $A$.

Step 2: We find the roots of this polynomial $\det(A - x I_n)$. Let $\lambda_1, \lambda_2, \ldots, \lambda_k$ be these roots without repetitions (e.g., multiple roots are not listed multiple times), in whatever order you like. For example, if $\det(A - x I_n) = (1 - x)^2 (2 - x)$, then you can set $k = 2$, $\lambda_1 = 1$ and $\lambda_2 = 2$, or you can set $k = 2$, $\lambda_1 = 2$ and $\lambda_2 = 1$, but you must not set $k = 3$.

Step 3: For each $j \in \{1, 2, \ldots, k\}$, we find a basis for $\operatorname{Ker}(A - \lambda_j I_n)$. Notice that $\operatorname{Ker}(A - \lambda_j I_n) \neq \{0\}$, because $\lambda_j$ is a root of $\det(A - x I_n)$, and thus $\det(A - \lambda_j I_n) = 0$; hence, the basis should consist of at least one vector.

Step 4: Concatenate these bases into one big list $(s_1, s_2, \ldots, s_m)$ of vectors. If $m < n$, then the matrix $A$ cannot be diagonalized, and the algorithm stops here. Otherwise, $m = n$, and we proceed further.

Step 5: Thus, for each $p \in \{1, 2, \ldots, m\}$, the vector $s_p$ belongs to a basis of $\operatorname{Ker}(A - \lambda_j I_n)$ for some $j \in \{1, 2, \ldots, k\}$. Denote the corresponding $\lambda_j$ by $\mu_p$ (so that $s_p \in \operatorname{Ker}(A - \mu_p I_n)$). For example, if $s_p$ belongs to a basis of $\operatorname{Ker}(A - 5 I_n)$, then $\mu_p = 5$. Thus, we have defined $m$ numbers $\mu_1, \mu_2, \ldots, \mu_m$.

Step 6: Let $S$ be the $n \times n$-matrix whose columns are $s_1, s_2, \ldots, s_n$. Let $\Lambda$ be the diagonal matrix whose diagonal entries (from top-left to bottom-right) are $\mu_1, \mu_2, \ldots, \mu_n$.

(I called these Steps differently in class: the first four steps were called Steps 1 to 4, while the last two steps were called Part 2. But the above is less confusing.)

Here is an example that is probably too messy for a midterm, but illustrates some things:

Example 0.2. Let $A = \begin{pmatrix} 5 & 1 & 5 \\ -2 & 2 & 4 \\ 1 & 1 & 1 \end{pmatrix}$. Let us diagonalize $A$. We proceed using Algorithm 0.1:

Step 1: We have $n = 3$ and thus

$\det(A - x I_n) = \det \begin{pmatrix} 5 - x & 1 & 5 \\ -2 & 2 - x & 4 \\ 1 & 1 & 1 - x \end{pmatrix} = (5 - x)\left((2 - x)(1 - x) - 4\right) + \left(2(1 - x) + 4\right) + 5\left(-2 - (2 - x)\right) = -x^3 + 8x^2 - 10x - 24$.

Step 2: Now we must find the roots of this polynomial $\det(A - x I_n) = -x^3 + 8x^2 - 10x - 24$. This is a cubic polynomial, so if it has no rational roots, then finding its roots is quite hopeless (in theory, there is Cardano's formula, but it is so complicated that it is almost useless). Thus, we hope that there is a rational root. To find it, we use the rational root theorem, which says that any rational root of a polynomial with integer coefficients must have the form $\frac{p}{q}$, where $p$ is an integer dividing the constant term and $q$ is a positive integer dividing the leading coefficient. (This is more general than what I quoted in class, and more correct than what I quoted in Section 070.) In our case, the polynomial $-x^3 + 8x^2 - 10x - 24$ has leading coefficient $-1$ and constant term $-24$. Thus, any rational root must have the form $\frac{p}{q}$, where $p$ is an integer dividing $24$ and $q$ is a positive integer dividing $1$. This leaves 16 possibilities for $p$ (namely,

$p \in \{1, 2, 3, 4, 6, 8, 12, 24, -1, -2, -3, -4, -6, -8, -12, -24\}$) and 1 possibility for $q$ (namely, $q = 1$). Trying out all of these possibilities, we find that $p = 4$ and $q = 1$ works. Thus, $\frac{p}{q} = \frac{4}{1} = 4$ is a root. Hence, we have found one root of our polynomial: namely, $x = 4$. In order to find the others, we divide the polynomial by $x - 4$ (using polynomial long division). We get

$\left(-x^3 + 8x^2 - 10x - 24\right) / (x - 4) = -x^2 + 4x + 6$.

It thus remains to find the roots of $-x^2 + 4x + 6$. This is a quadratic, so we know how to do this. The roots are $2 + \sqrt{10}$ and $2 - \sqrt{10}$. Thus, altogether, the three roots of $\det(A - x I_n)$ are $4$, $2 + \sqrt{10}$ and $2 - \sqrt{10}$. Let me number them $\lambda_1 = 4$, $\lambda_2 = 2 + \sqrt{10}$ and $\lambda_3 = 2 - \sqrt{10}$ (although you can use any numbering you wish).

Step 3: Now, we must find a basis of $\operatorname{Ker}(A - \lambda_j I_n)$ for each $j \in \{1, 2, 3\}$. This is a straightforward exercise in Gaussian elimination, and the only complication is that you have to know how to rationalize a denominator (because $\lambda_2$ and $\lambda_3$ involve square roots). Let me only show the computation for $j = 2$:

Computing $\operatorname{Ker}(A - \lambda_2 I_n)$: We have

$\operatorname{Ker}(A - \lambda_2 I_n) = \operatorname{Ker}\left(A - \left(2 + \sqrt{10}\right) I_n\right) = \operatorname{Ker} \begin{pmatrix} 3 - \sqrt{10} & 1 & 5 \\ -2 & -\sqrt{10} & 4 \\ 1 & 1 & -1 - \sqrt{10} \end{pmatrix}$.

This is the set of all solutions to the system

$\left(3 - \sqrt{10}\right) x + y + 5z = 0$;
$-2x - \sqrt{10}\, y + 4z = 0$;
$x + y + \left(-1 - \sqrt{10}\right) z = 0$.   (4)

So let us solve this system. We divide the first equation by $3 - \sqrt{10}$ in order to have a simpler pivot entry. This is tantamount to multiplying it by $\frac{1}{3 - \sqrt{10}} = \frac{3 + \sqrt{10}}{\left(3 - \sqrt{10}\right)\left(3 + \sqrt{10}\right)} = \frac{3 + \sqrt{10}}{-1} = -3 - \sqrt{10}$ (this was obtained by rationalizing the denominator, and it is absolutely useful here: you don't want to carry nested fractions around!). It then becomes

$x + \left(-3 - \sqrt{10}\right) y + \left(-15 - 5\sqrt{10}\right) z = 0$,

and the whole system transforms into

$x + \left(-3 - \sqrt{10}\right) y + \left(-15 - 5\sqrt{10}\right) z = 0$;
$-2x - \sqrt{10}\, y + 4z = 0$;
$x + y + \left(-1 - \sqrt{10}\right) z = 0$.
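As an aside, the rational-root search from Step 2 is mechanical enough to sketch in code. The helper name `rational_roots` is ours; using exact `Fraction` arithmetic avoids any floating-point doubt about whether a candidate really is a root:

```python
# Step 2's rational-root search, sketched for a polynomial with integer
# coefficients and nonzero constant term: every rational root p/q has
# p dividing the constant term and q dividing the leading coefficient.
from fractions import Fraction

def rational_roots(coeffs):
    """coeffs = [c0, c1, ...] for c0 + c1*x + ...; returns all rational roots."""
    const, lead = coeffs[0], coeffs[-1]
    ps = [d for d in range(1, abs(const) + 1) if const % d == 0]
    qs = [d for d in range(1, abs(lead) + 1) if lead % d == 0]
    cands = {Fraction(s * p, q) for p in ps for q in qs for s in (1, -1)}
    return sorted(r for r in cands
                  if sum(c * r**i for i, c in enumerate(coeffs)) == 0)

# The characteristic polynomial -x^3 + 8x^2 - 10x - 24, lowest degree first:
assert rational_roots([-24, -10, 8, -1]) == [Fraction(4)]
```

This confirms that $4$ is the only rational root; the remaining two roots, $2 \pm \sqrt{10}$, are irrational and come from the quadratic factor.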

Now, subtracting appropriate multiples of the first row from the other two rows, we eliminate $x$, resulting in the following system:

$x + \left(-3 - \sqrt{10}\right) y + \left(-15 - 5\sqrt{10}\right) z = 0$;
$\left(-6 - 3\sqrt{10}\right) y + \left(-26 - 10\sqrt{10}\right) z = 0$;
$\left(4 + \sqrt{10}\right) y + \left(14 + 4\sqrt{10}\right) z = 0$.

Next, we divide the second equation by $-6 - 3\sqrt{10}$ (aka, multiply it by $\frac{1}{-6 - 3\sqrt{10}} = \frac{-6 + 3\sqrt{10}}{\left(-6 - 3\sqrt{10}\right)\left(-6 + 3\sqrt{10}\right)} = \frac{-6 + 3\sqrt{10}}{36 - 90} = \frac{-6 + 3\sqrt{10}}{-54}$), so that it becomes

$y + \frac{8 + \sqrt{10}}{3}\, z = 0$.

Then, subtracting an appropriate multiple of it from the third equation turns the third equation into $0 = 0$. Thus, our system takes the form

$x + \left(-3 - \sqrt{10}\right) y + \left(-15 - 5\sqrt{10}\right) z = 0$;
$y + \frac{8 + \sqrt{10}}{3}\, z = 0$;
$0 = 0$.

In this form, it can be solved by back-substitution (unsurprisingly, there is a free variable, because the kernel is nonzero). The solutions have the form

$\begin{pmatrix} x \\ y \\ z \end{pmatrix} = r \begin{pmatrix} \frac{11 + 4\sqrt{10}}{3} \\ -\frac{8 + \sqrt{10}}{3} \\ 1 \end{pmatrix}$ (for $r$ ranging over all scalars).

Thus,

$\operatorname{Ker}(A - \lambda_2 I_n) = \operatorname{span} \begin{pmatrix} \frac{11 + 4\sqrt{10}}{3} \\ -\frac{8 + \sqrt{10}}{3} \\ 1 \end{pmatrix}$.

Hence, $\left(\begin{pmatrix} \frac{11 + 4\sqrt{10}}{3} \\ -\frac{8 + \sqrt{10}}{3} \\ 1 \end{pmatrix}\right)$ is a basis of $\operatorname{Ker}(A - \lambda_2 I_n)$. (Of course, you can scale the vector by $3$ in order to get rid of the denominators.)
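A quick floating-point check of this kernel computation is reassuring after so much surd arithmetic. The matrix and eigenvalue below follow our reading of the example's numbers (treat the exact entries as illustrative), and the candidate basis vector is scaled by 3 to clear denominators:

```python
# Sanity check: for the example's 3x3 matrix (entries as we read them) with
# eigenvalue lam = 2 + sqrt(10), the vector found by back-substitution,
# scaled by 3, should be sent to (approximately) 0 by A - lam*I.
import numpy as np

lam = 2 + np.sqrt(10)
A = np.array([[5.0, 1.0, 5.0],
              [-2.0, 2.0, 4.0],
              [1.0, 1.0, 1.0]])
v = np.array([11 + 4 * np.sqrt(10), -(8 + np.sqrt(10)), 3.0])  # 3 * (x, y, z)

assert np.isclose(np.linalg.det(A - lam * np.eye(3)), 0)  # lam is an eigenvalue
assert np.allclose((A - lam * np.eye(3)) @ v, 0)          # v lies in the kernel
```

Checks like this catch sign errors in the elimination long before you try to verify $A = S \Lambda S^{-1}$ at the end.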

Similarly, we can find a basis of $\operatorname{Ker}(A - \lambda_1 I_n)$ (for example, $\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}$), and a basis of $\operatorname{Ker}(A - \lambda_3 I_n)$ (for example, $\begin{pmatrix} \frac{11 - 4\sqrt{10}}{3} \\ -\frac{8 - \sqrt{10}}{3} \\ 1 \end{pmatrix}$).

Step 4: Now, we concatenate these three bases into one big list $(s_1, s_2, \ldots, s_m)$ of vectors. So this big list is

$(s_1, s_2, s_3) = \left( \underbrace{\begin{pmatrix} 1 \\ -1 \\ 0 \end{pmatrix}}_{\substack{\text{a basis of} \\ \operatorname{Ker}(A - \lambda_1 I_n)}},\ \underbrace{\begin{pmatrix} \frac{11 + 4\sqrt{10}}{3} \\ -\frac{8 + \sqrt{10}}{3} \\ 1 \end{pmatrix}}_{\substack{\text{a basis of} \\ \operatorname{Ker}(A - \lambda_2 I_n)}},\ \underbrace{\begin{pmatrix} \frac{11 - 4\sqrt{10}}{3} \\ -\frac{8 - \sqrt{10}}{3} \\ 1 \end{pmatrix}}_{\substack{\text{a basis of} \\ \operatorname{Ker}(A - \lambda_3 I_n)}} \right)$.

Thus, $m = 3$, so that $m = n$, and thus $A$ can be diagonalized.

Step 5: Since $s_1$ belongs to a basis of $\operatorname{Ker}(A - \lambda_1 I_n)$, we have $\mu_1 = \lambda_1 = 4$. Similarly, $\mu_2 = \lambda_2 = 2 + \sqrt{10}$ and $\mu_3 = \lambda_3 = 2 - \sqrt{10}$.

Step 6: Now, $S$ is the $n \times n$-matrix whose columns are $s_1, s_2, \ldots, s_n$. In other words,

$S = \begin{pmatrix} 1 & \frac{11 + 4\sqrt{10}}{3} & \frac{11 - 4\sqrt{10}}{3} \\ -1 & -\frac{8 + \sqrt{10}}{3} & -\frac{8 - \sqrt{10}}{3} \\ 0 & 1 & 1 \end{pmatrix}$.

Furthermore, $\Lambda$ is the diagonal matrix whose diagonal entries (from top-left to bottom-right) are $\mu_1, \mu_2, \ldots, \mu_n$. In other words,

$\Lambda = \begin{pmatrix} 4 & 0 & 0 \\ 0 & 2 + \sqrt{10} & 0 \\ 0 & 0 & 2 - \sqrt{10} \end{pmatrix}$.

These are the $S$ and $\Lambda$ we were seeking. (With some patience, you could check that $A = S \Lambda S^{-1}$, although it's not necessary to check it.)

Remark 0.3. (a) Algorithm 0.1 relies on some nontrivial theorems (for example, Lemma 8.3 in Olver/Shakiban). See Section 8.3 of Olver/Shakiban for a complete treatment. Chapter 7 of Lankham/Nachtergaele/Schilling comes close, whereas

Chapter Five.IV of Hefferon is probably overkill.

(b) What can we do if $A$ is not diagonalizable? The next best thing is the Jordan normal form (or Jordan canonical form); see Section 8.6 of Olver/Shakiban.

(c) In Step 4 of Algorithm 0.1, we may sometimes notice that $A$ is not diagonalizable (since $m < n$). Is there a way to notice this earlier, thus saving ourselves some useless work? Yes. For each $j \in \{1, 2, \ldots, k\}$, let $\alpha_j$ be the multiplicity of the root $\lambda_j$ of the polynomial $\det(A - x I_n)$. For example, if $\det(A - x I_n) = (6 - x)^3 (1 - x)$ and $\lambda_1 = 6$, then $\alpha_1 = 3$, because the root $\lambda_1 = 6$ has multiplicity $3$. In Step 3, when computing $\operatorname{Ker}(A - \lambda_j I_n)$, the dimension $\dim \operatorname{Ker}(A - \lambda_j I_n)$ will be either $= \alpha_j$ or $< \alpha_j$. If it is $< \alpha_j$, then the algorithm is doomed to failure (i.e., you will get $m < n$ in Step 4), and $A$ is not diagonalizable. This can save you some work.

(d) Algorithm 0.1 is more of a theoretical result than an actual workable algorithm; the difficulty of finding exact roots of polynomials, and the instability of Gaussian elimination for non-exact matrices, makes it rather useless. However, for $2 \times 2$-matrices it works fine (you can solve quadratics), and it also works nicely for various kinds of matrices of nice forms (e.g., you can diagonalize the $n \times n$-matrix of all ones for each $n$; try it!). Practical algorithms for numerical computation are a completely different story. Section 10.6 of Olver/Shakiban tells the beginnings of the story (namely, how to find eigenvalues, and get something close to diagonalization). Similar to Gaussian elimination, it is wrong to expect diagonalization to work with approximate matrices, because $S$ and $\Lambda$ can jump wildly when $A$ is changed only a little bit; however, certain things can be done that come close to diagonalization.

(e) There is a theorem (called the spectral theorem) saying that if $A$ is a symmetric matrix with real entries, then $A$ is always diagonalizable over the reals (i.e., we can find $S$ and $\Lambda$ with real entries), and moreover you can find an $S$ that is orthogonal (i.e., the columns of $S$ are orthonormal). This is a hugely important fact in applications (it is related to the SVD, among many other things), but we will not have the time for it in class. Let me just mention that finding an orthogonal $S$ requires only a simple fix to Algorithm 0.1: In Step 3, you have to choose an orthonormal basis of $\operatorname{Ker}(A - \lambda_j I_n)$ (not just some basis). Then, in Step 4, the big list $(s_1, s_2, \ldots, s_m)$ will automatically be an orthonormal basis of $\mathbb{R}^n$. This is one of the miracles of symmetric matrices. See Section 8.4 in Olver/Shakiban for a proof and more details.

Exercise 2. (a) Diagonalize $A = \begin{pmatrix} 1 & 2 \\ 2 & 4 \end{pmatrix}$. [5 points]

(b) Diagonalize $A = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$. [5 points]

(c) Diagonalize $A = \begin{pmatrix} 2 & 1 & 0 \\ 1 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix}$. [10 points]

Solution to Exercise 2. We proceed by using Algorithm 0.1 (you have seen this often enough that I will omit some of the routine details).

(a) We have

$\det(A - x I_2) = \det \begin{pmatrix} 1 - x & 2 \\ 2 & 4 - x \end{pmatrix} = (1 - x)(4 - x) - 4 = x^2 - 5x = x(x - 5)$.

The roots of this polynomial (i.e., the eigenvalues of $A$) are clearly $0$ and $5$. We number them as $\lambda_1 = 0$ and $\lambda_2 = 5$. We now must find bases for $\operatorname{Ker}(A - \lambda_1 I_2)$ and $\operatorname{Ker}(A - \lambda_2 I_2)$. We can do this using the standard Gaussian elimination procedure (you can also see the result directly if you are sufficiently astute), obtaining the basis $\left(\begin{pmatrix} -2 \\ 1 \end{pmatrix}\right)$ for $\operatorname{Ker}(A - \lambda_1 I_2)$ and the basis $\left(\begin{pmatrix} 1 \\ 2 \end{pmatrix}\right)$ for $\operatorname{Ker}(A - \lambda_2 I_2)$. The big list is therefore $\left(\begin{pmatrix} -2 \\ 1 \end{pmatrix}, \begin{pmatrix} 1 \\ 2 \end{pmatrix}\right)$. This has size $2$, which is our $n$; hence, the matrix $A$ can be diagonalized. We have $s_1 = \begin{pmatrix} -2 \\ 1 \end{pmatrix}$, $\mu_1 = \lambda_1 = 0$, $s_2 = \begin{pmatrix} 1 \\ 2 \end{pmatrix}$ and $\mu_2 = \lambda_2 = 5$. Therefore, $S = \begin{pmatrix} -2 & 1 \\ 1 & 2 \end{pmatrix}$ and $\Lambda = \begin{pmatrix} 0 & 0 \\ 0 & 5 \end{pmatrix}$.

(b) We can take $S = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ and $\Lambda = \begin{pmatrix} 2 & 0 \\ 0 & 1 \end{pmatrix}$. One way to solve this is by proceeding exactly as in part (a). Another is to observe that our matrix $A$ is already diagonal, so we can diagonalize it by simply taking $S = I_2$ and $\Lambda = A$.

(c) We can take $S = \begin{pmatrix} 0 & \frac{1 + \sqrt{5}}{2} & \frac{1 - \sqrt{5}}{2} \\ 0 & 1 & 1 \\ 1 & 0 & 0 \end{pmatrix}$ and $\Lambda = \begin{pmatrix} 0 & 0 & 0 \\ 0 & \frac{3 + \sqrt{5}}{2} & 0 \\ 0 & 0 & \frac{3 - \sqrt{5}}{2} \end{pmatrix}$. Again, the method is the same as for part (a), but this time we have to solve the cubic equation $x^3 - 3x^2 + x = 0$. This is done as follows: The root $x = 0$ is obvious. Leaving this root aside, we can find the other two roots by solving $x^2 - 3x + 1 = 0$; this can be done using the standard formula for the roots of a quadratic.

Exercise 3. Define a sequence $g_0, g_1, g_2, \ldots$ of integers by

$g_0 = 0$, $g_1 = 1$, and $g_{n+1} = 3 g_n + g_{n-1}$ for all $n \geq 1$.

(This is similar to the Fibonacci sequence.) Here is a partial table of values:

k  : 0  1  2  3   4   5    6    7     8
g_k: 0  1  3  10  33  109  360  1189  3927

(a) What are $g_9$ and $g_{10}$? [2 points]

(b) Define a $2 \times 2$-matrix $A$ by $A = \begin{pmatrix} 3 & 1 \\ 1 & 0 \end{pmatrix}$. Find $A^2$ and $A^3$. [2 points]

(c) Prove that

$A^n = \begin{pmatrix} g_{n+1} & g_n \\ g_n & g_{n-1} \end{pmatrix}$ for all $n \geq 1$.   (5)

[The proof (or at least the easiest proof) is by induction over $n$: In the induction base, you should check that (5) holds for $n = 1$. In the induction step, you assume that (5) holds for $n = m$ for a given positive integer $m$, and then you have to check that (5) also holds for $n = m + 1$. Use the fact that $A^{m+1} = A^m A = A^m \begin{pmatrix} 3 & 1 \\ 1 & 0 \end{pmatrix}$.] [10 points]

(d) Diagonalize $A$. [10 points]

(e) Use this to obtain an explicit formula for $g_n$. (The formula will involve square roots and $n$-th powers of numbers, but no recursion and no matrices.) [10 points]

Solution to Exercise 3. (a) We have $g_9 = 3 g_8 + g_7 = 3 \cdot 3927 + 1189 = 12970$ and $g_{10} = 3 g_9 + g_8 = 3 \cdot 12970 + 3927 = 42837$.

(b) We have $A^2 = \begin{pmatrix} 10 & 3 \\ 3 & 1 \end{pmatrix}$ and $A^3 = \begin{pmatrix} 33 & 10 \\ 10 & 3 \end{pmatrix}$. [This is, of course, a particular case of (5).]

(c) We mimic the proof of Proposition 4.8 in the lecture notes: We shall prove (5) by induction over $n$:

Induction base: We have $A^1 = A = \begin{pmatrix} 3 & 1 \\ 1 & 0 \end{pmatrix}$. Comparing this with $\begin{pmatrix} g_2 & g_1 \\ g_1 & g_0 \end{pmatrix} = \begin{pmatrix} 3 & 1 \\ 1 & 0 \end{pmatrix}$ (since $g_2 = 3$, $g_1 = 1$ and $g_0 = 0$), we obtain $A^1 = \begin{pmatrix} g_2 & g_1 \\ g_1 & g_0 \end{pmatrix}$. In other words, (5) holds for $n = 1$. This completes the induction base.

Induction step: Let $N$ be a positive integer. (I am calling it $N$ rather than $m$ here, in order to stay closer to the proof of Proposition 4.8 in the lecture notes.) Assume that (5) holds for $n = N$. We must show that (5) also holds for $n = N + 1$.

The definition of the sequence $g_0, g_1, g_2, \ldots$ shows that $g_{N+2} = 3 g_{N+1} + g_N$ and $g_{N+1} = 3 g_N + g_{N-1}$.

We have assumed that (5) holds for $n = N$. In other words, $A^N = \begin{pmatrix} g_{N+1} & g_N \\ g_N & g_{N-1} \end{pmatrix}$.

Now,

$A^{N+1} = A^N A = \begin{pmatrix} g_{N+1} & g_N \\ g_N & g_{N-1} \end{pmatrix} \begin{pmatrix} 3 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 3 g_{N+1} + g_N & g_{N+1} \\ 3 g_N + g_{N-1} & g_N \end{pmatrix}$ (by the definition of a product of two matrices) $= \begin{pmatrix} g_{N+2} & g_{N+1} \\ g_{N+1} & g_N \end{pmatrix}$

(since $3 g_{N+1} + g_N = g_{N+2}$ and $3 g_N + g_{N-1} = g_{N+1}$). In other words, (5) holds for $n = N + 1$. This completes the induction step; hence, (5) is proven.

(d) We have $A = S \Lambda S^{-1}$, where $S = \begin{pmatrix} \frac{3 + \sqrt{13}}{2} & \frac{3 - \sqrt{13}}{2} \\ 1 & 1 \end{pmatrix}$ and $\Lambda = \begin{pmatrix} \frac{3 + \sqrt{13}}{2} & 0 \\ 0 & \frac{3 - \sqrt{13}}{2} \end{pmatrix}$. (This can be found using Algorithm 0.1 again.)

(e) Fix $n \geq 1$. Let $S$ and $\Lambda$ be as in the solution to part (d). Then,

$\Lambda^n = \begin{pmatrix} \left(\frac{3 + \sqrt{13}}{2}\right)^n & 0 \\ 0 & \left(\frac{3 - \sqrt{13}}{2}\right)^n \end{pmatrix}$

(because in order to raise a diagonal matrix to the $n$-th power, it suffices to raise each diagonal entry to the $n$-th power). Now, from $A = S \Lambda S^{-1}$, we obtain $A^n = S \Lambda^n S^{-1}$ (as shown in class). Thus,

$A^n = S \Lambda^n S^{-1} = \begin{pmatrix} \frac{3 + \sqrt{13}}{2} & \frac{3 - \sqrt{13}}{2} \\ 1 & 1 \end{pmatrix} \begin{pmatrix} \left(\frac{3 + \sqrt{13}}{2}\right)^n & 0 \\ 0 & \left(\frac{3 - \sqrt{13}}{2}\right)^n \end{pmatrix} S^{-1}$,

and $S^{-1} = \frac{1}{\sqrt{13}} \begin{pmatrix} 1 & -\frac{3 - \sqrt{13}}{2} \\ -1 & \frac{3 + \sqrt{13}}{2} \end{pmatrix}$. If we multiply out this product, we obtain explicit formulas for each of the four entries of $A^n$. In particular, we obtain the following formula for its $(2, 1)$-th entry:

$\left(A^n\right)_{2,1} = \frac{1}{\sqrt{13}} \left( \left(\frac{3 + \sqrt{13}}{2}\right)^n - \left(\frac{3 - \sqrt{13}}{2}\right)^n \right)$.

But (5) shows that $\left(A^n\right)_{2,1} = g_n$. Hence,

$g_n = \left(A^n\right)_{2,1} = \frac{1}{\sqrt{13}} \left( \left(\frac{3 + \sqrt{13}}{2}\right)^n - \left(\frac{3 - \sqrt{13}}{2}\right)^n \right)$.

This is the formula we are looking for. (It is, of course, similar to the Binet formula for the Fibonacci numbers.)

Exercise 4. Let $A$ be an $n \times n$-matrix. Assume that $A$ can be diagonalized, with $A = S \Lambda S^{-1}$ for an invertible $n \times n$-matrix $S$ and a diagonal $n \times n$-matrix $\Lambda$.

(a) Diagonalize $A^2$. [5 points]

(b) Diagonalize $A^{-1}$, if $A$ is invertible. (You can use the fact that for an invertible $A$, the diagonal entries of $\Lambda$ are nonzero, and so $\Lambda^{-1}$ is a diagonal matrix again.) [5 points]

(c) Diagonalize $A^T$ (the transpose of $A$). [10 points]

(The answers should be in terms of $S$ and $\Lambda$. For example, $A + I_n$ can be diagonalized as follows: $A + I_n = S \left(\Lambda + I_n\right) S^{-1}$. Indeed, $S$ is an invertible matrix, $\Lambda + I_n$ is a diagonal matrix (being the sum of the two diagonal matrices $\Lambda$ and $I_n$), and we have $S \left(\Lambda + I_n\right) S^{-1} = \underbrace{S \Lambda S^{-1}}_{= A} + \underbrace{S I_n S^{-1}}_{= I_n} = A + I_n$.)

Solution to Exercise 4. (a) We have $A = S \Lambda S^{-1}$, and thus

$A^2 = \left(S \Lambda S^{-1}\right)\left(S \Lambda S^{-1}\right) = S \Lambda \underbrace{S^{-1} S}_{= I_n} \Lambda S^{-1} = S \underbrace{\Lambda \Lambda}_{= \Lambda^2} S^{-1} = S \Lambda^2 S^{-1}$.

The matrix $\Lambda$ is diagonal. Thus, any power of $\Lambda$ is diagonal as well (because in order to raise a diagonal matrix $\Lambda$ to some power, we merely need to raise its diagonal entries to this power). In particular, $\Lambda^2$ is diagonal. Therefore, the equality $A^2 = S \Lambda^2 S^{-1}$ provides a diagonalization of $A^2$.
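The identity in part (a) is easy to check in floating point. The concrete matrix below is ours (any diagonalizable matrix works); numpy's `eig` supplies the eigenvalues (the diagonal of $\Lambda$) and a matrix of eigenvectors (our $S$):

```python
# Numerical check of Solution 4(a): if A = S Lam S^{-1}, then
# A^2 = S Lam^2 S^{-1}. The test matrix is ours, chosen diagonalizable
# (distinct eigenvalues 2 and 3).
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
eigvals, S = np.linalg.eig(A)
Lam = np.diag(eigvals)
S_inv = np.linalg.inv(S)

assert np.allclose(A, S @ Lam @ S_inv)              # the given diagonalization
assert np.allclose(A @ A, S @ Lam @ Lam @ S_inv)    # part (a)
```

The same pattern (replace $\Lambda^2$ by another diagonal matrix built from the eigenvalues) checks the remaining parts as well.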

(b) Assume that $A$ is invertible. Then, it is not hard to see that all diagonal entries of $\Lambda$ are nonzero.³ Therefore, the diagonal matrix $\Lambda$ is invertible, and its inverse $\Lambda^{-1}$ is obtained by inverting all diagonal entries of $\Lambda$. In particular, $\Lambda^{-1}$ is a diagonal matrix as well. (You can use this fact without proof, but it is helpful to know how it is proven.)

Recall that $(UV)^{-1} = V^{-1} U^{-1}$ for any two invertible $n \times n$-matrices $U$ and $V$. Applying this to $U = S \Lambda$ and $V = S^{-1}$, we find

$\left(S \Lambda S^{-1}\right)^{-1} = \left(S^{-1}\right)^{-1} (S \Lambda)^{-1} = S \Lambda^{-1} S^{-1}$

(where we have used the same rule once more, this time for $U = S$ and $V = \Lambda$, to see that $(S \Lambda)^{-1} = \Lambda^{-1} S^{-1}$). We have $A = S \Lambda S^{-1}$, and thus $A^{-1} = \left(S \Lambda S^{-1}\right)^{-1} = S \Lambda^{-1} S^{-1}$. This equality provides a diagonalization of $A^{-1}$ (since the matrix $\Lambda^{-1}$ is diagonal).

(c) Proposition 3.8 (f) in the lecture notes (applied to $S$ instead of $A$) shows that the matrix $S^T$ is invertible, and its inverse is $\left(S^T\right)^{-1} = \left(S^{-1}\right)^T$. Proposition 3.8 (e) in the lecture notes shows that any two matrices $U$ and $V$ satisfy $(UV)^T = V^T U^T$ (as long as the product $UV$ is well-defined, i.e., the number of columns of $U$ equals the number of rows of $V$). Applying this to $U = S \Lambda$ and $V = S^{-1}$, we obtain

$\left(S \Lambda S^{-1}\right)^T = \left(S^{-1}\right)^T (S \Lambda)^T = \left(S^{-1}\right)^T \Lambda^T S^T$.

The matrix $\Lambda$ is diagonal. Hence, $\Lambda^T = \Lambda$ (because transposing a diagonal matrix does not change it). Now, from $A = S \Lambda S^{-1}$, we obtain

$A^T = \left(S \Lambda S^{-1}\right)^T = \left(S^{-1}\right)^T \underbrace{\Lambda^T}_{= \Lambda} S^T = \left(S^{-1}\right)^T \Lambda \left(\left(S^{-1}\right)^T\right)^{-1}$

(since $S^T = \left(\left(S^T\right)^{-1}\right)^{-1} = \left(\left(S^{-1}\right)^T\right)^{-1}$). This equality provides a diagonalization of $A^T$ (since the matrix $\Lambda$ is diagonal).

³Proof. Assume the contrary. Then, at least one diagonal entry of $\Lambda$ is zero. But the matrix $\Lambda$ is diagonal, and thus upper-triangular. Hence, the determinant of $\Lambda$ equals the product of its diagonal entries, and therefore equals $0$ (since at least one diagonal entry is $0$, and therefore the whole product must be $0$). In other words, $\det \Lambda = 0$. Now, from $A = S \Lambda S^{-1}$, we obtain $\det A = \det S \cdot \underbrace{\det \Lambda}_{= 0} \cdot \det\left(S^{-1}\right) = 0$. Hence, $A$ is not invertible (since a square matrix with determinant $0$ is not invertible). This contradicts the fact that $A$ is invertible. This contradiction shows that our assumption was false. Qed. [This was not the easiest or most elementary proof, but the shortest one.]
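Finally, Exercise 3 lends itself to a three-way cross-check: the recurrence, the matrix identity (5), and the Binet-style closed formula should all produce the same value of $g_n$. The helper names below are ours; this is a sketch, not part of the graded solution:

```python
# Cross-checking Exercise 3 three ways: the recurrence g_{n+1} = 3 g_n + g_{n-1},
# the matrix identity (5) for A = (3 1; 1 0), and the closed formula
# g_n = (phi_plus^n - phi_minus^n) / sqrt(13) with phi_pm = (3 +- sqrt(13)) / 2.
import numpy as np

def g_rec(n: int) -> int:
    """g_n via the recurrence, with g_0 = 0 and g_1 = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, 3 * b + a
    return a

A = np.array([[3, 1], [1, 0]])
A8 = np.linalg.matrix_power(A, 8)
assert A8[1, 0] == g_rec(8) == 3927       # the (2,1) entry of A^n is g_n

phi_plus = (3 + 13**0.5) / 2              # the eigenvalues of A
phi_minus = (3 - 13**0.5) / 2
g8_closed = (phi_plus**8 - phi_minus**8) / 13**0.5
assert round(g8_closed) == 3927
```

Note that the closed formula is evaluated in floating point here, so we round before comparing; exact agreement is what the algebra above guarantees.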