Lecture 23: Determinants (1)

Travis Schedler. Thurs, Dec 8, 2011 (version: Thurs, Dec 8, 9:35 PM)

Goals (2)

- Warm-up: minimal and characteristic polynomials of Jordan form matrices
- Snapshot: generalizations of Jordan form over R and other fields
- Determinants: their unique properties, and how to compute them
- Characteristic polynomial of linear transformations
- Volume formula and the change-of-variable formula in multivariable integrals

Read Chapters 8 and 10, do PS 11.

Warm-up: Minimal and char. polys of Jordan form matrices (3)

Let A be a Jordan form matrix. Prove that:

(a) The characteristic polynomial of A is ∏_λ (x − λ)^{d_λ}, where d_λ is the sum of the sizes of all Jordan blocks with λ on the diagonal.

(b) The minimal polynomial of A is ∏_λ (x − λ)^{m_λ}, where m_λ is the size of the largest Jordan block with λ on the diagonal (the product is taken only over λ that occur, i.e., over eigenvalues of A).

Hint: Note that p(A) = 0 if and only if p(J) = 0 for all Jordan blocks J occurring in A. Now find the minimal polynomial of a single Jordan block.

Solution to warm-up (4)

(a) We know that the formula is valid if d_λ equals the number of times that λ occurs on the diagonal. But this is just the sum of the sizes of all Jordan blocks with λ on the diagonal.

(b) Since A is block diagonal with Jordan blocks on the diagonal, p(A) = 0 if and only if p(J) = 0 for every Jordan block J occurring in A. For each Jordan block J = λI + N_n, the only eigenvalue is λ, so the minimal polynomial p_J(x) is a power of (x − λ), and this power is at most n since χ_J(x) = (x − λ)^n. But we know that J − λI = N_n has the property that N_n^{n−1} ≠ 0 (but N_n^n = 0). So p_J(x) = (x − λ)^n. Then p_A(J) = 0 for all Jordan blocks J if and only if (x − λ)^n divides p_A whenever a Jordan block λI + N_n occurs. Hence p_A(J) = 0 for all Jordan blocks if and only if the given polynomial ∏_λ (x − λ)^{m_λ} divides p_A, i.e., the given polynomial is the minimal polynomial.
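Not part of the lecture, but the warm-up is easy to sanity-check numerically. The sketch below (using numpy; the matrix is an arbitrary illustration) takes the Jordan form matrix with eigenvalue 3 in blocks of sizes 2 and 1, so d_3 = 3 (characteristic polynomial (x − 3)^3) while m_3 = 2 (minimal polynomial (x − 3)^2):

```python
import numpy as np

# Jordan form matrix: one 2x2 Jordan block and one 1x1 block, eigenvalue 3.
A = np.array([[3., 1., 0.],
              [0., 3., 0.],
              [0., 0., 3.]])
N = A - 3 * np.eye(3)  # nilpotent part A - 3I

# (x - 3)^2 already annihilates A, so the minimal polynomial is (x - 3)^2 ...
print(np.allclose(np.linalg.matrix_power(N, 2), 0))  # True
# ... but (x - 3)^1 does not, matching m_3 = size of the largest block = 2.
print(np.allclose(N, 0))                             # False
```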

Snapshot: Gen. of Jordan form for F = R [opt] (5)

Over R, not all transformations have Jordan form, because they need not have eigenvalues. That is, they don't all have upper-triangular matrices. But they do have block upper-triangular matrices. So they have a block upper-triangular "Jordan form": block-diagonal with Jordan blocks of the form

    [ X  I           ]
    [    X  I        ]
    [       ⋱  ⋱     ]
    [          X  I  ]
    [             X  ]

where X is either 1 × 1 (an eigenvalue), or the 2 × 2 matrix

    [ a  −b ]
    [ b   a ]

for some a, b ∈ R, b ≠ 0. This matrix is unique up to permuting the Jordan blocks, and up to replacing b with −b for all the X's inside any given block.
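As a quick check (not from the lecture; the values a = 1, b = 2 are an arbitrary illustration), a 2 × 2 block of the form above genuinely has no real eigenvalues: numpy reports the complex pair a ± bi.

```python
import numpy as np

# The 2x2 real "Jordan block core" X = [[a, -b], [b, a]] with b != 0.
a, b = 1.0, 2.0
X = np.array([[a, -b],
              [b,  a]])

# Its eigenvalues are the complex-conjugate pair a +/- b*i, so over R
# this block cannot be brought to (real) upper-triangular form.
print(np.sort_complex(np.linalg.eigvals(X)))  # [1.-2.j  1.+2.j]
```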

Snapshot: Gen. of Jordan form for arbitrary F [opt] (6)

Suppose that F contains Q (i.e., all nonzero integers are invertible). Then we can still put every transformation in block-diagonal form with blocks

    [ X  I           ]
    [    X  I        ]
    [       ⋱  ⋱     ]
    [          X  I  ]
    [             X  ]

now with X a matrix with irreducible char. poly. This is unique up to rearranging blocks and replacing X with another matrix with the same char. poly.

Remark: This also works for some cases where F does not contain Q: the char. poly. χ_T of T must be Galois (all irreducible factors must have distinct roots: e.g., x² + 1 over R has distinct roots ±i).

Determinants (7)

Consider the following properties of a function φ : Mat(n, n, F) → F:

(a) φ(AB) = φ(A)φ(B).

(b) Behavior under row operations (equivalently, φ(EA) = det(E)φ(A) when E is an elementary matrix, i.e., a single row op.):
    (i) If A′ is obtained from A by adding a multiple of one row to another, then φ(A′) = φ(A).
    (ii) If A′ is obtained from A by rescaling a row by λ, then φ(A′) = λφ(A).
    (iii) If A′ is obtained from A by permuting two rows, then φ(A′) = −φ(A).

(c) (i′) Multiadditivity: If A, B, and C are identical except in one row, where the row of C is the sum of that row in A and B, then φ(C) = φ(A) + φ(B).
    (ii′) Multihomogeneity: part (ii) above.
    (iii′) Alternation: If A has two identical rows, then φ(A) = 0.

Theorem. The function φ = det is: the unique nonzero function satisfying (a); the unique function up to scaling satisfying (b); the unique function up to scaling satisfying (c).

Equivalence of (a) and (b) up to scaling (8)

Note: φ = 0 satisfies all of (a), (b), and (c).

Lemma. For φ nonzero, (a) is equivalent to: (b) together with φ(I) = 1.

Proof. (b) is equivalent to: if E is an elementary matrix (corresponding to a row op), then φ(EB) = det(E)φ(B) for all B. So (a) implies (b). Also (a) implies φ(I) = φ(I²) = φ(I)φ(I), so φ(I) = 1 or 0; but if φ(I) = 0 then φ(A) = φ(AI) = 0 for all A. So (a) implies φ(I) = 1.

Conversely, given (b), write any A = E_1 ··· E_n C, where C is reduced row-echelon and the E_i are elementary. If A is invertible, C = I. Then φ(I) = 1 implies φ(A) = det(E_1) ··· det(E_n). In particular, φ(E) = det(E) for E elementary. Also, if A and B are invertible, we get φ(AB) = φ(A)φ(B). On the other hand, if A is not invertible, C has a zero row, and then φ(C) = 0 by homogeneity, and hence φ(A) = 0 as well by row ops. So if A or B is noninvertible, φ(AB) = 0 = φ(A)φ(B).
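The key identity in the proof, φ(EB) = det(E)φ(B) for elementary E, can be checked numerically for each of the three row-operation types (a sketch outside the lecture; B and the specific row operations are arbitrary illustrations):

```python
import numpy as np

B = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])

# One elementary matrix per row-operation type:
E_add = np.eye(3)
E_add[1, 0] = 5.0                  # add 5*(row 0) to row 1: det(E) = 1
E_scale = np.diag([1.0, 7.0, 1.0])  # rescale row 1 by 7:     det(E) = 7
E_swap = np.eye(3)[[1, 0, 2]]       # swap rows 0 and 1:      det(E) = -1

# det(E B) = det(E) det(B) for each elementary matrix E.
for E in (E_add, E_scale, E_swap):
    assert np.isclose(np.linalg.det(E @ B), np.linalg.det(E) * np.linalg.det(B))
print("row-op multiplicativity checks pass")
```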

(c) implies (b) (9)

Lemma. Condition (c) implies (b).

Proof. First we prove (i). This follows immediately from (i′), (ii′), and (iii′). Note that (ii) = (ii′). Finally we prove (iii). Take A = (u_1 ··· u_n)^t, i.e., with rows u_1, …, u_n. Consider the matrix C obtained by replacing the i-th and j-th rows both by u_i + u_j. Then (iii′) implies φ(C) = 0. But by (i′) and (iii′), φ(C) also equals φ(A) + φ(A′), where A′ is obtained from A by swapping the i-th and j-th rows. So φ(A′) = −φ(A), i.e., (iii).

So it remains to show: det satisfies (c); a function satisfying (b) is unique up to scaling. We showed det satisfies (c) on the HW!

Uniqueness (10)

Lemma. There is at most one function φ, up to scaling, satisfying (b).

Proof. If φ satisfies (b), then φ(A) is determined from its row-echelon form matrix using row ops. But we saw that φ(C) = 0 for C row-echelon and noninvertible. So φ(A) is determined uniquely by φ(I), and changing φ(I) just rescales φ.

Examples! Use Gaussian elimination to compute the determinant! (And hence the char. poly. as well!)
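The Gaussian-elimination recipe for determinants can be sketched directly (not from the lecture; a minimal numpy implementation, with the test matrix an arbitrary illustration). Each row operation is tracked exactly as in property (b): swaps flip the sign, adding a multiple of a row changes nothing, and the determinant ends up as the signed product of the pivots.

```python
import numpy as np

def det_by_elimination(A):
    """Determinant via Gaussian elimination with partial pivoting."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    sign = 1.0
    for k in range(n):
        p = k + np.argmax(np.abs(A[k:, k]))  # partial pivot
        if np.isclose(A[p, k], 0.0):
            return 0.0                        # zero pivot column: singular
        if p != k:
            A[[k, p]] = A[[p, k]]             # row swap flips the sign
            sign = -sign
        for i in range(k + 1, n):
            # Adding a multiple of the pivot row leaves the determinant unchanged.
            A[i, k:] -= (A[i, k] / A[k, k]) * A[k, k:]
    return sign * np.prod(np.diag(A))         # product of pivots

A = [[2, 1, 0],
     [1, 3, 1],
     [0, 1, 4]]
print(det_by_elimination(A))   # 18.0 (expand along the first row to check)
```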

Characteristic polynomials (11)

Recall that for a matrix A, χ_A(x) = det(xI − A). (For upper-triangular matrices, this obviously gives the usual formula.)

Theorem. For T ∈ L(V), χ_{M(T)}(x) does not depend on the basis.

Proof. Let A and A′ be two different matrices of T. Then A′ = BAB⁻¹ for some invertible B. So

χ_{A′}(x) = det(xI − BAB⁻¹) = det(B(xI − A)B⁻¹) = det(B) det(xI − A) det(B⁻¹) = det(xI − A) = χ_A(x).

The same argument shows: det(M(T)) doesn't depend on the basis. Alternatively, note that, for A ∈ Mat(n, n, F),

χ_A(x) = x^n − tr(A) x^{n−1} + ··· + (−1)^n det(A),

and independence of χ_{M(T)}(x) implies that of tr and det.
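The similarity invariance in the theorem is easy to see numerically (a sketch, not from the lecture; A and the change-of-basis matrix B are arbitrary illustrations). `numpy.poly` returns the characteristic polynomial coefficients of a square matrix, highest degree first:

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[2., 1.],
              [1., 1.]])           # invertible change of basis
A2 = B @ A @ np.linalg.inv(B)      # same transformation, different basis

# chi_A(x) = x^2 - tr(A) x + det(A) = x^2 - 5x - 2
print(np.round(np.poly(A), 6))     # coefficients [1, -5, -2]
print(np.allclose(np.poly(A), np.poly(A2)))  # True: basis-independent
```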

Characteristic polynomial for arbitrary F (12)

Thus we get:

Definition. The char. poly. of T is det(xI − M(T)), independent of basis!

I already used this definition earlier when talking about Jordan form for general F.

Volumes (13)

Theorem (Theorem 10.38). Let F = R and V = R^n. Let R ⊆ V be a solid region and T ∈ L(V). Then vol(T(R)) = |det T| vol(R).

Proof. Write T = SP for S an isometry and P positive. Then T(R) = S(P(R)). Also |det T| = |det S| |det P| = det P, since |det S| = 1 and det P ≥ 0. So this reduces to the isometry and positive cases, i.e., it is enough to show that vol(P(R)) = det P · vol(R) and that vol(S(R′)) = vol(R′) for R′ = P(R). We do this next.

Volumes for positive operators and isometries (14)

Lemma. For S an isometry, vol(S(R)) = vol(R).

Proof. We use that a rotation in R² preserves area. Then if we have a rotation of the x_1, x_2 plane doing nothing to the other coordinates, volume is preserved. Then the result follows from the next lemma.

Lemma. Every isometry S is a product of rotations in planes (in the x_1, x_2 plane up to choice of orthonormal coords) and reflections (up to coords, x_1 ↦ −x_1, other coords the same).

Proof: by induction; on the board if we have time; otherwise an exercise!

Corollary: Volume does not depend on the choice of orthonormal coordinates.

Volumes under positive operators (15)

Lemma. For P positive, vol(P(R)) = det P · vol(R).

Proof. Choose orthonormal coordinates so that P rescales each coordinate direction by a nonnegative number (an eigenvalue of P). This rescales volume by the product of these numbers, which is det(P).
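A concrete instance of the volume formula (not from the lecture; T is an arbitrary illustration): the unit square maps under T to the parallelogram spanned by the columns of T, whose area is exactly |det T|.

```python
import numpy as np

# T maps the unit square to the parallelogram spanned by its columns.
T = np.array([[3., 1.],
              [0., 2.]])
u, v = T[:, 0], T[:, 1]

# Area of the parallelogram spanned by u and v (2D cross product magnitude).
area = abs(u[0] * v[1] - u[1] * v[0])
print(area)   # 6.0 = |det T| * vol(unit square)
```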

Change of variable for integrals [opt] (16)

Let T ∈ L(R^n) be invertible, and f : R^n → R (or to C). Then

∫ f(T(v)) d vol = |det T|⁻¹ ∫ f(T(v)) d(vol ∘ T) = |det T|⁻¹ ∫ f(v) d vol.

Moreover, we can replace T with an arbitrary nonlinear but invertible F : R^n → R^n. Then at every x_0 ∈ R^n, we have F′(x_0) ∈ L(V), which is the multivariable derivative of F at x_0 (the matrix of partials). So at x = x_0, d vol(F(x)) = |det F′(x_0)| d vol(x). Thus

∫ f(F(v)) d vol = ∫ |det F′|⁻¹ f(F(v)) d(vol ∘ F) = ∫ (|det F′| ∘ F⁻¹)⁻¹ f(v) d vol.
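The linear case of the formula can be verified numerically with a crude Riemann sum (a sketch outside the lecture; the choices f(v) = exp(−|v|²) and T = [[2, 1], [0, 3]] with det T = 6 are arbitrary illustrations). Since ∫ f d vol = π over R², the right-hand side should be π/6:

```python
import numpy as np

T = np.array([[2., 1.],
              [0., 3.]])
xs = np.linspace(-8, 8, 801)       # grid covering the Gaussian's support
dx = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs)

f = lambda x, y: np.exp(-(x**2 + y**2))

# Left side: integral of f(T v) d vol, as a Riemann sum.
lhs = np.sum(f(T[0, 0]*X + T[0, 1]*Y, T[1, 0]*X + T[1, 1]*Y)) * dx * dx
# Right side: |det T|^(-1) * integral of f(v) d vol.
rhs = np.sum(f(X, Y)) * dx * dx / abs(np.linalg.det(T))

print(np.isclose(lhs, rhs, rtol=1e-3))        # True
print(np.isclose(rhs, np.pi / 6, rtol=1e-3))  # True: integral of f is pi
```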