33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM (UPDATED MARCH 17, 2018)

The final exam will be cumulative, with a bit more weight on the more recent material. This outline covers what we've done since the second midterm; see the previous outlines, your notes, and your homework to review the older material. I've included references to the corresponding sections in Strang, and lists of recommended problems if you want more practice beyond the homework problems. As with previous study guides, this is not intended as a substitute for class notes, and certainly not a substitute for doing and understanding the homework problems!

(1) Orthonormal bases and Gram-Schmidt (Section 4.4)
For additional practice: 4.4: 3, 15, 20

- Definitions of: mutually orthogonal set of vectors, orthonormal basis, orthogonal matrix.
- Examples of orthogonal matrices (e.g. the identity, permutation matrices, rotations, reflections).
- Show that orthogonal matrices preserve dot products between vectors, and hence lengths of vectors: if $Q$ is $n \times n$ orthogonal and $x, y \in \mathbb{R}^n$, then $(Qx) \cdot (Qy) = x^T Q^T Q y = x^T I_n y = x \cdot y$.
- Show that if $Q$ is $m \times n$ with orthonormal columns, then $QQ^T$ is the matrix for projection onto its column space $C(Q)$. (A numeric check appears after this list.)
- Gram-Schmidt procedure: given a set of vectors $a_1, \dots, a_n \in \mathbb{R}^m$ spanning a subspace $W$ of dimension $k$, know how to use the Gram-Schmidt algorithm to produce a (possibly smaller) set of orthonormal vectors $q_1, \dots, q_k \in \mathbb{R}^m$ with the same span $W$. Know the two variants of the algorithm: one produces the vectors $q_1, \dots, q_k$ in order, while the other produces a set of mutually orthogonal (not necessarily unit) vectors $b_1, \dots, b_k$ and then normalizes them at the end. We get more than we asked for: for each $1 \le l \le k$, $\{q_1, \dots, q_l\}$ is an orthonormal basis for the span of the first $l$ linearly independent vectors among $a_1, \dots, a_n$.
- Know how to perform the QR factorization of a matrix $A$ (it's basically bookkeeping for the Gram-Schmidt procedure); see the sketch after this list.
- What it means for a real vector space $V$ to be the direct sum $W_1 \oplus W_2$ of two subspaces. Know that for any subspace $W$ of $V$ we have $V = W \oplus W^\perp$. Moreover, if $\{q_1, \dots, q_k\}$ is an orthonormal basis for $W$, $\{q_{k+1}, \dots, q_n\}$ is an orthonormal basis for $W^\perp$, and $\dim V = m$, then $n = m$ and $\{q_1, \dots, q_m\}$ is an orthonormal basis for $V$.
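Here is a minimal numpy sketch of the first Gram-Schmidt variant (my illustration, with an arbitrary example matrix, not code from the course). It produces the $q_i$ in order and skips dependent columns, so the output has $\operatorname{rank}(A)$ columns. For the QR bookkeeping on a matrix with independent columns, numpy's built-in np.linalg.qr(A) returns $Q$ and $R$ with $A = QR$.

```python
import numpy as np

def gram_schmidt(A, tol=1e-12):
    """Gram-Schmidt on the columns of A: returns Q whose columns
    q_1, ..., q_k are an orthonormal basis for C(A)."""
    qs = []
    for a in A.T:                      # loop over the columns a_1, ..., a_n
        b = a.astype(float)
        for q in qs:                   # subtract the projection onto each q found so far
            b = b - (q @ b) * q
        if np.linalg.norm(b) > tol:    # keep b only if a_j was independent of a_1..a_{j-1}
            qs.append(b / np.linalg.norm(b))
    return np.column_stack(qs)

A = np.array([[1., 1., 2.],
              [0., 1., 1.],
              [1., 0., 1.]])           # third column = first + second, so rank(A) = 2
Q = gram_schmidt(A)
print(Q.shape)                         # (3, 2): the dependent column was dropped
print(np.round(Q.T @ Q, 10))           # 2 x 2 identity: the q_i are orthonormal
```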

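And a quick numeric check of the claim that $QQ^T$ projects onto $C(Q)$, using an assumed example (a plane in $\mathbb{R}^3$):

```python
import numpy as np

# Q has orthonormal columns spanning a plane in R^3
Q = np.column_stack([np.array([1., 1., 0.]) / np.sqrt(2),
                     np.array([0., 0., 1.])])
P = Q @ Q.T                    # projection onto C(Q)
x = np.array([3., -1., 5.])
print(P @ x)                   # [1. 1. 5.]: the component of x lying in C(Q)
print(np.allclose(P @ P, P))   # True: projections satisfy P^2 = P
```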
(2) Determinants (Sections 5.1, 5.2, some of 5.3)
For additional practice: 5.1: 2, 4, 6, 7, 15, 16, 24, 34

- Determinant for $2 \times 2$ matrices.
- We defined the determinant for general $n \times n$ matrices in three stages:
  (a) Stage 1 (determinant of the identity matrix): $\det I_n = 1$.
  (b) Stage 2 (determinant of permutation matrices): A permutation of $[n] = \{1, \dots, n\}$ is a mapping $\sigma : [n] \to [n]$ that is one-to-one and onto. To a permutation $\sigma$ of $[n]$ we associate an $n \times n$ matrix $P_\sigma$ with entries $P_\sigma(i, j)$ equal to 1 when $\sigma(i) = j$ and zero otherwise. The number $N(\sigma)$ of inversions of a permutation $\sigma : [n] \to [n]$ is the number of pairs $(i, j)$ with $1 \le i < j \le n$ and $\sigma(i) > \sigma(j)$. For a permutation matrix $P_\sigma$ we define $\det P_\sigma = \operatorname{sign}(\sigma) = (-1)^{N(\sigma)}$.
  (c) Stage 3 (general case): For $A = (a_{ij})_{n \times n}$ we define $\det A$ by the Leibniz formula (or "big formula")
      $$\det A = \sum_{\sigma \in S_n} \operatorname{sign}(\sigma) \prod_{i=1}^n a_{i\sigma(i)},$$
      where $S_n$ (called the symmetric group) is the set of all $n!$ permutations of $[n]$.
- Important properties of the determinant:
  (a) $\det A = \det A^T$.
  (b) Viewed as a function of the $n$ columns of $A = (a_1 \cdots a_n)$, $\det$ is a multilinear function. This means that for any $1 \le j \le n$, $a_1, \dots, a_{j-1}, a_{j+1}, \dots, a_n, a, b \in \mathbb{R}^n$ and $c, d \in \mathbb{R}$,
      $$\det(a_1 \cdots a_{j-1} \; (ca + db) \; a_{j+1} \cdots a_n) = c \det(a_1 \cdots a_{j-1} \; a \; a_{j+1} \cdots a_n) + d \det(a_1 \cdots a_{j-1} \; b \; a_{j+1} \cdots a_n).$$
  (c) If a row or column of $A$ is the zero vector then $\det A = 0$.
  (d) $\det$ doesn't change if we replace a row of $A$ with the sum of that row and a scalar multiple of another row (i.e. we multiply $A$ by an elimination matrix $E_{ij}$ for some multiplier $l_{ij}$).
  (e) If two rows of $A$ are equal then $\det A = 0$.
  (f) If we switch two rows of $A$ then the determinant switches sign.
  (g) If $A$ is lower or upper triangular then $\det A = \prod_{i=1}^n a_{ii}$.
  (h) $\det A$ is $\pm 1$ times the product of its pivots, with the sign determined by the number of row swaps made in elimination.
  (i) In particular, $\det A = 0$ if and only if $A$ is singular (if and only if the columns of $A$ are linearly dependent).
  (j) If $B$ and $C$ are both $n \times n$ then $\det(BC) = \det(B)\det(C)$.
- Cofactor expansion: The $(i, j)$ cofactor of $A$ is $\operatorname{cof}_{ij}(A) = (-1)^{i+j} \det A_{ij}$, where $A_{ij}$ denotes the $(n-1) \times (n-1)$ matrix obtained by removing the $i$th row and $j$th column from $A$. Laplace's formula says that for any $1 \le i \le n$,
  $$\det A = \sum_{j=1}^n a_{ij} \operatorname{cof}_{ij}(A) \quad \text{(expansion along the $i$th row)}$$
  and
  $$\det A = \sum_{k=1}^n a_{ki} \operatorname{cof}_{ki}(A) \quad \text{(expansion along the $i$th column)}.$$
- Know how to compute determinants using the Leibniz formula, row reduction (the answer is $\pm 1$ times the product of the pivots, where the sign is determined by the number of row swaps made in elimination), and cofactor expansion. Sketches of the first and last methods follow below.
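The Leibniz formula has $n!$ terms, so it is impractical for large $n$, but it translates directly into code. The sketch below (my illustration, not from the guide, with an arbitrary test matrix) counts inversions to get $\operatorname{sign}(\sigma)$ and compares the result against numpy's determinant routine:

```python
import numpy as np
from itertools import permutations

def inversions(sigma):
    """N(sigma): number of pairs i < j with sigma(i) > sigma(j)."""
    n = len(sigma)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if sigma[i] > sigma[j])

def det_leibniz(A):
    """det A = sum over all permutations sigma of sign(sigma) * prod_i a_{i, sigma(i)}."""
    n = A.shape[0]
    total = 0.0
    for sigma in permutations(range(n)):   # all n! permutations of {0, ..., n-1}
        sign = (-1) ** inversions(sigma)
        total += sign * np.prod([A[i, sigma[i]] for i in range(n)])
    return total

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
print(det_leibniz(A), np.linalg.det(A))    # both ~ 18.0
```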

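Laplace expansion along the first row likewise becomes a short recursion; again a sketch for illustration, with the same assumed test matrix:

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along row 0 (Laplace's formula)."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        # A_0j: delete row 0 and column j
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0, j] * det_cofactor(minor)   # cof_0j = (-1)^(0+j) det A_0j
    return total

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 4.]])
print(det_cofactor(A))    # 18.0, matching the Leibniz computation above
```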
(3) Linear transformations (Sections 8.1, 8.2)
For additional practice: 8.2: 10, 14, 20, 21

- Definition of a linear transformation: for real vector spaces $X, Y$, a mapping $T : X \to Y$ is a linear transformation if for every $f, g \in X$ and $a, b \in \mathbb{R}$, $T(af + bg) = aT(f) + bT(g)$.
- Examples:
  (a) If $A$ is an $m \times n$ real matrix, then $T_A : \mathbb{R}^n \to \mathbb{R}^m$ defined by $T_A(x) = Ax$ is a linear transformation.
  (b) $D = d/dx$ is a linear transformation from the space $P$ of all polynomials to itself.
- If $X$ is a real vector space with basis $B = \{f_1, \dots, f_n\}$, then for each $f \in X$ we can expand $f = \sum_{i=1}^n c_i f_i$ for some scalars $c_1, \dots, c_n \in \mathbb{R}$, and so we can associate to $f$ a column vector $(f)_B \in \mathbb{R}^n$ with components $c_1, \dots, c_n$. Note that the mapping $L_B : X \to \mathbb{R}^n$ given by $L_B(f) = (f)_B$ is a linear mapping, giving the coordinates of $f$ in the coordinate system $B$. This mapping has an inverse $L_B^{-1} : \mathbb{R}^n \to X$ given by $L_B^{-1}(a) = a_1 f_1 + \cdots + a_n f_n$.
- It turns out that example (a) above in a sense gives all linear transformations between finite dimensional vector spaces. Specifically, if $T : X \to Y$ is a linear transformation between real vector spaces $X, Y$ with respective bases $B_X = \{f_1, \dots, f_n\}$ and $B_Y = \{g_1, \dots, g_m\}$, then we get an $m \times n$ matrix $A = (a_{ij})$ by expanding the image of each $f_j$ in the basis for $Y$: since $g_1, \dots, g_m$ is a basis for $Y$, for each $1 \le j \le n$ there are scalars $a_{1j}, \dots, a_{mj} \in \mathbb{R}$ such that
  $$T(f_j) = \sum_{i=1}^m a_{ij} g_i.$$
  In other words, the columns of $A$ are the vectors $(T(f_j))_{B_Y}$. Now with this $A$ we have $(T(x_1 f_1 + \cdots + x_n f_n))_{B_Y} = Ax = T_A(x)$ for all $x \in \mathbb{R}^n$, or in other words, $L_{B_Y} \circ T \circ L_{B_X}^{-1} = T_A$ (the picture I drew in class is a nice visual aid for this fact). A worked instance appears after this list.
- Change of basis: If $B_{old} = \{f_1, \dots, f_n\}$ and $B_{new} = \{g_1, \dots, g_n\}$ are two different bases for a real vector space $X$, then for any $f \in X$ we can pass from a column vector representation $(f)_{B_{old}}$ in the first basis to a column vector representation $(f)_{B_{new}}$ in the second basis by multiplying by the $n \times n$ change of basis matrix $C$: $(f)_{B_{new}} = C (f)_{B_{old}}$. To find the matrix $C$, note that substituting an element $f_j$ of the basis $B_{old}$ for $f$ in the above equation yields $C e_j = c_j$, the $j$th column of $C$, on the right hand side, while on the left hand side we have $(f_j)_{B_{new}}$. So the columns are $c_j = (f_j)_{B_{new}}$. In other words, to find the $c_{ij}$ we just express each old basis vector $f_j$ in terms of the new basis vectors: $f_j = \sum_{i=1}^n c_{ij} g_i$.
- How do matrices transform under change of basis? Consider a linear transformation $T : X \to X$ (from a space to itself, as is the case with square matrices for instance), let $B_{old} = \{f_1, \dots, f_n\}$ and $B_{new} = \{g_1, \dots, g_n\}$ be two bases for $X$, and let $C : \mathbb{R}^n \to \mathbb{R}^n$ be the change of basis matrix as above. Letting $A$ be the matrix for $T$ in the basis $B_{old}$ and $B$ be the matrix for $T$ in the basis $B_{new}$, we have $B = CAC^{-1}$. (This is perhaps easiest to see using the diagram I made in class.)
- Given a linear transformation $T : X \to Y$ between abstract vector spaces (such as the derivative transformation $D = d/dx$ from the space of degree three polynomials to itself), know how to write down the matrix representation for $T$ with respect to given bases for $X$ and $Y$. For the case $X = Y$ with two given bases, know how to write down the change of basis matrix for going from column vector representations in one basis to the other, and know how to transform the matrix representation of $T$ under the different bases. (Review the homework problems and examples from lecture, and ask questions!)
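As a worked instance of the recipe above, here is the matrix of $D = d/dx$ on polynomials of degree at most 3, using the monomial basis $B = \{1, x, x^2, x^3\}$ for both the domain and the target (the basis choice is my assumption; the guide names the same transformation as an example):

```python
import numpy as np

# Basis B = {1, x, x^2, x^3} for cubic polynomials.  Column j of A holds the
# B-coordinates of D(f_j): D(1) = 0, D(x) = 1, D(x^2) = 2x, D(x^3) = 3x^2.
A = np.array([[0., 1., 0., 0.],
              [0., 0., 2., 0.],
              [0., 0., 0., 3.],
              [0., 0., 0., 0.]])

p = np.array([5., -1., 2., 4.])   # coordinates of p(x) = 5 - x + 2x^2 + 4x^3
print(A @ p)                      # [-1. 4. 12. 0.] = coords of p'(x) = -1 + 4x + 12x^2
```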

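And a numeric check of the change-of-basis formula $B = CAC^{-1}$ on an assumed toy example in $\mathbb{R}^2$. Here the columns of $M$ are the new basis vectors written in old (standard) coordinates, so $M$ converts new coordinates to old ones and $C = M^{-1}$:

```python
import numpy as np

# Old basis: the standard basis of R^2.  New basis, in old coordinates:
g1, g2 = np.array([1., 1.]), np.array([1., -1.])
M = np.column_stack([g1, g2])     # maps new coords -> old coords
C = np.linalg.inv(M)              # change of basis matrix: (f)_new = C (f)_old

A = np.array([[0., 1.],           # matrix of T in the old basis:
              [1., 0.]])          # T swaps coordinates, T(x, y) = (y, x)
B = C @ A @ np.linalg.inv(C)      # matrix of the same T in the new basis
print(np.round(B, 10))            # diag(1, -1): g1, g2 are eigenvectors of T
```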
(4) Eigenvalues and eigenvectors (Sections 6.1, 6.2, 6.4, 7.2, some of 7.4)
For additional practice: 6.1: 5, 10, 13, 14, 21, 24; 6.2: 2, 4, 7, 11, 15, 16; 6.4: 5, 6, 23; 7.2: 2, 8

- Algebra and arithmetic of complex numbers: $\mathbb{C} = \{a + ib : a, b \in \mathbb{R}\}$, where $i = \sqrt{-1}$.
- Complex conjugate: for $z = a + ib \in \mathbb{C}$, $\bar{z} = a - ib$. In particular, $z\bar{z} = (a + ib)(a - ib) = a^2 + b^2 = |z|^2$ is the squared distance of $z$ to the origin in the complex plane. We call $|z|$ the modulus of $z$ (we don't say "absolute value", even though it generalizes the absolute value for real numbers).
- Euler's formula: for any $\theta \in \mathbb{R}$, $e^{i\theta} = \cos\theta + i\sin\theta$. In particular, for any integer $n$, $e^{i(\theta + 2\pi n)} = e^{i\theta}$. We can use this to give a polar representation for complex numbers (complementing the Cartesian representation $z = a + ib$): for $z = a + ib$, letting $\theta \in [0, 2\pi)$ be such that $|z|\cos\theta = a$ and $|z|\sin\theta = b$, we have $z = |z|(\cos\theta + i\sin\theta) = |z| e^{i\theta}$.
- The $n$th roots of unity are the $n$ distinct complex roots of the polynomial $z^n - 1$ (i.e. the solutions of the equation $z^n = 1$). Write $z = |z| e^{i\theta}$. First solve for the modulus of $z$: taking the modulus of both sides of $z^n = 1$ we get $|z|^n = 1$, so $|z| = 1$. Now we want to find the angles $\theta$ such that $1 = (e^{i\theta})^n = e^{in\theta}$. We can represent the 1 on the left hand side as $1 = e^{2\pi i m}$ for any integer $m$. Thus, any $\theta$ for which $e^{in\theta} = e^{2\pi i m}$ for some integer $m$ gives a root of $z^n - 1$. Equating exponents, we find that the set of all angles $\theta$ that work is the set of all integer multiples of $2\pi/n$: $\theta \in \{\dots, -4\pi/n, -2\pi/n, 0, 2\pi/n, 4\pi/n, \dots\}$. But these numbers repeat themselves; the list of distinct angles is $0, 2\pi/n, \dots, 2\pi(n-1)/n$. So the $n$th roots of unity are $\{e^{2\pi i k/n} : 0 \le k \le n-1\}$. These numbers divide the unit circle in $\mathbb{C}$ into $n$ equal arcs, and include the number 1.
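A quick check of the roots-of-unity formula (illustration only, with $n = 6$):

```python
import numpy as np

n = 6
roots = np.exp(2j * np.pi * np.arange(n) / n)   # e^(2*pi*i*k/n) for k = 0, ..., n-1
print(np.round(roots ** n, 10))                 # all equal to 1: each is an nth root of unity
print(np.round(np.abs(roots), 10))              # all have modulus 1: they lie on the unit circle
```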

- Definitions: Let $A$ be an $n \times n$ real or complex matrix (we only deal with real matrices in this class, but that doesn't prevent their eigenvectors and eigenvalues from being complex!). A complex scalar $\lambda \in \mathbb{C}$ is an eigenvalue of $A$ if $Av = \lambda v$ for some nonzero $v \in \mathbb{C}^n$; such a $v$ is called an eigenvector of $A$ associated to the eigenvalue $\lambda$. Note that any nonzero scalar multiple of an eigenvector $v$ is also an eigenvector with the same eigenvalue. This motivates defining the eigenspace $W_\lambda = \{v \in \mathbb{C}^n : Av = \lambda v\}$. You should be able to show that if $\lambda$ is an eigenvalue of $A$ then $W_\lambda$ is a subspace of $\mathbb{C}^n$ (where we now use scalar multiplication with complex numbers, so $W$ is a subspace of $\mathbb{C}^n$ if $ax + by \in W$ for all $x, y \in W$ and $a, b \in \mathbb{C}$).
- Know how to find all eigenvalues and associated eigenvectors and eigenspaces for a given real square matrix $A$:
  (a) Write down the characteristic polynomial $\chi_A(z) = \det(A - zI_n)$, and find all of its roots (counting multiplicities). That is, we factorize $\chi_A(z) = (-1)^n (z - \lambda_1) \cdots (z - \lambda_n)$, where the number of repetitions of $\lambda$ in the list $\lambda_1, \dots, \lambda_n$ is called its algebraic multiplicity.
  (b) For each eigenvalue $\lambda$, look for nonzero solutions $v$ to $(A - \lambda I_n)v = 0$. If $\lambda$ has algebraic multiplicity 1 then you will find one linearly independent eigenvector (i.e. the eigenspace $W_\lambda$ is one-dimensional) and you get to choose the (nonzero) scalar multiple. (Important: later, when we do the SVD, we take the eigenvectors to be unit vectors, so we only get to choose the sign.) If $\lambda$ has algebraic multiplicity greater than 1 then you should look for multiple linearly independent solutions to $(A - \lambda I)v = 0$. Once you have the maximum possible number of linearly independent eigenvectors, you have a basis for $W_\lambda$. The dimension of $W_\lambda$ is called the geometric multiplicity of $\lambda$.
  (c) Sometimes an eigenvalue has geometric multiplicity smaller than its algebraic multiplicity. In this case we do not obtain a full basis of eigenvectors for $\mathbb{C}^n$, and we say that $A$ is not diagonalizable. Smallest example: the matrix
      $$A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}$$
      has eigenvalue $\lambda = 0$ with algebraic multiplicity 2 and geometric multiplicity 1.
  (d) Note that $W_0$ is the null space of $A$.
- Diagonalization: If you obtain bases for each eigenspace $W_\lambda$ of size equal to the algebraic multiplicity of $\lambda$, then you have obtained $n$ linearly independent eigenvectors $v_1, \dots, v_n$ for $A$, which hence form a basis (called an "eigenbasis") for $\mathbb{C}^n$. In this case we say that $A$ is diagonalizable. Let $V = (v_1 \cdots v_n)$ and put the associated eigenvalues $\lambda_1, \dots, \lambda_n$ along the diagonal of a diagonal matrix $\Lambda = \operatorname{diag}(\lambda_1, \dots, \lambda_n)$. Then $A = V\Lambda V^{-1}$. You can prove this by showing that the columns of $AV$ are the same as the columns of $V\Lambda$.
- Misc: The trace of an $n \times n$ matrix $A = (a_{ij})$ is defined as $\operatorname{tr} A = a_{11} + \cdots + a_{nn}$. Shortcut for writing down the characteristic polynomial of a $2 \times 2$ matrix $A$: $\chi_A(\lambda) = \lambda^2 - (\operatorname{tr} A)\lambda + \det A$.
- Know how to use the diagonalization of a matrix to easily raise matrices to high powers or invert them: if $A = V\Lambda V^{-1}$ then $A^k = V\Lambda^k V^{-1}$ for any integer $k \ge 1$, and, if $A$ is invertible, $A^{-1} = V\Lambda^{-1} V^{-1}$. (See the sketch after this section.)
- Fact you should know (you don't need to know the proof): if $v_1, \dots, v_k$ are eigenvectors of $A$ associated to distinct eigenvalues $\lambda_1, \dots, \lambda_k \in \mathbb{C}$, then $v_1, \dots, v_k$ are linearly independent. In particular, if all of the eigenvalues of $A$ have algebraic multiplicity 1 then $A$ is diagonalizable.
- Diagonalization of symmetric matrices. The Spectral Theorem (for symmetric matrices): Let $A$ be $n \times n$ symmetric and real. Then all of the eigenvalues of $A$ are real, and $A$ has an associated set of real eigenvectors $q_1, \dots, q_n$ that is an orthonormal basis for $\mathbb{R}^n$. That is, $A$ has a factorization $A = Q\Lambda Q^T$ where $Q$ is orthogonal and $\Lambda$ is real diagonal. (I told you the spectral theorem for Hermitian matrices for your general cultural knowledge :) since it's important for quantum mechanics, but you don't need to know about that for the final.) Know how to show that the eigenvalues of a symmetric matrix are real.
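A sketch of diagonalization and the $A^k = V\Lambda^k V^{-1}$ shortcut using numpy's eigensolver (the example matrix is an arbitrary choice with distinct eigenvalues 5 and 2):

```python
import numpy as np

A = np.array([[4., 1.],
              [2., 3.]])
lams, V = np.linalg.eig(A)            # columns of V are eigenvectors; lams the eigenvalues
Lam = np.diag(lams)
print(np.allclose(A, V @ Lam @ np.linalg.inv(V)))     # True: A = V Lam V^{-1}

# High powers via the diagonalization: A^5 = V Lam^5 V^{-1}
A5 = V @ np.diag(lams ** 5) @ np.linalg.inv(V)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))  # True
```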

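For symmetric matrices, numpy's eigh routine returns exactly the orthonormal eigenbasis promised by the spectral theorem; a small check on an assumed example matrix:

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])         # symmetric, so A = Q Lam Q^T with Q orthogonal
lams, Q = np.linalg.eigh(A)          # eigh: eigensolver for symmetric/Hermitian matrices
print(lams)                                       # real eigenvalues
print(np.allclose(Q.T @ Q, np.eye(3)))            # True: Q is orthogonal
print(np.allclose(A, Q @ np.diag(lams) @ Q.T))    # True: the spectral factorization
```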
- Singular value decomposition: applies to any real $m \times n$ matrix (there's also a version for complex matrices, but we didn't cover that). For any such $A$, we can write $A = U\Sigma V^T$, where $U$ is $m \times m$ orthogonal, $V$ is $n \times n$ orthogonal, and $\Sigma$ is $m \times n$ diagonal with non-negative entries in non-increasing order: $\sigma_1 \ge \cdots \ge \sigma_r > 0 = \sigma_{r+1} = \sigma_{r+2} = \cdots$. In terms of the first $r$ singular values/vectors, we get a representation of $A$ as a sum of rank 1 matrices of decreasing size (or importance, in statistics applications):
  $$A = \sum_{i=1}^r \sigma_i u_i v_i^T.$$
  Know how to find this decomposition (see the notes or the book). Know that $\{u_1, \dots, u_r\}$ is an orthonormal basis for $C(A)$ (so $r$, the number of nonzero singular values, is the rank), $\{u_{r+1}, \dots, u_m\}$ is an orthonormal basis for $C(A)^\perp = N(A^T)$, $\{v_1, \dots, v_r\}$ is an orthonormal basis for $N(A)^\perp = C(A^T)$, and $\{v_{r+1}, \dots, v_n\}$ is an orthonormal basis for $N(A)$.
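Finally, a sketch verifying the SVD facts above on an assumed $2 \times 3$ example: the number of nonzero singular values is the rank, the rank-one sum rebuilds $A$, and the trailing $v_i$ span $N(A)$:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])               # 2 x 3, rank 2
U, s, Vt = np.linalg.svd(A)                # A = U diag(s) V^T, s in non-increasing order
r = np.sum(s > 1e-12)                      # number of nonzero singular values = rank(A)
print(r)                                   # 2

# Rank-one expansion: A = sum of s_i * u_i * v_i^T over the first r singular triples
A_rebuilt = sum(s[i] * np.outer(U[:, i], Vt[i, :]) for i in range(r))
print(np.allclose(A, A_rebuilt))           # True

# v_{r+1}, ..., v_n span the nullspace N(A)
print(np.round(A @ Vt[r:].T, 10))          # column of zeros: A v = 0 for these v
```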