TBP MATH33A Review Sheet. November 24, 2018


General Transformation Matrices

Scaling by k: to scale by a factor k, use the matrix
    kI_2 = \begin{bmatrix} k & 0 \\ 0 & k \end{bmatrix}

Orthogonal projection onto a line L: given a unit vector \vec{u} = \begin{bmatrix} u_1 \\ u_2 \end{bmatrix} parallel to L (so u_1^2 + u_2^2 = 1), the matrix
    \begin{bmatrix} u_1^2 & u_1 u_2 \\ u_1 u_2 & u_2^2 \end{bmatrix}
generates an orthogonal projection onto L.

Reflection about a line: use a matrix of the form
    \begin{bmatrix} a & b \\ b & -a \end{bmatrix}
where a^2 + b^2 = 1.

Rotation through angle \theta: use a matrix of the form
    \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
or, more generally,
    \begin{bmatrix} a & -b \\ b & a \end{bmatrix}
where a^2 + b^2 = 1.

Rotation through angle \theta combined with scaling by r: a matrix of the form
    \begin{bmatrix} a & -b \\ b & a \end{bmatrix} = r\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
where now a^2 + b^2 = r^2.

Horizontal shear: matrices of the form
    \begin{bmatrix} 1 & k \\ 0 & 1 \end{bmatrix}
where k is an arbitrary constant.
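The rotation-scaling relation above can be checked numerically. A minimal NumPy sketch (the angle, scale factor, and shear constant are made-up illustrations, not from the source):

```python
import numpy as np

theta, k, r = np.pi / 3, 2.0, 1.5   # made-up example parameters

scale = k * np.eye(2)                                  # scaling by k
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])      # rotation through theta
rot_scale = r * rot                                    # rotation combined with scaling by r
shear = np.array([[1.0, k],
                  [0.0, 1.0]])                         # horizontal shear

# A rotation-scaling matrix [[a, -b], [b, a]] satisfies a^2 + b^2 = r^2.
a, b = rot_scale[0, 0], rot_scale[1, 0]
print(np.isclose(a**2 + b**2, r**2))   # True
```

Note that the pure rotation matrix is orthogonal (its transpose is its inverse), which is why length is preserved.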

Vertical shear: matrices of the form
    \begin{bmatrix} 1 & 0 \\ k & 1 \end{bmatrix}
where k is an arbitrary constant.

Chapter 1

The rank of a matrix A is the dimension of im(A). If rank(\tilde{A}) = rank(A), where \tilde{A} is the augmented matrix of the system, then the system is solvable, and the dimension of the solution space equals the number of free variables. Furthermore, if A is an m × n matrix with m < n and the system is solvable, there are infinitely many solutions.

Chapter 2

A transformation T : R^m → R^n is linear iff
    T(\vec{v} + \vec{w}) = T(\vec{v}) + T(\vec{w}) for all vectors \vec{v} and \vec{w} in R^m, and
    T(k\vec{v}) = kT(\vec{v}) for all vectors \vec{v} in R^m and all scalars k.

Earlier, we defined an orthogonal projection onto a line L using a unit vector \vec{u} on the line. We now treat the more general case of a vector \vec{w} = \begin{bmatrix} w_1 \\ w_2 \end{bmatrix} on the line that need not be a unit vector. In this case the projection matrix is
    \frac{1}{w_1^2 + w_2^2}\begin{bmatrix} w_1^2 & w_1 w_2 \\ w_1 w_2 & w_2^2 \end{bmatrix}

Furthermore, a reflection with respect to the line L through an arbitrary vector \vec{w} on L can be found using
    \frac{1}{w_1^2 + w_2^2}\begin{bmatrix} w_1^2 - w_2^2 & 2w_1 w_2 \\ 2w_1 w_2 & w_2^2 - w_1^2 \end{bmatrix}
which more generally takes the form
    \begin{bmatrix} a & b \\ b & -a \end{bmatrix}
where, as before, a^2 + b^2 = 1.

Generally, for subspaces L and V such that V = L^{\perp},
    Ref_V(\vec{x}) = -Ref_L(\vec{x})
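The projection and reflection formulas for a line can be sanity-checked numerically. A sketch using a made-up vector \vec{w} = (3, 4):

```python
import numpy as np

# Projection and reflection matrices for the line through w = (3, 4),
# following the formulas above (w need not be a unit vector).
w1, w2 = 3.0, 4.0
n2 = w1**2 + w2**2

proj = np.array([[w1 * w1, w1 * w2],
                 [w1 * w2, w2 * w2]]) / n2
refl = np.array([[w1**2 - w2**2, 2 * w1 * w2],
                 [2 * w1 * w2, w2**2 - w1**2]]) / n2

# Projecting twice is the same as projecting once; reflecting twice is the identity.
print(np.allclose(proj @ proj, proj))        # True
print(np.allclose(refl @ refl, np.eye(2)))   # True
```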

The matrix of the orthogonal projection onto a subspace with orthonormal basis vectors \vec{u}_i (defined as before) is QQ^T, where
    Q = \begin{bmatrix} \vec{u}_1 & \vec{u}_2 \end{bmatrix}
for the 2D case and
    Q = \begin{bmatrix} \vec{u}_1 & \vec{u}_2 & \vec{u}_3 \end{bmatrix}
for the 3D case.

Finally, the matrix for reflection about a subspace V is twice the projection matrix minus the identity matrix of that dimension:
    A_{Ref_V} = 2A_{P_V} - I_d

Chapter 3

We define a subspace as a set containing the zero vector that is closed under addition and under scalar multiplication. Furthermore, if T(\vec{x}) = A\vec{x} is a linear transformation from R^m to R^n, then
    ker(T) = ker(A) is a subspace of R^m
    im(T) = im(A) is a subspace of R^n

We define a basis of a subspace V of R^n to be a set of linearly independent vectors that span V:
    B = (\vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_m)
Any vector \vec{x} in V can be written as
    \vec{x} = c_1\vec{v}_1 + c_2\vec{v}_2 + c_3\vec{v}_3 + ... + c_m\vec{v}_m
where c_1, c_2, c_3, ..., c_m are the B-coordinates of \vec{x}, and the vector
    [\vec{x}]_B = \begin{bmatrix} c_1 \\ c_2 \\ c_3 \\ \vdots \\ c_m \end{bmatrix}
is defined as the B-coordinate vector of \vec{x}. Writing B for the matrix whose columns are the basis vectors, we see
    \vec{x} = B[\vec{x}]_B

To get a better idea of the relationship between transformations and bases, we refer to the diagram on page 152 of the Bretscher textbook.
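The relation \vec{x} = B[\vec{x}]_B means the coordinate vector is found by solving a linear system. A sketch with made-up basis vectors:

```python
import numpy as np

# B-coordinates: with basis matrix B = [v1 v2], the coordinate vector
# [x]_B solves x = B [x]_B. Basis vectors here are made up.
v1 = np.array([1.0, 1.0])
v2 = np.array([1.0, -1.0])
B = np.column_stack([v1, v2])

x = np.array([5.0, 1.0])
coords = np.linalg.solve(B, x)      # [x]_B

print(np.allclose(B @ coords, x))   # True
print(coords)                       # [3. 2.]  since x = 3*v1 + 2*v2
```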

To explain this diagram further, we start at the top left corner and consider an arbitrary vector \vec{x} on an arbitrary line L. To describe the vector in the standard basis, we need two basis vectors and two coefficients; looking at the bottom left of the diagram, we see that the coordinate vector of \vec{x} has two different coefficients. However, if we change the basis so that one of the basis vectors \vec{v}_1 lies on L, the coordinate vector simplifies so that only the coefficient c_1 is relevant. The equivalent transformation for this change of basis takes
    \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} to \begin{bmatrix} c_1 \\ 0 \end{bmatrix}
We observe a parallelism between the transformation matrix and the coordinate vector.

Chapter 5

We define a set of orthonormal vectors to be a set of unit vectors that are all perpendicular to each other; such vectors are automatically linearly independent. We also define the transpose of A, with notation A^T, such that if
    A = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{bmatrix}
then
    A^T = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{bmatrix}
Consequently, \vec{v}^T\vec{w} = \vec{v} \cdot \vec{w}. The orthogonal projection of \vec{x} onto a subspace V is denoted \vec{x}^{\parallel}:
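The transpose example above, and the fact that \vec{v}^T\vec{w} is the dot product, can be verified directly (the vectors \vec{v}, \vec{w} are made up):

```python
import numpy as np

# The transpose swaps rows and columns; this is the 2x3 example from the text.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
print(np.array_equal(A.T, [[1, 4], [2, 5], [3, 6]]))   # True

# v^T w equals the dot product v . w.
v = np.array([1.0, 2.0])
w = np.array([3.0, 4.0])
print(v @ w == np.dot(v, w))   # True (both are 11.0)
```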

If V is a subspace of R^n with orthonormal basis \vec{u}_1, \vec{u}_2, \vec{u}_3, ..., \vec{u}_m, then
    proj_V \vec{x} = \vec{x}^{\parallel} = (\vec{u}_1 \cdot \vec{x})\vec{u}_1 + ... + (\vec{u}_m \cdot \vec{x})\vec{u}_m

Cauchy-Schwarz inequality: |\vec{x} \cdot \vec{y}| \le \|\vec{x}\|\,\|\vec{y}\|

Gram-Schmidt process: convert an old basis \vec{v}_1, \vec{v}_2, ..., \vec{v}_m into an orthonormal basis \vec{u}_1, \vec{u}_2, ..., \vec{u}_m, whose vectors are found by
    \vec{u}_1 = \frac{1}{\|\vec{v}_1\|}\vec{v}_1, \quad \vec{u}_2 = \frac{1}{\|\vec{v}_2^{\perp}\|}\vec{v}_2^{\perp}, \quad ..., \quad \vec{u}_m = \frac{1}{\|\vec{v}_m^{\perp}\|}\vec{v}_m^{\perp}
where we define
    \vec{v}_j^{\perp} = \vec{v}_j - (\vec{u}_1 \cdot \vec{v}_j)\vec{u}_1 - ... - (\vec{u}_{j-1} \cdot \vec{v}_j)\vec{u}_{j-1}

QR factorization is a method we use to express a matrix in terms of the new basis. We use the equation
    A = QR
where A is the matrix with the old basis vectors \vec{v}_1, \vec{v}_2, ..., \vec{v}_m as its columns and
    Q = \begin{bmatrix} \vec{u}_1 & \vec{u}_2 & ... & \vec{u}_m \end{bmatrix}
is the matrix with the orthonormal basis vectors as its columns. This equation can also be rewritten as
    R = Q^T A
Note that, in using this, Q must have orthonormal columns and R is an upper triangular matrix with positive diagonal entries.

A transformation is classified as an orthogonal transformation if it preserves length. If T(\vec{x}) = A\vec{x} is an orthogonal transformation, A is an orthogonal matrix. In general, an orthogonal matrix is invertible, its determinant is ±1, the matrix multiplied by its transpose is the identity matrix, and the inverse of the matrix is equal to its transpose.

The matrix of an orthogonal projection is
    P = QQ^T
where Q = \begin{bmatrix} \vec{u}_1 & \vec{u}_2 & ... & \vec{u}_m \end{bmatrix} has orthonormal columns.

General properties of the transpose:
    (A + B)^T = A^T + B^T
    (kA)^T = kA^T
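The Gram-Schmidt steps above can be sketched directly in NumPy; the starting vectors are made up, and the resulting Q and R satisfy A = QR:

```python
import numpy as np

# Classical Gram-Schmidt on the columns of A, then R = Q^T A.
A = np.array([[1.0, 1.0],
              [1.0, 0.0],
              [0.0, 1.0]])

Q = np.zeros_like(A)
for j in range(A.shape[1]):
    v = A[:, j].copy()
    for i in range(j):                           # subtract components along u_1..u_{j-1}
        v -= (Q[:, i] @ A[:, j]) * Q[:, i]
    Q[:, j] = v / np.linalg.norm(v)              # normalize the perpendicular part

R = Q.T @ A                                      # upper triangular, positive diagonal

print(np.allclose(Q.T @ Q, np.eye(2)))   # True: columns of Q are orthonormal
print(np.allclose(Q @ R, A))             # True: A = QR
```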

    (AB)^T = B^T A^T
    rank(A^T) = rank(A)
    (A^T)^{-1} = (A^{-1})^T
    (im(A))^{\perp} = ker(A^T)

Least-squares solution:
    (A^T A)\vec{x}^* = A^T\vec{b}
If A^T A is invertible (equivalently, the columns of A are linearly independent), then
    \vec{x}^* = (A^T A)^{-1}A^T\vec{b}
and the solution is unique. For A\vec{x} = \vec{b}, the vector A\vec{x}^* is the orthogonal projection of \vec{b} onto the image of A, and the matrix of this orthogonal projection is
    A(A^T A)^{-1}A^T

Chapter 6

The determinant of an (upper or lower) triangular matrix is the product of the diagonal entries of the matrix.
    det(A^T) = det(A)
Determinant operations:
    Swapping two rows multiplies the determinant by -1.
    Dividing a row by a scalar k multiplies the determinant by 1/k.
A square matrix is invertible if and only if det(A) ≠ 0.

To make it easier to solve for the determinant of a given matrix A, we row-reduce A to an upper triangular matrix B and find that
    det(A) = (-1)^s k_1 k_2 ... k_r det(B)
where s is the number of row swaps and k_1, ..., k_r are the scalars we divided rows of A by.

    det(AB) = det(A) det(B)
    det(A^m) = (det(A))^m
    det(A^{-1}) = 1/det(A) = (det(A))^{-1}

Chapter 7

An eigenvector of A is a nonzero vector \vec{v} such that
    A\vec{v} = \lambda\vec{v}
where \lambda is an eigenvalue. An eigenbasis is a basis consisting of eigenvectors of A.
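The normal equations can be illustrated with a small made-up overdetermined system; the defining property of the least-squares solution is that the residual \vec{b} - A\vec{x}^* is orthogonal to im(A):

```python
import numpy as np

# Least squares via the normal equations (A^T A) x* = A^T b; data is made up.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])
b = np.array([0.0, 1.0, 1.0])

x_star = np.linalg.solve(A.T @ A, A.T @ b)

# The residual is orthogonal to every column of A, so A x* = proj_{im(A)} b.
residual = b - A @ x_star
print(np.allclose(A.T @ residual, 0))   # True
```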

If A has an eigenbasis \vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_n, then A is diagonalizable:
    B = S^{-1}AS  or  A = SBS^{-1}
where
    S = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \vec{v}_3 & ... & \vec{v}_n \end{bmatrix}
and
    B = \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_n \end{bmatrix}

The trace of a matrix is defined as
    tr(A) = a_{11} + a_{22} + ... + a_{nn}
where a_{ii} is the entry of the matrix at row i and column i.

The eigenvalues of A are the roots of the characteristic polynomial
    f_A(\lambda) = det(A - \lambda I)

An eigenspace is defined as
    E_\lambda = ker(A - \lambda I_n) = \{\vec{v} \text{ in } R^n : A\vec{v} = \lambda\vec{v}\}
Consequently, dim(E_\lambda) is the geometric multiplicity of \lambda.

Complex numbers:
    z = a + bi = r(\cos\theta + i\sin\theta)
    e^{i\theta} = \cos\theta + i\sin\theta
Thus z = re^{i\theta}. The conjugate is \bar{z} = a - bi, and
    z\bar{z} = a^2 + b^2 = |z|^2 = r^2

If A has eigenvalues a ± ib where b ≠ 0, and \vec{v} + i\vec{w} is an eigenvector of A with eigenvalue a + ib, then
    S^{-1}AS = \begin{bmatrix} a & -b \\ b & a \end{bmatrix}
where
    S = \begin{bmatrix} \vec{w} & \vec{v} \end{bmatrix}
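Diagonalization A = SBS^{-1} can be checked numerically; the matrix here is made up (its eigenvalues are 5 and 2, both real):

```python
import numpy as np

# Diagonalization: columns of S form an eigenbasis, B is diagonal.
A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigenvalues, S = np.linalg.eig(A)   # columns of S are eigenvectors
B = np.diag(eigenvalues)

print(np.allclose(S @ B @ np.linalg.inv(S), A))    # True: A = S B S^{-1}
print(np.isclose(np.trace(A), eigenvalues.sum()))  # True: trace = sum of eigenvalues
```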

Equivalently,
    A = S\begin{bmatrix} a & -b \\ b & a \end{bmatrix}S^{-1}
with S = \begin{bmatrix} \vec{w} & \vec{v} \end{bmatrix} as above.

Chapter 8

A symmetric matrix is one with A = A^T. If A is an n × n symmetric matrix:
    There exists an orthonormal eigenbasis (and thus the matrix is orthogonally diagonalizable).
    All eigenvalues are real (\bar{\lambda} = \lambda), and there are n real eigenvalues counted with algebraic multiplicity.
    \vec{x} \cdot T(\vec{y}) = \vec{y} \cdot T(\vec{x}) for the associated transformation T.
    For any orthonormal basis B, [T]_B is symmetric.
If \vec{v}_1 and \vec{v}_2 are eigenvectors of A with distinct eigenvalues \lambda_1 and \lambda_2, then \vec{v}_1 \cdot \vec{v}_2 = 0; that is, \vec{v}_1 is orthogonal to \vec{v}_2.

To perform an orthogonal diagonalization of a symmetric matrix A:
    Find the eigenvalues of A, and find a basis of each eigenspace.
    Using the Gram-Schmidt process, find an orthonormal basis of each eigenspace.
    Form an orthonormal eigenbasis \vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_n for A by concatenating the orthonormal bases found in the previous step, and set
        S = \begin{bmatrix} \vec{v}_1 & \vec{v}_2 & \vec{v}_3 & ... & \vec{v}_n \end{bmatrix}
Then S is orthogonal and S^{-1}AS is diagonal.

Quadratic forms:
    q(\vec{x}) = \vec{x} \cdot A\vec{x} = \vec{x}^T A\vec{x} = \lambda_1 c_1^2 + \lambda_2 c_2^2 + ... + \lambda_n c_n^2
where we define A to be a symmetric n × n matrix, B to be an orthonormal eigenbasis for A with associated eigenvalues \lambda_1, \lambda_2, ..., \lambda_n, and the c_i to be the coordinates of \vec{x} with respect to B.

Under the orthonormal coordinate transformation \vec{x} = S\vec{y}, i.e. \vec{y} = S^{-1}\vec{x} = S^T\vec{x},
    q(y_1, y_2, ..., y_n) = \lambda_1 y_1^2 + \lambda_2 y_2^2 + ... + \lambda_n y_n^2
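Orthogonal diagonalization and the eigenvalue form of q(\vec{x}) can be sketched with NumPy's `eigh`, which returns an orthonormal eigenbasis for a symmetric matrix; A and x are made up:

```python
import numpy as np

# Orthogonal diagonalization of a symmetric matrix.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, S = np.linalg.eigh(A)   # S is orthogonal: S^T S = I

print(np.allclose(S.T @ S, np.eye(2)))                 # True
print(np.allclose(S.T @ A @ S, np.diag(eigenvalues)))  # True: S^{-1} A S is diagonal

# Quadratic form q(x) = x^T A x = sum of lambda_i c_i^2 in eigenbasis coordinates.
x = np.array([1.0, 3.0])
c = S.T @ x                          # coordinates of x with respect to the eigenbasis
print(np.isclose(x @ A @ x, (eigenvalues * c**2).sum()))   # True
```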

A is
    positive definite if q(\vec{x}) is positive for all nonzero \vec{x} in R^n, which holds iff det(A^{(m)}) > 0 for all m = 1, 2, ..., n (the leading principal submatrices). This implies that all eigenvalues are positive.
    positive semidefinite if q(\vec{x}) ≥ 0 for all \vec{x} in R^n. This implies that all eigenvalues are positive or zero.
    indefinite if q(\vec{x}) takes positive as well as negative values.

The principal axes for q(\vec{x}) = \vec{x} \cdot A\vec{x}, with A a symmetric n × n matrix with n distinct eigenvalues, are the eigenspaces of A.

Consider the curve defined by
    q(x_1, x_2) = ax_1^2 + bx_1 x_2 + cx_2^2 = 1
with \lambda_1, \lambda_2 the eigenvalues of the matrix
    \begin{bmatrix} a & b/2 \\ b/2 & c \end{bmatrix}
of q. If both \lambda_1, \lambda_2 are positive, the curve is an ellipse; if one is positive and one is negative, the curve is a hyperbola.

If L(\vec{x}) = A\vec{x} is an invertible linear transformation from R^2 to R^2, then A^T A is symmetric, the image of the unit circle under L is an ellipse E, and the lengths of the semimajor and semiminor axes of E are the singular values \sigma_1 and \sigma_2 of A.

Singular value decomposition: we define the equation
    A = U\Sigma V^T
where the columns of V form an orthonormal eigenbasis for A^T A with eigenvalues ordered such that
    \lambda_1 \ge \lambda_2 \ge ... \ge \lambda_r > 0 = \lambda_{r+1} = ... = \lambda_m
with corresponding singular values (defined as the square roots of the eigenvalues)
    \sigma_1 \ge \sigma_2 \ge ... \ge \sigma_r > 0 = \sigma_{r+1} = ... = \sigma_m
We take the columns of V to be the orthonormal vectors \vec{v}_1, \vec{v}_2, \vec{v}_3, ..., \vec{v}_m in R^m such that L(\vec{v}_1), L(\vec{v}_2), L(\vec{v}_3), ..., L(\vec{v}_m) are orthogonal. Then
    \Sigma = \begin{bmatrix} \sigma_1 & & & \\ & \ddots & & \\ & & \sigma_r & \\ & & & 0 \end{bmatrix}
(the diagonal matrix with \sigma_1, ..., \sigma_r followed by zeros), and
    U = \begin{bmatrix} \vec{u}_1 & ... & \vec{u}_n \end{bmatrix}
where the \vec{u}_i are unit vectors defined by
    \vec{u}_1 = \frac{1}{\sigma_1}A\vec{v}_1, \; ..., \; \vec{u}_r = \frac{1}{\sigma_r}A\vec{v}_r
such that A\vec{v}_i = \sigma_i\vec{u}_i for i = 1, 2, 3, ..., r and A\vec{v}_i = \vec{0} for i = r + 1, ..., m (the remaining columns \vec{u}_{r+1}, ..., \vec{u}_n extend these to an orthonormal basis).

Furthermore, if A is an n × m matrix of rank r, then the singular values \sigma_1, \sigma_2, ..., \sigma_r are nonzero while \sigma_{r+1}, ..., \sigma_m are zero.
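The SVD relations above can be verified numerically with a made-up matrix: the singular values are the square roots of the eigenvalues of A^T A, and A\vec{v}_i = \sigma_i\vec{u}_i:

```python
import numpy as np

# SVD sketch: A = U Sigma V^T.
A = np.array([[3.0, 0.0],
              [4.0, 5.0]])

U, sigma, Vt = np.linalg.svd(A)

# Singular values are sqrt of the eigenvalues of A^T A, in decreasing order.
lams = np.linalg.eigvalsh(A.T @ A)[::-1]    # eigvalsh sorts ascending, so reverse
print(np.allclose(sigma, np.sqrt(lams)))    # True

# A v_i = sigma_i u_i for each i (columns of Vt.T are the v_i).
print(np.allclose(A @ Vt.T, U * sigma))     # True
```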