Chapter 3 Transformations

An Introduction to Optimization, Spring 2014. Wei-Ta Chu.

Linear Transformations

A function $\mathcal{L}: \mathbb{R}^n \to \mathbb{R}^m$ is called a linear transformation if
1. $\mathcal{L}(a x) = a \mathcal{L}(x)$ for every $x \in \mathbb{R}^n$ and $a \in \mathbb{R}$, and
2. $\mathcal{L}(x + y) = \mathcal{L}(x) + \mathcal{L}(y)$ for every $x, y \in \mathbb{R}^n$.

If we fix the bases for $\mathbb{R}^n$ and $\mathbb{R}^m$, then the linear transformation $\mathcal{L}$ can be represented by a matrix.

Theorem 3.1: Suppose that $x$ is a given vector and $x'$ is the representation of $x$ with respect to the given basis $\{e_1, \ldots, e_n\}$ for $\mathbb{R}^n$. If $y = \mathcal{L}(x)$ and $y'$ is the representation of $y$ with respect to the given basis for $\mathbb{R}^m$, then $y' = A x'$, where the $j$th column of $A$ is the coordinate vector of $\mathcal{L}(e_j)$ with respect to the basis for $\mathbb{R}^m$; $A$ is called the matrix representation of $\mathcal{L}$.

Special case: with respect to the natural bases for $\mathbb{R}^n$ and $\mathbb{R}^m$, we have $y = A x$ directly.
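
As a small illustration (not from the slides; the map below is hypothetical), the matrix representation with respect to the natural bases can be read off column by column from the images of the basis vectors:

import numpy as np

# Hypothetical linear map L: R^2 -> R^2, L(x) = (x1 + 2*x2, 3*x1)
def L(x):
    return np.array([x[0] + 2 * x[1], 3 * x[0]])

# Matrix representation w.r.t. the natural bases: column j is L(e_j)
e = np.eye(2)
A = np.column_stack([L(e[:, j]) for j in range(2)])

x = np.array([1.0, -2.0])
assert np.allclose(A @ x, L(x))   # y = A x reproduces the transformation
print(A)                          # [[1. 2.], [3. 0.]]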

Linear Transformations

Let $\{e_1, \ldots, e_n\}$ and $\{e'_1, \ldots, e'_n\}$ be two bases for $\mathbb{R}^n$. Define the matrix
$T = [e'_1, \ldots, e'_n]^{-1} [e_1, \ldots, e_n]$;
that is, the $i$th column of $T$ is the vector of coordinates of $e_i$ with respect to the basis $\{e'_1, \ldots, e'_n\}$.

Given a vector $x \in \mathbb{R}^n$, let $x'$ be the coordinates of $x$ with respect to $\{e_1, \ldots, e_n\}$ and $x''$ the coordinates of the same vector with respect to $\{e'_1, \ldots, e'_n\}$. Then $x'' = T x'$.

Example (Finding a Transition Matrix)

Consider the bases $B = \{u_1, u_2\}$ and $B' = \{u'_1, u'_2\}$ for $\mathbb{R}^2$, where $u_1 = (1, 0)$, $u_2 = (0, 1)$; $u'_1 = (1, 1)$, $u'_2 = (2, 1)$.
(a) Find the transition matrix from $B'$ to $B$.
(b) Find $[v]_B$ if $[v]_{B'} = [-3 \;\; 5]^T$.

Solution: First we must find the coordinate vectors of the new basis vectors $u'_1$ and $u'_2$ relative to the old basis $B$. By inspection, $u'_1 = u_1 + u_2$ and $u'_2 = 2u_1 + u_2$, so that
$[u'_1]_B = \begin{bmatrix} 1 \\ 1 \end{bmatrix}$ and $[u'_2]_B = \begin{bmatrix} 2 \\ 1 \end{bmatrix}$.
Thus, the transition matrix from $B'$ to $B$ is
$P = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix}$.

Example (Finding a Transition Matrix)

Using the transition matrix yields
$[v]_B = P [v]_{B'} = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} -3 \\ 5 \end{bmatrix} = \begin{bmatrix} 7 \\ 2 \end{bmatrix}$.
As a check, we should be able to recover the vector $v$ either from $[v]_B$ or from $[v]_{B'}$:
$-3u'_1 + 5u'_2 = 7u_1 + 2u_2 = v = (7, 2)$.

Example (A Different Viewpoint)

$u_1 = (1, 0)$, $u_2 = (0, 1)$; $u'_1 = (1, 1)$, $u'_2 = (2, 1)$.

In the previous example, we found the transition matrix from the basis $B'$ to the basis $B$. However, we can just as well ask for the transition matrix from $B$ to $B'$. We simply change our point of view and regard $B'$ as the old basis and $B$ as the new basis. As usual, the columns of the transition matrix are the coordinates of the new basis vectors relative to the old basis. By inspection,
$u_1 = -u'_1 + u'_2$ and $u_2 = 2u'_1 - u'_2$,
so the transition matrix from $B$ to $B'$ is
$Q = \begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix}$.

Remarks

If we multiply the transition matrix from $B'$ to $B$ by the transition matrix from $B$ to $B'$, we find
$PQ = \begin{bmatrix} 1 & 2 \\ 1 & 1 \end{bmatrix} \begin{bmatrix} -1 & 2 \\ 1 & -1 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} = I$;
that is, the two transition matrices are inverses of each other.
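
A quick numerical check of the worked example (a sketch with numpy; solving a linear system is one way to obtain the coordinate columns):

import numpy as np

# Columns of U are u1, u2 (old basis B); columns of Up are u'1, u'2 (new basis B')
U  = np.array([[1.0, 0.0],
               [0.0, 1.0]])
Up = np.array([[1.0, 2.0],
               [1.0, 1.0]])

# Transition matrix from B' to B: column j solves U p_j = u'_j
P = np.linalg.solve(U, Up)       # [[1, 2], [1, 1]]
# Transition matrix from B to B': column j solves Up q_j = u_j
Q = np.linalg.solve(Up, U)       # [[-1, 2], [1, -1]]
print(P @ Q)                     # identity matrix, as noted in the Remarks

v_Bp = np.array([-3.0, 5.0])     # [v]_B'
v_B  = P @ v_Bp                  # [7, 2] = [v]_B
print(v_B, Up @ v_Bp)            # both recover v = (7, 2)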

Linear Transformations

Consider a linear transformation $\mathcal{L}: \mathbb{R}^n \to \mathbb{R}^n$, and let $A$ be its representation with respect to the basis $\{e_1, \ldots, e_n\}$ and $B$ its representation with respect to the basis $\{e'_1, \ldots, e'_n\}$. Let $y' = A x'$ and $y'' = B x''$ be the input-output relations in the two coordinate systems. Since $x'' = T x'$ and $y'' = T y'$, we have
$B x'' = y'' = T y' = T A x' = T A T^{-1} x''$,
and hence $B = T A T^{-1}$.

Two matrices $A$ and $B$ are similar if there exists a nonsingular matrix $T$ such that $B = T A T^{-1}$. In conclusion, similar matrices correspond to the same linear transformation with respect to different bases.
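
A sketch of the similarity relation, using a hypothetical matrix $A$ and new basis: $T$ maps old coordinates to new coordinates, and $B = T A T^{-1}$ acts on the new coordinates.

import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # representation in the old (natural) basis, hypothetical
E = np.array([[1.0, 1.0], [1.0, 2.0]])   # columns: new basis vectors e'_1, e'_2 (hypothetical)
T = np.linalg.inv(E)                     # old coordinates -> new coordinates
B = T @ A @ np.linalg.inv(T)             # representation of the same map in the new basis

x  = np.array([1.0, -1.0])               # coordinates w.r.t. the old basis
xp = T @ x                               # same vector, new coordinates
# y'' = B x'' agrees with transforming y = A x into the new basis
assert np.allclose(B @ xp, T @ (A @ x))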

Eigenvalues and Eigenvectors

Let $A$ be an $n \times n$ square matrix. A scalar $\lambda$ and a nonzero vector $v$ satisfying the equation $A v = \lambda v$ are said to be, respectively, an eigenvalue and an eigenvector of $A$.

The matrix $\lambda I - A$ must be singular; that is, $\det(\lambda I - A) = 0$. This leads to an $n$th-order polynomial equation
$\det(\lambda I - A) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1 \lambda + a_0 = 0$.
The polynomial $\det(\lambda I - A)$ is called the characteristic polynomial, and the equation above is called the characteristic equation.

Eigenvalues and Eigenvectors

Suppose that the characteristic equation $\det(\lambda I - A) = 0$ has $n$ distinct roots $\lambda_1, \ldots, \lambda_n$. Then there exist $n$ linearly independent vectors $v_1, \ldots, v_n$ such that $A v_i = \lambda_i v_i$, $i = 1, \ldots, n$.

Consider the basis formed by this linearly independent set of eigenvectors. With respect to this basis, the matrix $A$ is diagonal: letting $T = [v_1, \ldots, v_n]^{-1}$, we have
$T A T^{-1} = \mathrm{diag}(\lambda_1, \ldots, \lambda_n)$.
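
A numerical sketch of this diagonalization, using a hypothetical matrix with distinct eigenvalues:

import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # hypothetical matrix; eigenvalues 1 and 3
eigvals, V = np.linalg.eig(A)            # columns of V are eigenvectors v_i
T = np.linalg.inv(V)                     # change of basis to the eigenvector basis
D = T @ A @ V                            # = T A T^{-1}
print(np.round(D, 10))                   # diagonal matrix diag(eigvals)
assert np.allclose(D, np.diag(eigvals))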

Eigenvalues and Eigenvectors

A matrix $A$ is symmetric if $A = A^T$.

Theorem 3.2: All eigenvalues of a real symmetric matrix are real.

Theorem 3.3: Any real symmetric $n \times n$ matrix has a set of $n$ eigenvectors that are mutually orthogonal (i.e., such a matrix can be orthogonally diagonalized).

If $A$ is symmetric, then a set of its eigenvectors forms an orthogonal basis for $\mathbb{R}^n$. If the basis $\{v_1, \ldots, v_n\}$ is normalized so that each element has unit norm, then defining the matrix $T = [v_1, \ldots, v_n]$, we have $T^T T = I$, or equivalently $T^T = T^{-1}$. A matrix whose transpose is its inverse is said to be an orthogonal matrix.

Example

Find an orthogonal matrix $P$ that diagonalizes
$A = \begin{bmatrix} 4 & 2 & 2 \\ 2 & 4 & 2 \\ 2 & 2 & 4 \end{bmatrix}$.

Solution: The characteristic equation of $A$ is
$\det(\lambda I - A) = \det \begin{bmatrix} \lambda - 4 & -2 & -2 \\ -2 & \lambda - 4 & -2 \\ -2 & -2 & \lambda - 4 \end{bmatrix} = (\lambda - 2)^2 (\lambda - 8) = 0$.

A basis of the eigenspace corresponding to $\lambda = 2$ is
$u_1 = \begin{bmatrix} -1 \\ 1 \\ 0 \end{bmatrix}$ and $u_2 = \begin{bmatrix} -1 \\ 0 \\ 1 \end{bmatrix}$.
Applying the Gram-Schmidt process to $\{u_1, u_2\}$ yields the following orthonormal eigenvectors:
$v_1 = \begin{bmatrix} -1/\sqrt{2} \\ 1/\sqrt{2} \\ 0 \end{bmatrix}$ and $v_2 = \begin{bmatrix} -1/\sqrt{6} \\ -1/\sqrt{6} \\ 2/\sqrt{6} \end{bmatrix}$.

Example

A basis of the eigenspace corresponding to $\lambda = 8$ is
$u_3 = \begin{bmatrix} 1 \\ 1 \\ 1 \end{bmatrix}$.
Applying the Gram-Schmidt process (here, simply normalizing) to $\{u_3\}$ yields
$v_3 = \begin{bmatrix} 1/\sqrt{3} \\ 1/\sqrt{3} \\ 1/\sqrt{3} \end{bmatrix}$.
Thus,
$P = [v_1 \; v_2 \; v_3] = \begin{bmatrix} -1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3} \\ 1/\sqrt{2} & -1/\sqrt{6} & 1/\sqrt{3} \\ 0 & 2/\sqrt{6} & 1/\sqrt{3} \end{bmatrix}$
orthogonally diagonalizes $A$; that is, $P^T A P = \mathrm{diag}(2, 2, 8)$.
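
The example can be verified numerically (a quick sketch with numpy):

import numpy as np

A = np.array([[4.0, 2.0, 2.0],
              [2.0, 4.0, 2.0],
              [2.0, 2.0, 4.0]])

s2, s6, s3 = np.sqrt(2), np.sqrt(6), np.sqrt(3)
P = np.array([[-1/s2, -1/s6, 1/s3],
              [ 1/s2, -1/s6, 1/s3],
              [ 0.0,   2/s6, 1/s3]])

assert np.allclose(P.T @ P, np.eye(3))    # P is orthogonal
print(np.round(P.T @ A @ P, 10))          # diag(2, 2, 8)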

Orthogonal Projections

If $\mathcal{V}$ is a subspace of $\mathbb{R}^n$, then the orthogonal complement of $\mathcal{V}$, denoted $\mathcal{V}^\perp$, consists of all vectors that are orthogonal to every vector in $\mathcal{V}$; that is,
$\mathcal{V}^\perp = \{x \in \mathbb{R}^n : v^T x = 0 \text{ for all } v \in \mathcal{V}\}$.
The orthogonal complement of $\mathcal{V}$ is also a subspace.

Together, $\mathcal{V}$ and $\mathcal{V}^\perp$ span $\mathbb{R}^n$ in the sense that every vector $x \in \mathbb{R}^n$ can be represented uniquely as $x = x_1 + x_2$, where $x_1 \in \mathcal{V}$ and $x_2 \in \mathcal{V}^\perp$. This representation is the orthogonal decomposition of $x$ with respect to $\mathcal{V}$. We say that $x_1$ and $x_2$ are the orthogonal projections of $x$ onto the subspaces $\mathcal{V}$ and $\mathcal{V}^\perp$, respectively. We write $\mathbb{R}^n = \mathcal{V} \oplus \mathcal{V}^\perp$ and say that $\mathbb{R}^n$ is a direct sum of $\mathcal{V}$ and $\mathcal{V}^\perp$.

We say that a linear transformation $P$ is an orthogonal projector onto $\mathcal{V}$ if for all $x \in \mathbb{R}^n$ we have $P x \in \mathcal{V}$ and $x - P x \in \mathcal{V}^\perp$.

Orthogonal Projections

Let $A \in \mathbb{R}^{m \times n}$. The range (or image) of $A$ is denoted $\mathcal{R}(A) = \{A x : x \in \mathbb{R}^n\}$; it is the column space of $A$. The nullspace (or kernel) of $A$ is denoted $\mathcal{N}(A) = \{x \in \mathbb{R}^n : A x = 0\}$. Both $\mathcal{R}(A)$ and $\mathcal{N}(A)$ are subspaces.

Theorem 3.4: Let $A \in \mathbb{R}^{m \times n}$. Then $\mathcal{R}(A)^\perp = \mathcal{N}(A^T)$ and $\mathcal{N}(A)^\perp = \mathcal{R}(A^T)$. (Column space, row space, nullspace, and left nullspace are the four fundamental subspaces of linear algebra.)

If $P$ is an orthogonal projector onto $\mathcal{V}$, then for all $x \in \mathbb{R}^n$ we have $P x \in \mathcal{V}$ and $(I - P) x \in \mathcal{V}^\perp$, and $\mathcal{R}(P) = \mathcal{V}$.

Theorem 3.5: A matrix $P$ is an orthogonal projector (onto $\mathcal{R}(P)$) if and only if $P^2 = P = P^T$.
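
For a subspace $\mathcal{V} = \mathcal{R}(A)$ with $A$ of full column rank, one standard construction (not stated on the slide) is $P = A (A^T A)^{-1} A^T$; a sketch with a hypothetical $A$:

import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])                    # hypothetical full-column-rank matrix; V = R(A)
P = A @ np.linalg.inv(A.T @ A) @ A.T          # orthogonal projector onto R(A)

assert np.allclose(P @ P, P) and np.allclose(P, P.T)   # Theorem 3.5: P^2 = P = P^T

x  = np.array([1.0, 2.0, 3.0])
x1 = P @ x                                    # component in V
x2 = x - x1                                   # component in V-perp
assert np.allclose(A.T @ x2, 0)               # x2 is orthogonal to every column of A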

Quadratic Forms

A quadratic form $f: \mathbb{R}^n \to \mathbb{R}$ is a function
$f(x) = x^T Q x$,
where $Q$ is an $n \times n$ real matrix. There is no loss of generality in assuming $Q$ to be symmetric, that is, $Q = Q^T$.

Quadratic Forms

For if the matrix $Q$ is not symmetric, we can always replace it with the symmetric matrix
$Q_0 = Q_0^T = \tfrac{1}{2}(Q + Q^T)$,
since $x^T Q x = x^T Q_0 x$ for all $x$.

A quadratic form $x^T Q x$, $Q = Q^T$, is said to be positive definite if $x^T Q x > 0$ for all nonzero vectors $x$. It is positive semidefinite if $x^T Q x \geq 0$ for all $x$. Similarly, we define the quadratic form to be negative definite or negative semidefinite if $x^T Q x < 0$ for all nonzero $x$ or $x^T Q x \leq 0$ for all $x$, respectively.
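
A quick check (sketch, with a randomly generated matrix) that replacing $Q$ by its symmetric part leaves the quadratic form unchanged:

import numpy as np

rng = np.random.default_rng(0)
Q  = rng.standard_normal((4, 4))   # generally nonsymmetric
Q0 = 0.5 * (Q + Q.T)               # symmetric part

x = rng.standard_normal(4)
# x^T Q x depends only on the symmetric part of Q
assert np.isclose(x @ Q @ x, x @ Q0 @ x)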

Quadratic Forms

The principal minors of an $n \times n$ matrix $Q$ are $\det Q$ itself and the determinants of the submatrices obtained by successively removing an $i$th row together with the corresponding $i$th column. The leading principal minors are $\det Q$ and the minors obtained by successively removing the last row and the last column:
$\Delta_1 = q_{11}, \quad \Delta_2 = \det \begin{bmatrix} q_{11} & q_{12} \\ q_{21} & q_{22} \end{bmatrix}, \quad \ldots, \quad \Delta_n = \det Q$.

Quadratic Forms

Theorem 3.6 (Sylvester's Criterion): A quadratic form $x^T Q x$, $Q = Q^T$, is positive definite if and only if the leading principal minors of $Q$ are all positive.

Note that if $Q$ is not symmetric, Sylvester's criterion cannot be used directly; it must be applied to the symmetrized matrix $\tfrac{1}{2}(Q + Q^T)$.

A necessary condition for a real quadratic form to be positive semidefinite is that the leading principal minors be nonnegative. However, this is not a sufficient condition. In fact, a real quadratic form is positive semidefinite if and only if all principal minors are nonnegative.
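
Sylvester's criterion can be sketched in a few lines (assuming $Q$ is symmetric; the test matrix below is illustrative):

import numpy as np

def is_positive_definite_sylvester(Q, tol=1e-12):
    """Check positive definiteness of a symmetric matrix via its leading principal minors."""
    Q = np.asarray(Q, dtype=float)
    n = Q.shape[0]
    # Delta_k = det of the top-left k-by-k submatrix, k = 1, ..., n
    return all(np.linalg.det(Q[:k, :k]) > tol for k in range(1, n + 1))

Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
print(is_positive_definite_sylvester(Q))   # True: the leading minors are 2, 3, 4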

Quadratic Forms

A symmetric matrix $Q$ is said to be positive definite if the quadratic form $x^T Q x$ is positive definite. If $Q$ is positive definite, we write $Q > 0$. The positive semidefinite ($Q \geq 0$), negative definite ($Q < 0$), and negative semidefinite ($Q \leq 0$) properties are defined similarly. The symmetric matrix $Q$ is indefinite if it is neither positive semidefinite nor negative semidefinite.

Theorem 3.7: A symmetric matrix $Q$ is positive definite (or positive semidefinite) if and only if all eigenvalues of $Q$ are positive (or nonnegative).
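
Theorem 3.7 gives an equivalent eigenvalue test; a sketch using the same tridiagonal matrix as in the previous snippet:

import numpy as np

Q = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
eigvals = np.linalg.eigvalsh(Q)        # eigenvalues of a symmetric matrix, in ascending order
print(eigvals)                         # all positive, so Q > 0
print(bool(np.all(eigvals > 0)))       # True, agreeing with Sylvester's criterion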

Matrix Norms

The norm of a matrix $A$, denoted by $\|A\|$, is any function $\|\cdot\|$ that satisfies the following conditions:
1. $\|A\| > 0$ if $A \neq O$, and $\|O\| = 0$, where $O$ is a matrix with all entries equal to zero;
2. $\|cA\| = |c| \, \|A\|$ for any scalar $c \in \mathbb{R}$;
3. $\|A + B\| \leq \|A\| + \|B\|$.

An example of a matrix norm is the Frobenius norm, defined as
$\|A\|_F = \left( \sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}^2 \right)^{1/2}$, for $A \in \mathbb{R}^{m \times n}$.
Note that the Frobenius norm is equivalent to the Euclidean norm of $A$ regarded as a vector in $\mathbb{R}^{mn}$.

For our purposes, we consider only matrix norms satisfying the additional condition
$\|AB\| \leq \|A\| \, \|B\|$.
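
The Frobenius norm is just the Euclidean norm of the matrix flattened into a vector of $\mathbb{R}^{mn}$; a sketch with a hypothetical matrix:

import numpy as np

A = np.array([[1.0, -2.0], [3.0, 0.5]])
fro = np.sqrt(np.sum(A**2))                        # definition from the slide
assert np.isclose(fro, np.linalg.norm(A, 'fro'))   # numpy's Frobenius norm
assert np.isclose(fro, np.linalg.norm(A.ravel()))  # Euclidean norm on R^{mn}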

Matrix Norms

In many problems, both matrices and vectors appear simultaneously. Therefore, it is convenient to construct the matrix norm in such a way that it is related to vector norms. To this end we consider a special class of matrix norms, called induced norms.

Let $\|\cdot\|_{(n)}$ and $\|\cdot\|_{(m)}$ be vector norms on $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. We say that the matrix norm is induced by, or is compatible with, the given vector norms if for any matrix $A \in \mathbb{R}^{m \times n}$ and any vector $x \in \mathbb{R}^n$, the following inequality is satisfied:
$\|A x\|_{(m)} \leq \|A\| \, \|x\|_{(n)}$.

Matrix Norms

We can define an induced matrix norm as
$\|A\| = \max_{\|x\|_{(n)} = 1} \|A x\|_{(m)}$;
that is, $\|A\|$ is the maximum of the norms of the vectors $A x$, where the vector $x$ runs over the set of all vectors with unit norm. We omit the subscripts in what follows.

For each matrix $A$ the maximum is attainable; that is, a vector $x_0$ exists such that $\|x_0\| = 1$ and $\|A x_0\| = \|A\|$.

Matrix Norms

Theorem 3.8: Let $\|x\| = \sqrt{x^T x}$ be the Euclidean norm. The matrix norm induced by this vector norm is
$\|A\| = \sqrt{\lambda_1}$,
where $\lambda_1$ is the largest eigenvalue of the matrix $A^T A$.

Rayleigh's Inequality: If an $n \times n$ matrix $P$ is real symmetric positive definite, then
$\lambda_{\min}(P) \, \|x\|^2 \leq x^T P x \leq \lambda_{\max}(P) \, \|x\|^2$,
where $\lambda_{\min}(P)$ denotes the smallest eigenvalue of $P$ and $\lambda_{\max}(P)$ denotes the largest eigenvalue of $P$.
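
Both Theorem 3.8 and Rayleigh's inequality are easy to check numerically; a sketch with a randomly generated $A$ and a hypothetical positive definite $P$:

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# Theorem 3.8: the induced Euclidean norm is sqrt(lambda_max(A^T A))
induced = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))
assert np.isclose(induced, np.linalg.norm(A, 2))        # numpy's spectral norm

# Rayleigh's inequality for a symmetric positive definite P
P = A.T @ A + np.eye(3)                                 # hypothetical P > 0
lam = np.linalg.eigvalsh(P)                             # ascending eigenvalues
x = rng.standard_normal(3)
quad = x @ P @ x
assert lam[0] * (x @ x) <= quad <= lam[-1] * (x @ x)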

Matrix Norms

Example: Consider a symmetric matrix $A \in \mathbb{R}^{2 \times 2}$ and let the norm on $\mathbb{R}^2$ be the Euclidean norm $\|x\| = \sqrt{x_1^2 + x_2^2}$. Then $A^T A = A^2$, whose largest eigenvalue is $\lambda_1(A^T A) = (\max_i |\lambda_i(A)|)^2$, and thus $\|A\| = \sqrt{\lambda_1(A^T A)} = \max_i |\lambda_i(A)|$. The eigenvector $x_1$ of $A^T A$ corresponding to $\lambda_1$ attains the maximum in the definition of the induced norm: $\|A x_1\| = \|A\| \, \|x_1\|$. Note that the equality $\|A\| = \max_i |\lambda_i(A)|$ holds because $A$ is symmetric in this example. However, in general $\|A\| \neq \max_i |\lambda_i(A)|$; indeed, we have only $\max_i |\lambda_i(A)| \leq \|A\|$.

Example

Consider a nonzero matrix $A$ whose only eigenvalue is $0$, for instance the nilpotent matrix
$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$.
Then $\max_i |\lambda_i(A)| = 0$, yet $\|A\| = \sqrt{\lambda_{\max}(A^T A)} = 1 > 0$. Thus, for a nonsymmetric matrix, the induced norm can strictly exceed the largest eigenvalue magnitude.
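
A sketch verifying this gap between the spectral radius and the induced norm (the nilpotent matrix above is illustrative):

import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
eigvals = np.linalg.eigvals(A)
print(np.max(np.abs(eigvals)))   # 0.0: zero is the only eigenvalue
print(np.linalg.norm(A, 2))      # 1.0: yet the induced Euclidean norm is positive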