Math 520 Exam 2 Topic Outline, Sections 1-3 (Xiao/Dumas/Liaw), Spring 2008

Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan.

What will be covered

The exam will cover material from the lectures up to and including Friday, April 4. This corresponds approximately to chapters 1-5 and section 6.1 in the textbook. The exam will focus on material covered after the first midterm, but some reliance on the foundational material from the first part of the course is unavoidable.

Outline of topics covered after Exam 1

This outline is designed to help you plan your review for the second exam. It is not intended to cover every detail.

(1) Bases and Dimensions of the Four Subspaces (§3.5-3.6)

Let $A$ be an $m \times n$ matrix of rank $r$.

(a) The row space $C(A^T)$ and the null space $N(A)$ are subspaces of $\mathbb{R}^n$.
(b) The column space $C(A)$ and the left null space $N(A^T)$ are subspaces of $\mathbb{R}^m$.
(c) To find bases for the four subspaces, start by reducing $[A\ I]$ to $[R\ E]$, where $R$ is the reduced row echelon form of $A$ and $EA = R$. (A computational sketch of this recipe follows this topic.)
(d) The row spaces of $A$ and $R$ are the same, $C(A^T) = C(R^T)$; a basis is given by the $r$ nonzero rows of $R$, so $\dim C(A^T) = r$.
(e) The null spaces of $A$ and $R$ are the same, $N(A) = N(R)$; a basis is given by the $n - r$ special solutions, so $\dim N(A) = n - r$.
(f) The $r$ pivot columns of $A$ give a basis for the column space $C(A)$, so $\dim C(A) = r$. (The column spaces of $A$ and $R$ are different, but they have the same dimension.)
(g) The last $m - r$ rows of $E$ give a basis for the left null space $N(A^T)$, so $\dim N(A^T) = m - r$. (The left null spaces of $A$ and $R$ are different, but they have the same dimension.)
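To make the $[A\ I] \to [R\ E]$ recipe in (1)(c) concrete, here is a minimal sketch using SymPy. (The example matrix, and the use of SymPy itself, are illustrative choices, not part of the outline.)

    # Bases for the four subspaces of an example matrix, via [A I] -> [R E].
    from sympy import Matrix, eye

    A = Matrix([[1, 2, 3],
                [2, 4, 6]])                 # example 2x3 matrix of rank 1
    m, n = A.shape

    # Row reduce [A I]; the left block is R = rref(A), the right block is E.
    aug, _ = A.row_join(eye(m)).rref()
    R, E = aug[:, :n], aug[:, n:]
    assert E * A == R                       # EA = R

    pivots = A.rref()[1]                    # pivot column indices
    r = len(pivots)

    row_space_basis  = [R[i, :] for i in range(r)]     # nonzero rows of R
    null_space_basis = A.nullspace()                   # the n-r special solutions
    col_space_basis  = [A[:, j] for j in pivots]       # pivot columns of A
    left_null_basis  = [E[i, :] for i in range(r, m)]  # last m-r rows of E

Here $\dim C(A^T) = \dim C(A) = 1$, $\dim N(A) = 2$, and $\dim N(A^T) = 1$, matching items (d)-(g).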

(2) Orthogonality of Vectors and Subspaces (§4.1)

(a) The product $x^Ty$ is a real number, sometimes called the dot product of $x$ and $y$.
(b) Vectors $x$ and $y$ are orthogonal if $x^Ty = 0$. In $\mathbb{R}^2$ and $\mathbb{R}^3$, this means the vectors are perpendicular in the usual sense.
(c) Alternately, $x$ and $y$ are orthogonal if and only if $\|x\|^2 + \|y\|^2 = \|x + y\|^2$, where $\|x\| = \sqrt{x^Tx}$.
(d) If $x$ is orthogonal to each of the vectors $y_1, \dots, y_m$, then $x$ is orthogonal to every vector in the span of $y_1, \dots, y_m$.
(e) Subspaces $V$ and $W$ of $\mathbb{R}^n$ are orthogonal if every vector in $V$ is orthogonal to every vector in $W$. Orthogonal subspaces have no vectors in common except $0$.
(f) The orthogonal complement $V^\perp$ of a subspace $V$ of $\mathbb{R}^n$ is the set of all vectors orthogonal to $V$. It is a subspace.
(g) We say $V$ and $W$ are orthogonal complements if $V = W^\perp$ (equivalently, $W = V^\perp$). If $V$ and $W$ are subspaces of $\mathbb{R}^n$ and are orthogonal complements, then their dimensions add up to $n$.

(3) Orthogonality of the Four Subspaces (§4.1)

(a) The null space $N(A)$ and row space $C(A^T)$ are orthogonal complements; this comes from the equation $Ax = 0$ satisfied by each vector $x \in N(A)$. (A numerical check of this fact appears below.)
(b) The left null space $N(A^T)$ and the column space $C(A)$ are also orthogonal complements. (This can be derived from the fact about the null space and row space, applied to $A^T$.)
(c) To find a basis for a subspace $V$ of $\mathbb{R}^d$ or its orthogonal complement $V^\perp$, it is best to find a matrix $A$ that has $V$ as one of its four subspaces. For example:
- To find a basis for a subspace $V$, you can try to find a matrix $A$ with $V$ as its column space, and then take the pivot columns of $A$. You can also find a basis for the left null space of $A$, which is $C(A)^\perp = V^\perp$.
- Alternately, you can find a matrix $B$ with $V$ as its null space; then the special solutions of $B$ give a basis for $V$. You can also find a basis for the row space of $B$, which is $N(B)^\perp = V^\perp$.
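The orthogonality statements in (3)(a)-(b) are easy to spot-check numerically. A small sketch with NumPy and SciPy (the matrix is an arbitrary example of mine):

    # Check that N(A) is orthogonal to the row space C(A^T).
    import numpy as np
    from scipy.linalg import null_space

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.]])

    N = null_space(A)                    # columns: orthonormal basis of N(A)
    print(np.allclose(A @ N, 0))         # every row of A is orthogonal to N(A)

    # The dimensions add up: dim C(A^T) + dim N(A) = n.
    r = np.linalg.matrix_rank(A)
    print(r + N.shape[1] == A.shape[1])  # True

Applying the same check to A.T verifies that $N(A^T)$ is orthogonal to $C(A)$.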

(4) Projections (§4.2)

(a) The projection of a vector $b$ onto the line spanned by $a$ is $p = a\,\dfrac{a^Tb}{a^Ta} = a\,\dfrac{a^Tb}{\|a\|^2}$.
(b) The projection matrix $P = \dfrac{aa^T}{\|a\|^2}$ multiplies $b$ to produce $p$. The matrix $P$ has rank one.
(c) To project $b$ onto the column space of a matrix $A$ (which should have independent columns), we need to find $p = A\hat{x} \in C(A)$ so that $e = b - p$ is orthogonal to $C(A)$. This leads to the equation $A^TA\hat{x} = A^Tb$.
(d) If $A$ has independent columns, then $A^TA$ is invertible.
(e) The matrix $P$ that projects onto $C(A)$ is given by the formula $P = A(A^TA)^{-1}A^T$, so the projection of $b$ onto $C(A)$ is $Pb$. (A sketch of this computation follows this topic.)
(f) Extreme cases: if $P$ is the projection matrix onto $V$, then $Pb = b$ if $b \in V$ and $Pb = 0$ if $b \in V^\perp$.
(g) Projection matrices are symmetric ($P = P^T$) and satisfy $P^2 = P$ (because projecting twice is the same as projecting once).
(h) If $P$ is the projection onto $V$, then $I - P$ is the projection onto $V^\perp$.
(i) If $V$ is a subspace of $\mathbb{R}^n$, then every vector $b \in \mathbb{R}^n$ can be expressed as a sum $b = b_V + b_{V^\perp}$ where $b_V \in V$ and $b_{V^\perp} \in V^\perp$. To find these vectors, let $P$ be the projection matrix for $V$; then $b_V = Pb$ and $b_{V^\perp} = (I - P)b$.
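Here is a short NumPy sketch of (4)(c)-(h): project $b$ onto $C(A)$ by solving the normal equations, then verify the projection-matrix identities. (The matrix and vector are a standard small example, not prescribed by the outline.)

    import numpy as np

    A = np.array([[1., 0.],
                  [1., 1.],
                  [1., 2.]])              # independent columns
    b = np.array([6., 0., 0.])

    # Solve A^T A xhat = A^T b; then p = A xhat and e = b - p.
    xhat = np.linalg.solve(A.T @ A, A.T @ b)
    p = A @ xhat
    e = b - p
    print(np.allclose(A.T @ e, 0))        # e is orthogonal to C(A)

    P = A @ np.linalg.inv(A.T @ A) @ A.T  # projection matrix onto C(A)
    print(np.allclose(P, P.T))            # symmetric
    print(np.allclose(P @ P, P))          # projecting twice = projecting once
    print(np.allclose(P @ b, p))          # P b is the projection
    print(np.allclose((np.eye(3) - P) @ b, e))  # I - P projects onto C(A)-perp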

(5) Least Squares Approximation, Fitting Problems (§4.3)

(a) The best way to understand this application of orthogonality and projections is to work through an example, such as the one Strang describes on pp. 206-209.
(b) Fitting a line (or other model curve) with $n$ parameters (e.g. slope $D$ and intercept $C$) to a set of $m$ data points $(b_1, \dots, b_m)$ gives $m$ equations in $n$ variables, i.e. $Ax = b$ where $A$ is $m \times n$.
(c) Typically, $m$ is much larger than $n$, so the system has no solution for most $b$.
(d) The normal equations $A^TA\hat{x} = A^Tb$ replace the original system with $n$ equations in $n$ variables. If $A$ has independent columns, then the normal equations have a unique solution.
(e) The solution $\hat{x}$ to the normal equations is the best approximate solution to $Ax = b$, i.e. it makes $E = \|Ax - b\|^2$ as small as possible. The number $E$ is the sum of the squares of the individual errors, so this kind of approximate solution is called least squares. (A worked fitting sketch appears after topic (6).)

(6) Orthonormal Sets and Orthogonal Matrices (§4.4)

(a) Vectors $q_1, \dots, q_n$ are orthonormal if they are mutually orthogonal and each has unit length. Equivalently, $q_i^Tq_j = 0$ when $i$ and $j$ are different, and $q_i^Tq_i = 1$.
(b) Orthonormal vectors are linearly independent.
(c) Vectors $q_1, \dots, q_n \in \mathbb{R}^m$ are orthonormal if and only if $Q^TQ = I$, where $Q$ is the $m \times n$ matrix with columns $q_1, \dots, q_n$.
(d) Notation: $Q$ always represents a matrix with orthonormal columns.
(e) If $Q$ is square, it is called an orthogonal matrix. In this case $Q^T = Q^{-1}$.
(f) Then the vectors $x$ and $Qx$ have the same length: $\|Qx\| = \|x\|$.
(g) The projection onto $C(Q)$ has a particularly simple form: $P = QQ^T$. (If $Q$ is square, then $P = I$ because the columns span $\mathbb{R}^n$.) These facts are spot-checked in the second sketch below.
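The following sketch fits a line $b = C + Dt$ to four invented data points via the normal equations from (5)(d), and confirms the answer against NumPy's built-in least-squares routine. (The data are mine, chosen only for illustration.)

    import numpy as np

    t = np.array([0., 1., 2., 3.])
    b = np.array([1., 9., 9., 21.])

    A = np.column_stack([np.ones_like(t), t])    # columns: [1, t]; A is 4x2

    # Normal equations: A^T A xhat = A^T b.
    xhat = np.linalg.solve(A.T @ A, A.T @ b)
    C, D = xhat                                  # intercept and slope

    # The library routine minimizes the same quantity E = ||Ax - b||^2.
    xhat2, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(np.allclose(xhat, xhat2))              # True

    E = np.sum((A @ xhat - b) ** 2)              # minimized sum of squared errors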

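And a quick numerical check of the $Q$ facts in (6). (The random matrices are placeholders; np.linalg.qr is used only as a convenient source of orthonormal columns.)

    import numpy as np

    Q, _ = np.linalg.qr(np.random.rand(4, 2))   # 4x2 with orthonormal columns
    print(np.allclose(Q.T @ Q, np.eye(2)))      # Q^T Q = I

    P = Q @ Q.T                                 # projection onto C(Q)
    print(np.allclose(P @ P, P))                # P^2 = P

    Qs, _ = np.linalg.qr(np.random.rand(3, 3))  # square orthogonal matrix
    x = np.random.rand(3)
    print(np.allclose(np.linalg.norm(Qs @ x), np.linalg.norm(x)))  # same length
    print(np.allclose(Qs.T, np.linalg.inv(Qs)))                    # Q^T = Q^{-1}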
(7) The Gram-Schmidt Algorithm (§4.4)

(a) The Gram-Schmidt algorithm starts with independent vectors $a_1, \dots, a_n$ and produces orthonormal vectors $q_1, \dots, q_n$ that span the same space.
(b) Gram-Schmidt is inductive; we explain how to do it for one vector, and then how to do it for $n$ vectors if we already know how to do it for $n - 1$ vectors.
(c) For one vector $a_1$, Gram-Schmidt gives $q_1 = a_1/\|a_1\|$.
(d) To apply Gram-Schmidt to $n$ vectors $a_1, \dots, a_n$, first apply it to the $n - 1$ vectors $a_1, \dots, a_{n-1}$ to obtain $q_1, \dots, q_{n-1}$. Let $V_{n-1}$ be the span of $a_1, \dots, a_{n-1}$, which is the same as the span of $q_1, \dots, q_{n-1}$. Then $q_n$ is given by the formula:
$$A_n = a_n - (\text{projection of } a_n \text{ onto } V_{n-1}) = a_n - (q_1^Ta_n)q_1 - (q_2^Ta_n)q_2 - \cdots - (q_{n-1}^Ta_n)q_{n-1}, \qquad q_n = \frac{A_n}{\|A_n\|}$$
(e) Applying Gram-Schmidt to the columns of an $m \times n$ matrix $A$ (which we assume are independent) produces an $m \times n$ matrix $Q$ with orthonormal columns. (An implementation sketch follows topic (8).)
(f) The corresponding matrix factorization is $A = QR$ where $R = Q^TA$ is an $n \times n$ matrix. The entries of $R$ are $R_{ij} = q_i^Ta_j$.
(g) The matrix $R$ is upper triangular because later $q$'s are orthogonal to earlier $a$'s.

(8) Properties of the Determinant (§5.1)

(a) $\det(A) = |A| =$ the determinant of $A$ (an $n \times n$ matrix).
(b) Three essential properties uniquely define the determinant:
(i) $|I| = 1$.
(ii) The determinant changes sign when two rows are exchanged.
(iii) The determinant is a linear function of any single row. (This involves a scalar multiplication condition and a row addition condition.)
(c) Some additional properties of the determinant follow from the three above:
- If $A$ has two rows that are equal, then $|A| = 0$.
- Subtracting a multiple of one row from another leaves $|A|$ unchanged.
- If $A$ has a row of zeros, then $|A| = 0$.
- If $A$ is triangular, then $|A| = a_{11}a_{22}\cdots a_{nn}$ (the product of the diagonal entries).
- $|A| = 0$ if and only if $A$ is singular.
- $|AB| = |A||B|$ (this is amazing!)
- $|A^T| = |A|$ (so we can substitute "column" for "row" in any other property).
(d) A permutation matrix $P$ has determinant $\pm 1$. If $|P| = 1$ then $P$ is even; otherwise $P$ is odd.
(e) It follows from these properties that $|A|$ is $\pm$ the product of the pivots if $A$ is invertible (i.e. has $n$ pivots). The sign is determined by the row exchanges: it is $+$ if the permutation of the rows is even, and $-$ if the permutation of the rows is odd. (These properties are spot-checked in a sketch after the Gram-Schmidt code below.)
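Here is a sketch of classical Gram-Schmidt as described in (7), producing $Q$ and $R$ with $A = QR$. (The function name and example matrix are mine; the columns of $A$ are assumed independent.)

    import numpy as np

    def gram_schmidt(A):
        """Return Q (orthonormal columns) and upper-triangular R with A = QR."""
        m, n = A.shape
        Q = np.zeros((m, n))
        R = np.zeros((n, n))
        for j in range(n):
            v = A[:, j].copy()
            for i in range(j):                # subtract projections onto earlier q's
                R[i, j] = Q[:, i] @ A[:, j]   # R_ij = q_i^T a_j
                v -= R[i, j] * Q[:, i]
            R[j, j] = np.linalg.norm(v)       # ||A_j||
            Q[:, j] = v / R[j, j]             # q_j = A_j / ||A_j||
        return Q, R

    A = np.array([[1., 1.],
                  [1., 0.],
                  [0., 1.]])
    Q, R = gram_schmidt(A)
    print(np.allclose(Q.T @ Q, np.eye(2)))    # orthonormal columns
    print(np.allclose(Q @ R, A))              # A = QR
    print(np.allclose(R, np.triu(R)))         # R is upper triangular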

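The determinant properties in (8) can be spot-checked numerically as well (random example matrices; scipy.linalg.lu supplies the pivots):

    import numpy as np
    from scipy.linalg import lu

    A = np.random.rand(4, 4)
    B = np.random.rand(4, 4)

    print(np.allclose(np.linalg.det(A @ B),
                      np.linalg.det(A) * np.linalg.det(B)))   # |AB| = |A||B|
    print(np.allclose(np.linalg.det(A.T), np.linalg.det(A)))  # |A^T| = |A|

    # Exchanging two rows flips the sign.
    A_swapped = A[[1, 0, 2, 3], :]
    print(np.allclose(np.linalg.det(A_swapped), -np.linalg.det(A)))

    # |A| = +/- (product of the pivots): lu returns A = P L U, and the
    # pivots sit on the diagonal of U.
    P, L, U = lu(A)
    print(np.allclose(abs(np.linalg.det(A)), abs(np.prod(np.diag(U)))))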
(9) Formulas for the Determinant (§5.2)

(a) The big formula for the determinant expresses it as a sum of $n!$ terms, one for each permutation of $(1, \dots, n)$:
$$|A| = \sum_{P} (\det P)\, a_{1\alpha}a_{2\beta}\cdots a_{n\omega}, \quad \text{summed over all permutations } P = (\alpha, \beta, \dots, \omega) \text{ of } (1, \dots, n)$$
(b) In the big formula, we have one term for each way to select a single entry of $A$ from each row and each column. The term includes a $+$ or $-$ sign depending on whether the corresponding permutation of the columns is even or odd, respectively.
(c) The minor matrix $M_{ij}$ has size $(n-1) \times (n-1)$, and is obtained from $A$ by removing row $i$ and column $j$.
(d) The cofactor $C_{ij}$ is the determinant of $M_{ij}$, up to sign: $C_{ij} = (-1)^{i+j}|M_{ij}|$.
(e) The signs of the cofactors follow a checkerboard pattern:
$$\begin{pmatrix} + & - & + & - & \cdots \\ - & + & - & + & \cdots \\ + & - & + & - & \cdots \\ \vdots & \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
(f) Choose a row $i$ of $A$; then we get the cofactor formula for $|A|$ using that row:
$$|A| = a_{i1}C_{i1} + a_{i2}C_{i2} + \cdots + a_{in}C_{in}$$
This is essentially the big formula, but we have separated the terms according to which entry of row $i$ is present.
(g) A similar cofactor formula works for a column.
(h) The cofactor formula expresses an $n \times n$ determinant as a sum of $(n-1) \times (n-1)$ determinants. (It is inductive.)

(10) Applications of the Determinant (§5.3)

(a) Cofactors give us a formula for the entries of $A^{-1}$: $(A^{-1})_{ij} = \frac{1}{|A|}C_{ji}$. In other words, $A^{-1} = \frac{1}{|A|}C^T$, where $C$ is the matrix of cofactors.
(b) The solution to $Ax = b$ is $x = A^{-1}b$, when $A$ is invertible.
(c) Using the formula for $A^{-1}$, we can write the solution to $Ax = b$ as $x_j = \frac{|B_j|}{|A|}$, where $B_j$ is the matrix obtained by replacing column $j$ of $A$ with $b$. This is Cramer's rule. (Cramer's rule and the eigenvalue identities below are spot-checked in the final sketch.)
(d) Cramer's rule is computationally inefficient.
(e) The determinant of $A$ is the volume of a box (parallelepiped) in $\mathbb{R}^n$ formed from the rows of $A$.
(f) For $n = 2$, the volume interpretation means that the area of a parallelogram with vertices $(0,0)$, $(x_1, y_1)$, $(x_2, y_2)$, and $(x_1 + x_2, y_1 + y_2)$ is the determinant:
$$\text{area} = \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \end{vmatrix}$$
(g) This leads to a nice formula for the area of a triangle in the plane with vertices $(x_1, y_1)$, $(x_2, y_2)$, $(x_3, y_3)$:
$$\text{area} = \frac{1}{2}\begin{vmatrix} x_1 & y_1 & 1 \\ x_2 & y_2 & 1 \\ x_3 & y_3 & 1 \end{vmatrix}$$

(11) Eigenvalues and Eigenvectors (§6.1)

(a) Eigenvalues and eigenvectors are only defined for square matrices, because $x$ and $Ax$ must be vectors of the same size.
(b) We say $x$ is an eigenvector of an $n \times n$ matrix $A$ with eigenvalue $\lambda$ if $Ax = \lambda x$ and $x \neq 0$.
(c) Thus $x$ is an eigenvector if it and $Ax$ have the same direction, i.e. one is a multiple of the other.
(d) Both $x$ and $\lambda$ are unknowns in the equation $Ax = \lambda x$; it is not a linear system like $Ax = b$.
(e) $Ax = \lambda x$ is equivalent to $x \in N(A - \lambda I)$, so $\lambda$ is an eigenvalue of $A$ when $A - \lambda I$ is singular.
(f) $0$ is an eigenvalue of $A$ if and only if $A$ is singular; eigenvectors with eigenvalue $0$ are vectors in $N(A)$.
(g) The roots of the characteristic equation $\det(A - \lambda I) = 0$ are the eigenvalues of $A$, because these are the values of $\lambda$ for which $A - \lambda I$ is singular.
(h) $\det(A - \lambda I)$ is a polynomial of degree $n$ in $\lambda$ (the characteristic polynomial), so there are $n$ eigenvalues, counted with multiplicity.
(i) To find linearly independent eigenvectors with eigenvalue $\lambda$, compute a basis for the null space of $A - \lambda I$. (Therefore, once you know the eigenvalues, it is easy to find the associated eigenvectors.)
(j) The sum of the $n$ eigenvalues of $A$ is equal to the trace of $A$ (the sum of its diagonal entries): $\mathrm{tr}(A) = \sum_i a_{ii} = \sum_i \lambda_i$.
(k) The product of the $n$ eigenvalues of $A$ is equal to the determinant of $A$: $\det(A) = \lambda_1\lambda_2\cdots\lambda_n$.
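Finally, a small sketch checking Cramer's rule from (10)(c) and the eigenvalue identities from (11)(b), (j), and (k). (The matrices are small examples of my own.)

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])
    b = np.array([3., 5.])

    # Cramer's rule: x_j = |B_j| / |A|, with column j of A replaced by b.
    x = np.empty(2)
    for j in range(2):
        Bj = A.copy()
        Bj[:, j] = b
        x[j] = np.linalg.det(Bj) / np.linalg.det(A)
    print(np.allclose(A @ x, b))                       # solves Ax = b

    # Eigenvalues: Ax = lambda x; sum = trace, product = determinant.
    lams, vecs = np.linalg.eig(A)
    for lam, v in zip(lams, vecs.T):                   # eigenvectors are columns
        print(np.allclose(A @ v, lam * v))
    print(np.allclose(lams.sum(), np.trace(A)))        # tr(A) = sum of lambda_i
    print(np.allclose(lams.prod(), np.linalg.det(A)))  # det(A) = product of lambda_i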