Math 314 Topics for second exam
Technically, everything covered by the first exam, plus:

Chapter 2 §6: Determinants

(Square) matrices come in two flavors: invertible (all Ax = b have a solution) and non-invertible (Ax = 0 has a non-trivial solution). It is an amazing fact that one number identifies this difference: the determinant of A.

For 2×2 matrices A = [a b; c d], this number is det(A) = ad − bc; if det(A) ≠ 0, A is invertible; if det(A) = 0, A is non-invertible (= singular).

For larger matrices there is a similar (but more complicated) formula. Let A be an n×n matrix, and let M_{ij}(A) = the matrix obtained by removing the ith row and jth column of A. Then
det(A) = Σ_{i=1}^{n} (−1)^{i+1} a_{i1} det(M_{i1}(A))
(this is called expanding along the first column).

Amazing properties:
If A is upper triangular, then det(A) = the product of the entries on the diagonal.
If you multiply a row of A by c to get B, then det(B) = c det(A).
If you add a multiple of one row of A to another to get B, then det(B) = det(A).
If you switch a pair of rows of A to get B, then det(B) = −det(A).

In other words, we can understand exactly how each elementary row operation affects the determinant. In particular, A is invertible iff det(A) ≠ 0; and in fact we can use row operations to calculate det(A) (since the RREF of a matrix is upper triangular).

More interesting facts:
det(AB) = det(A) det(B); det(A^T) = det(A)
We can expand along columns other than the first: det(A) = Σ_{i=1}^{n} (−1)^{i+j} a_{ij} det(M_{ij}(A)) (expanding along the jth column)
And since det(A^T) = det(A), we can expand along rows as well...

A formula for the inverse of a matrix: if we define A_c to be the matrix whose (i,j)th entry is (−1)^{i+j} det(M_{ij}(A)), then A_c^T A = (det A) I (A_c^T is called the adjoint of A). So if det(A) ≠ 0, we can write the inverse of A as
A^{−1} = (1/det(A)) A_c^T
(This is very handy for 2×2 matrices...)

The same approach allows us to write an explicit formula for the solution to Ax = b when A is invertible: if we write B_i = A with its ith column replaced by b, then the (unique) solution to Ax = b has ith coordinate equal to det(B_i)/det(A).
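To make the expansion concrete, here is a minimal Python sketch of cofactor expansion along the first column (the function name det_cofactor is ours, and the recursion takes exponential time; np.linalg.det is the practical tool):

```python
import numpy as np

def det_cofactor(A):
    """Determinant by cofactor expansion along the first column:
    det(A) = sum over i of (-1)^(i+1) * a_{i1} * det(M_{i1}(A))."""
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for i in range(n):  # i is 0-based here, so the sign (-1)^(i+1) becomes (-1)**i
        M = np.delete(np.delete(A, i, axis=0), 0, axis=1)  # remove row i, column 1
        total += (-1) ** i * A[i, 0] * det_cofactor(M)
    return total

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])
print(det_cofactor(A))   # ad - bc = 2*3 - 1*4 = 2.0
print(np.linalg.det(A))  # agrees, up to floating-point error
```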

Chapter 3: Vector Spaces

§1: Basic concepts

Basic idea: a vector space V is a collection of things you can add together and multiply by scalars (= numbers):
V = things for which v, w ∈ V implies v + w ∈ V, and a ∈ R and v ∈ V implies a·v ∈ V

E.g., V = R^2, add and scalar multiply componentwise
V = all 3-by-2 matrices, add and scalar multiply entrywise
V = {ax^2 + bx + c : a, b, c ∈ R} = polynomials of degree ≤ 2; add and scalar multiply as functions

The standard vector space of dimension n: R^n = {(x_1, ..., x_n) : x_i ∈ R for all i}

An abstract vector space is a set V together with some notion of addition and scalar multiplication, satisfying the 'usual rules': for u, v, w ∈ V and c, d ∈ R we have
u + v ∈ V, cu ∈ V
u + v = v + u, u + (v + w) = (u + v) + w
There is 0 ∈ V and −u ∈ V with 0 + u = u for all u, and u + (−u) = 0
c(u + v) = cu + cv, (c + d)u = cu + du, (cd)u = c(du), 1u = u

Examples: R^{m,n} = all m×n matrices, under matrix addition/scalar multiplication
C[a, b] = all continuous functions f : [a, b] → R, under function addition
{A ∈ R^{n,n} : A^T = A} = all symmetric matrices, is a vector space
Note: {f ∈ C[a, b] : f(a) = 1} is not a vector space (e.g., it does not contain the zero function 0)

Basic facts: 0v = 0, c0 = 0, (−c)v = −(cv); cv = 0 implies c = 0 or v = 0
A vector space (= VS) has only one 0, and a vector has only one additive inverse.

Linear operators: T : V → W is a linear operator if T(cu + dv) = cT(u) + dT(v) for all c, d ∈ R and u, v ∈ V
Example: T_A : R^n → R^m, T_A(v) = Av, is linear
T : C[a, b] → R, T(f) = f(b), is linear
T : R^2 → R, T(x, y) = x − xy + 3y is not linear! (checked numerically in the sketch at the end of §2)

§2: Subspaces

Basic idea: if V is a vector space and W ⊆ V, then to check whether W is a vector space using the same addition and scalar multiplication as V, we need only check two things: whenever c ∈ R and u, v ∈ W, we always have cu, u + v ∈ W. All other properties come for free, since they are true in V!

If V is a VS, W ⊆ V, and W is a VS using the same operations as V, we say that W is a (vector) subspace of V.
Examples: {(x, y, z) ∈ R^3 : z = 0} is a subspace of R^3
{(x, y, z) ∈ R^3 : z = 1} is not a subspace of R^3
{A ∈ R^{n,n} : A^T = A} is a subspace of R^{n,n}

Basic construction: for v_1, ..., v_n ∈ V,
W = {a_1 v_1 + ... + a_n v_n : a_1, ..., a_n ∈ R} = all linear combinations of v_1, ..., v_n = span{v_1, ..., v_n} = the span of v_1, ..., v_n, is a subspace of V
Basic fact: if w_1, ..., w_k ∈ span{v_1, ..., v_n}, then span{w_1, ..., w_k} ⊆ span{v_1, ..., v_n}
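The linearity test is easy to try numerically. Here is a throwaway Python check (the names are ours) that T(x, y) = x − xy + 3y from §1 fails T(cu + dv) = cT(u) + dT(v):

```python
import numpy as np

def T(v):
    """T(x, y) = x - x*y + 3*y, the map claimed non-linear in Section 1."""
    x, y = v
    return x - x * y + 3 * y

u, v = np.array([1.0, 0.0]), np.array([0.0, 1.0])
c, d = 2.0, 3.0
print(T(c * u + d * v))     # T(2, 3) = 2 - 6 + 9 = 5.0
print(c * T(u) + d * T(v))  # 2*1 + 3*3 = 11.0 -- not equal, so T is not linear
```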

§3: Subspaces from matrices

column space of A = C(A) = span{the columns of A}
row space of A = R(A) = span{(transposes of the) rows of A}
nullspace of A = N(A) = {x ∈ R^n : Ax = 0} (Check: N(A) is a subspace!)
Alternative view: Ax is a linear combination of the columns of A, so it lies in C(A); in fact, C(A) = {Ax : x ∈ R^n}

Subspaces from linear operators: T : V → W
image of T = im(T) = {Tv : v ∈ V}
kernel of T = ker(T) = {x : T(x) = 0}
When T = T_A, im(T) = C(A) and ker(T) = N(A)
T is called one-to-one if Tu = Tv implies u = v
Basic fact: T is one-to-one iff ker(T) = {0}

§4: Norm and inner product

Norm means length! In R^n this is computed as ||x|| = ||(x_1, ..., x_n)|| = (x_1^2 + ... + x_n^2)^{1/2}
Basic facts: ||x|| ≥ 0, and ||x|| = 0 iff x = 0; ||cu|| = |c| ||u||; and ||u + v|| ≤ ||u|| + ||v|| (triangle inequality)
unit vector: the norm of u/||u|| is 1; u/||u|| is the unit vector in the direction of u
convergence: u_n → u if ||u_n − u|| → 0

Inner product idea: assign a number to a pair of vectors (think: the angle between them?)
In R^n we use the dot product: for v = (v_1, ..., v_n) and w = (w_1, ..., w_n),
v · w = ⟨v, w⟩ = v_1 w_1 + ... + v_n w_n = v^T w
Basic facts: ⟨v, v⟩ = ||v||^2 (so ⟨v, v⟩ ≥ 0, and it equals 0 iff v = 0)
⟨v, w⟩ = ⟨w, v⟩; ⟨cv, w⟩ = ⟨v, cw⟩ = c⟨v, w⟩

§5: Applications of norms and inner products

Cauchy–Schwarz inequality: for all v, w, |⟨v, w⟩| ≤ ||v|| ||w|| (this implies the triangle inequality)
So: −1 ≤ ⟨v, w⟩/(||v|| ||w||) ≤ 1
Define: the angle between v and w = the angle θ (between 0 and π) with cos(θ) = ⟨v, w⟩/(||v|| ||w||)
Ex: v = w: then cos(θ) = 1, so θ = 0
Two vectors are orthogonal if their angle is π/2, i.e., ⟨v, w⟩ = 0. Notation: v ⊥ w
Pythagorean theorem: if v ⊥ w, then ||v + w||^2 = ||v||^2 + ||w||^2
Orthogonal projection: given v, w ∈ R^n, we can write v = cw + u with u ⊥ w:
c = ⟨v, w⟩/⟨w, w⟩, so
cw = proj_w v = (⟨v, w⟩/⟨w, w⟩) w = (⟨v, w⟩/||w||)(w/||w||) = the (orthogonal) projection of v onto w, and u = v − cw
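The projection formula is two lines of NumPy. A minimal sketch (the vectors are made-up data):

```python
import numpy as np

# Orthogonal projection of v onto w: v = c*w + u with u ⊥ w, c = <v,w>/<w,w>.
v = np.array([3.0, 4.0])
w = np.array([1.0, 0.0])

c = (v @ w) / (w @ w)   # <v, w> / <w, w>
proj = c * w            # proj_w v
u = v - proj            # the leftover component
print(proj, u, u @ w)   # [3. 0.] [0. 4.] 0.0 -- u really is orthogonal to w
```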

Least squares:
Idea: find the closest thing to a solution to Ax = b when it has no solution.
Overdetermined system: more equations than unknowns. Typically the system will have no solution. Instead, find the closest vector to b that has a solution (i.e., a vector of the form Ax).
Need: Ax − b perpendicular to the subspace C(A)
I.e., need: Ax − b ⊥ each column of A, i.e., need ⟨(column of A), Ax − b⟩ = 0
I.e., need A^T(Ax − b) = 0, i.e., need (A^T A)x = A^T b
Fact: such a system of equations is always consistent! Ax will be the closest vector in C(A) to b.
If A^T A is invertible (need: r(A) = number of columns of A), then we can write
x = (A^T A)^{−1}(A^T b), Ax = A(A^T A)^{−1}(A^T b)
(A small numerical sketch appears at the end of this section.)

§6: Bases and dimension

Idea: putting free and bound variables on a more solid theoretical footing.
We've seen: every solution to Ax = b can be expressed in terms of the free variables (x = v + x_{i1} v_1 + ... + x_{ik} v_k)
Could a different method of solution give us a different number of free variables? (Answer: no! Because that number is the 'dimension' of a certain subspace...)

Linear independence/dependence: v_1, ..., v_n ∈ V are linearly independent if the only way to express 0 as a linear combination of the v_i's is with all coefficients equal to 0: whenever c_1 v_1 + ... + c_n v_n = 0, we have c_1 = ... = c_n = 0.
Otherwise, we say the vectors are linearly dependent, i.e., some non-trivial linear combination equals 0. Any vector v_i in such a linear combination having a non-zero coefficient is called redundant; the expression (lin comb = 0) can be rewritten to say that v_i is a linear combination of the remaining vectors, i.e., v_i is in the span of the remaining vectors.
This means: any redundant vector can be removed from our list of vectors without changing the span of the vectors.

A basis for a vector space V is a set of vectors v_1, ..., v_n so that (a) they are linearly independent, and (b) V = span{v_1, ..., v_n}.
Example: the vectors e_1 = (1, 0, ..., 0), e_2 = (0, 1, 0, ..., 0), ..., e_n = (0, ..., 0, 1) are a basis for R^n, the standard basis.
To find a basis: start with a collection of vectors that span, and repeatedly throw out redundant vectors (so you don't change the span) until the ones that are left are linearly independent. Note: each time you throw one out, you need to ask: are the remaining ones linearly independent?
Basic fact: if v_1, ..., v_n is a basis for V, then every v ∈ V can be expressed as a linear combination of the v_i's in exactly one way. If v = a_1 v_1 + ... + a_n v_n, we call the a_i the coordinates of v with respect to the basis v_1, ..., v_n.

The Dimension Theorem: any two bases of the same vector space contain the same number of vectors. (This common number is called the dimension of V, denoted dim(V).)
Reason: if v_1, ..., v_n is a basis for V and w_1, ..., w_k ∈ V are linearly independent, then k ≤ n.
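The promised least squares sketch: solving the normal equations (A^T A)x = A^T b with NumPy. The data here is made up for illustration, and in practice np.linalg.lstsq is the more robust route:

```python
import numpy as np

# Overdetermined system: 3 equations, 2 unknowns, no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

# Normal equations (A^T A) x = A^T b; assumes r(A) = number of columns.
x = np.linalg.solve(A.T @ A, A.T @ b)
print(x)                  # least squares solution, here [2/3, 1/2]
print(A.T @ (A @ x - b))  # ≈ [0, 0]: the residual Ax - b is ⊥ C(A)
```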

As part of the proof of the Dimension Theorem, we also learned: if v_1, ..., v_n is a basis for V and w_1, ..., w_k are linearly independent, then the spanning set v_1, ..., v_n, w_1, ..., w_k for V can be thinned down to a basis for V by throwing away v_i's. Put in reverse: we can take any linearly independent set of vectors in V and add to it from any basis for V to produce a new basis for V.
Some consequences:
If dim(V) = n and W ⊆ V is a subspace of V, then dim(W) ≤ n
If dim(V) = n and v_1, ..., v_n ∈ V are linearly independent, then they also span V
If dim(V) = n and v_1, ..., v_n ∈ V span V, then they are also linearly independent.

§7: Linear systems revisited

Using our new-found terminology, we have: a system of equations Ax = b has a solution iff b ∈ C(A). If Ax_0 = b, then every other solution to Ax = b is x = x_0 + z, where z ∈ N(A).
To finish our description of (a) the vectors b that have solutions, and (b) the set of solutions to Ax = b, we need to find (useful) bases for C(A) and N(A). So of course we start with:

Finding a basis for the row space. Basic idea: if B is obtained from A by elementary row operations, then R(A) = R(B). So if R is the reduced row echelon form of A, then R(R) = R(A).
But a basis for R(R) is easy to find: take all of the non-zero rows of R! (The zero rows are clearly redundant.) These rows are linearly independent, since each has a 'special coordinate' where, among the rows, only it is non-zero. That coordinate is the pivot in that row. So in any linear combination of rows, only that vector can contribute something non-zero to that coordinate. Consequently, in any linear combination, that coordinate equals the coefficient of our vector! So if the linear combination is 0, the coefficient of our vector (i.e., of each vector!) is 0.
Put bluntly: to find a basis for R(A), row reduce A to R; the (transposes of the) non-zero rows of R form a basis for R(A).
This in turn gives a way to find a basis for C(A), since C(A) = R(A^T)! To find a basis for C(A), take A^T and row reduce it to S; the (transposes of the) non-zero rows of S form a basis for R(A^T) = C(A).
This is probably in fact the most useful basis for C(A), since each basis vector has that special coordinate. This makes it very easy to decide, for any given vector b, whether Ax = b has a solution: you need to decide whether b can be written as a linear combination of your basis vectors, but each coefficient must be the coordinate of b lying at the special coordinate of that vector. Then just check whether that linear combination of your basis vectors adds up to b!
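Both procedures are one call in SymPy, whose rref() returns the reduced row echelon form along with the pivot columns (the example matrix is ours):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

# Basis for R(A): the non-zero rows of the RREF of A.
R, pivots = A.rref()
print(R)       # rows (1, 2, 0) and (0, 0, 1) form a basis for R(A)
print(pivots)  # pivot columns: (0, 2)

# Basis for C(A): the non-zero rows of the RREF of A^T, viewed as columns.
S, _ = A.T.rref()
print(S)       # rows (1, 0, 1) and (0, 1, 1) give a basis for C(A)
```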

There is another, perhaps less useful but faster, way to build a basis for C(A): row reduce A to R, locate the pivots in R, and take the columns of A (Note: A, not R!) that correspond to the columns containing the pivots. These form a (different) basis for C(A).
Why? Imagine building a matrix B out of just the bound columns. Then in row reduced form there is a pivot in every column. Solving Bv = 0 in the case that there are no free variables, we get v = 0, so the columns are linearly independent. If we now add a free column to B to get C, we get the same collection of pivots, so our added column represents a free variable. Then there are non-trivial solutions to Cv = 0, so the columns of C are not linearly independent. This means that the added column can be expressed as a linear combination of the bound columns. This is true for every free column, so the bound columns span C(A).
Finally, there is the nullspace N(A). To find a basis for N(A): row reduce A to R, and use the rows of R to solve Rx = 0 by expressing each bound variable in terms of the free ones. Collect the coefficients together and write x = x_{i1} v_1 + ... + x_{ik} v_k, where the x_{ij} are the free variables. Then the vectors v_1, ..., v_k form a basis for N(A).
Why? By construction they span N(A); and just as with our row space procedure, each has a special coordinate where only it is non-zero (the coordinate corresponding to its free variable!).
Note: since the number of vectors in the bases for R(A) and C(A) is the same as the number of pivots (= the number of nonzero rows in the RREF) = the rank of A, we have dim(R(A)) = dim(C(A)) = r(A). And since the number of vectors in the basis for N(A) is the same as the number of free variables for A (= the number of columns without a pivot) = the nullity of A (hence the name!), we have dim(N(A)) = n(A) = n − r(A) (where n = the number of columns of A).
So: dim(C(A)) + dim(N(A)) = the number of columns of A.
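SymPy can also produce the nullspace basis directly, which makes the rank-nullity bookkeeping easy to verify (same made-up matrix as above):

```python
from sympy import Matrix

A = Matrix([[1, 2, 1],
            [2, 4, 0],
            [3, 6, 1]])

null_basis = A.nullspace()  # basis for N(A); here one vector, (-2, 1, 0)^T
print(null_basis)

# Rank-nullity check: dim(C(A)) + dim(N(A)) = number of columns of A.
print(A.rank() + len(null_basis) == A.cols)  # True: 2 + 1 == 3
```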