Review Notes for Midterm #2

Joris Vankerschaver

This version: Nov. 2, 2010

Abstract: This is a summary of the basic definitions and results that we discussed during class. Whenever a proof is provided, I've taken care to present only the essential idea, without all of the supplementary details; you can consult the textbook for additional discussions and more examples. Understanding the main ideas of the proofs presented here is good preparation for the conceptual part of the exam. Reading these notes is not a substitute for attending class and working through the book. Please signal any mistakes, omissions, etc. to jvankers@math.ucsd.edu.

1 Matrix Algebra

1. Be familiar with the definition of a matrix, and with the addition, multiplication, and transposition of matrices. Know how to compute the inverse of a square matrix by reducing [A I] to [I A^{-1}]. Note that a non-square matrix can never have an inverse.

2. For a 2 × 2 matrix, the determinant det A is given by:

    If  A = [ a  b ]
            [ c  d ],   then  det A = ad - bc.

If det A ≠ 0, then A is invertible and

    A^{-1} = 1/(det A) [  d  -b ]
                       [ -c   a ].        (1)

Sketch of proof: A simple computation shows that

    [ a  b ] [  d  -b ]
    [ c  d ] [ -c   a ]  =  (det A) I_2.        (2)

Therefore, if det A ≠ 0, then it follows that A^{-1} is given by (1). Conversely, if A is invertible, then we can multiply both sides of (2) from the left by A^{-1} and obtain formula (1) for the inverse.

3. Theorem: If A and B are n × n matrices such that AB = I_n, then BA = I_n. In other words, both A and B are invertible (and A^{-1} = B). Since AB = I_n is only half of the definition of invertibility, this often saves some work when showing that a matrix is invertible.

Sketch of proof: We show first that B is invertible. If x is such that Bx = 0, then by left multiplying by A we get ABx = A0 = 0. But ABx = I_n x = x, so that x = 0 and hence Nul(B) = {0}. As B is square, it follows that B is invertible. Multiplying both sides of AB = I_n from the right by B^{-1}, we get that

A = B. Multiplying this equation from the left by B, we finally we BA = BB = I n, which is what we had to prove. 4. If A, B are invertible n n matrices, then the following holds: (a) A is invertible and (A ) = A; (b) AB is invertible and (AB) = B A ; (c) A T is invertible and (A T ) = (A ) T. Sketch of Proof: use the previous theorem. 5. Theorem: A square n n matrix A is invertible iff its row-reduced echelon form is the identity matrix I n. 6. A matrix equation Ax = b with A invertible is always consistent and has exactly one solution: multiply both sides of the equation by A to get x = A b. 7. Let A be an n m matrix and consider the matrix transformation T (x) = Ax (for x R m ). The domain of T is R m and the codomain of T is R n. The null space of a matrix A is denoted by Nul(A): Nul(A) = {x R m : Ax = 0} = {Solutions of Ax = 0} = Ker(T ). The column space of A is denoted by Col(A): Col(A) = Span{columns of A} = {b R n s.t. Ax = b has a solution} = Range(T ). 8. Invertible matrix theorem (part I matrices): For a square matrix A, the following are all equivalent (either they are all true or all false). (a) A is invertible; (b) A is row-reducible to I n ; (c) A has n pivot rows; (d) A has n pivot columns; (e) A has n pivot positions. Sketch of Proof: (a) (b) follows from Theorem 3. The equivalence of the other statements follows by observing that if A is square and has n pivots, then the pivots are necessarily located along the diagonal. Hence every column is a pivot column, and every row is a pivot row. 9. Invertible matrix theorem (part II linear transformations): Consider a square matrix A and the matrix transformation T (x) = Ax. The following are equivalent: (a) A is invertible; (b) T is onto; (c) T is one-to-one. Sketch of Proof: It is sufficient to keep in mind that T is onto (resp. one-to-one) iff A has pivots in every row (resp. column). 0. 
10. Invertible matrix theorem (part III: linear systems): For a square n × n matrix A, the following are equivalent:
(a) A is invertible;
(b) Ax = b has a unique solution for every b;
(c) The only solution of Ax = 0 is x = 0;

(d) The columns of A span R^n;
(e) The columns of A are linearly independent.
Sketch of proof: left to the reader!

2 Vector Spaces and Subspaces

2.1 Vector spaces

1. Be familiar with the definition of a vector space. It helps to keep a few relevant examples in mind:

(a) R^n is a vector space. The addition and scalar multiplication of column vectors are done component-wise:

    [a_1; a_2; ...; a_n] + [b_1; b_2; ...; b_n] = [a_1 + b_1; a_2 + b_2; ...; a_n + b_n]

and

    c [a_1; a_2; ...; a_n] = [c a_1; c a_2; ...; c a_n],

and the zero vector is the column with all zeros.

(b) P_n, the space of polynomials of degree at most n, is a vector space. Let's take P_2 as an example: a typical element of P_2 is of the form p(t) = a + bt + ct^2, where a, b, c are arbitrary coefficients. Summing two polynomials and scalar multiplication are done term-wise:

    (a + bt + ct^2) + (a' + b't + c't^2) = (a + a') + (b + b')t + (c + c')t^2

and

    λ(a + bt + ct^2) = λa + λbt + λct^2.

The zero polynomial 0 + 0t + 0t^2 (= 0) plays the role of the zero vector in this vector space.

(c) The space of all n × m matrices is a vector space. Addition and scalar multiplication are again done component-wise, and the zero matrix plays the role of the zero vector.

It is a good exercise to verify explicitly that all of these examples are vector spaces. In general, the main idea is that we have a set whose elements can be added and multiplied by scalars, and which comes equipped with a zero element.

2.2 Subspaces

1. A subspace H of a vector space V is a subset H ⊆ V satisfying the following properties:
(a) 0 ∈ H;
(b) for all u, v ∈ H, we have u + v ∈ H;
(c) for all u ∈ H and scalars c ∈ R, we have cu ∈ H.

2. Examples: for each of these examples, verify whether they are subspaces.
(a) For any vector space V, V itself is a subspace of V.
(b) The set {0} consisting of only the zero vector is a subspace of any vector space.
(c) The set H = {at^2 : a ∈ R} is a subspace of P_2. You can check this directly, but observe that any element of H is of the form at^2, that is, (constant) · t^2. In other words, H = Span{t^2}.
In a similar vein, the following sets are all examples of subspaces of P_2:
i. K = {at + bt^2 : a, b ∈ R} = Span{t, t^2};
ii. L = {a(1 + t + t^2) : a ∈ R} = Span{1 + t + t^2}.
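The subspace properties can be probed concretely. Here is a small Python sketch (not part of the original notes) that tests the three properties on sample elements of H = Span{t^2} and of the set {a + t^2 : a arbitrary}, which the notes discuss as a non-example; a polynomial a + bt + ct^2 is stored as the coefficient triple (a, b, c). Checks on samples can falsify the subspace properties but, of course, cannot prove them for every element:

```python
# Representing a + b t + c t^2 in P_2 by the coefficient triple (a, b, c),
# test the three subspace properties on sample elements.  H = Span{t^2}
# is a subspace; the set {a + t^2} is not.

def in_H(p):
    """Membership in H = {a t^2} = Span{t^2}: no constant or linear term."""
    a, b, c = p
    return a == 0 and b == 0

def in_bad(p):
    """Membership in {a + t^2}: the coefficient of t^2 is exactly 1."""
    a, b, c = p
    return c == 1

def add(p, q):
    return tuple(x + y for x, y in zip(p, q))

def scale(k, p):
    return tuple(k * x for x in p)

zero = (0, 0, 0)

# H contains 0 and is closed under both operations on these samples:
p, q = (0, 0, 2), (0, 0, -5)            # 2 t^2 and -5 t^2
assert in_H(zero) and in_H(add(p, q)) and in_H(scale(7, p))

# The second set fails all three properties:
r, s = (1, 0, 1), (-3, 0, 1)            # 1 + t^2 and -3 + t^2
assert not in_bad(zero)                 # 0 is not in the set
assert not in_bad(add(r, s))            # the sum has 2 t^2: leaves the set
assert not in_bad(scale(2, r))          # scaling changes the t^2 coefficient
```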

The following are examples of subsets that are not subspaces:

(a) In R^2, the set of vectors with non-negative entries:

    U = { [x; y] ∈ R^2 : x ≥ 0, y ≥ 0 }.

(Problem with scalar multiplication.)

(b) The set of all polynomials p(t) = a + t^2 in P_2, where a is an arbitrary coefficient. (None of the properties of a subspace is satisfied.)

In Figure 1, a few subsets of R^2 are depicted which are not subspaces of R^2. Try to see why.

3. Let v_1, ..., v_k be elements of a vector space V. Then the span, denoted by Span{v_1, ..., v_k}, is the set of all linear combinations of v_1, ..., v_k:

    Span{v_1, ..., v_k} = { c_1 v_1 + ... + c_k v_k : c_1, ..., c_k arbitrary scalars }.

4. Theorem: Let v_1, ..., v_k be elements of a vector space V. Then Span{v_1, ..., v_k} is a subspace of V.
Sketch of proof: The zero vector is in the span, since 0 = 0 v_1 + ... + 0 v_k. The sum of two linear combinations (elements of the span) is again a linear combination:

    (c_1 v_1 + ... + c_k v_k) + (d_1 v_1 + ... + d_k v_k) = (c_1 + d_1) v_1 + ... + (c_k + d_k) v_k.

Similarly, you can check that the scalar multiplication property is satisfied.

Figure 1: These sets are not subspaces of R^2. (a) The leftmost subset (the black line) does not contain the zero vector. (b) The center figure (the first quadrant) is not closed under scalar multiplication: the product of any vector with a negative scalar is not in the subset. (c) The rightmost figure contains zero and behaves well as far as scalar multiplication is concerned, but here the addition property is missing: take two non-zero vectors on the two different lines and add them together. The result will no longer be in the subset consisting of both black lines.

5. A good way of constructing subspaces (in fact, the only way) is to consider the span of a few vectors.

2.3 Basis and dimension

1. A basis of a vector space V is a collection of vectors v_1, ..., v_k such that
(a) V = Span{v_1, ..., v_k};
(b) {v_1, ..., v_k} is linearly independent.

2. Examples:
(a) In R^3, the standard basis:

    e_1 = [1; 0; 0],  e_2 = [0; 1; 0],  e_3 = [0; 0; 1].

These vectors are linearly independent, and any vector in R^3 can be expressed as a linear combination of e_1, e_2 and e_3.

(b) In P_2, the basis consisting of the polynomials p_1(t) = 1, p_2(t) = t and p_3(t) = t^2. An arbitrary polynomial p(t) = a + bt + ct^2 ∈ P_2 can be expressed as a linear combination

    p(t) = a + bt + ct^2 = a p_1(t) + b p_2(t) + c p_3(t),

so that Span{p_1(t), p_2(t), p_3(t)} is the whole of P_2. To check that these polynomials are linearly independent, we check whether the following equation has a non-zero solution (x_1, x_2, x_3):

    x_1 p_1(t) + x_2 p_2(t) + x_3 p_3(t) = 0,  i.e.,  x_1 + x_2 t + x_3 t^2 = 0.

However, if a polynomial is zero for all values of the independent variable t, then its coefficients must be zero: x_1 = x_2 = x_3 = 0. Therefore, the polynomials are linearly independent.

3. Spanning set theorem: Consider vectors v_1, ..., v_k in a vector space V and let H = Span{v_1, ..., v_k}. A basis for H can be found by removing from v_1, ..., v_k the vectors that are linear combinations of the other vectors.

4. The dimension of a vector space is the number of vectors in an arbitrary basis.

5. Theorem: Any set of n linearly independent vectors in R^n (or, more generally, in an n-dimensional vector space) is automatically a basis.
Sketch of proof: To have a basis, we need to show that the n vectors span R^n (we are given that they are linearly independent). Collect the n vectors v_1, ..., v_n in a square matrix A = [v_1 ... v_n]. The vectors are linearly independent, so the matrix A has n pivot columns. Since A is square, it also has n pivot rows. Hence, the columns span R^n. You could make this a little bit shorter by noting that A is invertible and using the invertible matrix theorem.

6. Let v_1, ..., v_n be a basis of R^n. The coordinates of an arbitrary vector b ∈ R^n (with respect to the chosen basis) are scalars x_1, x_2, ..., x_n such that

    x_1 v_1 + ... + x_n v_n = b.

To compute the coordinates, you need to solve this vector equation.
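Computing coordinates amounts to solving the vector equation above. Here is a small Python sketch (not part of the original notes; the basis vectors and b are sample data) for a basis of R^2, where the 2 × 2 system can be solved with Cramer's rule:

```python
# Coordinates of b with respect to a basis {v1, v2} of R^2: solve
# x1*v1 + x2*v2 = b.  The basis and b below are sample data, not taken
# from the notes; the 2x2 system is solved with Cramer's rule.

def coordinates(v1, v2, b):
    """Solve x1*v1 + x2*v2 = b for (x1, x2) using Cramer's rule."""
    det = v1[0] * v2[1] - v2[0] * v1[1]   # det [v1 v2]: nonzero for a basis
    if det == 0:
        raise ValueError("v1, v2 do not form a basis of R^2")
    x1 = (b[0] * v2[1] - v2[0] * b[1]) / det
    x2 = (v1[0] * b[1] - b[0] * v1[1]) / det
    return x1, x2

v1, v2 = (1.0, 1.0), (1.0, -1.0)
b = (1.0, 5.0)
x1, x2 = coordinates(v1, v2, b)

# Reassemble x1*v1 + x2*v2 and compare with b.
assert (x1 * v1[0] + x2 * v2[0], x1 * v1[1] + x2 * v2[1]) == b
assert (x1, x2) == (3.0, -2.0)
```

For n > 2 the same vector equation is solved by row reducing the augmented matrix [v_1 ... v_n | b].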
2.4 Null space, column space, row space

1. The row space of a matrix A is denoted by Row(A):

    Row(A) = Span{rows of A} = Col(A^T).

2. The rank of A is the dimension of the row space:

    rank(A) = dim Row(A) = # of linearly independent rows in A = # of pivots in A.

3. Theorem: If two matrices A and B are row equivalent, then Row(A) = Row(B).
Sketch of proof: Keep in mind that Row(A) is the span of the rows of A. Now, by doing row operations (switching, scaling, adding), we are merely replacing the rows of A by linear combinations of the rows. Hence, we do not change the span.
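Since the rank equals the number of pivots, it can be computed by row reduction. Below is a minimal Python sketch (not part of the original notes; the elimination routine and the sample matrix are illustrative):

```python
# rank(A) = number of pivots in an echelon form of A (item 2 above).
# Minimal Gaussian elimination on plain nested lists; the sample matrix
# is illustrative: its third row is the sum of the first two, so rank 2.

def rank(A, tol=1e-12):
    """Count pivots while reducing a copy of A to echelon form."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    pivots = 0
    for col in range(n):
        if pivots == m:
            break
        # pick the row (at or below the current one) with the largest entry
        pivot_row = max(range(pivots, m), key=lambda r: abs(A[r][col]))
        if abs(A[pivot_row][col]) < tol:
            continue  # no pivot in this column
        A[pivots], A[pivot_row] = A[pivot_row], A[pivots]
        # eliminate below the pivot
        for r in range(pivots + 1, m):
            factor = A[r][col] / A[pivots][col]
            A[r] = [a - factor * p for a, p in zip(A[r], A[pivots])]
        pivots += 1
    return pivots

A = [[1.0, 2.0, 0.0, 1.0],
     [0.0, 1.0, 1.0, 2.0],
     [1.0, 3.0, 1.0, 3.0]]   # row 3 = row 1 + row 2
assert rank(A) == 2

# Row operations do not change the row space (Theorem 3), so in
# particular they do not change the rank:
B = [A[1], [2.0 * x for x in A[0]], A[2]]   # swap two rows, scale one
assert rank(B) == rank(A)
```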

4. Let A be an m × n matrix. Then Nul(A) and Row(A) are subspaces of R^n, and Col(A) is a subspace of R^m.
Sketch of proof: Col(A) is the span of the columns of A, and so is a subspace by the theorem on spans. In the same vein, Row(A) is a subspace, since Row(A) is nothing but the span of the row vectors of A. For Nul(A), we check the required properties directly. First, 0 ∈ Nul(A), since A0 = 0 regardless of A. Secondly, let u, v ∈ Nul(A) (in other words, Au = Av = 0). Then A(u + v) = Au + Av = 0, so that u + v ∈ Nul(A). Likewise, if c ∈ R is a constant and u ∈ Nul(A), then A(cu) = c(Au) = 0, so that cu ∈ Nul(A).

5. Basis for Nul(A): Solve the homogeneous equation Ax = 0 and put the solution in parametric form:

    x = x_a v_a + x_b v_b + ...,

where x_a, x_b, ... are the free variables. By construction, v_a, v_b, ... span Nul(A), and it is not hard to show that they are linearly independent and hence form a basis. Observe: dim Nul(A) = # of free variables.

6. Basis for Col(A): The columns of A form a spanning set for Col(A). Now use the spanning set theorem and remove the columns that are linear combinations of others to obtain a basis.

7. Caveat: At the end of the day, the linearly independent columns of A are precisely the pivot columns of A. To find the locations of the pivots, check an echelon form. However, a common mistake is to say that Col(A) is spanned by the pivot columns of the echelon form. This is not true!

8. Dimension of Col(A): We have dim Col(A) = # of pivots in A.

9. Rank-nullity theorem: For any m × n matrix A, we have rank(A) = dim Col(A) and

    dim Nul(A) + rank(A) = n.

Sketch of proof: We prove the second part. The dimension of Nul(A) is the number of free variables in A, while rank(A) is the number of pivot positions. Since every column either has a pivot or does not have a pivot, we have

    n = (# pivot columns) + (# non-pivot columns)
      = (# pivot columns) + (# free variables)
      = rank(A) + dim Nul(A).

10. Example: This is the example used in class. Take

    A = [ 1  5  17  34 ]       [ 1  0  2  4 ]
        [ 0  1   3   6 ]   ~   [ 0  1  3  6 ]
        [ 2  1   7  14 ]       [ 0  0  0  0 ].

Label the columns by

    a_1 = [1; 0; 2],  a_2 = [5; 1; 1],  a_3 = [17; 3; 7],  a_4 = [34; 6; 14].

We have that a_3 = 2 a_1 + 3 a_2 and a_4 = 2 a_3, so that these vectors are linear combinations of a_1, a_2. The latter are a basis for Col(A). The null space can be found by solving Ax = 0. The solution is

    [ x_1 ]       [ -2 ]       [ -4 ]
    [ x_2 ] = x_3 [ -3 ] + x_4 [ -6 ]
    [ x_3 ]       [  1 ]       [  0 ]
    [ x_4 ]       [  0 ]       [  1 ].

The two vectors on the right-hand side span Nul(A). For the row space, we consult the echelon matrix. The pivot rows (1, 0, 2, 4) and (0, 1, 3, 6) span Row(A) and are linearly independent by construction, so they form a basis. To conclude, you can check your answers to some extent with the rank-nullity theorem: dim Nul(A) + rank(A) = 2 + 2 = 4, and rank(A) = dim Col(A) = 2.

2.5 Linear transformations

1. A linear transformation is a map T : V → W between two vector spaces V, W satisfying
(a) T(u + v) = T(u) + T(v) for all u, v ∈ V;
(b) T(cu) = cT(u) for all u ∈ V and all scalars c.
We refer to V as the domain and W as the codomain.

2. Given an element v ∈ V, the image of v under the transformation T is the element T(v) ∈ W. The range of T, denoted as Ran(T), is the set of all images.

3. The kernel of a linear transformation T : V → W is the set of all elements in V that are mapped onto zero:

    Ker(T) = {v ∈ V : T(v) = 0}.

4. Ker(T) is a subspace of the domain V, while Ran(T) is a subspace of the codomain W. To prove this, you could adapt the proof of why Nul(A) and Col(A) are subspaces.

5. A linear transformation T : V → W is said to be one-to-one if Ker(T) = {0}, and onto if Ran(T) = W.

6. Examples: in each of these cases, check whether the transformation is linear, and determine the kernel and range.

(a) Any matrix transformation (T(x) = Ax for some matrix A) is linear. For this kind of transformation, we have that Ker(T) = Nul(A) and Ran(T) = Col(A).

(b) The transformation T : P_2 → R^3 given by

    T(a + bt + ct^2) = [a; b; c]

is linear. You can show that this transformation is both one-to-one and onto.

(c) The transformation T : P_1 → P_2 given by

    T(a + bt) = at + (b/2) t^2

(integration) is linear. It is one-to-one, but not onto: the range is the set of all quadratic polynomials without constant term,

    Ran(T) = {ct + dt^2 : c, d arbitrary}.

Note that the range is the span of the polynomials t, t^2 ∈ P_2.

(d) The transformation T : R^3 → P_2 given by

    T([a; b; c]) = a + bt

is linear. The transformation is onto but not one-to-one: the kernel is given by

    Ker(T) = { [0; 0; c] : c arbitrary }.

Note that the kernel is the span of the vector (0, 0, 1) ∈ R^3.

(e) The transformation T : P_1 → P_2 given by

    T(a + bt) = (a + bt)^2 = a^2 + 2abt + b^2 t^2

is not linear (why not?).
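The integration map from example (c) can be explored concretely. Here is a short Python sketch (not part of the original notes; the sample inputs are made up) that tests linearity on a few inputs and illustrates why T is one-to-one but not onto:

```python
# The integration map T : P_1 -> P_2, T(a + b t) = a t + (b/2) t^2, from
# example (c), with a polynomial a + b t stored as the tuple (a, b) and
# a + b t + c t^2 stored as (a, b, c).  Sample inputs are illustrative.

def T(p):
    """T(a + b t) = a t + (b/2) t^2; the constant term is always 0."""
    a, b = p
    return (0.0, a, b / 2.0)

def add(p, q):
    return tuple(x + y for x, y in zip(p, q))

def scale(k, p):
    return tuple(k * x for x in p)

p, q, k = (1.0, 2.0), (3.0, -4.0), 5.0

# Linearity on sample inputs: T(p + q) = T(p) + T(q) and T(kp) = k T(p).
assert T(add(p, q)) == add(T(p), T(q))
assert T(scale(k, p)) == scale(k, T(p))

# One-to-one: the output (0, a, b/2) determines (a, b), so Ker(T) = {0}.
assert T((0.0, 0.0)) == (0.0, 0.0, 0.0)

# Not onto: every output has constant term 0, so e.g. the constant
# polynomial 1 = (1, 0, 0) is not in the range.
assert T(p)[0] == 0.0
```

A check like this on sample inputs cannot prove linearity, but a single failing sample, such as applying the squaring map of example (e) to p + q, is enough to disprove it.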