Math 242, Fall 2008: notes on the problem session for the week of 9-3-8. This is a short overview of the problems that we covered.


1. For each of the following sets, ask the following: Does it span R^3? Is it linearly independent? Is it a basis for R^3?

S_1 = { (1, 2, 3)^T, (1, 3, 4)^T, (1, 4, 5)^T, (1, 5, 6)^T }

S_2 = { (1, 3, 5)^T, (2, 7, 6)^T }

S_3 = { (1, 0, 0)^T, (2, 1, 0)^T, (4, 3, 1)^T }

We solved these and discussed the general approach to such problems, which is as follows: for T = {v_1, v_2, ..., v_n} ⊆ R^m, form the m × n matrix A whose i-th column is v_i, and determine U from a factorization PA = LU. We first describe how to answer each question and then explain why the tests work.

Does T span R^m? If U has a row of all zeroes the answer is no; if it does not, the answer is yes. In particular, if n < m there will be a row of zeroes and T will not span: we need at least m vectors to span R^m.

Is T linearly independent? If U has a free variable the answer is no; if it does not, the answer is yes. In particular, if n > m there will be a free variable and T will not be linearly independent.

Is T a basis? T must both span R^m and be linearly independent. If n < m then T does not span R^m, and if n > m then T is not linearly independent, so a necessary condition is that n = m. Note that in this case U has a row of zeroes if and only if it has a free variable. So we observe that when m = n (the number of vectors equals the dimension), spanning implies linear independence and vice versa.

Recall that the rank of a matrix is the number of pivots, that is, the number of nonzero rows in U for a PA = LU factorization. In this terminology, T spans R^m if rank(A) = m and does not span if rank(A) < m (rank(A) cannot exceed the number of rows m). T is linearly independent if rank(A) = n and linearly dependent if rank(A) < n (rank(A) cannot exceed the number of columns n).
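As a quick numerical check of the rank tests above, here is a small sketch (not part of the original notes; the helper `analyze` is a made-up name, and NumPy's matrix_rank stands in for counting pivots in U):

```python
import numpy as np

# Hypothetical helper: T spans R^m iff rank(A) = m; T is independent iff rank(A) = n.
def analyze(vectors):
    A = np.column_stack(vectors)  # m x n matrix whose i-th column is v_i
    m, n = A.shape
    r = np.linalg.matrix_rank(A)
    return {"spans": bool(r == m),
            "independent": bool(r == n),
            "basis": bool(r == m and r == n)}

# The three sets from the problem session.
S1 = [(1, 2, 3), (1, 3, 4), (1, 4, 5), (1, 5, 6)]
S2 = [(1, 3, 5), (2, 7, 6)]
S3 = [(1, 0, 0), (2, 1, 0), (4, 3, 1)]
```

Running `analyze` on the three sets reproduces the conclusions worked out below: S_1 neither spans nor is independent, S_2 is independent but does not span, and S_3 is a basis.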

Explanation for the spanning test. We need to test whether every b ∈ R^m can be written as a linear combination of the vectors in T; that is, whether there is a solution x = (x_1, x_2, ..., x_n)^T to v_1 x_1 + v_2 x_2 + ... + v_n x_n = b. In matrix form this is Ax = b. Consider a factorization PA = LU. If U has a row of all zeroes then there is some choice of b for which there is no solution. Recall that we use the factorization to solve the system by first solving Lc = Pb and then attempting to solve Ux = c. If U has a row of zeroes, we can pick c with a nonzero entry corresponding to such a row; setting b = P^{-1}Lc then gives a system Ax = b with no solution. If U has no row of zeroes we can always solve Ux = c.

Explanation for the linear independence test. We need to test whether the only solution to v_1 x_1 + v_2 x_2 + ... + v_n x_n = 0 is the trivial one. In matrix form this is Ax = 0. Consider a factorization PA = LU. Recall that the set of solutions to Ax = 0 is the same as the set of solutions to Ux = 0. There are nontrivial solutions exactly when there are free variables.

For the sets above we get the following matrices A and factorizations A = LU. In each case P = I. We only need U to answer our questions, but it is useful to see the full factorization.

For S_1:
    [ 1 1 1 1 ]   [ 1 0 0 ] [ 1 1 1 1 ]
A = [ 2 3 4 5 ] = [ 2 1 0 ] [ 0 1 2 3 ] = LU.
    [ 3 4 5 6 ]   [ 3 1 1 ] [ 0 0 0 0 ]
So S_1 does not span R^3 and is not linearly independent.

For S_2:
    [ 1 2 ]   [ 1  0 0 ] [ 1 2 ]
A = [ 3 7 ] = [ 3  1 0 ] [ 0 1 ] = LU.
    [ 5 6 ]   [ 5 -4 1 ] [ 0 0 ]
So S_2 does not span R^3 and is linearly independent.

For S_3:
    [ 1 2 4 ]   [ 1 0 0 ] [ 1 2 4 ]
A = [ 0 1 3 ] = [ 0 1 0 ] [ 0 1 3 ] = LU.
    [ 0 0 1 ]   [ 0 0 1 ] [ 0 0 1 ]
So S_3 spans R^3 and is linearly independent. It is a basis.

2. For each of the matrices above, determine a basis for each of the four fundamental subspaces. (We did not actually do this for these matrices in the problem session.) Also find the dimensions and note the relations between them.

Bases for the four fundamental subspaces. Given PA = LU, we determine bases for the four fundamental subspaces as follows. Explanations as to why these methods work will be given in another set of notes and (except for the method we use for the cokernel) on pages 6-9 of the text.
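The three factorizations can be verified by multiplying L and U back together; a NumPy sanity check (matrices as written above, P = I throughout):

```python
import numpy as np

# For each set: (A, L, U). Multiplying L @ U should recover A.
factorizations = {
    "S1": (np.array([[1, 1, 1, 1], [2, 3, 4, 5], [3, 4, 5, 6]]),
           np.array([[1, 0, 0], [2, 1, 0], [3, 1, 1]]),
           np.array([[1, 1, 1, 1], [0, 1, 2, 3], [0, 0, 0, 0]])),
    "S2": (np.array([[1, 2], [3, 7], [5, 6]]),
           np.array([[1, 0, 0], [3, 1, 0], [5, -4, 1]]),
           np.array([[1, 2], [0, 1], [0, 0]])),
    "S3": (np.array([[1, 2, 4], [0, 1, 3], [0, 0, 1]]),
           np.eye(3, dtype=int),
           np.array([[1, 2, 4], [0, 1, 3], [0, 0, 1]])),
}
all_match = all(np.array_equal(L @ U, A) for A, L, U in factorizations.values())
```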
The matrix A for S_1. Here
         [  1  0 0 ]
L^{-1} = [ -2  1 0 ]
         [ -1 -1 1 ]
Basis for Ker(A): Using back substitution in U, the set of solutions to Ux = 0 is

(x_1, x_2, x_3, x_4)^T = x_3 (1, -2, 1, 0)^T + x_4 (2, -3, 0, 1)^T,

where x_3, x_4 can be any real numbers. Thus a basis is {(1, -2, 1, 0)^T, (2, -3, 0, 1)^T}. The dimension is 2.

Basis for Corng(A): Use the nonzero rows of U: a basis is {(1, 1, 1, 1)^T, (0, 1, 2, 3)^T}. The dimension is 2.

Basis for Coker(A): Use the last row of L^{-1}: a basis is {(-1, -1, 1)^T}. The dimension is 1.

Basis for Rng(A): Use the pivot columns of A: a basis is {(1, 2, 3)^T, (1, 3, 4)^T}. The dimension is 2.

The matrix A for S_2. Here
         [   1  0 0 ]
L^{-1} = [  -3  1 0 ]
         [ -17  4 1 ]
Basis for Ker(A): There are no free variables, so the kernel is trivial, Ker(A) = {0}, which by convention has an empty basis. The dimension is 0.

Basis for Corng(A): Use the nonzero rows of U: a basis is {(1, 2)^T, (0, 1)^T}. The dimension is 2.

Basis for Coker(A): Use the last row of L^{-1}: a basis is {(-17, 4, 1)^T}. The dimension is 1.

Basis for Rng(A): Use the pivot columns of A: a basis is {(1, 3, 5)^T, (2, 7, 6)^T}. The dimension is 2.

The matrix A for S_3. Here L^{-1} = I (although we do not need it, since there are no zero rows in U).

Basis for Ker(A): There are no free variables, so the kernel is trivial, Ker(A) = {0}, which by convention has an empty basis. The dimension is 0.

Basis for Corng(A): Use the nonzero rows of U: a basis is {(1, 2, 4)^T, (0, 1, 3)^T, (0, 0, 1)^T}. The dimension is 3.

Basis for Coker(A): There are no zero rows in U, so the cokernel is trivial, Coker(A) = {0}, which by convention has an empty basis. The dimension is 0.

Basis for Rng(A): Use the pivot columns of A: a basis is {(1, 0, 0)^T, (2, 1, 0)^T, (4, 3, 1)^T}. The dimension is 3.
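The bases above can be sanity-checked numerically: kernel vectors should satisfy Ax = 0, cokernel vectors should satisfy y^T A = 0, and the dimensions should pair up as in item 3 below. A sketch for the S_1 matrix:

```python
import numpy as np

# The matrix for S_1; its rank is 2.
A = np.array([[1, 1, 1, 1], [2, 3, 4, 5], [3, 4, 5, 6]])
m, n = A.shape
r = int(np.linalg.matrix_rank(A))

# Kernel basis vectors: each should satisfy A x = 0.
ker_ok = all(not np.any(A @ np.array(x)) for x in [(1, -2, 1, 0), (2, -3, 0, 1)])

# Cokernel basis vector: it should satisfy y^T A = 0.
coker_ok = not np.any(np.array([-1, -1, 1]) @ A)

# Dimension pairing: dim Ker = n - r = 2 and dim Coker = m - r = 1.
dims_ok = (r == 2) and (n - r == 2) and (m - r == 1)
```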

3. We observed Theorem 2.49 in the examples, in the dimensions of the subspaces. A restatement and informal explanation is given here. More formal versions and proofs can be found in the text on pages 6-9.

For an m × n matrix A and a PA = LU factorization we have:

The dimension of the kernel of A equals the number of free variables in U (which is n minus the rank of A).

The dimension of the corange of A equals the number of nonzero rows in U, which is the number of pivot columns, which is the rank of A.

Thus the kernel and corange (i.e., the nullspace and row space) are subspaces of R^n, and the sum of their dimensions is n. We also have:

The dimension of the cokernel of A equals the number of zero rows in U (which is m minus the rank of A).

The dimension of the range of A equals the number of pivot columns in U, which is the rank of A.

Thus the cokernel and range (i.e., the left nullspace and column space) are subspaces of R^m, and the sum of their dimensions is m.

In addition, since the corange of A is the range of A^T, the dimension of the corange is the rank of A^T. Since we noted above that the dimension of the corange is the rank of A, we have established that the rank of A and the rank of A^T are the same. For example, if A is a 42 × 99 matrix and the rank is 30, then elementary row operations produce a U with 30 nonzero rows and 12 zero rows. Elementary row operations on the 99 × 42 matrix A^T will also produce 30 nonzero rows, and for the transpose we will have 69 zero rows.

4. We proved Lemma 2.34: The elements v_1, v_2, ..., v_n form a basis for a vector space V if and only if every x ∈ V can be written uniquely as a linear combination of the basis elements. The "only if" direction of the proof is in the text on page 3. The more obvious "if" direction is not in the text, so we do it here.

Proof of "if": Since every x ∈ V can be written uniquely in terms of the vectors, in particular it can be written in terms of these vectors, so v_1, v_2, ..., v_n span V. Since 0 = 0v_1 + 0v_2 + ... + 0v_n and this representation is unique, the trivial solution is the only solution to x_1 v_1 + x_2 v_2 + ... + x_n v_n = 0. Hence v_1, v_2, ..., v_n are linearly independent. The vectors span V and are linearly independent, so they are a basis.
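To see the unique-representation property of Lemma 2.34 concretely: S_3 above is a basis of R^3, so every x has exactly one coordinate vector c with respect to it, found by solving Ac = x where the columns of A are the basis vectors. A small illustration (the vector x is an arbitrary choice, not from the notes):

```python
import numpy as np

# Columns of A are the S_3 basis vectors; A is invertible, so Ac = x has a
# unique solution c for every x -- the coordinates of x in this basis.
A = np.array([[1, 2, 4], [0, 1, 3], [0, 0, 1]], dtype=float)
x = np.array([3.0, 2.0, 1.0])   # an arbitrary example vector
c = np.linalg.solve(A, x)       # unique coordinate vector
reconstructed = A @ c           # c_1 v_1 + c_2 v_2 + c_3 v_3 recovers x
```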
5. We showed that elementary row operations do not change the corange (row space). An alternate proof is in the text on page 9. Informally, we recognize that permuting rows and multiplying a row by a nonzero scalar should not change which vectors we can get as combinations of

the rows. In addition, adding a multiple of one row to another replaces a row by a combination of the rows, so this should not change the row space either.

Formally, we encode the operation on A in matrix notation as B = EA. In fact, we will show that left multiplication by an invertible matrix does not change the corange. We complete the proof by noting that the matrices E encoding elementary row operations are invertible.

c ∈ corng(EA) means c^T = y^T (EA) for some y. Then c^T = (y^T E)A = z^T A, where z^T = y^T E. So corng(EA) ⊆ corng(A).

c ∈ corng(A) means c^T = y^T A for some y. Then c^T = y^T E^{-1} E A = (y^T E^{-1})(EA) = z^T (EA), where z^T = y^T E^{-1}. So corng(A) ⊆ corng(EA).

corng(EA) ⊆ corng(A) and corng(A) ⊆ corng(EA) together imply corng(A) = corng(EA).

This establishes that when PA = LU we have corng(A) = corng(U). Observe that the same is not true for the range. In some of the examples above, each column of U has third coordinate 0, and hence the third coordinate of every vector in rng(U) is 0, while the same is not true for vectors in rng(A). In particular, the columns of A do not have this property.
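This contrast can be checked numerically for the S_1 example: stacking the rows of U under those of A adds nothing new to the row space (the rank does not grow), while placing the columns of U next to those of A does grow the rank, so the column spaces differ. A sketch:

```python
import numpy as np

A = np.array([[1, 1, 1, 1], [2, 3, 4, 5], [3, 4, 5, 6]])  # the matrix for S_1
U = np.array([[1, 1, 1, 1], [0, 1, 2, 3], [0, 0, 0, 0]])  # its U (with P = I)
rank = np.linalg.matrix_rank

# Row spaces agree: stacking the rows of U under A leaves the rank at rank(A).
same_rowspace = bool(rank(np.vstack([A, U])) == rank(A) == rank(U))

# Column spaces differ: adjoining the columns of U raises the rank above rank(A).
same_colspace = bool(rank(np.hstack([A, U])) == rank(A))
```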