Linear Algebra, Spring 2005


Linear Algebra, Spring 2005. Solutions, May 4, 2005.

Problem 4.89 To check for linear independence, write the vectors as the rows of a matrix, reduce the matrix to echelon form, and count the non-zero rows (the rank).

(a)

    [ 1  2 -3  1 ]      [ 1  2 -3  1 ]      [ 1  2 -3  1 ]
    [ 3  7  1 -2 ]  ->  [ 0  1 10 -5 ]  ->  [ 0  1 10 -5 ]
    [ 1  3  7 -4 ]      [ 0  1 10 -5 ]      [ 0  0  0  0 ]

rank = 2 < 3, therefore the vectors are linearly dependent.

(b)

    [ 1  3  1 -2 ]      [ 1  3  1 -2 ]      [ 1  3  1 -2 ]
    [ 2  5 -1  3 ]  ->  [ 0 -1 -3  7 ]  ->  [ 0  1  3 -7 ]
    [ 1  3  7 -2 ]      [ 0  0  6  0 ]      [ 0  0  1  0 ]

rank = 3, therefore the vectors are linearly independent.
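This rank test is easy to automate. The sketch below uses sympy's exact row reduction (an assumption; any exact rank routine works) with the row vectors from parts (a) and (b):

```python
# Rank test for linear independence: write the vectors as rows and
# count the non-zero rows of the echelon form (the rank).
from sympy import Matrix

# (a) three vectors in R^4: rank 2 < 3 rows, so they are dependent
A = Matrix([[1, 2, -3, 1],
            [3, 7, 1, -2],
            [1, 3, 7, -4]])
print(A.rank())  # 2

# (b) rank 3 = number of rows, so they are independent
B = Matrix([[1, 3, 1, -2],
            [2, 5, -1, 3],
            [1, 3, 7, -2]])
print(B.rank())  # 3
```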

Problem 4.90 To solve this, form a linear combination of the vectors u, v and w and set it equal to zero:

au + bv + cw = 0

If the only solution is a = b = c = 0, then the polynomials are linearly independent.

(a) a(t^3 - 4t^2 + 3t + 3) + b(t^3 + 2t^2 + 4t - 1) + c(2t^3 - t^2 - 3t + 5) = 0

Collecting the powers of t:

(a + b + 2c)t^3 + (-4a + 2b - c)t^2 + (3a + 4b - 3c)t + (3a - b + 5c) = 0t^3 + 0t^2 + 0t + 0

This is an identity valid for all t, so the coefficients of each power of t on the left and right sides must be equal:

a + b + 2c = 0
-4a + 2b - c = 0
3a + 4b - 3c = 0
3a - b + 5c = 0

In matrix form the system is

    [  1  1  2 ] [ a ]   [ 0 ]
    [ -4  2 -1 ] [ b ] = [ 0 ]
    [  3  4 -3 ] [ c ]   [ 0 ]
    [  3 -1  5 ]         [ 0 ]

Now reducing the coefficient matrix to echelon form:

    [  1  1  2 ]      [ 1  1  2 ]      [ 1  1   2 ]      [ 1  1  2 ]
    [ -4  2 -1 ]  ->  [ 0  6  7 ]  ->  [ 0  6   7 ]  ->  [ 0  6  7 ]
    [  3  4 -3 ]      [ 0  1 -9 ]      [ 0  0 -61 ]      [ 0  0  1 ]
    [  3 -1  5 ]      [ 0 -4 -1 ]      [ 0  0  22 ]      [ 0  0  0 ]

Therefore

c = 0
6b + 7c = 0  =>  6b = 0  =>  b = 0
a + b + 2c = 0  =>  a = 0

Conclusion: since a = b = c = 0 is the only solution, the polynomials are linearly independent.

(b) As in (a), after writing the linear combination (scalars a, b, c) and setting it equal to the zero function, you get a coefficient matrix (which can be found easily because it is just obtained by arranging the coefficients of each polynomial as the columns of a matrix):

    [  1  1  2 ]
    [ -5 -4 -7 ]
    [ -2 -3 -7 ]
    [  3  4  9 ]

Now row reduce the matrix to echelon form:

    [  1  1  2 ]      [ 1  1  2 ]      [ 1  1  2 ]
    [ -5 -4 -7 ]  ->  [ 0  1  3 ]  ->  [ 0  1  3 ]
    [ -2 -3 -7 ]      [ 0 -1 -3 ]      [ 0  0  0 ]
    [  3  4  9 ]      [ 0  1  3 ]      [ 0  0  0 ]

The last two rows have all entries equal to zero. From the first two rows we can express a and b as functions of c:

b + 3c = 0  =>  b = -3c
a + b + 2c = 0  =>  a - 3c + 2c = 0  =>  a = c

Hence c can be any real number, with b = -3c and a = c. Since there are non-zero solutions for the scalars in the linear combination, the polynomials are linearly dependent.

Alternate solution, using coordinates:

For example, for part (a) write the coordinate vectors of the polynomials with respect to the standard monomial basis {t^3, t^2, t, 1} of the space P_3:

[1, -4, 3, 3], [1, 2, 4, -1], [2, -1, -3, 5]

Now check whether these vectors are linearly independent as vectors in R^4 using the usual method: write them as the rows of a matrix, row reduce, and check the rank of the echelon form. Obviously the calculations will be identical to those shown above.

Problem 4.91 (a) To check the functions f(t) = e^t, g(t) = sin t, h(t) = t^2 for linear independence, write a linear combination of these functions and equate it to the zero function 0 using unknown scalars a, b, c:

af + bg + ch = 0

This is an equation involving equality of functions as vectors, so it must hold point-wise for EVERY value of t (this is how the vector space operations are defined on function spaces). In school math parlance this is called an identity:

af(t) + bg(t) + ch(t) = 0 for all t

To show independence, show that a = b = c = 0. Substituting specific values of t into

ae^t + b sin t + ct^2 = 0

lets us eliminate the scalars one at a time.

Set t = 0: a(e^0) + b(sin 0) + c(0^2) = 0, so a(1) = 0 and a = 0.

So now we have b sin t + ct^2 = 0. Set t = pi: c pi^2 = 0, so c = 0. We are left with b sin t = 0 for all t, so we must have b = 0 as well.

Consequently the three functions are linearly independent, because only the zero linear combination equals the zero function.

(b) To check the functions f(t) = e^t, g(t) = e^(2t), h(t) = t for linear independence, again write a linear combination and equate it to the zero vector (the zero function 0) using unknown scalars a, b, c:

af + bg + ch = 0

This must hold point-wise for EVERY value of t, giving the identity

af(t) + bg(t) + ch(t) = 0 for all t.

We can select specific values of t to eliminate the scalars and show they must be zero.

Set t = 0: a(e^0) + b(e^0) + c(0) = 0, so a + b = 0 and b = -a.

Set t = 1: a(e^1) - a(e^2) + c(1) = 0, i.e. a(e - e^2) + c = 0.  (1)

Set t = -1: a(e^-1) - a(e^-2) + c(-1) = 0, i.e. a(e^-1 - e^-2) - c = 0.  (2)

Add equations (1) and (2): a(e - e^2 + e^-1 - e^-2) = 0.

The value in the parentheses is not zero, therefore a = 0, and since b = -a, b = 0 as well. Going back to equation (1), conclude that c = 0.

Consequently the three functions are linearly independent, because only the zero linear combination equals the zero function.

Problem 4.93 Given linearly independent vectors u, v, w. In each part, S is a set of three vectors formed as combinations of these independent vectors. To test S for linear independence, examine a linear combination of the vectors in S; equivalently, row reduce the matrix of coefficients of each combination with respect to (u, v, w).
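The point-evaluation argument of Problem 4.91 also gives a quick numerical certificate: if af + bg + ch were the zero function, the matrix of values at any three sample points would be singular, so a non-zero determinant proves independence. (The converse fails: a zero determinant at badly chosen points proves nothing.) A small sketch for part (a), using only the standard library:

```python
import math

# values of f = e^t, g = sin t, h = t^2 at three sample points
pts = [0.0, 1.0, 2.0]
funcs = [math.exp, math.sin, lambda t: t * t]
M = [[fn(t) for fn in funcs] for t in pts]

# 3x3 determinant by cofactor expansion along the first row
det = (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
     - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
     + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

print(abs(det) > 1e-9)  # True: e^t, sin t, t^2 are linearly independent
```

Here the determinant simplifies to 4 sin 1 - sin 2, which is clearly non-zero.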

4.93 a S = {u + v - 2w, u - v - w, u + w}. Write the coefficients in matrix form and convert to echelon form:

    [ 1  1 -2 ]      [ 1  1 -2 ]      [ 1  1 -2 ]      [ 1  1 -2 ]
    [ 1 -1 -1 ]  ->  [ 0 -2  1 ]  ->  [ 0 -1  3 ]  ->  [ 0 -1  3 ]
    [ 1  0  1 ]      [ 0 -1  3 ]      [ 0 -2  1 ]      [ 0  0 -5 ]

As rank = 3, S is independent.

4.93 b S = {u + v - 3w, u + 3v - w, v + w}. Write the coefficients in matrix form and convert to echelon form:

    [ 1  1 -3 ]      [ 1  1 -3 ]      [ 1  1 -3 ]
    [ 1  3 -1 ]  ->  [ 0  2  2 ]  ->  [ 0  1  1 ]
    [ 0  1  1 ]      [ 0  1  1 ]      [ 0  0  0 ]

As rank = 2, S is dependent. This means that one of the vectors in S can be written as a linear combination of the other two, therefore S is linearly dependent.

Note: Book question is incorrect for part b.
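Because u, v, w are independent, each set S in Problem 4.93 is independent exactly when its matrix of (u, v, w)-coefficients has full rank. A sketch of the check (sympy assumed):

```python
from sympy import Matrix

# rows are the (u, v, w)-coefficients of the three combinations
Sa = Matrix([[1, 1, -2],   # u + v - 2w
             [1, -1, -1],  # u - v - w
             [1, 0, 1]])   # u + w
Sb = Matrix([[1, 1, -3],   # u + v - 3w
             [1, 3, -1],   # u + 3v - w
             [0, 1, 1]])   # v + w
print(Sa.rank(), Sb.rank())  # 3 2: part a independent, part b dependent
```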

(a)

    [ 1  1  3  1 ]      [ 1  1  3  1 ]      [ 1  1  3  1 ]      [ 1  1  3  1 ]
    [ 1  2  5  2 ]      [ 0  1  2  1 ]      [ 0  1  2  1 ]      [ 0  1  2  1 ]
    [ 1 -1 -1  1 ]  ->  [ 0 -2 -4  0 ]  ->  [ 0  0  0  2 ]  ->  [ 0  0  0  1 ]
    [ 2 -2 -2 -1 ]      [ 0 -4 -8 -3 ]      [ 0  0  0  1 ]
    [ 3  1  5  4 ]      [ 0 -2 -4  1 ]      [ 0  0  0  3 ]

The columns of the original matrix which correspond to columns with pivots in the row reduced echelon form provide our basis: the pivots fall in columns 1, 2 and 4, so W = span(u1, u2, u4) and {u1, u2, u4} is a basis for W. NOTE: Book answer is incorrect for this question.

(b)

    [ 1  2  1  3 ]      [ 1  2  1  3 ]      [ 1  2  1  3 ]
    [ 2  4  3  7 ]      [ 0  0  1  1 ]      [ 0  0  1  1 ]
    [ 1  2  1  3 ]  ->  [ 0  0  0  0 ]  ->
    [ 3  6  2  8 ]      [ 0  0 -1 -1 ]
    [ 1  2 -1  1 ]      [ 0  0 -2 -2 ]

The pivots fall in columns 1 and 3, so W = span(u1, u3) and a required basis is {u1, u3}. NOTE: Book answer is incorrect for this question.
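The casting-out bookkeeping is exactly what sympy's rref reports: its second return value is the tuple of pivot column indices (zero-based). A sketch with the columns of part (a) as reconstructed above:

```python
from sympy import Matrix

# columns are u1, u2, u3, u4 in R^5 (part (a))
M = Matrix([[1, 1, 3, 1],
            [1, 2, 5, 2],
            [1, -1, -1, 1],
            [2, -2, -2, -1],
            [3, 1, 5, 4]])
_, pivots = M.rref()
print(pivots)  # (0, 1, 3): zero-based columns 1, 2 and 4, i.e. basis {u1, u2, u4}
```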

(c)

    [ 1  1  1  1 ]      [ 1  1  1  1 ]
    [ 0  1  2  2 ]      [ 0  1  2  2 ]
    [ 1  2  3  1 ]  ->  [ 0  1  2  0 ]
    [ 0  1  1  1 ]      [ 0  1  1  1 ]
    [ 1  0  1  1 ]      [ 0 -1  0  0 ]

swap rows 2 and 5:

    [ 1  1  1  1 ]      [ 1  1  1  1 ]      [ 1  1  1  1 ]
    [ 0 -1  0  0 ]      [ 0 -1  0  0 ]      [ 0  1  0  0 ]
    [ 0  1  2  0 ]  ->  [ 0  0  2  0 ]  ->  [ 0  0  1  0 ]
    [ 0  1  1  1 ]      [ 0  0  1  1 ]      [ 0  0  0  1 ]
    [ 0  1  2  2 ]      [ 0  0  2  2 ]      [ 0  0  0  0 ]

Every column is a pivot column, so W = span(u1, u2, u3, u4) and the basis for W is {u1, u2, u3, u4}: the four vectors are linearly independent.

(d) Proceed in exactly the same way: row reduce the matrix whose columns are u1, u2, u3, u4. The pivots fall in columns 1, 2 and 3, so W = span(u1, u2, u3) and the basis for W is {u1, u2, u3}.
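Part (c) can be confirmed the same way; with the columns used above, every column turns out to be a pivot column (again a sketch assuming sympy):

```python
from sympy import Matrix

# columns are u1, u2, u3, u4 in R^5 (part (c))
M = Matrix([[1, 1, 1, 1],
            [0, 1, 2, 2],
            [1, 2, 3, 1],
            [0, 1, 1, 1],
            [1, 0, 1, 1]])
print(M.rref()[1])  # (0, 1, 2, 3): all four vectors are independent
```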

Problem 4.98

4.98 a U = {(a, b, c, d) : b - 2c + d = 0}. The constraint that defines U gives the form of any vector (a, b, c, d) in U: a can be any real number, while b, c and d are tied by the equation. If we let b and c be any real numbers, then d has to be

d = 2c - b

We can therefore express any vector in U in the form

(a, b, c, 2c - b) = a(1, 0, 0, 0) + b(0, 1, 0, -1) + c(0, 0, 1, 2)

i.e. a linear combination of the three vectors (1, 0, 0, 0), (0, 1, 0, -1), (0, 0, 1, 2). These three vectors are linearly independent [for completeness, e.g. on a test, this ought to be checked following the method of problem 4.89 above]. Therefore these three vectors provide a basis for U, and U is 3-dimensional.

4.98 b W = {(a, b, c, d) : a = d, b = 2c}. The constraints which define W fix b and d once a and c are chosen: a and c can be any real numbers, with b = 2c and d = a. So any vector in W can be expressed in the form

(a, 2c, c, a) = a(1, 0, 0, 1) + c(0, 2, 1, 0)

i.e. a linear combination of the two vectors (1, 0, 0, 1) and (0, 2, 1, 0). These two vectors are linearly independent because one is not a scalar multiple of the other [that argument is sufficient; with only two vectors there is no need for calculations as in problem 4.89]. Therefore these two vectors provide a basis for W, and W is 2-dimensional.
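A subspace of R^4 cut out by homogeneous constraints is the null space of the constraint matrix, so its dimension is the nullity (4 minus the rank). A sketch for U and W (sympy assumed):

```python
from sympy import Matrix

U = Matrix([[0, 1, -2, 1]])     # b - 2c + d = 0
W = Matrix([[1, 0, 0, -1],      # a - d = 0
            [0, 1, -2, 0]])     # b - 2c = 0

# nullspace() returns a basis of the solution space
print(len(U.nullspace()))  # 3 = dim U
print(len(W.nullspace()))  # 2 = dim W
```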

4.98 c U ∩ W = {(a, b, c, d) : b - 2c + d = 0, a = d, b = 2c}. To follow the same approach as above with three constraints like this, you need: (i) to check whether the three constraints are independent, and (ii) to express them in row reduced echelon form so you can write the expression for the general vector of the subspace. This is not so easy to do by inspection, so write out the constraints as three equations:

b - 2c + d = 0
a - d = 0
b - 2c = 0

Now row reduce these to echelon form [using a matrix for convenience]:

    [ 0  1 -2  1 ]      [ 1  0  0 -1 ]      [ 1  0  0 -1 ]
    [ 1  0  0 -1 ]  ->  [ 0  1 -2  1 ]  ->  [ 0  1 -2  1 ]
    [ 0  1 -2  0 ]      [ 0  1 -2  0 ]      [ 0  0  0 -1 ]

This gives the required form of the constraints, a - d = 0, b - 2c + d = 0, d = 0, from which the general form of a vector in U ∩ W can be written by inspection:

(0, 2c, c, 0) = c(0, 2, 1, 0)

So all vectors in the intersection space are multiples of the vector (0, 2, 1, 0), and {(0, 2, 1, 0)} is therefore a basis. The space is one-dimensional.

Another, simpler approach: a) U, a subspace of R^4, is defined by one constraint [which is automatically independent], therefore dim U = 4 - 1 = 3; b) W is defined by two constraints, which you could show to be independent simply by arguing that one is not a multiple of the other, therefore dim W = 4 - 2 = 2; c) U ∩ W is defined by three independent constraints, as shown above, therefore dim(U ∩ W) = 4 - 3 = 1.

Yet another variation, using the later concept of vector space sums: do a) and b) as above. Then combine the three basis vectors you got for U in

a) and the two you got for W in b), and check linear independence for the five combined vectors, which span U + W = span{U, W}. You will see that four of these are linearly independent, so U + W = R^4 and dim(U + W) = 4. Then conclude:

dim(U ∩ W) = dim U + dim W - dim(U + W) = 3 + 2 - 4 = 1.

Obviously the different approaches involve different amounts of work. You should choose the method based on what the question actually asks for, e.g. does it ask for bases for the subspaces? If not, there is no need to find bases, unless doing so provides the easiest method anyway. The most direct approach for this question would probably be the "another simpler approach" described above.

Problem 4.101

4.101 a Write the polynomials as coordinate vectors with respect to the standard monomial basis of P_n and arrange them as the rows of a matrix:

    [ 1  1  1 ... 1 ]
    [ 0  1  1 ... 1 ]
    [ 0  0  1 ... 1 ]
      .............
    [ 0  0  0 ... 1 ]

Here we have used the basis in the following order: {t^n, t^(n-1), t^(n-2), ..., t, 1}. The order is convenient because we automatically get the matrix of coordinate vectors in echelon form. We can see by inspection that there are n + 1 independent rows, and therefore the coordinate vectors are linearly independent. n + 1 independent coordinate vectors in R^(n+1) must automatically be a basis. Finally, we conclude [by isomorphism] that the original polynomial vectors must also be a basis for P_n.
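For a concrete instance of Problem 4.101 a), take n = 3 with the polynomials 1, 1 + t, 1 + t + t^2, 1 + t + t^2 + t^3 (inferred here from the triangular coordinate matrix above); their coordinate rows with respect to {t^3, t^2, t, 1} form a triangular matrix of full rank (sympy assumed):

```python
from sympy import Matrix

n = 3
# row i holds the coordinates of 1 + t + ... + t^(n-i) w.r.t. {t^n, ..., t, 1}
M = Matrix([[1 if j >= i else 0 for j in range(n + 1)]
            for i in range(n + 1)])
print(M.rank())  # 4 = n + 1, so the polynomials form a basis of P_3
```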

Note the usefulness of the idea of coordinate vectors in this problem. Also, had we written the monomial basis in the other order, we would have obtained a matrix of coordinate vectors that was not automatically in echelon form, and we would have had to row reduce it. So choosing the order of the coordinate basis carefully can save work.

4.101 b The dimension of P_n is n + 1. The given set contains only n polynomials, therefore it cannot be a basis.

Problem 4.102 To solve these, we can use either the row space method or the casting-out algorithm with columns. Since we used casting-out earlier, and since the question does not ask for a subset of the original vectors, we will use the row space method to solve this problem.

There is a typo in question 4.102 a); after some reverse engineering, I think v = t^3 + 3t^2 - t + 4 was intended. We will assume that the problem meant that.

4.102 a Write the polynomials as coordinate vectors using the standard monomial basis and arrange them as the rows of a matrix:

    u [ 1  2 -2  1 ]      [ 1  2 -2  1 ]      [ 1  2 -2  1 ]
    v [ 1  3 -1  4 ]  ->  [ 0  1  1  3 ]  ->  [ 0  1  1  3 ]
    w [ 2  1 -7 -7 ]      [ 0 -3 -3 -9 ]      [ 0  0  0  0 ]

The rank of the row reduced echelon matrix is 2 [two non-zero rows]. Therefore dim(W) = 2. A basis is found by using the coefficients of the first two rows:

{[1, 2, -2, 1], [0, 1, 1, 3]}, corresponding to the polynomial basis {t^3 + 2t^2 - 2t + 1, t^2 + t + 3}.

4.102 b

    u [ 1  1 -3  2 ]      [ 1  1 -3  2 ]      [ 1  1 -3  2 ]      [ 1  1 -3  2 ]
    v [ 2  1  1 -4 ]  ->  [ 0 -1  7 -8 ]  ->  [ 0  1 -7  8 ]  ->  [ 0  1 -7  8 ]
    w [ 4  3 -5  2 ]      [ 0 -1  7 -6 ]      [ 0  0  0  2 ]      [ 0  0  0  1 ]

The rank is 3, therefore dim(W) = 3. A basis is found by using the coefficients of the three rows:

{t^3 + t^2 - 3t + 2, t^2 - 7t + 8, 1}

Problem 4.103 To solve this directly, take a linear combination of the four matrices:

aA + bB + cC + dD = 0

    a [ 1 -5 ]  +  b [ 1  1 ]  +  c [ 2 -4 ]  +  d [ 1 -7 ]  =  [ 0  0 ]
      [ 4  2 ]       [ 1  5 ]       [ 5  7 ]       [ 5  1 ]     [ 0  0 ]

    [ a + b + 2c + d     -5a + b - 4c - 7d ]   [ 0  0 ]
    [ 4a + b + 5c + 5d    2a + 5b + 7c + d ] = [ 0  0 ]

Equating entries in the matrices, you get four equations in the four scalars a, b, c, d: a system with zero right-hand side and coefficient matrix

    [  1  1  2  1 ]
    [ -5  1 -4 -7 ]
    [  4  1  5  5 ]
    [  2  5  7  1 ]

[Alternatively, we can express the four matrices as coordinate vectors with respect to the standard basis for

M_(2,2):

[1, -5, 4, 2], [1, 1, 1, 5], [2, -4, 5, 7], [1, -7, 5, 1]

and check this set of vectors in R^4 for linear independence using the casting-out algorithm with columns. You arrive at the same point this way, but more quickly.]

Now row reduce the matrix to echelon form:

    [  1  1  2  1 ]      [ 1  1  2  1 ]      [ 1  1  2  1 ]
    [ -5  1 -4 -7 ]  ->  [ 0  6  6 -2 ]  ->  [ 0  3  3 -1 ]
    [  4  1  5  5 ]      [ 0 -3 -3  1 ]      [ 0  0  0  0 ]
    [  2  5  7  1 ]      [ 0  3  3 -1 ]      [ 0  0  0  0 ]

The rank is two. Therefore dim(W) = 2. Continuing the casting-out algorithm, we keep the original vectors corresponding to the pivot columns of the reduced matrix, i.e. columns 1 and 2, and throw away the vectors in columns 3 and 4. So the required basis is {A, B}.

Note: This basis consists of matrices from the original set of given matrices, which is not strictly required for this problem (it does not ask for that). You could have used an alternative algorithm (e.g. the row method) in this case. The above solution is not difficult though.
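Problem 4.103 reduces to the same pivot-column computation on the coordinate columns (entries as reconstructed above; sympy assumed):

```python
from sympy import Matrix

# columns are the coordinate vectors of A, B, C, D in R^4
M = Matrix([[1, 1, 2, 1],
            [-5, 1, -4, -7],
            [4, 1, 5, 5],
            [2, 5, 7, 1]])
_, pivots = M.rref()
print(pivots)  # (0, 1): pivots in columns 1 and 2, so {A, B} is a basis and dim W = 2
```

Note that the non-pivot columns record the dependencies explicitly: here C = A + B, which is easy to verify entry by entry.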