Math 7 Treibergs. Third Midterm Exam Practice Problems. November 11.


Find a basis for the subspace spanned by the following vectors.

We put the vectors in as columns, then row reduce and choose the pivot columns. The first and second columns are pivot columns. Thus a basis B for the spanned subspace consists of the first two of the given vectors.

Suppose n ≥ 1 and that the vectors H = {v_1, v_2, ..., v_n} span the n-dimensional vector space V. Show that H is a basis for V.

We already know that H spans V. It remains to argue that the vectors in H are linearly independent, making H a basis. We apply the Spanning Set Theorem, which says that some subset of H is a basis for the nonzero space V. Here is the argument: suppose that the set H is not independent. Then one of the vectors, say v_k ∈ H, is a linear combination of the others and may be removed from H, leaving a smaller set H' that still spans V. So long as more than one vector remains, we may continue removing vectors until the reduced spanning set H' is independent or only one vector remains. By assumption the spanned space is nonzero, so if the spanning set H' consists of a single vector, that vector is nonzero and is a basis for the space. In either case H' is a basis for V. Now every basis has the same number of vectors, namely the dimension n = dim(V). Thus the independent spanning set H' must have n vectors, so H' = H (no vectors were removed), and H is a basis. This is part of the Basis Theorem in the text.
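The pivot-column recipe in the first problem can be sketched with SymPy; the example vectors below are my own, not the ones from the text.

```python
# Sketch of the pivot-column method: the example vectors are my own.
from sympy import Matrix

v1 = Matrix([1, 2, 1])
v2 = Matrix([0, 1, 3])
v3 = Matrix([1, 3, 4])            # v3 = v1 + v2, so it is redundant

A = Matrix.hstack(v1, v2, v3)     # put the vectors in as columns
_, pivots = A.rref()              # rref() also reports the pivot column indices
basis = [A[:, j] for j in pivots] # pivot columns of the ORIGINAL matrix

print(pivots)                     # (0, 1): the first and second columns
```

Note that the basis vectors are taken from the original matrix, not from its reduced echelon form, exactly as in the worked solution above.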

Find the determinant by row reduction. (Text problem 77[])

The row operation of subtracting a multiple of one row from another does not change the determinant, so D may be reduced to upper triangular form. Swapping two rows in the last step multiplies the determinant by -1, and the determinant of an upper triangular matrix is the product of its diagonal entries; multiplying these out evaluates D.

Let A and B be square matrices. Show that even though AB and BA may not be equal, it is always true that det(AB) = det(BA). (Problem 77[] of the text)

This is a simple consequence of the theorem that the determinant of a product of square matrices is the product of the determinants:

det(AB) = det(A) det(B) = det(B) det(A) = det(BA).

Using just the theorem of the section that gives the effect of elementary row operations on the determinant of a matrix, show that if two rows of a square matrix A are equal, then det(A) = 0. The same is true for two columns. Why? (Text problem 77[])

Assume that rows R_i and R_j of A are equal, where i ≠ j. Let E be the elementary row operation matrix that swaps R_i and R_j. Then because the rows are equal, A = EA. Taking determinants we find det(A) = det(EA) = -det(A), because this row operation flips the sign of the determinant. It follows that det(A) = 0, as desired.

Now suppose two columns a_i and a_j are equal, where i ≠ j. If we are allowed to transpose the matrix, then A^T has the same determinant as A, but now A^T has two equal rows a_i^T and a_j^T, and thus has zero determinant by what we showed already. However, we can argue just from the theorem. First observe that if A has a zero row, say R_i, then det(A) = 0. To see this, let E be the elementary row operation that multiplies R_i by 2. By the theorem, this operation doubles the determinant. However A = EA because R_i is still a zero row. Thus det(A) = det(EA) = 2 det(A), which also implies det(A) = 0. Now, if A has two equal columns, let c be a column vector whose entries are zero except the ith, which is 1, and the jth, which is -1. Then Ac = 0, so there is a nontrivial vector c in Nul(A). It follows that A may be row-reduced to an echelon matrix U with a free column, and therefore a zero row. Let E_1, E_2, ..., E_p be the matrices that reduce A to echelon form, so E_p ... E_1 A = U. Take the determinant of this equation. By the theorem, each of the row operations E_i multiplies the determinant by a nonzero constant k_i. It follows that k_p ... k_1 det(A) = det(U) = 0 because U has a zero row. But since the k_i are nonzero, it follows that det(A) = 0 too.

Find bases for the column space, the row space and the null space of the matrix. (Text problem []) Row reducing, we find A ~ U.

The first, third and fourth columns are pivot columns, so they form a basis B_col for the column space. The nonzero rows of the echelon matrix U are a basis B_row for the row space. The basis for the null space consists of solutions to the homogeneous equation Ax = 0 or, what is the same, Ux = 0. Two of the variables are free; solving for the basic variables in terms of the free ones and collecting the coefficients of each free variable writes every solution as a linear combination of two fixed vectors, and these two vectors form the basis B_Nul of Nul(A).

Let A be an m × n matrix. Show that Ax = b has a solution for all b in R^m if and only if the equation A^T x = 0 has only the trivial solution. (Text problem [])

Assume that A^T x = 0 has only the trivial solution. This means that the dimension of the null space of the transpose is dim(Nul(A^T)) = 0. This is the same as the number of free columns of A^T, which means that every column of A^T is a pivot column, so dim(Col(A^T)) = m, the number of columns of A^T. This says that m = dim(Row(A)). In other words, every row of the echelon form of A is a pivot row, and these rows form a basis of Row(A). But then every row of the reduction of the augmented matrix [A, b] to echelon form is a pivot row, so we can solve Ax = b for every b in R^m.

Now assume that we can solve Ax = b for every b in R^m. This means that the row reduction of the augmented matrix [A, b] to echelon form has a pivot in every row (otherwise there is a zero row that says zero equals a linear combination of the b_j's, so that not every b makes a consistent system). But this says that there are m basis elements in Row(A), so dim(Row(A)) = m. Thus the m columns of A^T are linearly independent. It follows that the only dependence relation A^T c = 0 is the trivial one c = 0, thus A^T x = 0 has only the trivial solution.
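Several of the facts above can be spot-checked numerically: the product rule det(AB) = det(BA), the fact that equal rows force a zero determinant, and the solvability criterion of the last problem. A sketch with NumPy, using arbitrary example matrices of my own:

```python
# Numeric spot-checks; the matrices are arbitrary examples of my own.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# det(AB) = det(A) det(B) = det(BA), even though AB != BA in general
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(B @ A))

# Two equal rows force a zero determinant
C = A.copy()
C[2] = C[0]
assert np.isclose(np.linalg.det(C), 0.0)

# Ax = b is solvable for every b in R^m exactly when rank A = m,
# i.e. when Nul(A^T) is trivial
M = rng.standard_normal((3, 5))
m = M.shape[0]
full_row_rank = np.linalg.matrix_rank(M) == m
nul_At_trivial = (m - np.linalg.matrix_rank(M.T)) == 0  # dim Nul(M^T) = m - rank
assert full_row_rank and nul_At_trivial
```

The last two lines use rank to measure the same quantity from both sides, since rank M = rank M^T.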

In P_2, find the change of coordinates matrix from the basis B = {b_1, b_2, b_3} given in the text to the standard basis. Then write t as a linear combination of the polynomials in B. (Problem [] of the text)

The standard basis is C = {1, t, t^2}. By the theorem, the change of basis matrix is given by finding the coordinates of the basis vectors of one basis in terms of the other basis:

P_{C<-B} = [ [b_1]_C, [b_2]_C, [b_3]_C ].

To find the coefficients [t]_B we solve P_{C<-B} [t]_B = [t]_C: row reduce the augmented matrix [P_{C<-B}, [t]_C] and back-substitute for the entries x_1, x_2, x_3 of [t]_B. Then t = x_1 b_1 + x_2 b_2 + x_3 b_3.
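The same computation can be sketched numerically. Polynomials a_0 + a_1 t + a_2 t^2 are stored as coefficient vectors [a_0, a_1, a_2] relative to the standard basis C; the basis B below is a made-up example of my own, not the text's.

```python
# Sketch of a change-of-coordinates computation in P_2.
# The basis B below is a made-up example, not the one from the text.
import numpy as np

b1 = np.array([1.0, 0.0, -1.0])        # 1 - t^2
b2 = np.array([0.0, 1.0, -1.0])        # t - t^2
b3 = np.array([1.0, 1.0, 0.0])         # 1 + t
P_CB = np.column_stack([b1, b2, b3])   # columns are [b_i]_C

# Coordinates of p(t) = t^2 relative to B: solve P_CB [p]_B = [p]_C
p_C = np.array([0.0, 0.0, 1.0])
p_B = np.linalg.solve(P_CB, p_C)

# Check that the combination reproduces t^2
assert np.allclose(P_CB @ p_B, p_C)
print(p_B)
```

Solving the linear system plays the role of the row reduction of the augmented matrix [P_{C<-B}, [p]_C] in the solution above.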

Let B = {x_0, ..., x_6} and C = {y_0, ..., y_6}, where x_k = cos^k t and y_k = cos kt. Let H = span(B). Show that C is another basis for H. Set P = [ [y_0]_B, ..., [y_6]_B ] and calculate P^{-1}. Explain why the columns of P^{-1} are the C-coordinate vectors of x_0, ..., x_6. Then use these coordinate vectors to write trigonometric identities that express powers of cos t in terms of the functions in C. (Text problems [] and [7])

We use the Pythagorean identity sin^2 t + cos^2 t = 1 and the addition formulae

cos(A + B) = cos A cos B - sin A sin B,    sin(A + B) = sin A cos B + cos A sin B.

From these we get the equations expressing the functions of C in terms of those of B:

cos 0t = 1
cos 1t = cos t
cos 2t = cos^2 t - sin^2 t = cos^2 t - (1 - cos^2 t) = 2 cos^2 t - 1
sin 2t = 2 sin t cos t
cos 3t = cos 2t cos t - sin 2t sin t = (2 cos^2 t - 1) cos t - 2 (1 - cos^2 t) cos t = 4 cos^3 t - 3 cos t
cos 4t = 2 cos^2 2t - 1 = 2 (2 cos^2 t - 1)^2 - 1 = 8 cos^4 t - 8 cos^2 t + 1
cos 5t = cos 4t cos t - sin 4t sin t = 16 cos^5 t - 20 cos^3 t + 5 cos t
cos 6t = 2 cos^2 3t - 1 = 2 (4 cos^3 t - 3 cos t)^2 - 1 = 32 cos^6 t - 48 cos^4 t + 18 cos^2 t - 1

These formulas say that each function in C is a linear combination of the vectors in B, so span(C) ⊆ H. However, the functions in C are linearly independent, thus C is a basis for span(C), making it seven dimensional, so span(C) must equal H. Reading off the coefficients, starting from cos 0t = 1 and cos 1t = cos t, we find

P = P_{B<-C} =
[ 1  0  -1   0    1    0    -1 ]
[ 0  1   0  -3    0    5     0 ]
[ 0  0   2   0   -8    0    18 ]
[ 0  0   0   4    0  -20     0 ]
[ 0  0   0   0    8    0   -48 ]
[ 0  0   0   0    0   16     0 ]
[ 0  0   0   0    0    0    32 ]
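Both the multiple-angle identities and the matrix P they determine can be spot-checked numerically before inverting; a sketch (the sample points are arbitrary choices of my own):

```python
# Verify the multiple-angle identities at sample points, and check that
# inverting P recovers a power-reduction formula.
import numpy as np

t = np.linspace(0.1, 3.0, 7)
c = np.cos(t)

assert np.allclose(np.cos(2 * t), 2 * c**2 - 1)
assert np.allclose(np.cos(3 * t), 4 * c**3 - 3 * c)
assert np.allclose(np.cos(4 * t), 8 * c**4 - 8 * c**2 + 1)
assert np.allclose(np.cos(5 * t), 16 * c**5 - 20 * c**3 + 5 * c)
assert np.allclose(np.cos(6 * t), 32 * c**6 - 48 * c**4 + 18 * c**2 - 1)

# Column k of P holds the coefficients of cos(kt) in the powers 1, cos t, ...
P = np.array([[1, 0, -1,  0,   1,   0,  -1],
              [0, 1,  0, -3,   0,   5,   0],
              [0, 0,  2,  0,  -8,   0,  18],
              [0, 0,  0,  4,   0, -20,   0],
              [0, 0,  0,  0,   8,   0, -48],
              [0, 0,  0,  0,   0,  16,   0],
              [0, 0,  0,  0,   0,   0,  32]], dtype=float)

# Column 2 of P^{-1} should encode cos^2 t = 1/2 + (1/2) cos 2t
assert np.allclose(np.linalg.inv(P)[:, 2], [0.5, 0, 0.5, 0, 0, 0, 0])
```

The last assertion previews the power-reduction formulas obtained from P^{-1} below.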

We invert P using the usual augmented matrix [P, I] and row reduce.

Thus we obtain P^{-1}. Because P = P_{B<-C} = [ [y_0]_B, ..., [y_6]_B ], for any x in H we have

P [x]_C = P_{B<-C} [x]_C = [x]_B.

Premultiplying by the inverse, we find P^{-1} [x]_B = [x]_C.

Thus P^{-1} is the unique change of basis matrix whose columns are the basis vectors expressed in coordinates,

P_{C<-B} = P^{-1} = [ [x_0]_C, ..., [x_6]_C ].

In other words, the columns of P^{-1} are the C-coordinate vectors of the x_j's. Expressing the powers of cos t (the functions of B) in terms of the functions of C, we get

cos^2 t = 1/2 + (1/2) cos 2t
cos^3 t = (3/4) cos t + (1/4) cos 3t
cos^4 t = 3/8 + (1/2) cos 2t + (1/8) cos 4t
cos^5 t = (5/8) cos t + (5/16) cos 3t + (1/16) cos 5t
cos^6 t = 5/16 + (15/32) cos 2t + (3/16) cos 4t + (1/32) cos 6t

Determine whether the matrix can be diagonalized. If it can, do so. (Text problem [])

Expanding det(A - λI) along the third column gives the characteristic polynomial of the 4 × 4 matrix A; it factors into linear factors with three distinct real roots, one of which is repeated.

We find the eigenvectors by solving homogeneous systems. For each eigenvalue λ_k we row reduce A - λ_k I to echelon form, assign the free variables, and back-substitute for the basic variables. The first two eigenvalues each yield one independent eigenvector, v_1 and v_2. The repeated eigenvalue leaves two free variables; choosing each free variable equal to 1 in turn, with the other 0, yields two independent eigenvectors v_3 and v_4. With four independent eigenvectors in hand, we build our diagonalizing matrix

P = [v_1, v_2, v_3, v_4] and check that AP = PD, where D is the diagonal matrix of the corresponding eigenvalues. Multiplying out both products confirms AP = PD, so A = PDP^{-1} is diagonalized.

Show that if the n × n matrix A has n linearly independent eigenvectors, then so does A^T. (Use the Diagonalization Theorem.) (Text problem [])

Recall the Diagonalization Theorem, which says that the n × n matrix A is diagonalizable if and only if A has n independent eigenvectors. Since A has n independent eigenvectors, the Diagonalization Theorem implies that A is diagonalizable. Put these vectors in as columns, P = [p_1, ..., p_n]. Then AP = PD, where D is the diagonal matrix of eigenvalues corresponding to the p_i's. From this equation we can see that A^T is diagonalizable as well. Namely, taking the transpose of the equation we find

(AP)^T = P^T A^T = (PD)^T = D^T P^T = D P^T,

because for diagonal matrices D^T = D. Rearranging, Q^{-1} A^T Q = P^T A^T (P^T)^{-1} = D, which says A^T is diagonalized by the matrix Q = (P^T)^{-1}. Again using the Diagonalization Theorem, the fact that A^T is diagonalizable implies that A^T has n linearly independent eigenvectors, as desired. In fact, the eigenvectors are the columns of Q = (P^T)^{-1} and the eigenvalues are the same as for A, namely the diagonal entries of D.

Suppose A is an invertible n × n matrix which is similar to B. Show that B is invertible and A^{-1} is similar to B^{-1}. (Text problem [])

First we show that B is invertible. Being similar to A implies that there is an invertible matrix P such that P^{-1} A P = B. The matrices on the left side are invertible, so we consider C = P^{-1} A^{-1} P. We claim that C is the inverse of B, so that B is invertible. Indeed,

C B = (P^{-1} A^{-1} P)(P^{-1} A P) = P^{-1} A^{-1} (P P^{-1}) A P = P^{-1} A^{-1} A P = P^{-1} P = I,

thus C is a left inverse of B. But a left inverse is an inverse by the Invertible Matrix Theorem. Now taking the inverse of the first equation yields

(P^{-1} A P)^{-1} = P^{-1} A^{-1} (P^{-1})^{-1} = P^{-1} A^{-1} P = B^{-1},

which says that B^{-1} is similar to A^{-1} via the same matrix P, as was to be shown.

For the matrix A given in the text, show that it is similar to a composition of a rotation and a scaling. (Text problem [])

First find the complex eigenvalues and an eigenvector. Setting

0 = det(A - λI) = λ^2 - (tr A) λ + det A,

the eigenvalues are given by the quadratic formula; they form the complex conjugate pair λ = a ± bi with b ≠ 0. Find a complex eigenvector v for λ = a - bi by inspection from (A - λI) v = 0, and consider the real matrix P = [Re v, Im v]. We claim that this matrix establishes the similarity to a rotation composed with a scaling. Indeed, computing both products shows that

AP = P [ a  -b ]
       [ b   a ],

as claimed. Let r = sqrt(a^2 + b^2). Then

P^{-1} A P = [ a  -b ] = r [ cos φ  -sin φ ]
             [ b   a ]     [ sin φ   cos φ ],

where the scaling is dilation by the factor r and the rotation is by the angle φ determined by cos φ = a/r and sin φ = b/r, up to adding integer multiples of 2π.

Let A be an m × n matrix. Show that (Row A^T)^⊥ = Nul A^T and (Col A^T)^⊥ = Nul A.

Each equality of sets is shown by establishing both inclusions ⊆ and ⊇. Technically, vectors in Nul A^T are m × 1 column vectors whereas vectors in (Row A^T)^⊥ are 1 × m row vectors, but we identify these via transpose.

To show (Row A^T)^⊥ ⊆ Nul A^T, choose an arbitrary v ∈ (Row A^T)^⊥. This means v is orthogonal to all vectors in Row A^T. However, this space is spanned by the rows of A^T, which are the transposes of the columns of A, so Row A^T = span{a_1^T, ..., a_n^T}. In particular v is orthogonal to each a_i^T for i = 1, ..., n. Viewing each orthogonality relation as the matrix product of a 1 × m matrix times an m × 1 matrix, a_i^T v^T = 0. But these products are exactly the entries that occur in the matrix multiplication

A^T v^T = [a_1^T; ...; a_n^T] v^T = 0,

showing that v^T ∈ Nul(A^T), so (Row A^T)^⊥ ⊆ Nul A^T, as to be shown. Conversely, if v^T ∈ Nul(A^T), then A^T v^T = 0, which says by the row-column rule for multiplication that a_i^T v^T = 0 for every row a_i^T of A^T. But this says that v is orthogonal to the vectors a_1^T, ..., a_n^T. However, these vectors span the row space of A^T, so v ∈ (Row A^T)^⊥, so Nul A^T ⊆ (Row A^T)^⊥, as to be shown.

To show (Col A^T)^⊥ ⊆ Nul A, choose an arbitrary vector w ∈ (Col A^T)^⊥. The column space of A^T is spanned by the transposes of the rows R_j of A, namely Col A^T = span{R_1^T, ..., R_m^T}. Hence w is orthogonal to these. Viewing each orthogonality relation as the product of a 1 × n matrix with an n × 1 matrix, R_j w = 0 for all j = 1, ..., m. By the row-column rule for multiplication, this says

Aw = [R_1; ...; R_m] w = 0.

Hence w ∈ Nul A, as to be shown, so (Col A^T)^⊥ ⊆ Nul A. Conversely, if w ∈ Nul A, then by the row-column rule for multiplication, R_j w = 0 for all j = 1, ..., m. Thus w is orthogonal to the generating set of Col A^T = span{R_1^T, ..., R_m^T}. It follows that w ∈ (Col A^T)^⊥, as to be shown, so Nul A ⊆ (Col A^T)^⊥.
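The diagonalization and similarity facts from the eigenvalue problems above can be spot-checked numerically; a sketch with NumPy, using a small example matrix of my own:

```python
# Spot-checks of the diagonalization and similarity facts,
# using a small example matrix of my own.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 4.0]])    # distinct eigenvalues 2, 3, 4

evals, P = np.linalg.eig(A)        # columns of P are independent eigenvectors
D = np.diag(evals)

# AP = PD, equivalently A = P D P^{-1}
assert np.allclose(A @ P, P @ D)

# A^T is diagonalized by Q = (P^T)^{-1}, with the same eigenvalues
Q = np.linalg.inv(P.T)
assert np.allclose(np.linalg.inv(Q) @ A.T @ Q, D)

# If B = S^{-1} A S is similar to the invertible A,
# then B^{-1} = S^{-1} A^{-1} S, so A^{-1} is similar to B^{-1}
S = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
B = np.linalg.inv(S) @ A @ S
assert np.allclose(np.linalg.inv(B), np.linalg.inv(S) @ np.linalg.inv(A) @ S)
```

Each assertion mirrors one of the identities derived above, with floating-point tolerance in place of exact equality.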

For the matrix A given in the text, construct a matrix N whose columns form a basis for Nul A, and construct another matrix R whose rows form a basis for Row A. Perform a matrix computation that illustrates a fact from the theorem. (Text problem [])

Row reducing A to echelon form shows that two of the variables are free. Solving for the basic variables in terms of the free ones and collecting the coefficients of each free variable expresses every solution of Ax = 0 as a linear combination of two fixed vectors. The null space, and the matrix N whose columns

are a basis of the null space, are then read off from this general solution. A matrix R whose rows are a basis of the row space is given by the nonzero rows of the echelon form U of A. The theorem says that (Row A)^⊥ = Nul A. This is equivalent to saying that all the basis vectors of Row A, namely the pivotal row vectors U_i of the echelon form, are orthogonal to all the basis vectors of Nul A, namely the columns n_j of N. In terms of matrix multiplication, U_i n_j = 0 for all i and j, which we can check with the single matrix multiplication RN = 0.

Verify the parallelogram law for vectors u and v in R^n:

‖u + v‖^2 + ‖u - v‖^2 = 2‖u‖^2 + 2‖v‖^2.

(Text problem [])

Expanding the norms as dot products,

‖u + v‖^2 + ‖u - v‖^2 = (u + v)·(u + v) + (u - v)·(u - v)
  = (u·u + 2 u·v + v·v) + (u·u - 2 u·v + v·v)
  = 2 u·u + 2 v·v
  = 2‖u‖^2 + 2‖v‖^2.
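The last two problems can be spot-checked numerically as well: the rows of a matrix annihilate a basis of its null space, and the parallelogram law holds for random vectors. Example data below are my own.

```python
# Spot-checks: R N = 0 illustrating (Row A)^perp = Nul A, and the
# parallelogram law.  Example data of my own.
import numpy as np
from sympy import Matrix

A_s = Matrix([[1, 2, 3, 4],
              [2, 4, 1, 0]])        # independent rows: a basis for Row A
for n_j in A_s.nullspace():         # columns n_j form a basis N of Nul A
    assert A_s * n_j == Matrix.zeros(2, 1)   # each product R n_j is zero

# Parallelogram law: |u+v|^2 + |u-v|^2 = 2|u|^2 + 2|v|^2
rng = np.random.default_rng(1)
u = rng.standard_normal(5)
v = rng.standard_normal(5)
lhs = np.linalg.norm(u + v) ** 2 + np.linalg.norm(u - v) ** 2
rhs = 2 * np.linalg.norm(u) ** 2 + 2 * np.linalg.norm(v) ** 2
assert np.isclose(lhs, rhs)
```

SymPy is used for the null-space basis so the orthogonality check is exact rather than approximate.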