
Mathematics, Spring Lecture (Wilson) Final Exam, May. ANSWERS

Problem 1 (5 points) (a) There are three kinds of elementary row operations and associated elementary matrices. Describe what each kind of operation does when applied to a matrix. ANSWER: The three types of elementary row operations are: (i) interchange two rows, (ii) multiply a row by some non-zero real number, and (iii) add a multiple of one row to a different row. (b) Why is every elementary matrix non-singular? Give a proof. Your proof should make use of part (a) and should not make use of the determinant. ANSWER: An elementary matrix is the result of applying an elementary row operation to an appropriately sized identity matrix, and multiplication by that matrix effects the corresponding row operation. Each of the three types of elementary row operations can be undone: (i) interchanging the same two rows puts the matrix back as it was, (ii) multiplying the same row by 1/(the non-zero real number) puts the matrix back as it was, and (iii) subtracting the same multiple of the same row from the same different row puts the matrix back as it was. So each elementary row operation can be undone by another elementary row operation. For any elementary matrix, consider the row operation it carries out. Now look at the elementary row operation that undoes that. Finally, take the elementary matrix that corresponds to the undoing operation. That elementary matrix must be the inverse of the given elementary matrix, so the given elementary matrix is non-singular. (c) Prove that every non-singular matrix can be written as a product of elementary matrices. You may assume that the inverse of an n x n matrix A can be found, if it exists, by row-reducing a matrix composed of A and I_n side-by-side. ANSWER: If we write the matrix [A | I_n] and row reduce it we will get [I_n | A^(-1)].
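The row reduction of [A | I_n] just described can be sketched concretely. The routine below (a minimal illustration, not from the exam; the matrix A used is an arbitrary invertible example) row-reduces A to the identity while recording each row operation as an elementary matrix, so that E_k ... E_2 E_1 A = I and hence A is the product of the inverses of the E_i, exactly as in the proof. It assumes A is invertible.

```python
# Sketch of Problem 1(c): record the row operations reducing A to I as
# elementary matrices. A is a hypothetical invertible example.
from fractions import Fraction

def identity(n):
    return [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def elementary_factors(A):
    """Return elementary matrices E1, ..., Ek (in the order applied) with
    Ek ... E2 E1 A = I, so A = E1^-1 E2^-1 ... Ek^-1 (A assumed invertible)."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    factors = []

    def do(op):
        op(M)                     # perform the row operation on M ...
        E = identity(n)
        op(E)                     # ... and record it as I with the same op applied
        factors.append(E)

    for c in range(n):
        if M[c][c] == 0:          # type (i): interchange two rows
            r = next(i for i in range(c + 1, n) if M[i][c] != 0)
            def swap(X, i=c, j=r):
                X[i], X[j] = X[j], X[i]
            do(swap)
        if M[c][c] != 1:          # type (ii): scale a row by a non-zero number
            def scale(X, i=c, k=1 / M[c][c]):
                X[i] = [k * v for v in X[i]]
            do(scale)
        for r in range(n):        # type (iii): add a multiple of another row
            if r != c and M[r][c] != 0:
                def add(X, i=r, j=c, k=-M[r][c]):
                    X[i] = [a + k * b for a, b in zip(X[i], X[j])]
                do(add)
    return factors

A = [[2, 1], [4, 3]]              # hypothetical invertible matrix
Es = elementary_factors(A)
# multiplying Es[-1] ... Es[0] against A recovers the identity matrix
```

Multiplying the recorded factors back against A and checking that the product is I verifies the factorization; inverting each factor then writes A itself as a product of elementary matrices.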
Since the row reduction was accomplished by a sequence of elementary row operations, and each row operation is effected by multiplying by an elementary matrix, we have that A^(-1) = E_k E_(k-1) ... E_2 E_1 I_n for some elementary matrices E_1, ..., E_k (here I_n is itself the elementary matrix corresponding to multiplying the first row by 1). Hence A^(-1) is a product of elementary matrices. But as noted in (b) the inverse of each elementary matrix is itself an elementary matrix, so A = (A^(-1))^(-1) = I_n^(-1) E_1^(-1) E_2^(-1) ... E_k^(-1) is a product of elementary matrices.

Problem 2 (6 points) For each of the following sets and operations, tell whether it is or is not a real vector space. If it is not, show a property a vector space must have that this one fails. (Be specific and make clear why this example fails to have that property.) If it is a vector space, show how you know it has the closure properties for addition and scalar multiplication (i.e., that the sum and scalar product defined for it always produce results within the set); you do not have to show that it satisfies the other vector space axioms. (a) The set of 3-element real column vectors, with operations defined by (x, y, z) + (u, v, w) = (x - u, y - v, z - w)

and (for real numbers r) r(x, y, z) = (rx, ry, rz). ANSWER: This is not a vector space. As one example of how it fails: the column vector (0, 0, 0) is the only thing that works as a zero vector on the right, since (x, y, z) + (0, 0, 0) = (x, y, z), but it does not work on the left. Or you could point out that addition is not commutative. (b) Let {a_n} denote a sequence of real numbers, such as the sequence a_n = n, which could be partially written out as 1, 2, 3, 4, .... Let V be the set of those sequences for which only finitely many terms are non-zero. For example, that sequence a_n = n would not be in V, but if f is any polynomial, the sequence a_n = f^(n)(0) consisting of the derivatives of f evaluated at 0 would be in V, since past the k-th derivative any polynomial of degree k gives 0. Define addition of two sequences termwise, i.e. {a_n} + {b_n} = {a_n + b_n}, and scalar multiplication similarly, r{a_n} = {r a_n}. ANSWER: This is a vector space. We need to show it is closed under addition and scalar multiplication, i.e., if we add two sequences each of which has only finitely many non-zero terms, do we get a sequence that has only finitely many non-zero terms, and similarly for r{a_n}. Given {a_n} and {b_n} in V, there is some number N_1 such that whenever n > N_1, a_n = 0, and there is some number N_2 such that whenever n > N_2, b_n = 0. Let N be the larger of N_1 and N_2. Then whenever n > N, a_n + b_n = 0 + 0 = 0, so {a_n} + {b_n} is in V. And whenever n > N_1, r a_n = r * 0 = 0, so r{a_n} is in V.

Problem 3 (8 points) Each part of this problem gives you a vector space V and a set S of vectors in V. Tell whether the given set of vectors is linearly dependent or independent, and whether the given set spans V. Be sure to give evidence justifying your answers. (a) V is the subspace of R^3 consisting of the vectors whose first and third entries are equal, and S is a set of three vectors in V. ANSWER: These span V but they are not linearly independent; they are linearly dependent.
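Both questions in (a), spanning and independence, come down to a rank computation, which can be sketched numerically. The exam's three vectors did not survive transcription, so the vectors below are hypothetical members of V = {(x, y, x)} chosen to span it: the set spans V exactly when its rank equals dim V = 2, and is independent exactly when its rank equals the number of vectors.

```python
# Rank check in the spirit of Problem 3(a), with hypothetical vectors.
from fractions import Fraction

def rank(rows):
    """Row-reduce a list of vectors in exact arithmetic and count pivots."""
    M = [[Fraction(x) for x in r] for r in rows]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]       # normalize the pivot row
        for i in range(len(M)):
            if i != r and M[i][c] != 0:          # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

S = [(1, 0, 1), (1, 1, 1), (2, 1, 2)]   # hypothetical vectors with first = third entry
# rank(S) == 2 == dim V, so they span V; but three vectors of rank 2
# are linearly dependent.
```

The same function answers Problem 3(b) and (c) if the polynomials or matrices are first written out as coordinate vectors with respect to a basis.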
Note that by subtracting the first vector from the second you can get one vector of a basis for V, and that by subtracting twice that from the first and then scaling you can get another; and any vector in V, having equal first and third entries, is of the form (x, y, x), which is a combination of those two. So any vector in V is a linear combination of linear combinations of these vectors and so is a linear combination of these vectors, i.e. the vectors span V. There are many different ways to show

they are linearly dependent. For example, you could solve for coefficients and find a non-trivial linear combination of the three that equals the zero vector. Or you could observe that the two vectors found above are a basis for V, and so no linearly independent set in V can have more than two vectors. (b) V is the subspace of P_3 consisting of the polynomials p(x) satisfying p(0) = 0. S is the set of three polynomials {x^2 + x, x^2 - x, x^3}. ANSWER: These three polynomials are linearly independent and they span V. Any polynomial in V is of the form ax^3 + bx^2 + cx. A combination of the three polynomials in S, say r(x^2 + x) + s(x^2 - x) + t x^3 = t x^3 + (r + s)x^2 + (r - s)x, gives ax^3 + bx^2 + cx if and only if t = a, r + s = b, and r - s = c, which have unique solutions for any values of a, b, and c. Hence the polynomial ax^3 + bx^2 + cx can be written as a linear combination of the members of S in one and only one way, i.e. the members of S are a basis for V. (c) V is the set of symmetric matrices, and S is a set of two matrices. ANSWER: These cannot span V, since V has dimension four and there are only two vectors. But they are linearly independent: if a times the first plus b times the second is the zero matrix, comparing entries gives equations from which you can get that a = b = 0.

Problem 4 (6 points) A problem on our second midterm said: Find a basis for the subspace W of R^n spanned by four given vectors. It turned out that a basis consisted of two vectors. (a) Could there be a basis for W that (i) consisted of some of those four vectors, and (ii) was orthogonal (using the standard dot product on R^n as the inner product)? Why or why not? You do not need to find such a basis, if it exists, but you must give reasons for your answer. ANSWER: No: any basis for W would have to consist of two vectors, since one basis did. But no two vectors from these four have dot product equal to zero, so no two of them are orthogonal. (b) Could there be an orthogonal basis for W if we did not require the members of the basis to be some of those four vectors? Why or why not? You do not need to find such a basis, if it exists, but you must give reasons for your answer.
ANSWER: Yes: Pick some pair from these four vectors that are linearly independent, e.g. the first two, and apply Gram-Schmidt.
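The check in (a), that no two of the four spanning vectors are orthogonal, is just a pass over all pairs of dot products. The exam's four vectors did not survive transcription, so the vectors below are hypothetical stand-ins chosen to span a 2-dimensional subspace with no orthogonal pair among them.

```python
# Pairwise orthogonality check in the spirit of Problem 4(a),
# with hypothetical vectors.
import itertools

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

vectors = [            # hypothetical: all lie in one plane of R^4
    (1, 1, 0, 0),
    (1, 2, 0, 0),
    (2, 3, 0, 0),
    (3, 5, 0, 0),
]

orthogonal_pairs = [(u, v) for u, v in itertools.combinations(vectors, 2)
                    if dot(u, v) == 0]
# an empty list means no orthogonal basis can be chosen from these vectors,
# which is the situation in part (a)
```

Part (b) is then the observation that Gram-Schmidt, applied to any independent pair, manufactures an orthogonal basis that need not consist of the original vectors.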

Problem 5 (8 points) Let V be the set of continuous functions defined on [-1, 1] and taking on real values. Define a function L on V by L(f) = the integral from -1 to 1 of f(t) dt, for every f in V. (a) Show that L is a linear transformation from V to R. ANSWER: The function L certainly takes members of V and produces numbers, since that definite integral will exist for a continuous function. So we just have to show L(f + g) = L(f) + L(g) and L(rf) = rL(f) for any functions f and g in V and any number r. Using properties of integrals, the integral of f(t) + g(t) is the integral of f(t) plus the integral of g(t), so L(f + g) = L(f) + L(g); and the integral of r f(t) is r times the integral of f(t), so L(rf) = rL(f). (b) What is the kernel of L? Give some examples of functions in the kernel, and describe in terms of the graph of f what functions f are in the kernel. ANSWER: The kernel of L will be the functions f whose integral over [-1, 1] is 0. One such function is f(x) = x. Another is f(x) = sin(pi x). In general a continuous function f will be in the kernel if and only if its graph on [-1, 1] encloses as much area above the x-axis as below. (c) What is the range of L? Tell which members of R are in the range. ANSWER: The range is the set of all real numbers R. Thinking in terms of area, as in part (b), for any number r we can connect the point (-1, 0) to the point (1, r) by a straight line and form a triangle with base 2 and height either r (if r is positive) or -r, so the area (with the appropriate sign attached), and hence the integral, will be (1/2)(2)(r) = r, where the function f has that straight line for its graph. (If you don't want to think in pictures, let f(t) = (r/2)(t + 1) and compute the integral.)

Problem 6 (5 points) Let S be the basis for R^3 consisting of v_1, v_2, and v_3, and let T be the basis for R^2 consisting of w_1 and w_2. Suppose L is a linear transformation from R^3 to R^2 such that the matrix representing L with respect to S and T is a given 2 x 3 matrix. (Comment, not printed on the exam: this is a problem from the textbook, one of the problems assigned on the first day of class...) (a) Compute [L(v_1)]_T, [L(v_2)]_T, and [L(v_3)]_T.
ANSWER: If we were computing the matrix representing L, we would fill in [L(v_1)]_T as the first column, [L(v_2)]_T as the second column, and [L(v_3)]_T as the third column. Hence the answers are just the columns of the given matrix. (b) Compute L(v_1), L(v_2), and L(v_3). ANSWER: Since we know their coordinates with respect to T, from (a), we can find the vectors themselves as linear combinations of the vectors w_1 and w_2 making up T: each L(v_i) is the combination of w_1 and w_2 whose coefficients are the entries of the i-th column of the matrix.
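The computations in (a) and (b) can be sketched concretely. The exam's 2 x 3 matrix and the basis T were lost in transcription, so M, w1, and w2 below are hypothetical stand-ins; the mechanics are the same: column j of M is [L(v_j)]_T, and L(v_j) itself is recovered as the combination of w_1 and w_2 with those coordinates.

```python
# Recovering L(v_j) from the matrix of L with respect to bases S and T.
# M, w1, w2 are hypothetical; only the procedure mirrors Problem 6.

M = [[1, 2, 0],             # hypothetical matrix of L with respect to S and T
     [-1, 1, 3]]
w1, w2 = (1, 2), (1, -1)    # hypothetical basis T of R^2

def column(M, j):
    return [row[j] for row in M]

def from_T_coords(c):
    """Turn coordinates [a, b]_T back into the vector a*w1 + b*w2."""
    a, b = c
    return tuple(a * x + b * y for x, y in zip(w1, w2))

L_of_v = [from_T_coords(column(M, j)) for j in range(3)]
# e.g. [L(v_1)]_T is the first column (1, -1), so L(v_1) = 1*w1 - 1*w2
```

Part (c) runs the same machinery in the other direction: express the input vector in S-coordinates, multiply by M, then convert the resulting T-coordinates back to a vector.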

(c) Compute L applied to a given vector of R^3. ANSWER: First we find the coordinates of this vector with respect to S: writing it as a v_1 + b v_2 + c v_3 and comparing entries gives three equations determining a, b, and c. Now we can multiply the resulting coordinate column by the given matrix to get the coordinates, with respect to T, of the vector we want. So the answer is the corresponding combination of w_1 and w_2.

Problem 7 (6 points) We defined similarity of n x n matrices by: an n x n matrix B is similar to an n x n matrix A if there is some nonsingular matrix P such that B = P^(-1)AP. Prove that similarity is an equivalence relation, i.e.: (i) for any n x n matrix A, A is similar to A; (ii) if B is similar to A, then A is similar to B; (iii) if B is similar to A and C is similar to B, then C is similar to A. ANSWER: For (i): Let P = I_n, so P^(-1) is also I_n. Then A = I_n^(-1) A I_n = P^(-1)AP, so A is similar to A. For (ii): If B is similar to A, there is some matrix P such that B = P^(-1)AP. Let Q = P^(-1), so Q^(-1) = P. Then multiplying on the right by P^(-1) and on the left by P we get A = P B P^(-1) = Q^(-1)BQ. So by our definition A is similar to B. For (iii): If B is similar to A, B = P^(-1)AP for some P. If C is similar to B, C = Q^(-1)BQ for some Q. Then C = Q^(-1)P^(-1)AP Q = (PQ)^(-1) A (PQ) and we are through.

Problem 8 (8 points) Let A be the given matrix. (a) Compute the adjoint adj(A). ANSWER: The adjoint is the transpose of the matrix of cofactors. It will be a matrix which has in row i and column j the value (-1)^(i+j) times the determinant of the matrix we would get by deleting the j-th row and i-th column of A. In the upper left position, where i = j = 1, for example, we compute (-1)^2 times the determinant of the submatrix obtained by deleting the first row and first column. Continuing in this fashion for each position, we get the full adjoint. (b) Compute the product adj(A) times A. ANSWER:

(During the exam several people asked if that should be A adj(A), i.e. the matrices in the opposite order. Since you have in front of you two matrices you could multiply them in either order, so I was not quite sure why the question arose. My suggestion was to try both ways and compare: since the product is either the zero matrix (if A were singular) or det(A) I, i.e. adj(A) is a number times A^(-1), and any matrix commutes with its inverse, you are bound to get the same answer in either order.) We know we should get a diagonal matrix, so this gives a check of our work in (a). Multiplying either adj(A) A or A adj(A), we get the same diagonal matrix. (c) Use your answer to (b) to find the determinant of A. ANSWER: We really know more: the product should be the determinant of A times the identity matrix, so the determinant is the common value we found on the diagonal.

Problem 9. Let A be the given 3 x 3 matrix, whose two eigenvalues are given. Show that A is diagonalizable, and find a matrix P such that P^(-1)AP is diagonal. ANSWER: We find eigenvectors associated with the given eigenvalues. For the repeated eigenvalue we need the non-zero solutions of the corresponding homogeneous system. Row reducing, or solving any other way, we get a two-dimensional solution space, so v_1 and v_2, spanning it, are two linearly independent eigenvectors going with that eigenvalue. For the other eigenvalue we solve its homogeneous system; the solutions are all multiples of a single vector, so the eigenvectors are the non-zero multiples of v_3. Now we know that eigenvectors going with distinct eigenvalues are linearly independent, and A is 3 x 3, so we know that A is diagonalizable and we can make a matrix P that diagonalizes A by using as the columns of P three linearly independent eigenvectors. Hence one choice for P is the matrix with columns v_1, v_2, and v_3.

Problem 10. Let A be the given 2 x 2 matrix.

(a) What is the characteristic polynomial of A? ANSWER: The characteristic polynomial will be the determinant of lambda I - A, which is (lambda - 3)(lambda - 4) - 6 = lambda^2 - 7 lambda + 12 - 6 = lambda^2 - 7 lambda + 6. (b) Find the eigenvalues of A. ANSWER: You could use the quadratic formula, but the characteristic polynomial factors nicely as (lambda - 1)(lambda - 6), so the roots, and hence the eigenvalues, must be lambda = 1 and lambda = 6. (c) For each eigenvalue find the eigenvectors of A. ANSWER: For lambda = 1 we solve (I - A)x = 0 and get for the eigenvectors the non-zero multiples of a single vector. For lambda = 6 we solve (6I - A)x = 0 and get for the eigenvectors the non-zero multiples of another single vector.

Problem 11 (6 points) Let V be the real vector space of all polynomials with real coefficients (not restricted as to degree). Define a function on pairs of polynomials by (p(t), q(t)) = the integral from 0 to 1 of p(t) q(t) dt, for polynomials p and q. This does give an inner product on V. (a) What is the magnitude ||p(t)|| of the vector (polynomial) p(t) = t^2 - 1? ANSWER: The square of the magnitude, ||p(t)||^2, will be the integral from 0 to 1 of (t^2 - 1)^2 dt, which is t^5/5 - 2t^3/3 + t evaluated from 0 to 1, i.e. 1/5 - 2/3 + 1 = 8/15. So ||p(t)|| = sqrt(8/15). (b) What is cos(theta) if theta is the angle between p(t) = t - 1 and q(t) = t + 1? ANSWER: cos(theta) = (p(t), q(t)) / (||p(t)|| ||q(t)||). (p(t), q(t)) = the integral from 0 to 1 of (t - 1)(t + 1) dt = the integral of (t^2 - 1) dt = t^3/3 - t evaluated from 0 to 1 = -2/3. ||p(t)||^2 = the integral of (t - 1)(t - 1) dt = the integral of (t^2 - 2t + 1) dt = 1/3 - 1 + 1 = 1/3, hence ||p(t)|| = 1/sqrt(3). ||q(t)||^2 = the integral of (t + 1)^2 dt = 1/3 + 1 + 1 = 7/3, hence ||q(t)|| = sqrt(7/3). Thus cos(theta) = (-2/3) / ((1/sqrt(3)) sqrt(7/3)) = -2/sqrt(7).

Problem 12 (8 points) Let W be the subspace of R^n spanned by two given vectors.

(a) Use the Gram-Schmidt process to find an orthogonal basis for W. ANSWER: If we call those vectors u_1 and u_2, we use for our basis v_1 = u_1 and v_2 = u_2 - ((u_2 . v_1)/(v_1 . v_1)) v_1. Carrying out the arithmetic gives our orthogonal basis {v_1, v_2}. (Note those are indeed orthogonal: their dot product is zero.) (b) Find an orthonormal basis for W. ANSWER: We can use those two vectors after correcting their magnitudes: using v_i / ||v_i|| in place of v_i, for i = 1 and 2, gives an orthonormal basis.
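The two-vector Gram-Schmidt step in (a) can be sketched directly. The exam's spanning vectors did not survive transcription, so u1 and u2 below are hypothetical; exact rational arithmetic keeps the orthogonality check exact.

```python
# Two-vector Gram-Schmidt as in Problem 12(a): v1 = u1,
# v2 = u2 - ((u2.v1)/(v1.v1)) v1. The inputs u1, u2 are hypothetical.
from fractions import Fraction

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(u1, u2):
    v1 = u1
    coef = Fraction(dot(u2, v1), dot(v1, v1))    # projection coefficient
    v2 = tuple(b - coef * a for a, b in zip(v1, u2))
    return v1, v2

u1 = (1, 1, 0)        # hypothetical spanning vectors for W
u2 = (1, 0, 1)
v1, v2 = gram_schmidt(u1, u2)
# dot(v1, v2) == 0: the resulting basis is orthogonal, as in part (a)
```

For part (b), dividing each v_i by its magnitude sqrt(v_i . v_i) turns the orthogonal basis into an orthonormal one.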