Exam questions with full solutions

MH11 Linear Algebra II May 1

QUESTION 1

Let C be the set of complex numbers.

(i) Consider C as an R-vector space with the operations of addition of complex numbers and scalar multiplication by real numbers. Prove that the set S = {1 + i, 1 − i, √2} ⊂ C is linearly dependent.

(ii) Show that the same set S = {1 + i, 1 − i, √2} ⊂ C is linearly independent if C is considered as a Q-vector space, that is, the addition is the usual addition of complex numbers and the scalar multiplication is multiplication by rational numbers.

Solution

(i) We just need to explicitly construct a zero linear combination with real coefficients such that not all of the coefficients are zeroes:

    1 · (1 + i) + 1 · (1 − i) − √2 · √2 = 0.

(i′) An alternative solution: C is 2-dimensional as an R-vector space, and hence any three elements are linearly dependent.

(i″) One more alternative solution: suppose that a(1 + i) + b(1 − i) + c√2 = 0, where a, b, c ∈ R. It means that a + b + c√2 = 0 and a − b = 0. Applying row reduction, we can find that there exists a nonzero solution for a, b, c.

(ii) Assuming that a(1 + i) + b(1 − i) + c√2 = 0, where a, b, c ∈ Q, we need to show that a = b = c = 0. First, we have

    a(1 + i) + b(1 − i) + c√2 = (a + b + c√2) + (a − b)i = 0,

and thus a + b + c√2 = 0 and a − b = 0. Substituting a = b into the first equation, we get 2a + c√2 = 0, or c = −√2 a. If a ≠ 0, then c would be irrational, which means a = b = 0 and therefore c = −√2 · 0 = 0. :)
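
Remark (not part of the exam solution): the explicit dependence over R is easy to sanity-check numerically, for instance in Python/NumPy:

    # Verify the real linear combination 1*(1+i) + 1*(1-i) - sqrt(2)*sqrt(2) = 0.
    import numpy as np

    s = [1 + 1j, 1 - 1j, np.sqrt(2)]
    coeffs = [1.0, 1.0, -np.sqrt(2)]          # real coefficients, not all zero
    total = sum(c * v for c, v in zip(coeffs, s))
    print(abs(total) < 1e-12)                 # True: S is linearly dependent over R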

QUESTION 2

Let M₂(R) be the R-vector space of 2 × 2 matrices with real entries and let W ⊂ M₂(R) be the subset of all matrices X satisfying both of the two conditions X = Xᵗ and tr X = 0 (symmetric traceless matrices). (15 marks)

(i) Show that W is a subspace of M₂(R).
(ii) Find a basis and the dimension of W.
(iii) Show that M₂(R) = W ⊕ W′, where W′ ⊂ M₂(R) is the space of matrices whose first column has both entries equal to zero.

Solution

First, let's verify that W is a subspace:

- Let 0 be the zero matrix. Then 0 ∈ W since 0ᵗ = 0 and tr 0 = 0.
- Let A, B ∈ W. Then (A + B)ᵗ = Aᵗ + Bᵗ = A + B and tr(A + B) = tr A + tr B = 0 + 0 = 0, and hence A + B ∈ W.
- Let A ∈ W and let k ∈ R. Then (kA)ᵗ = kAᵗ = kA and tr(kA) = k tr A = k · 0 = 0, and hence kA ∈ W.

Now consider an arbitrary matrix X = [a b; c d]. It is symmetric if and only if b = c and traceless if and only if a + d = 0. Therefore an arbitrary element of the space W is a matrix of the form

    A = [a b; b −a].

Obviously, dim W = 2 (because there are two free parameters) and a possible basis is

    [1 0; 0 −1],  [0 1; 1 0].

An arbitrary element of the space W′ is B = [0 x; 0 y]. To verify a direct sum decomposition, we need to check two things. First, suppose that some matrix A = B belongs to W ∩ W′ (with A and B written in the forms above). Equating the entries in the first column, we get a = b = 0 and thus A = B = 0. Second, we need to show that any matrix can be expressed as a sum of an element of W and an element of W′:

    [c₁₁ c₁₂; c₂₁ c₂₂] = [c₁₁ c₂₁; c₂₁ −c₁₁] + [0 c₁₂ − c₂₁; 0 c₂₂ + c₁₁].

Alternatively, to verify that W + W′ = M₂(R), notice that dim W = dim W′ = 2. Now dim(W + W′) = dim W + dim W′ − dim(W ∩ W′) = 2 + 2 − 0 = 4, and hence W + W′ is the whole space because dim M₂(R) = 4. :)
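
Remark (not part of the exam solution): the explicit decomposition above can be sanity-checked numerically; a minimal NumPy sketch, using an arbitrary test matrix C:

    import numpy as np

    C = np.array([[1.0, 2.0], [3.0, 4.0]])                 # arbitrary test matrix
    A = np.array([[C[0, 0], C[1, 0]],
                  [C[1, 0], -C[0, 0]]])                    # symmetric traceless part
    B = C - A                                              # remainder
    print(np.allclose(A, A.T), np.isclose(np.trace(A), 0)) # True True: A is in W
    print(np.allclose(B[:, 0], 0))                         # True: B has zero first column, so B is in W'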

QUESTION 3

Let V, W be vector spaces over C and let T : V → W be a linear transformation.

(i) Suppose that vectors v₁, …, v_k ∈ V are linearly dependent. Prove or disprove: T(v₁), …, T(v_k) are also always linearly dependent.
(ii) Suppose that vectors u₁, …, u_l ∈ V are linearly independent. Prove or disprove: T(u₁), …, T(u_l) are also always linearly independent.

Solution

(i) If the vectors v₁, …, v_k ∈ V are linearly dependent, then there exists a linear dependence, that is, a nontrivial zero linear combination a₁v₁ + ⋯ + a_k v_k = 0, where a₁, …, a_k ∈ C and not all of the coefficients a₁, …, a_k are zeroes. Then we have

    a₁T(v₁) + ⋯ + a_k T(v_k) = T(a₁v₁ + ⋯ + a_k v_k) = T(0) = 0,

and hence the vectors T(v₁), …, T(v_k) are also linearly dependent.

(ii) Here, the vectors T(u₁), …, T(u_l) might be linearly dependent even if u₁, …, u_l ∈ V are linearly independent. Here is a simple counterexample: V = W = C¹, T(z) = 0 for all z, l = 1, u₁ = 1. :)
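
Remark (not part of the exam solution): a nonzero map can also destroy independence; a quick numeric illustration with a projection, chosen here purely for illustration:

    import numpy as np

    T = np.array([[1, 0], [0, 0]])                # projection onto the first coordinate
    u1, u2 = np.array([1, 0]), np.array([0, 1])   # linearly independent vectors
    im = np.column_stack([T @ u1, T @ u2])        # their images under T
    print(np.linalg.matrix_rank(im))              # 1 < 2: the images are linearly dependent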

QUESTION 4

Let F be one of the fields Q, R, or C, let V be a vector space over F, and let f : V → F be a linear transformation. Further, assume that v ∉ N(f), where N(f) is the nullspace of f. Prove that V = span{v} ⊕ N(f).

Solution

As always, to verify a direct sum decomposition, we need to check two things.

First, in order to show that span{v} ∩ N(f) = {0}, suppose that u ∈ span{v} ∩ N(f). Since u ∈ span{v}, we conclude that u = av for some a ∈ F. Since u ∈ N(f), we have 0 = f(u) = f(av) = a f(v). Besides, we know that v ∉ N(f), which means f(v) ≠ 0. Therefore a = 0 and u = 0.

Second, we need to show that an arbitrary element w of the space V can be expressed as a sum of an element of span{v} and an element of N(f). Let's try to write it down explicitly:

    w = av + (w − av),

where av ∈ span{v} automatically and we need w − av ∈ N(f), which means 0 = f(w − av) = f(w) − a f(v), that is, a = f(w)/f(v), which makes sense since f(v) ≠ 0. Thus we get

    w = (f(w)/f(v)) v + (w − (f(w)/f(v)) v),

where the first summand lies in span{v} and the second lies in N(f), which completes the proof. :)

Remark: It is essential that the space V might be infinite-dimensional, and hence considering bases doesn't make sense.
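
Remark (not part of the exam solution): the decomposition can be tested numerically; the functional f below (coordinate sum on R³) and the vectors are just illustrative choices:

    import numpy as np

    f = lambda x: x.sum()                  # a linear functional on R^3
    v = np.array([1.0, 0.0, 0.0])          # f(v) = 1 != 0, so v is not in N(f)
    w = np.array([2.0, -1.0, 4.0])         # arbitrary vector to decompose
    a = f(w) / f(v)
    u, n = a * v, w - a * v                # u in span{v}, n should lie in N(f)
    print(np.isclose(f(n), 0), np.allclose(u + n, w))   # True True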

QUESTION 5

Let M₂(Q) be the Q-vector space of 2 × 2 matrices with rational entries and consider the linear operator T : M₂(Q) → M₂(Q) given by

    T(X) = Xᵗ + tr(X) · [1 0; 0 1].

Are the matrices [1 0; 0 1] and [1 0; 0 0] eigenvectors?

Solution

Let's check it according to the definition:

    T([1 0; 0 1]) = [1 0; 0 1] + 2 · [1 0; 0 1] = 3 · [1 0; 0 1],

and hence [1 0; 0 1] is an eigenvector with the eigenvalue 3. Further,

    T([1 0; 0 0]) = [1 0; 0 0] + 1 · [1 0; 0 1] = [2 0; 0 1] ≠ λ · [1 0; 0 0] = [λ 0; 0 0]

for any λ, and hence [1 0; 0 0] is not an eigenvector. :)

Remark: Proving that T is a linear operator is not required. Finding all eigenvalues and eigenvectors is not required either.
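
Remark (not part of the exam solution): a quick check of the computation, using the operator T as reconstructed above:

    import numpy as np

    T = lambda X: X.T + np.trace(X) * np.eye(2)   # the operator as stated above
    I2 = np.eye(2)
    M = np.array([[1.0, 0.0], [0.0, 0.0]])
    print(np.allclose(T(I2), 3 * I2))   # True: I2 is an eigenvector with eigenvalue 3
    print(T(M))                         # [[2 0], [0 1]]: not a scalar multiple of M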

QUESTION 6

Let

    A = [−2 0 0; 0 1 −√3; 0 √3 1].

Compute A¹²³.

Solution

First, observe that

    A¹²³ = 2¹²³ · ((1/2)A)¹²³ = 2¹²³ · [−1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2]¹²³.

Further, let T : R³ → R³ be the linear operator of the left multiplication with the matrix

    [−1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2].

Notice that T acts as follows: it sends x₁ to −x₁ and rotates the plane (x₂, x₃) by the angle π/3 in the counter-clockwise direction. It means that T⁶ is the identity transformation, and hence T¹²⁰ is also the identity (because 120 is a multiple of 6). Thus T¹²³ = T³, which, according to its geometric meaning, does the following: it sends x₁ to −x₁ and rotates the plane (x₂, x₃) by the angle π. The rotation by the angle π is minus the identity, and hence the whole transformation T³ is minus the identity. Thus,

    A¹²³ = 2¹²³ · [−1 0 0; 0 1/2 −√3/2; 0 √3/2 1/2]¹²³ = 2¹²³ · (−I) = −2¹²³ · I. :)
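
Remark (not part of the exam solution): the answer is easy to confirm numerically; a sketch assuming the matrix A as stated above:

    import numpy as np

    r3 = np.sqrt(3.0)
    A = np.array([[-2.0, 0.0, 0.0],
                  [ 0.0, 1.0, -r3],
                  [ 0.0,  r3, 1.0]])      # 2 * diag(-1, rotation by pi/3)
    print(np.allclose(np.linalg.matrix_power(A, 6), 2**6 * np.eye(3)))        # A^6 = 2^6 I
    print(np.allclose(np.linalg.matrix_power(A, 123), -2.0**123 * np.eye(3))) # A^123 = -2^123 I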

QUESTION 7

Let P₂(R) be the R-vector space of polynomials with real coefficients of degree at most 2. Prove that the formula

    ⟨f, g⟩ = ∫_{−1}^{1} x² f(x) g(x) dx,   f, g ∈ P₂(R),

defines an inner product, and find an orthonormal basis of the space P₂(R) with respect to it. (15 marks)

Solution

According to the lectures, to prove that ⟨f, g⟩ is an inner product, it's enough to check that the weight function x² is positive on [−1, 1] (possibly except for finitely many points). Since x² > 0 for all x ≠ 0, it is true.

To find an orthonormal basis, we'll apply the Gram-Schmidt orthogonalization to any basis, say {1, x, x²}. The computation is below.

    w₁ = v₁ = 1,
    ⟨w₁, w₁⟩ = ∫_{−1}^{1} x² · 1 · 1 dx = 2/3,
    ⟨v₂, w₁⟩ = ∫_{−1}^{1} x² · x · 1 dx = 0,
    w₂ = v₂ − (⟨v₂, w₁⟩/⟨w₁, w₁⟩) w₁ = x − (0/(2/3)) · 1 = x,
    ⟨w₂, w₂⟩ = ∫_{−1}^{1} x² · x · x dx = ∫_{−1}^{1} x⁴ dx = 2/5,
    ⟨v₃, w₁⟩ = ∫_{−1}^{1} x² · x² · 1 dx = 2/5,
    ⟨v₃, w₂⟩ = ∫_{−1}^{1} x² · x² · x dx = 0,
    w₃ = v₃ − (⟨v₃, w₂⟩/⟨w₂, w₂⟩) w₂ − (⟨v₃, w₁⟩/⟨w₁, w₁⟩) w₁ = x² − ((2/5)/(2/3)) · 1 = x² − 3/5,
    ⟨w₃, w₃⟩ = ∫_{−1}^{1} x² (x² − 3/5)² dx = ∫_{−1}^{1} (x⁶ − (6/5)x⁴ + (9/25)x²) dx = 2/7 − 12/25 + 6/25 = 8/175.

Thus {w₁, w₂, w₃} is an orthogonal basis. An orthonormal one is

    e₁ = w₁/‖w₁‖ = √(3/2),   e₂ = √(5/2) · x,   e₃ = √(175/8) · (x² − 3/5). :)
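
Remark (not part of the exam solution): orthonormality can be verified exactly with a computer algebra system; a short SymPy sketch:

    import sympy as sp

    x = sp.symbols('x')
    ip = lambda f, g: sp.integrate(x**2 * f * g, (x, -1, 1))   # the inner product above
    e1 = sp.sqrt(sp.Rational(3, 2))
    e2 = sp.sqrt(sp.Rational(5, 2)) * x
    e3 = sp.sqrt(sp.Rational(175, 8)) * (x**2 - sp.Rational(3, 5))
    basis = [e1, e2, e3]
    gram = sp.Matrix(3, 3, lambda i, j: ip(basis[i], basis[j]))
    print(gram)   # the identity matrix: the basis is orthonormal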

QUESTION 8

Let V be an inner product space over R, let W ⊂ V be a subspace, and let W⊥ be the orthogonal complement of W. Further, let R be the set of all linear operators T : V → V such that T(W) ⊂ W⊥.

(i) Prove that R is an R-vector space (a subspace of the space of linear operators on V).
(ii) Let β = {v₁, …, v_n} be an orthonormal basis of V such that {v₁, …, v_k} is a basis of W and {v_{k+1}, …, v_n} is a basis of W⊥. What can you say about the matrix of any T ∈ R with respect to the basis β?

Solution

To show that R is a subspace, we need to verify three conditions:

- The zero linear operator sends everything to the zero vector, which belongs to W⊥, and hence the zero operator belongs to R.
- Suppose that T₁, T₂ ∈ R. It means that T₁(w) ∈ W⊥ and T₂(w) ∈ W⊥ for all w ∈ W. In other words, we have ⟨u, T₁(w)⟩ = ⟨u, T₂(w)⟩ = 0 for all u, w ∈ W. But then we have ⟨u, (T₁ + T₂)(w)⟩ = ⟨u, T₁(w) + T₂(w)⟩ = ⟨u, T₁(w)⟩ + ⟨u, T₂(w)⟩ = 0, and hence T₁ + T₂ also belongs to R.
- Suppose that T ∈ R and k ∈ R. As shown above, it means that ⟨u, T(w)⟩ = 0 for all u, w ∈ W. Then we have ⟨u, (kT)(w)⟩ = ⟨u, kT(w)⟩ = k⟨u, T(w)⟩ = k · 0 = 0, and hence kT also belongs to R.

The matrix of any element of R with respect to β is of the block form

    [0 X; Y Z],

where 0 is the zero k × k block and the blocks X (of size k × (n − k)), Y (of size (n − k) × k), and Z (of size (n − k) × (n − k)) are arbitrary. :)

Remark: In fact, the following more general fact is true: let U, V be any vector spaces and let U′ ⊂ U and V′ ⊂ V be subspaces. Then the set of linear transformations from U to V that map U′ into V′ is a subspace of L(U, V). It's not hard to prove it from the definitions either.
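
Remark (not part of the exam solution): the block structure can be illustrated numerically: any matrix with zero top-left k × k block maps W = span{v₁, …, v_k} into W⊥. A sketch with n = 4, k = 2 in the standard basis:

    import numpy as np

    n, k = 4, 2                               # W = span(e1, e2), W-perp = span(e3, e4)
    rng = np.random.default_rng(0)
    M = rng.standard_normal((n, n))
    M[:k, :k] = 0                             # zero out the top-left k x k block
    for w in np.eye(n)[:k]:                   # w ranges over the basis of W
        print(np.allclose((M @ w)[:k], 0))    # True: M w has no component in W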

QUESTION 9

Consider the space M_{n×n}(R) of n × n matrices with real entries. Further, let ⟨X, Y⟩ = tr(XYᵗ). Show that ⟨X, Y⟩ is an inner product on M_{n×n}(R). With respect to this inner product, what is the orthogonal complement of the subspace of symmetric matrices? Justify your answer.

Solution

In order to show that ⟨X, Y⟩ = tr(XYᵗ) is an inner product, we need to verify the four axioms:

AI. ⟨X + Y, Z⟩ = tr[(X + Y)Zᵗ] = tr(XZᵗ + YZᵗ) = tr(XZᵗ) + tr(YZᵗ) = ⟨X, Z⟩ + ⟨Y, Z⟩: checked.

AII. ⟨aX, Y⟩ = tr[(aX)Yᵗ] = tr(aXYᵗ) = a tr(XYᵗ) = a⟨X, Y⟩: checked.

AIII. ⟨Y, X⟩ = tr(YXᵗ) = tr[(YXᵗ)ᵗ] = tr(XYᵗ) = ⟨X, Y⟩: checked.

AIV. ⟨X, X⟩ = tr(XXᵗ) = tr A = Σ_{i=1}^n a_{ii}, where A = XXᵗ. Notice that a_{ii} is the dot product of the i-th row of the matrix X and the i-th column of the matrix Xᵗ, which is also the i-th row of the matrix X. In other words, we get a_{ii} = Σ_{j=1}^n x_{ij}². Thus ⟨X, X⟩ = Σ_{i,j=1}^n x_{ij}², which is positive unless all entries are zero, which would mean that X is the zero matrix: checked.

The next question is to find the orthogonal complement of the subspace of the symmetric matrices. The guess is that it'll be the subspace of antisymmetric matrices.

First, let's prove that M^asym_{n×n}(R) ⊂ [M^sym_{n×n}(R)]⊥. Let X ∈ M^asym_{n×n}(R) and let's show that ⟨X, Y⟩ = 0 whenever Y = Yᵗ. We have

    ⟨X, Y⟩ = tr(XYᵗ) = tr(−XᵗY) = −tr(YXᵗ) = −⟨Y, X⟩ = −⟨X, Y⟩,

and hence ⟨X, Y⟩ = 0. Since ⟨X, Y⟩ = 0 for any Y ∈ M^sym_{n×n}(R), it precisely means that X ∈ [M^sym_{n×n}(R)]⊥, and therefore we indeed obtain M^asym_{n×n}(R) ⊂ [M^sym_{n×n}(R)]⊥.

In order to show that M^asym_{n×n}(R) = [M^sym_{n×n}(R)]⊥, it's sufficient to check that dim M^asym_{n×n}(R) = dim [M^sym_{n×n}(R)]⊥. This equality easily follows from the two direct sum decompositions we proved in the lectures and tutorials:

    M_{n×n}(R) = M^sym_{n×n}(R) ⊕ M^asym_{n×n}(R) = M^sym_{n×n}(R) ⊕ [M^sym_{n×n}(R)]⊥. :)

Remark: It's easy to compute directly that dim M^sym_{n×n}(R) = n(n + 1)/2 and dim M^asym_{n×n}(R) = n(n − 1)/2, which was probably done in tutorials too.
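
Remark (not part of the exam solution): a short numeric check that symmetric and antisymmetric matrices are orthogonal with respect to ⟨X, Y⟩ = tr(XYᵗ):

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((3, 3))
    S, K = (A + A.T) / 2, (A - A.T) / 2       # symmetric and antisymmetric parts
    ip = lambda X, Y: np.trace(X @ Y.T)       # the inner product above
    print(np.isclose(ip(S, K), 0))            # True: the parts are orthogonal
    print(np.allclose(S + K, A))              # True: every matrix splits as sym + asym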