MIDTERM I, LINEAR ALGEBRA, MATH 215. Friday February 16, 2018.

Name: PRACTICE EXAM SOLUTIONS

Please answer all of the questions, and show your work. You must explain your answers to get credit. You will be graded on the clarity of your exposition!

Problems 1 through 5 are worth 20 points each (100 points total).

Date: February 12, 2018.

1. (20 points) Give the definition of a vector space.

SOLUTION. See the pdf online: www.math.colorado.edu/~casa/teaching/18spring/215/hw/linalghw.pdf
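Since the solution above defers to the linked pdf, here for reference is the standard definition (the usual axioms; this reminder is added here and is not quoted from that pdf): A vector space over $\mathbb{R}$ is a set $V$ together with an addition $+ : V \times V \to V$ and a scaling $\cdot : \mathbb{R} \times V \to V$ such that, for all $u, v, w \in V$ and all $\lambda, \mu \in \mathbb{R}$: (i) $(u + v) + w = u + (v + w)$ and $u + v = v + u$; (ii) there is an additive identity $O \in V$ with $v + O = v$; (iii) every $v \in V$ has an additive inverse $-v$ with $v + (-v) = O$; (iv) $\lambda(u + v) = \lambda u + \lambda v$ and $(\lambda + \mu)v = \lambda v + \mu v$; (v) $(\lambda \mu)v = \lambda(\mu v)$ and $1 \cdot v = v$.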

2. (20 points) Let $V$ be the $\mathbb{R}$-vector space of sequences of real numbers $\{x_n\}$ with addition and scaling given by
\[
\{x_n\} + \{y_n\} = \{x_n + y_n\}, \qquad \lambda \cdot \{x_n\} = \{\lambda x_n\},
\]
for all $\{x_n\}, \{y_n\} \in V$ and $\lambda \in \mathbb{R}$. Let $L^1 \subseteq V$ be the subset of sequences $\{x_n\} \in V$ such that $\sum |x_n|$ converges. Let $L^2 \subseteq V$ be the subset of sequences $\{x_n\} \in V$ such that $\sum x_n^2$ converges.

2.(a). Show that $L^1$ is a subspace of $V$.

2.(b). Show that $L^2$ is a subspace of $V$.

2.(c). Is $L^1$ contained in $L^2$?

2.(d). Is $L^2$ contained in $L^1$?

SOLUTION. We will repeatedly use the following fact from Calculus II:

($*$) Given a series $\sum a_n$ with all the $a_n$ being nonnegative real numbers, if there exists a real number $M$ such that for all natural numbers $N$ we have $\sum_{n=0}^N a_n \le M$, then $\sum a_n$ is convergent.

(a) We must show that $L^1$ is closed under addition and scaling. To show it is closed under addition, let $x = \{x_n\}, y = \{y_n\} \in L^1$. Since $x + y = \{x_n + y_n\}$, we must show that $\sum |x_n + y_n|$ is convergent. To this end, for any natural number $N$, we have, using the triangle inequality (for real numbers), that
\[
\sum_{n=0}^N |x_n + y_n| \le \sum_{n=0}^N \bigl(|x_n| + |y_n|\bigr) = \sum_{n=0}^N |x_n| + \sum_{n=0}^N |y_n| \le \sum_{n=0}^\infty |x_n| + \sum_{n=0}^\infty |y_n|.
\]
Thus the series $\sum |x_n + y_n|$ converges, using the fact ($*$) above. To show that $L^1$ is closed under scaling, let $\lambda \in \mathbb{R}$ and $x = \{x_n\} \in L^1$. We must show that $\sum |\lambda x_n|$ is convergent. To this end, for any natural number $N$ we have
\[
\sum_{n=0}^N |\lambda x_n| = |\lambda| \sum_{n=0}^N |x_n| \le |\lambda| \sum_{n=0}^\infty |x_n|;
\]
again we conclude using ($*$).

(b) We must show that $L^2$ is closed under addition and scaling. To show it is closed under addition, let $x = \{x_n\}, y = \{y_n\} \in L^2$. We must show that $\sum (x_n + y_n)^2$ is convergent. For this we have
\[
\sum (x_n + y_n)^2 = \sum x_n^2 + \sum y_n^2 + 2 \sum x_n y_n,
\]
assuming the series on the right are all convergent. Thus it suffices to show that $\sum x_n y_n$ is convergent, and therefore to show that the series $\sum |x_n y_n|$ is convergent. To this end, first observe that it is clear that $\{|x_n|\}, \{|y_n|\} \in L^2$. Now, using the fact that the standard dot product on $\mathbb{R}^{N+1}$ is an inner product, we have from Cauchy-Schwarz that
\[
\sum_{n=0}^N |x_n y_n| = \sum_{n=0}^N |x_n|\,|y_n| \le \Bigl(\sum_{n=0}^N |x_n|^2\Bigr)^{1/2} \Bigl(\sum_{n=0}^N |y_n|^2\Bigr)^{1/2} \le \Bigl(\sum_{n=0}^\infty x_n^2\Bigr)^{1/2} \Bigl(\sum_{n=0}^\infty y_n^2\Bigr)^{1/2}.
\]
Again we conclude using ($*$). To show that $L^2$ is closed under scaling, let $\lambda \in \mathbb{R}$ and $x = \{x_n\} \in L^2$. We must show that $\sum (\lambda x_n)^2$ is convergent. For this we have
\[
\sum_{n=0}^N (\lambda x_n)^2 = \lambda^2 \sum_{n=0}^N x_n^2 \le \lambda^2 \sum_{n=0}^\infty x_n^2;
\]
again we conclude using ($*$).

(c) Yes. The subspace $L^1$ is contained in the subspace $L^2$. Indeed, suppose that $\{x_n\} \in L^1$. Then $\sum |x_n|$ converges. Now there is some natural number $N_0$ such that $0 \le |x_n| < 1$ for all $n \ge N_0$. Therefore, for all $n \ge N_0$, we have $0 \le x_n^2 \le |x_n|$, so that consequently, for all $N \ge N_0$,
\[
\sum_{n=0}^N x_n^2 = \sum_{n=0}^{N_0 - 1} x_n^2 + \sum_{n=N_0}^N x_n^2 \le \sum_{n=0}^{N_0 - 1} x_n^2 + \sum_{n=N_0}^N |x_n| \le \sum_{n=0}^{N_0 - 1} x_n^2 + \sum_{n=N_0}^\infty |x_n|.
\]
Thus by ($*$) the series $\sum x_n^2$ converges, so that $\{x_n\} \in L^2$.

(d) No. The subspace $L^2$ is not contained in the subspace $L^1$. For instance, taking $x_n = 1/n$ for all $n \ge 1$ (and $x_0$ arbitrary), we have $\{x_n\} \in L^2$, but $\{x_n\} \notin L^1$ (one can use the integral test, for instance, for both of these).
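As a numerical illustration of parts (c) and (d) (a sketch added here, not part of the original exam), the following compares truncated partial sums for the sequence $x_n = 1/n$: the partial sums of $x_n^2$ stabilize (near $\pi^2/6$), while the partial sums of $|x_n|$ keep growing, matching the claim that $\{1/n\} \in L^2$ but $\{1/n\} \notin L^1$.

```python
# Partial sums for x_n = 1/n (n >= 1): sum of x_n^2 converges (to pi^2/6),
# while sum of |x_n| diverges, illustrating 2.(c)/(d) numerically.
for N in (10, 100, 1000, 10_000):
    s1 = sum(1 / n for n in range(1, N + 1))      # partial sums of |x_n|
    s2 = sum(1 / n**2 for n in range(1, N + 1))   # partial sums of x_n^2
    print(f"N = {N:6d}   sum |x_n| = {s1:8.3f}   sum x_n^2 = {s2:.6f}")
```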

3. (20 points) Let $L^2$ be the $\mathbb{R}$-vector space consisting of sequences of real numbers $\{x_n\}$ such that $\sum x_n^2$ converges. We have seen there is an inner product on $L^2$ defined by setting
\[
(x, y) = \sum_{n=0}^\infty x_n y_n
\]
for $x = \{x_n\}, y = \{y_n\} \in L^2$. Now consider the sequences $\{x_n\}, \{y_n\}$ defined by:
\[
x_n = \frac{1}{2^n}, \qquad y_n = \begin{cases} 1 & n = 0, \\ 0 & n > 0. \end{cases}
\]

3.(a). Perform the Gram-Schmidt process to obtain an orthonormal basis for the vector space spanned by $\{x_n\}$ and $\{y_n\}$.

3.(b). Consider the sequence $\{z_n\}$ defined by $z_n = 1/n!$. Find the orthogonal projection of $\{z_n\}$ onto the vector space spanned by $\{x_n\}$ and $\{y_n\}$.

SOLUTION. We start with a few useful computations, just to have them all in one place:
\[
(x, x) = \sum_{n=0}^\infty \Bigl(\frac{1}{2^n}\Bigr)^2 = \sum_{n=0}^\infty \Bigl(\frac{1}{4}\Bigr)^n = \frac{1}{1 - \frac{1}{4}} = \frac{4}{3},
\]
\[
(y, x) = 1 \cdot 1 + 0 \cdot \tfrac{1}{2} + \cdots = 1, \qquad (y, y) = 1 \cdot 1 + 0 \cdot 0 + \cdots = 1,
\]
\[
(z, x) = \sum_{n=0}^\infty \frac{1}{n!} \Bigl(\frac{1}{2}\Bigr)^n = e^{1/2}, \qquad (z, y) = 1 \cdot 1 + \tfrac{1}{1!} \cdot 0 + \tfrac{1}{2!} \cdot 0 + \cdots = 1.
\]

(a) The solution to the problem is:
\[
x'' = \frac{\sqrt{3}}{2}\, x, \qquad y'' = 2y - \frac{3}{2}\, x.
\]

The proof is as follows. The Gram-Schmidt process starts by constructing an orthogonal basis:
\[
x' = x, \qquad y' = y - \frac{(y, x)}{(x, x)}\, x = y - \frac{1}{4/3}\, x = y - \frac{3}{4}\, x.
\]
I.e., the vectors $x', y'$ form an orthogonal basis for the subspace spanned by $x$ and $y$. To obtain an orthonormal basis, we should compute:
\[
(y', y') = \Bigl(y - \tfrac{3}{4} x,\; y - \tfrac{3}{4} x\Bigr) = (y, y) + \tfrac{9}{16}(x, x) - 2 \cdot \tfrac{3}{4}(x, y) = 1 + \tfrac{9}{16} \cdot \tfrac{4}{3} - \tfrac{3}{2} \cdot 1 = \tfrac{1}{4}.
\]
Thus
\[
x'' = \frac{x'}{(x', x')^{1/2}} = \frac{x}{(x, x)^{1/2}} = \frac{\sqrt{3}}{2}\, x, \qquad
y'' = \frac{y'}{(y', y')^{1/2}} = 2 y' = 2y - \frac{3}{2}\, x
\]
form an orthonormal basis for the subspace spanned by $x$ and $y$. Specifically, we have
\[
x''_n = \frac{\sqrt{3}}{2} \cdot \frac{1}{2^n} = \frac{\sqrt{3}}{2^{n+1}}, \qquad
y''_n = \begin{cases} 2 \cdot 1 - \frac{3}{2} \cdot 1 = \frac{1}{2} & n = 0, \\[2pt] 2 \cdot 0 - \frac{3}{2} \cdot \frac{1}{2^n} = -\frac{3}{2^{n+1}} & n > 0. \end{cases}
\]

Remark 0.1. We can do a quick double check that $x''$ and $y''$ are the correct solution. We clearly have that $x''$ and $y''$ are in the span of $x$ and $y$, and that $x''$ is in the span of $x$. We have
\[
(x'', x'') = \Bigl(\tfrac{\sqrt{3}}{2} x,\; \tfrac{\sqrt{3}}{2} x\Bigr) = \tfrac{3}{4}(x, x) = 1,
\]
\[
(y'', y'') = \Bigl(2y - \tfrac{3}{2} x,\; 2y - \tfrac{3}{2} x\Bigr) = 4(y, y) - 2 \cdot 2 \cdot \tfrac{3}{2}(x, y) + \tfrac{9}{4}(x, x) = 4 - 6 + \tfrac{9}{4} \cdot \tfrac{4}{3} = 1,
\]
\[
(x'', y'') = \Bigl(\tfrac{\sqrt{3}}{2} x,\; 2y - \tfrac{3}{2} x\Bigr) = \sqrt{3}\,(x, y) - \tfrac{3\sqrt{3}}{4}(x, x) = \sqrt{3}\Bigl[(x, y) - \tfrac{3}{4}(x, x)\Bigr] = \sqrt{3}\bigl[1 - \tfrac{3}{4} \cdot \tfrac{4}{3}\bigr] = 0.
\]

(b) The solution to the problem is:
\[
(0.1) \qquad w = \bigl(-3 + 3 e^{1/2}\bigr) x + \bigl(4 - 3 e^{1/2}\bigr) y.
\]

The proof is as follows. Having computed the orthonormal basis $x'', y''$ in the previous part of the problem, the orthogonal projection $w = \{w_n\}$ of $z = \{z_n\}$ is given by
\[
w = (z, x'')\, x'' + (z, y'')\, y''.
\]
We compute:
\[
(z, x'') = \Bigl(z,\; \tfrac{\sqrt{3}}{2} x\Bigr) = \tfrac{\sqrt{3}}{2}(z, x) = \tfrac{\sqrt{3}}{2} e^{1/2}, \qquad
(z, y'') = \Bigl(z,\; 2y - \tfrac{3}{2} x\Bigr) = 2(z, y) - \tfrac{3}{2}(z, x) = 2 - \tfrac{3}{2} e^{1/2}.
\]
Therefore, finally, we have
\[
w = (z, x'')\, x'' + (z, y'')\, y''
= \tfrac{\sqrt{3}}{2} e^{1/2} \Bigl(\tfrac{\sqrt{3}}{2} x\Bigr) + \Bigl(2 - \tfrac{3}{2} e^{1/2}\Bigr)\Bigl(2y - \tfrac{3}{2} x\Bigr)
= \Bigl[\tfrac{3}{4} e^{1/2} + \Bigl(-3 + \tfrac{9}{4} e^{1/2}\Bigr)\Bigr] x + \bigl(4 - 3 e^{1/2}\bigr) y
= \bigl(-3 + 3 e^{1/2}\bigr) x + \bigl(4 - 3 e^{1/2}\bigr) y.
\]
In other words, the solution to the problem is that the orthogonal projection of $z$ is given by (0.1). Specifically, we have
\[
w_n = \begin{cases} 1 & n = 0, \\[2pt] \bigl(-3 + 3 e^{1/2}\bigr) \dfrac{1}{2^n} & n > 0. \end{cases}
\]

Remark 0.2. We can check that we have not made an arithmetic mistake in (0.1) as follows. We clearly have $w$ in the span of $x$ and $y$. To check that $w$ is the orthogonal projection, we can check that $w^\perp := z - w$ is in the orthogonal complement of the span of $x$ and $y$. In other words, we can check that $(z - w, x) = (z - w, y) = 0$. Indeed,
\[
(z - w, x) = (z, x) - (w, x) = e^{1/2} - \Bigl[\bigl(-3 + 3 e^{1/2}\bigr)(x, x) + \bigl(4 - 3 e^{1/2}\bigr)(y, x)\Bigr]
= e^{1/2} - \Bigl[\bigl(-3 + 3 e^{1/2}\bigr)\tfrac{4}{3} + \bigl(4 - 3 e^{1/2}\bigr)\Bigr] = 0,
\]
\[
(z - w, y) = (z, y) - (w, y) = 1 - \Bigl[\bigl(-3 + 3 e^{1/2}\bigr)(x, y) + \bigl(4 - 3 e^{1/2}\bigr)(y, y)\Bigr]
= 1 - \Bigl[\bigl(-3 + 3 e^{1/2}\bigr) + \bigl(4 - 3 e^{1/2}\bigr)\Bigr] = 0.
\]
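As an additional numerical cross-check of (a) and (b) (a sketch added here, not part of the original exam), one can approximate the $L^2$ inner product by a long finite sum and verify that the vectors $x'', y''$ found above are orthonormal and that the projection of $z$ matches formula (0.1):

```python
# Numerical check of Problem 3 with the inner product truncated at N terms.
import math

N = 100  # truncation length; the neglected tails are astronomically small

x = [1 / 2**n for n in range(N)]
y = [1.0] + [0.0] * (N - 1)
z = [1 / math.factorial(n) for n in range(N)]

def ip(u, v):
    """Truncated inner product (u, v) = sum_n u_n v_n."""
    return sum(a * b for a, b in zip(u, v))

# Gram-Schmidt: orthogonalize y against x, then normalize both vectors.
xpp = [a / math.sqrt(ip(x, x)) for a in x]                 # x'' = x / (x, x)^{1/2}
yp  = [b - ip(y, x) / ip(x, x) * a for a, b in zip(x, y)]  # y' = y - ((y, x)/(x, x)) x
ypp = [b / math.sqrt(ip(yp, yp)) for b in yp]              # y'' = y' / (y', y')^{1/2}
print(ip(xpp, xpp), ip(ypp, ypp), ip(xpp, ypp))            # approximately 1, 1, 0

# Orthogonal projection of z onto span{x, y}, compared with formula (0.1).
w = [ip(z, xpp) * a + ip(z, ypp) * b for a, b in zip(xpp, ypp)]
c_x, c_y = 3 * math.exp(0.5) - 3, 4 - 3 * math.exp(0.5)
w_formula = [c_x * a + c_y * b for a, b in zip(x, y)]
print(max(abs(a - b) for a, b in zip(w, w_formula)))       # approximately 0
```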

4. (20 points) Let $(V, (\,,\,))$ be a real Euclidean space. Let $W$ be the vector space of linear maps $L : V \to \mathbb{R}$.

4.(a). For each $v \in V$, show that the map of sets
\[
(\,\cdot\,, v) : V \to \mathbb{R}, \qquad (\,\cdot\,, v)(v') = (v', v),
\]
is a linear map.

4.(b). Show that the map of sets
\[
\phi : V \to W, \qquad v \mapsto (\,\cdot\,, v),
\]
is a linear map.

SOLUTION. (a) Let $v \in V$. Then for all $v_1, v_2 \in V$, we have
\[
(\,\cdot\,, v)(v_1 + v_2) := (v_1 + v_2, v) = (v_1, v) + (v_2, v) =: (\,\cdot\,, v)(v_1) + (\,\cdot\,, v)(v_2).
\]
The second equality above uses the bilinearity of the inner product. Similarly, for all $\lambda \in \mathbb{R}$, we have
\[
(\,\cdot\,, v)(\lambda v_1) := (\lambda v_1, v) = \lambda (v_1, v) =: \lambda (\,\cdot\,, v)(v_1).
\]
The second equality above uses the bilinearity of the inner product.

(b) For all $v, v_1, v_2 \in V$ and $\lambda \in \mathbb{R}$, we must show that
\[
\phi(v_1 + v_2) = \phi(v_1) + \phi(v_2), \qquad \phi(\lambda v) = \lambda \phi(v).
\]
These are supposed to be equalities of maps $V \to \mathbb{R}$, and it suffices to check that the maps agree for all $x \in V$. First we have
\[
\phi(v_1 + v_2)(x) := (\,\cdot\,, v_1 + v_2)(x) := (x, v_1 + v_2) = (x, v_1) + (x, v_2) =: (\,\cdot\,, v_1)(x) + (\,\cdot\,, v_2)(x) =: (\phi(v_1) + \phi(v_2))(x).
\]
The third equality above uses the bilinearity of the inner product. Thus $\phi(v_1 + v_2) = \phi(v_1) + \phi(v_2)$. Similarly, we have
\[
\phi(\lambda v)(x) := (\,\cdot\,, \lambda v)(x) := (x, \lambda v) = \lambda (x, v) =: \lambda (\,\cdot\,, v)(x) =: \lambda \phi(v)(x);
\]
the third equality above uses the bilinearity of the inner product. Thus $\phi(\lambda v) = \lambda \phi(v)$.
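As a concrete illustration of (a) and (b) (a small sketch added here, using the standard dot product on $\mathbb{R}^3$ as the Euclidean space; it is not part of the original exam), one can evaluate both sides of the claimed equalities of functionals on a few test vectors:

```python
# Check 4.(a)/(b) numerically for V = R^3 with the standard dot product:
# phi(v) is the functional x |-> (x, v); it is additive and homogeneous in v.
def ip(u, v):
    return sum(a * b for a, b in zip(u, v))   # standard inner product on R^3

def phi(v):
    return lambda x: ip(x, v)                 # phi(v) = ( . , v)

v1, v2, lam = (1.0, 2.0, -1.0), (0.5, 0.0, 3.0), 2.5
v_sum = tuple(a + b for a, b in zip(v1, v2))
v_scl = tuple(lam * a for a in v1)

for x in [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (1.0, -2.0, 4.0)]:
    assert abs(phi(v_sum)(x) - (phi(v1)(x) + phi(v2)(x))) < 1e-12  # phi(v1+v2) = phi(v1)+phi(v2)
    assert abs(phi(v_scl)(x) - lam * phi(v1)(x)) < 1e-12           # phi(lam v1) = lam phi(v1)
print("phi is linear on all test vectors")
```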

5. (20 points) True or False. Give a brief explanation for each answer. This should be at most a sentence or two with the main idea, or the main theorem you use, or the example, or the counterexample, etc.

5.(a). Every complex vector space is isomorphic to $\mathbb{C}^n$ for some nonnegative integer $n$. T F

FALSE. For instance, the complex vector space of polynomials with complex coefficients $\mathbb{C}[x]$ is not finite dimensional (for each natural number $n$ it contains the $(n+1)$-dimensional vector space $\mathbb{C}[x]_n$ of polynomials of degree at most $n$).

5.(b). Let $V$ be a vector space, and let $V' \subseteq V$ be a subspace. Then the additive identity of $V'$ is equal to the additive identity of $V$. T F

TRUE. This was a homework exercise. We showed for instance that for any $v \in V$, we have $0 \cdot v = O$. We can then see that $O_{V'} = 0 \cdot O_{V'} = O_V$.

5.(c). In the vector space $C^0(\mathbb{R})$ of continuous maps $f : \mathbb{R} \to \mathbb{R}$, the elements $1, \cos t, \sin t$ are linearly independent. T F

TRUE. If $a + b \cos t + c \sin t = 0$, then taking the derivative, we have $c \cos t = b \sin t$, which is only possible if $b = c = 0$, since otherwise the values of $t$ for which $c \cos t$ and $b \sin t$ are zero are different. Then $a = 0$ and we are done. (See also the numerical cross-check after problem 5.)

5.(d). A linear map is surjective if and only if it has trivial kernel. T F

FALSE. Take for instance the map $\mathbb{R}^0 \to \mathbb{R}$, which has trivial kernel but is not surjective.

5.(e). Let $(V, (\,,\,))$ be a Euclidean vector space, and let $W$ be a subspace. Then we have $W \cap W^\perp = \{O\}$, where $O$ is the additive identity of $V$. T F

TRUE. We saw this in class: if $w \in W \cap W^\perp$, then $0 = (w, w)$, which, from the definition of an inner product, is only possible if $w = O$.

5.(f). Let $(V, (\,,\,))$ be a Euclidean vector space, and let $W$ be a finite dimensional subspace with orthonormal basis $w_1, \ldots, w_n$. Then for any $v \in V$ we have $v - \sum_{i=1}^n (v, w_i) w_i$ is in $W$. T F

FALSE. This is true only if $v \in W$, since $v - \sum_{i=1}^n (v, w_i) w_i \in W$ implies $v = \bigl(v - \sum_{i=1}^n (v, w_i) w_i\bigr) + \sum_{i=1}^n (v, w_i) w_i \in W$.

5.(g). If $v_1, \ldots, v_n$ are nonzero, mutually orthogonal elements of a Euclidean space, then they are linearly independent. T F

TRUE. Suppose that $\sum_{i=1}^n \alpha_i v_i = 0$. Then for each $v_j$, $j = 1, \ldots, n$, we have that $0 = \bigl(\sum_{i=1}^n \alpha_i v_i, v_j\bigr) = \alpha_j (v_j, v_j)$, which, since $v_j \neq O$, is only possible if $\alpha_j = 0$.

5.(h). If $f : \mathbb{R}^n \to \mathbb{R}^m$ is a surjective linear map, then $n \ge m$. T F

TRUE. The Rank-Nullity Theorem implies that $n = m + \dim \ker f \ge m$.

5.(i). A linear map $f : \mathbb{R}^n \to V$ is injective if and only if $\dim \mathrm{Im}(f) = n$. T F

TRUE. This was a theorem we proved in class. The argument was that we have $f$ injective $\iff \ker(f) = 0 \iff \dim \mathrm{Im}(f) = n$. The first equivalence was a theorem we proved in class, and the second equivalence follows from the Rank-Nullity Theorem.

5.(j). Elements $v_1, \ldots, v_n$ of a vector space $V$ form a basis of $V$ if and only if they span $V$. T F

FALSE. The definition requires they also be linearly independent. For instance $(1, 0), (0, 1), (1, 1)$ span $\mathbb{R}^2$, but are not linearly independent, so do not form a basis.
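For 5.(c), here is the alternative numerical cross-check referenced above (a sketch added here, distinct from the derivative argument given in the solution): evaluating $1, \cos t, \sin t$ at $t = 0, \pi/2, \pi$ gives three vectors in $\mathbb{R}^3$ whose matrix has nonzero determinant, so any identity $a + b \cos t + c \sin t = 0$ holding for all $t$ forces $a = b = c = 0$.

```python
# Evaluate 1, cos t, sin t at t = 0, pi/2, pi and check the 3x3 determinant;
# a nonzero determinant means the three functions are linearly independent.
import math

ts = [0.0, math.pi / 2, math.pi]
M = [[1.0, math.cos(t), math.sin(t)] for t in ts]

def det3(m):
    """3x3 determinant by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))  # equals 2 up to rounding, so 1, cos t, sin t are independent
```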