LINEAR ALGEBRA REVIEW

When we define a term, we put it in boldface. This is a very compressed review; please read it very carefully and be sure to ask questions on parts you aren't sure of.

We denote the set of real numbers by R. If x = (x_1, ..., x_n)ᵀ ∈ R^n is a column vector with ith entry x_i and y = (y_1, ..., y_n)ᵀ is another column vector, their dot product is x · y = Σ_{i=1}^n x_i y_i, which can also be written as the transpose, xᵀ, times y, i.e., xᵀy. The two vectors are called orthogonal if x · y = 0; in this case, we write x ⊥ y. The norm of x is ‖x‖ = √(x · x); this is the length of the vector x.

If A = [a_1 a_2 ··· a_n] is a matrix with n columns, where a_i is the ith column, and x = (x_1, ..., x_n)ᵀ ∈ R^n is a column vector with ith entry x_i, then Ax = Σ_{i=1}^n x_i a_i. In particular, a_i = A(0, ..., 0, 1, 0, ..., 0)ᵀ, where the 1 is in the ith place.

Suppose the matrix product AB is defined. If B = [b_1 b_2 ··· b_m] with columns b_j, then AB = [Ab_1 Ab_2 ··· Ab_m], i.e., the columns of AB are obtained by multiplying the columns of B by A. The matrix product BᵀAᵀ is also defined, and we have BᵀAᵀ = (AB)ᵀ.

The span of a list (v_1, ..., v_k) of vectors is the set of vectors that can be written as linear combinations Σ_{i=1}^k x_i v_i of vectors in the list for some real numbers x_1, ..., x_k. A list (v_1, ..., v_k) of vectors is linearly independent if the only linear combination Σ_{i=1}^k x_i v_i that equals the zero vector is the combination in which all the coefficients x_i are themselves 0. Equivalently, the list is linearly independent when the only column vector solution x of the matrix equation [v_1 ··· v_k]x = 0 is x = 0. In this case, the matrix [v_1 ··· v_k] is called one-to-one or injective. The list is called orthonormal if each vector in the list has norm 1 and each pair from the list is orthogonal.
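These definitions are easy to experiment with. Below is a small sketch in Python with NumPy (my choice of tool; no code appears in the original handout) that checks the dot product, the norm, the column description of Ax, and the transpose rule on made-up vectors and matrices.

    import numpy as np

    # Two column vectors in R^3.
    x = np.array([1.0, 2.0, 2.0])
    y = np.array([2.0, -1.0, 0.0])

    # Dot product: x . y = sum_i x_i y_i, which is also x^T y.
    assert np.isclose(x @ y, np.sum(x * y))
    print("x and y orthogonal:", np.isclose(x @ y, 0.0))  # True: 2 - 2 + 0 = 0

    # Norm: ||x|| = sqrt(x . x); here ||x|| = sqrt(1 + 4 + 4) = 3.
    assert np.isclose(np.linalg.norm(x), np.sqrt(x @ x))

    # Ax is the linear combination of the columns of A with coefficients x_i.
    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 1.0],
                  [3.0, 0.0, 1.0]])
    assert np.allclose(A @ x, sum(x[i] * A[:, i] for i in range(3)))

    # The transpose rule: (B^T)(A^T) = (AB)^T.
    B = np.arange(12.0).reshape(3, 4)
    assert np.allclose(B.T @ A.T, (A @ B).T)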

A list (v_1, ..., v_k) in a vector space V is a basis for V if the list is linearly independent and the span of the list is V. Equivalently, every vector in V can be written in exactly one way as a linear combination of the list. In this case, k is uniquely determined by V (this is proved in a linear algebra course) and is called the dimension of V, written dim V.

Suppose that there is an inner (dot) product in V, like there is in R^n. Then a list of vectors (v_1, ..., v_k) is orthonormal if v_i · v_j = 0 when i ≠ j and v_i · v_j = 1 when i = j. In linear algebra, it is proved that every (finite-dimensional) vector space with an inner product has an orthonormal basis. In fact, it is proved that every orthonormal list may be extended to an orthonormal basis.

A linear map T : R^n → R^m is a map that satisfies T(ax + by) = aT(x) + bT(y) for all vectors x, y and all reals a, b. Given a basis (v_1, ..., v_n) of R^n and a basis (w_1, ..., w_m) of R^m, such a map has an m × n matrix with respect to these bases; the (i, j)-entry t_{i,j} of the matrix is given by the coefficient of w_i in T(v_j); i.e., T(v_j) = Σ_{i=1}^m t_{i,j} w_i. If the basis (w_1, ..., w_m) happens to be orthonormal, then this coefficient can be easily calculated by the equation t_{i,j} = w_i · T(v_j). Unless otherwise specified, we use the standard basis of R^n for all n, which is orthonormal. Often, this standard basis is written (e_1, ..., e_n), where e_i = (0, 0, ..., 0, 1, 0, ..., 0)ᵀ with the 1 in the ith place, so that a vector (a_1, ..., a_n)ᵀ is written as Σ_{i=1}^n a_i e_i.

A subset W of R^n is a subspace if 0 ∈ W and W is closed under vector addition and scalar multiplication. A theorem says that if W is a subspace of R^n and dim W = n, then W = R^n.

The span of (v_1, ..., v_k) is the same as the image of the linear map whose matrix (in the standard bases) is A = [v_1 ··· v_k]; see Linear Algebra Homework problem 2. This is a subspace, called the column space of A and written col A. Its dimension is called the rank of A. Another subspace associated to A is its null space, the set of vectors x that satisfy Ax = 0. Thus, A is injective iff [if and only if] its null space is just {0}. A theorem says that if A is n × n, then A is injective iff its rank is n (so it is surjective) iff A is invertible.

In order to illustrate some new concepts, we'll use the following examples: In R^2, denote one axis by R_1 := {(a, 0)ᵀ ; a ∈ R} and the other axis by R_2 := {(0, a)ᵀ ; a ∈ R}. (The semicolon stands for "such that". The letter a is a dummy variable. Any other variable could have been used and it would not change the sets.) Then R_1 and R_2 are subspaces of R^2. In fact, R_i is the span of (e_i) for i = 1, 2. Note that every vector in R_1 is orthogonal to every vector in R_2. We write this as R_1 ⊥ R_2. Also note that every vector in R^2 can be written (in only one way) as a sum of one vector from R_1 and one vector from R_2, namely, (a, b)ᵀ = ae_1 + be_2. We write this as R^2 = R_1 ⊕ R_2.
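To see the matrix-of-a-linear-map recipe in action, here is a sketch (the map T and both bases are invented for illustration, not taken from the handout) that computes t_{i,j} = w_i · T(v_j) against an orthonormal target basis and confirms T(v_j) = Σ_i t_{i,j} w_i.

    import numpy as np

    M = np.array([[1.0, 2.0],
                  [3.0, 4.0]])

    def T(x):
        # A linear map T : R^2 -> R^2 given by multiplication by M.
        return M @ x

    # A (non-standard) basis of the domain ...
    v_basis = [np.array([1.0, 1.0]), np.array([1.0, -1.0])]
    # ... and an orthonormal basis of the target.
    w_basis = [np.array([1.0, 1.0]) / np.sqrt(2),
               np.array([1.0, -1.0]) / np.sqrt(2)]

    # t[i, j] = w_i . T(v_j); valid because (w_1, w_2) is orthonormal.
    t = np.array([[w @ T(v) for v in v_basis] for w in w_basis])

    # Sanity check: T(v_j) = sum_i t[i, j] w_i.
    for j, v in enumerate(v_basis):
        assert np.allclose(T(v), sum(t[i, j] * w_basis[i] for i in range(2)))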

Note too that the only vectors that are orthogonal to all vectors in R_1 are the vectors in R_2, and vice versa. We write this as R_1^⊥ = R_2 and R_2^⊥ = R_1. More generally, for any subset W of R^n, we write v ⊥ W for a vector v if v ⊥ w for every w ∈ W. We write W^⊥ for the set of vectors v that satisfy v ⊥ W. This set is a subspace (check!). In the preceding example, besides what we wrote, we also have (R^2)^⊥ = {0} and {0}^⊥ = R^2.

Assume now that W is not merely a subset of R^n, but also a subspace. Call d the dimension of W. Choose an orthonormal basis (w_1, ..., w_d) of W and extend it to an orthonormal basis (w_1, ..., w_d, w_{d+1}, ..., w_n) of R^n. We can write any vector v ∈ R^n in this basis: v = Σ_{i=1}^n a_i w_i. When is v ∈ W? Because a vector can be written as a linear combination of basis vectors in only one way, v ∈ W iff a_i = 0 for all i ≥ d + 1. When is v ∈ W^⊥? We claim this holds iff a_j = 0 for all j ≤ d. First, if v ∈ W^⊥, then v ⊥ w_j for all j ≤ d (because all such vectors w_j lie in W), i.e., v · w_j = 0 for all j ≤ d, i.e., a_j = 0 for all j ≤ d. Second, if a_j = 0 for all j ≤ d, then v · w = 0 for all w ∈ W since for w = Σ_{j=1}^d b_j w_j, all dot products in the expansion of v · w = (Σ_{i=d+1}^n a_i w_i) · (Σ_{j=1}^d b_j w_j) are 0. In other words, we may write

v = Σ_{j=1}^d a_j w_j + Σ_{i=d+1}^n a_i w_i,

where the first sum is a vector in W and the second sum is a vector in W^⊥. Clearly this was the only way to write v as a sum of a vector in W plus a vector in W^⊥. We have proved that R^n = W ⊕ W^⊥. Also, we proved that (w_{d+1}, ..., w_n) form a basis for W^⊥. Finally, similar reasoning shows that the only vectors orthogonal to W^⊥ are the vectors in W, i.e., (W^⊥)^⊥ = W.

To recap, R^n = W ⊕ W^⊥, which means, by definition, that for every v ∈ R^n, there are unique w ∈ W and x ∈ W^⊥ such that v = w + x. Since w ⊥ x, we have the Pythagorean Theorem: ‖v‖² = ‖w‖² + ‖x‖² (check!). Since to each v there corresponds a unique w in this way, this defines the orthogonal projection P_W : R^n → R^n onto W as the map P_W(v) = w. In the example above, P_{R_1}((a, b)ᵀ) = (a, 0)ᵀ. In general, with the orthonormal basis convenient for W, we have

P_W(Σ_{i=1}^n a_i w_i) = Σ_{i=1}^d a_i w_i.

In other words, we just set to 0 all the coefficients of the basis vectors that lie outside (and thus perpendicular to) W. Note in particular that for w ∈ W, we have P_W(w) = w and for x ∈ W^⊥, we have P_W(x) = 0.
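Here is a sketch of this decomposition, assuming a concrete orthonormal basis of R^3 of my own choosing whose first two vectors span W: the projection keeps the first d coefficients and zeros out the rest.

    import numpy as np

    # Orthonormal basis of R^3; the first d = 2 vectors span W.
    w1 = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
    w2 = np.array([0.0, 0.0, 1.0])
    w3 = np.array([1.0, -1.0, 0.0]) / np.sqrt(2)
    basis, d = [w1, w2, w3], 2

    v = np.array([3.0, 1.0, 2.0])

    # Coefficients a_i = w_i . v (valid because the basis is orthonormal).
    a = [w @ v for w in basis]

    proj = sum(a[i] * basis[i] for i in range(d))   # P_W(v): keep the first d terms
    x = v - proj                                    # the component in W-perp

    # Checks: v = w + x, x is orthogonal to W, and the Pythagorean Theorem.
    assert np.allclose(v, proj + x)
    assert np.isclose(x @ w1, 0.0) and np.isclose(x @ w2, 0.0)
    assert np.isclose(v @ v, proj @ proj + x @ x)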

Orthogonal projection is always a linear map (check!) and a theorem says that it finds the closest point in W to v, i.e., ‖v − P_W(v)‖ < ‖v − u‖ for all u ∈ W except for u = P_W(v). (Proof: Write v = w + x with w ∈ W and x ∈ W^⊥. Thus, w = P_W(v). For all u ∈ W, we have w − u ∈ W, whence w − u ⊥ x, which implies that ‖v − u‖² = ‖(w − u) + x‖² = ‖w − u‖² + ‖x‖² by the Pythagorean Theorem. This is a minimum exactly when ‖w − u‖ = 0, i.e., when u = w, as desired.) If A is a matrix of P_W, then A² = A (check!). In the example above, the matrix of P_{R_1} with respect to the standard basis of R^2 is

[1 0]
[0 0].

If the basis is orthonormal, then a theorem says that Aᵀ = A. The column space of A is W and its null space is W^⊥ (check!). In particular, the rank of A equals dim W. If I denotes the identity map, then I − P_W is the orthogonal projection onto W^⊥ (check!). If W_1 and W_2 are subspaces of R^n with W_1 ⊆ W_2, then P_{W_1} P_{W_2} = P_{W_1} (check!).

Here are some more examples of the above concepts and definitions. You should verify that what is stated is true.

Let U := {(a, 0)ᵀ ; a ∈ R}, W := {(0, a)ᵀ ; a ∈ R}, and V := {(a, a)ᵀ ; a ∈ R}. Then {(a, b)ᵀ ; a, b ∈ R} = U ⊕ W = U ⊕ V, meaning that there is exactly one way to write each vector (a, b)ᵀ as a sum of a vector in U plus a vector in W; and likewise as a sum of a vector in U plus a vector in V.

A list of orthonormal vectors is automatically linearly independent. The list

( (1/2)(1, 1, 1, 1)ᵀ, (1/2)(1, −1, 1, −1)ᵀ, (1/2)(1, 1, −1, −1)ᵀ, (1/2)(1, −1, −1, 1)ᵀ )

in R^4 is orthonormal. Since the list has 4 vectors, it is also a basis for R^4.

Let u_1 := (−2, 5, −1)ᵀ, u_2 := (2, 1, 1)ᵀ, v := (−5, 10, 15)ᵀ, w := (2, 10, 1)ᵀ, and x := (−7, 0, 14)ᵀ. Let W be the span of (u_1, u_2). Then v = w + x, w = (3/2)u_1 + (5/2)u_2 ∈ W, x ∈ W^⊥, and w = P_W(v).
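The last example can be checked mechanically. The sketch below verifies each claim, and also computes P_W from the normal-equations formula P = A(AᵀA)⁻¹Aᵀ with A = [u_1 u_2], a standard formula for projecting onto a column space, though not one derived in this handout.

    import numpy as np

    u1 = np.array([-2.0, 5.0, -1.0])
    u2 = np.array([2.0, 1.0, 1.0])
    v  = np.array([-5.0, 10.0, 15.0])
    w  = np.array([2.0, 10.0, 1.0])
    x  = np.array([-7.0, 0.0, 14.0])

    # v = w + x with w in W = span(u1, u2) and x in W-perp.
    assert np.allclose(v, w + x)
    assert np.allclose(w, 1.5 * u1 + 2.5 * u2)
    assert np.isclose(x @ u1, 0.0) and np.isclose(x @ u2, 0.0)

    # Projection matrix onto col(A) via the normal equations.
    A = np.column_stack([u1, u2])
    P = A @ np.linalg.inv(A.T @ A) @ A.T

    assert np.allclose(P @ v, w)    # w = P_W(v)
    assert np.allclose(P @ P, P)    # A^2 = A
    assert np.allclose(P.T, P)      # A^T = A in the standard (orthonormal) basis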

In case you had difficulty checking the things above that were marked "check!", here are the solutions. (But make sure you try to do them yourself first. The proofs below are not the only ways to prove these things. You may prefer proofs that use orthonormal bases.)

That W^⊥ is a subspace: We have to check the definition of subspace. To show 0 ∈ W^⊥, we need to show that 0 ⊥ w for all w ∈ W. Since 0 · w = 0, this is obvious. Next, we need to show that for all x, y ∈ W^⊥, we have x + y ∈ W^⊥. So take w ∈ W. We need to show (x + y) ⊥ w. But (x + y) · w = x · w + y · w = 0 + 0 = 0, so this is true. Finally, we need to show that for every real a and every x ∈ W^⊥, we have ax ∈ W^⊥. Again, take w ∈ W. We have (ax) · w = a(x · w) = a · 0 = 0, so this holds. This completes the proof.

That w ⊥ x implies ‖v‖² = ‖w‖² + ‖x‖², where v = w + x: We have ‖v‖² = v · v = (w + x) · (w + x) = w · w + w · x + x · w + x · x = w · w + 0 + 0 + x · x = ‖w‖² + ‖x‖².

That P_W is a linear map: We have to show that for every x, y ∈ R^n and every a, b ∈ R, we have P_W(ax + by) = aP_W(x) + bP_W(y). Since R^n = W + W^⊥, we may write x = w_1 + u_1 and y = w_2 + u_2, where w_1, w_2 ∈ W and u_1, u_2 ∈ W^⊥. Then P_W x = w_1 and P_W y = w_2 by definition of P_W. Now ax + by = (aw_1 + bw_2) + (au_1 + bu_2). Furthermore, the first summand is in W since W is a subspace and the second summand is in W^⊥ since W^⊥ is also a subspace. This means that P_W(ax + by) = aw_1 + bw_2 by definition of P_W. This is the same as aP_W(x) + bP_W(y).

That if A is a matrix of P_W, then A² = A: This is the same as saying that P_W(P_W v) = P_W(v) for all v ∈ R^n. So take v ∈ R^n. Write v = w + x with w ∈ W and x ∈ W^⊥. Then w = P_W(v) by definition of P_W. Since w ∈ W, we have P_W(w) = w. Substituting P_W(v) for w here, we get what we wanted to show.

That the column space of A is W and its null space is W^⊥, where A is a matrix of P_W: The column space of A is the range of P_W. By definition of P_W, its range is included in W. Furthermore, for every w ∈ W, we have P_W(w) = w. This means that the range of P_W is actually equal to W. Thus, so is col A. The null space of A is the set of vectors v with P_W(v) = 0, i.e., with v = 0 + x for some x ∈ W^⊥ (by definition of P_W). This is precisely the set of v ∈ W^⊥, so the null space is W^⊥.
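These proofs can also be sanity-checked numerically on a randomly chosen subspace. In the sketch below (my construction, not the handout's: an orthonormal basis of a random two-dimensional W in R^5 obtained from a QR factorization), the claimed identities hold on random inputs.

    import numpy as np

    rng = np.random.default_rng(0)

    # Random 2-dimensional subspace W of R^5: orthonormalize two random vectors.
    Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))
    P = Q @ Q.T    # the matrix of P_W in the standard basis

    x, y = rng.standard_normal(5), rng.standard_normal(5)
    a, b = 2.0, -3.0

    # Linearity: P_W(ax + by) = a P_W(x) + b P_W(y).
    assert np.allclose(P @ (a * x + b * y), a * (P @ x) + b * (P @ y))

    # Idempotence: A^2 = A.
    assert np.allclose(P @ P, P)

    # The column space of A is W: A fixes every vector in W.
    w = Q @ rng.standard_normal(2)
    assert np.allclose(P @ w, w)

    # The null space of A is W-perp: A kills v - P_W(v).
    v = rng.standard_normal(5)
    assert np.allclose(P @ (v - P @ v), 0.0)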

That I − P_W is the orthogonal projection onto W^⊥: For any v ∈ R^n, write v = w + x with w ∈ W and x ∈ W^⊥. Since also v = x + w and since W = (W^⊥)^⊥, this shows that x = P_{W^⊥}(v). Since x = v − w = I(v) − P_W(v) = (I − P_W)(v), this shows that P_{W^⊥} = I − P_W.

That if W_1 and W_2 are subspaces of R^n with W_1 ⊆ W_2, then P_{W_1} P_{W_2} = P_{W_1}: Let v ∈ R^n and write v = w + x with w ∈ W_2 and x ∈ W_2^⊥. Then also x ∈ W_1^⊥. We have P_{W_1}(v) = P_{W_1}(w) + P_{W_1}(x) = P_{W_1}(w) = P_{W_1} P_{W_2}(v), which proves the identity since v was arbitrary.

Test your understanding by doing the following quiz.

Quiz: Say whether each is true or false and why. You will be given this same quiz in class.

1. The orthogonal projection of v onto the span of a single vector w is always a scalar multiple of v.

2. If a vector v coincides with its orthogonal projection onto a subspace W, then v ∈ W.

3. If W is a subspace of R^n, then W and W^⊥ have no vectors in common.

4. If W is a subspace of R^n, then ‖P_W(v)‖² + ‖v − P_W(v)‖² = ‖v‖² for all v ∈ R^n.
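Lastly, the two facts proved above can be checked numerically in the same style (the quiz itself is left for you). The nested subspaces below are my own example: W_2 spanned by three orthonormal vectors q_1, q_2, q_3 in R^4, and W_1 by q_1 alone.

    import numpy as np

    rng = np.random.default_rng(1)

    # Orthonormal q1, q2, q3 spanning W2; W1 = span(q1).
    Q, _ = np.linalg.qr(rng.standard_normal((4, 3)))
    P2 = Q @ Q.T
    P1 = np.outer(Q[:, 0], Q[:, 0])
    I = np.eye(4)

    # I - P2 projects onto W2-perp: its output is orthogonal to W2 ...
    v = rng.standard_normal(4)
    assert np.allclose(Q.T @ ((I - P2) @ v), 0.0)
    # ... and it is idempotent, as a projection must be.
    assert np.allclose((I - P2) @ (I - P2), I - P2)

    # Since W1 is contained in W2, projecting onto W2 first changes nothing.
    assert np.allclose(P1 @ P2, P1)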