A SHORT SUMMARY OF VECTOR SPACES AND MATRICES


This is a little summary of some of the essential points of linear algebra we have covered so far. If you have followed the course so far you should have no trouble understanding these notes. I suggest that you flesh out this text with your own examples.

1. Solving linear equations

We are given an m x n matrix A and a vector b, and we consider the system of linear equations

    A x = b.    (1)

The basic questions are: a) Is there a solution? b) Is there a solution for every b in R^m? c) If there is one, is it unique? d) If it is not unique, how can we describe all of them?

The technical tool that we use to answer these questions is elimination, which leads to the row reduced echelon form R. More precisely, elimination for the augmented matrix [A b] leads to [R c], where R is in row reduced echelon form: each pivot is one, and above and below a pivot there are only zeros. This form is unique, i.e., it does not depend on how you do the elimination. I will assume that you know how to work the row reduction algorithm. The set of solutions of (1) and of

    R x = c    (2)

is the same. The number of pivots is called the rank of the matrix A, denoted by r(A) or just r.

There are four spaces associated with the matrix A: the column space C(A), a subspace of R^m; the null space N(A), a subspace of R^n; the row space, which is the same as the column space of the transposed matrix A^T, C(A^T), a subspace of R^n; and the null space N(A^T), a subspace of R^m. The column space C(A) consists of all linear combinations of the column vectors, and the null space consists of all vectors x in R^n such that A x = 0. Note that C(A) and N(A^T) are subspaces of the same space R^m, and C(A^T) and N(A) are subspaces of the same space R^n.

Elimination does not change the row space, and it does not change the null space of A. Hence we have C(A^T) = C(R^T) and N(A) = N(R). The spaces C(A) and N(A^T) will change, however.
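These notes contain no code, but if you want to experiment, here is a minimal sketch using Python with SymPy (my choice of tool, not part of the course; the matrix is an invented example). It shows the rank as the number of pivots and checks that elimination does not change the null space:

```python
# A small illustration (invented example): rank, pivot columns, and
# the invariance of the null space under elimination.
from sympy import Matrix, zeros

A = Matrix([[1, 2, 1, 3],
            [2, 4, 0, 2],
            [3, 6, 1, 5]])

R, pivots = A.rref()      # R = row reduced echelon form, pivots = pivot column indices
print(pivots)             # (0, 2), so the rank is r(A) = 2

# N(A) = N(R): every null vector of R is also a null vector of A.
for z in R.nullspace():
    assert A * z == zeros(3, 1)

# The pivot columns of A itself (not of R!) form a basis for C(A).
basis_of_CA = [A.col(j) for j in pivots]
```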

The dimensions of these various spaces are related by

    dim C(A) = r(A) = dim C(A^T),    dim N(A) = n - r(A),    dim N(A^T) = m - r(A).

These relations follow from the row reduced echelon form. Here is the argument: For b to be in the column space it must be a linear combination of the column vectors of A = [a_1, ..., a_n], i.e., of the form

    b = y_1 a_1 + ... + y_n a_n,

and thus the vector y = (y_1, ..., y_n) is a solution of the equation (1). After row reduction this equation transforms into

    c = y_1 r_1 + ... + y_n r_n,

where the r_j are the column vectors of the matrix R. The y's stay the same because the solutions of (1) are the same as the ones of (2)! Thus, whenever a set of column vectors of A is linearly independent, so are the corresponding vectors in the matrix R, and conversely. Thus, the dimensions of the column space of A and of R are the same. Not the spaces, but the dimensions. The dimension of the column space of R equals the number of pivots, i.e., the rank of A. In fact the columns of R that contain a pivot are basis vectors for C(R). Therefore the corresponding columns in A, which we call the pivot columns of A, form a basis for C(A) (Why?). Again, let me emphasize that C(R) ≠ C(A).

A little bit easier is the argument for the row space C(A^T): since this space does not change, the rows in R that contain a pivot form a basis for C(A^T). Hence dim C(A^T) = r(A). The rest follows by noting that the number of free variables is the dimension of the null space of A, and this number plus r yields the number of columns.

Now we can give some answers to the questions mentioned at the beginning. There exists a solution if and only if b is in C(A). This is a bit of a triviality, but a useful one, as we shall see later. The equation (1) has a solution for every b in R^m if and only if r(A) = m. This simply expresses the fact that C(A), being a subspace of R^m having dimension m, must be equal to R^m.

Recall that the n x m matrix B is a right inverse for the matrix A if AB = I_m, where I_m is the identity matrix in R^m. Note that a right inverse exists if and only if (1) has a solution for every b in R^m. If the right inverse B exists, then B b solves the equation: A(B b) = I_m b = b, so we have a solution for every b in R^m. To see the converse, we know, by assumption, that A x_i = e_i has a solution for every e_i in R^m. The solution might not be unique, but that does not matter. Form the matrix B = [x_1, ..., x_m] and note that AB = I_m. Another way of saying this is that A has a right inverse if and only if r(A) = m.

To make further progress we observe: Every solution of (1) is of the form x_p + z, where z is a solution of the homogeneous equation A z = 0 and x_p is any particular solution of (1). Thus, we find that the solution is unique if and only if N(A) = {0}, which is the same as dim N(A) = 0. In this case r(A) = n.

If the solution is not unique we can describe all the solutions of the equation (1). Pick any basis of N(A), {v_1, ..., v_{n-r}}. The complete solution set of (1) is then given by

    x_p + t_1 v_1 + ... + t_{n-r} v_{n-r},

where t_1, ..., t_{n-r} are real numbers and x_p is any particular solution. A basis for N(A) can be found by solving the row reduced system in terms of the free variables.
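Here is a sketch of this description of the solution set, again with SymPy (an assumption of convenience; matrix and right-hand side are invented). SymPy's gauss_jordan_solve returns exactly the particular-plus-homogeneous form above:

```python
# Sketch (invented example): the complete solution set of A x = b
# written as a particular solution plus the null space N(A).
from sympy import Matrix

A = Matrix([[1, 2, 1, 3],
            [2, 4, 0, 2],
            [3, 6, 1, 5]])
b = Matrix([4, 4, 8])          # chosen so that b lies in C(A)

# sol is a particular solution plus free parameters (one per free variable).
sol, params = A.gauss_jordan_solve(b)
print(sol, params)

# Any choice of the parameters gives a solution of A x = b.
x = sol.subs({p: 1 for p in params})
assert A * x == b
```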

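Going back to the right inverse for a moment: when r(A) = m, one convenient (though by no means unique) choice is B = A^T (A A^T)^{-1}. This formula is a standard construction, not one derived in these notes, so treat the following as a sketch (NumPy assumed, matrix invented):

```python
# Sketch: a right inverse of a full-row-rank matrix.
# B = A^T (A A^T)^{-1} is one choice; any B with AB = I_m works.
import numpy as np

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 1.0]])          # 2 x 3, rank 2 = m

B = A.T @ np.linalg.inv(A @ A.T)         # 3 x 2
assert np.allclose(A @ B, np.eye(2))     # AB = I_m, so B is a right inverse

# Consequently B b solves A x = b for every b in R^m:
b = np.array([5.0, -1.0])
assert np.allclose(A @ (B @ b), b)
```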
A SHORT SUMMARY OF VECTOR SPACES AND MATRICES 3 dimn(a) = 0. In this case r(a) = n. If the solution is not unique we can describe all the solutions of the equation (1). Pick any basis of N(A), { v 1,..., v n r }. The complete solution set of (1) is then given by n r x p + t j v j where t 1,..., t n r are real numbers x p is any particular solution. A basis for N(A) can be found by solving the row reduced system in terms of the free variables. The matrix A has a left inverse if the exists an n m matrix C with CA = I n. The matrix A has a left inverse if only if N(A) = { 0}. If A has a left inverse C, then applying C to the equation A x = 0 yields x = CA x = 0. Hence N(A) = { 0}. Conversely, if N(A) = { 0} then r(a) = n. This means that the equation A T y = c has a solution for every c R n. Hence, A T has a right inverse A T D = I n therefore D T is a left inverse of A. To summarize, the matrix A has a left inverse if only if dimn(a) = 0. Recall that the matrix A is invertible if it has a right a left inverse in which case the two matrices B C are the same. Thus, for A invertible it is necessary sufficient that both, n = m r(a) = n hold for the matrix A. Equivalently, A is invertible if only if n = m dimn(a) = 0. Another way of saying this is, that the matrix A is invertible if only if the equation (1) has a unique solution for any b R m. For this to be possible we necessarily need that m = n. Still another way of characterizing an invertible matrix is by saying that the column vectors are linearly independent, since n such vectors form for a basis for R n. You see that this part of linear algebra is about formulating the same statement in many different ways. This somewhat confusing but useful. Think of it as acquiring a language. 2. Least square approximations So far we argued only via the dimensions of the various subspaces, but they have special positions. Recall that the orthogonal complement of a subspace V R n consists of all vectors in R n that are perpendicular to every vector in V. The orthogonal complement is denoted by V. V V have only the zero vector in common we have that dimv + dimv = n. Moreover, V = V. The orthogonal complement of the row space of A consists of all vectors that are perpendicular to all row vectors of A hence to all column vectors of A T. Hence C(A T ) = N(A), N(A) = C(A T ) C(A) = N(A T ), N(A T ) = C(A). With these concepts one can formulate some interesting problems. What is the distance of the tip of a vector b R n to a subspace V R n, or better what is vector in V that is closest to b. If b is in V then, of course, b itself is the answer the distance is zero. So we assume that b / V. Suppose we have a vector v V with the property that b v is perpendicular

2. Least square approximations

So far we argued only via the dimensions of the various subspaces, but they also have special relative positions. Recall that the orthogonal complement of a subspace V of R^n consists of all vectors in R^n that are perpendicular to every vector in V. The orthogonal complement is denoted by V^⊥. V and V^⊥ have only the zero vector in common, and we have that dim V + dim V^⊥ = n. Moreover, (V^⊥)^⊥ = V. The orthogonal complement of the row space of A consists of all vectors that are perpendicular to all row vectors of A, and hence to all column vectors of A^T. Hence

    C(A^T)^⊥ = N(A),  N(A)^⊥ = C(A^T)  and  C(A)^⊥ = N(A^T),  N(A^T)^⊥ = C(A).

With these concepts one can formulate some interesting problems. What is the distance of the tip of a vector b in R^n to a subspace V of R^n, or better, which vector in V is closest to b? If b is in V then, of course, b itself is the answer and the distance is zero. So we assume that b is not in V.

Suppose we have a vector v in V with the property that b - v is perpendicular to V, or, what amounts to the same, that b - v is in V^⊥. I claim that ||b - v|| is the distance between V and the tip of b. To see this, pick any other vector w in V. Then we write

    ||b - w||^2 = ||(b - v) + (v - w)||^2 = ||b - v||^2 + ||v - w||^2 + 2 (b - v) . (v - w),

where the last term vanishes, since (v - w) is in V and (b - v) is in V^⊥. Hence

    ||b - w||^2 = ||b - v||^2 + ||v - w||^2 ≥ ||b - v||^2,

with equality only if w = v. Thus, we have to find v with b - v in V^⊥ in order to solve our distance problem.

This can be readily solved. Pick any basis v_1, ..., v_k of V. The vector v in question must be of the form

    v = x_1 v_1 + ... + x_k v_k.

We can reformulate this by using the matrix A = [v_1, ..., v_k], so that v = A x. Now b - A x is in V^⊥ = C(A)^⊥, and hence b - A x is in N(A^T), so that A^T (b - A x) = 0, or

    A^T A x = A^T b.

These are the normal equations. They always have a unique solution; in fact the matrix A^T A is invertible. It suffices to show that the column vectors of A^T A are linearly independent. To check this we solve A^T A y = 0, which means that A y is in N(A^T) = C(A)^⊥. Moreover A y is in C(A), and since C(A) and C(A)^⊥ have only the zero vector in common, A y = 0. Since the column vectors of A are linearly independent, y = 0.

The vector v is now given by v = A (A^T A)^{-1} A^T b, and we set P_V = A (A^T A)^{-1} A^T, which is the orthogonal projection of R^n onto V. Note that P_V does not depend on the choice of basis. Why? We have that P_V^T = P_V and P_V^2 = P_V.

These observations are the foundations of the least square approximation. Imagine that a data set is given by vectors in R^n and you would like to see whether this data set fits onto some lower dimensional subspace, e.g., a line. Very often this problem can be brought into the form of seeking the least square approximation to a system A x ≈ b. This is almost the problem we solved above. The vector b is given, as well as the matrix A. The column vectors, however, do not in general form a basis for C(A); they may be linearly dependent. Moreover, we are not primarily interested in the vector v in C(A) that is closest to the vector b. Our interest is in x.
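The two properties of P_V are easy to check numerically. A quick sketch (NumPy assumed; the plane V is an invented example):

```python
# Sketch: the orthogonal projection P_V = A (A^T A)^{-1} A^T onto the
# span V of the columns of A, and its two defining properties.
import numpy as np

A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])                       # basis of a plane V in R^3

P = A @ np.linalg.inv(A.T @ A) @ A.T
assert np.allclose(P, P.T)                       # P_V is symmetric
assert np.allclose(P @ P, P)                     # P_V is idempotent

b = np.array([1.0, 2.0, 3.0])
v = P @ b                                        # closest point of V to b
assert np.allclose(A.T @ (b - v), 0.0)           # b - v is perpendicular to V
print(np.linalg.norm(b - v))                     # the distance from b to V
```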

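The "fit a line" remark can be made concrete. A sketch (NumPy assumed, data invented): to fit y ≈ c_0 + c_1 t, put a column of ones and the column of the t-values into A, and solve the normal equations for x = (c_0, c_1):

```python
# Sketch (invented data): fitting a line y = c0 + c1 * t by least squares.
import numpy as np

t = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.1, 1.9, 3.2, 3.8])

A = np.column_stack([np.ones_like(t), t])   # columns: 1 and t
x = np.linalg.solve(A.T @ A, A.T @ y)       # normal equations A^T A x = A^T y
print(x)                                    # (c0, c1) of the best-fit line

# Same answer via the library routine:
x2, *_ = np.linalg.lstsq(A, y, rcond=None)
assert np.allclose(x, x2)
```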
The normal equations are still valid,

    A^T A x = A^T b,

and this system of equations always has a solution: the right side is always in the column space of A^T, and hence in the column space of A^T A (these two column spaces coincide, since A^T A and A^T have the same rank). The solution for x, however, need not be unique. The error of the least square approximation is given by ||A x - b||, where x is any solution of the normal equations. One might think that this number depends on which solution we choose, but this is not the case (Why?).
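The (Why?) can at least be checked numerically. A sketch (NumPy assumed) with deliberately dependent columns: two different solutions of the normal equations produce the same error, because A x is the same vector P_V b for each of them:

```python
# Sketch: with linearly dependent columns the normal equations have many
# solutions, but the error ||A x - b|| is the same for all of them.
import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 2.0],
              [0.0, 0.0]])                 # rank 1: columns are dependent
b = np.array([1.0, 3.0, 1.0])

x1, *_ = np.linalg.lstsq(A, b, rcond=None) # one solution of A^T A x = A^T b
x2 = x1 + np.array([2.0, -1.0])            # add a null vector of A: (2, -1)

assert np.allclose(A.T @ A @ x2, A.T @ b)  # x2 also solves the normal equations
print(np.linalg.norm(A @ x1 - b))          # the two errors agree
print(np.linalg.norm(A @ x2 - b))
```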