LINEAR ALGEBRA: THEORY. Version: August 12, 2000


2 Basic concepts

We will assume that the following concepts are known:

- Vector, column vector, row vector, transpose. Recall that $x$ is a column vector and $x^t$ is a row vector.

- Addition of vectors (always of the same size!) and multiplication of vectors by a number (scalar).

- Scalar product (or dot product) of vectors (always of the same size). Recall that $x^t y = y^t x$, but $x^t y \neq x y^t$ (!!!); the latter is an $n \times n$ matrix if the size of $x$ is $n$. Two vectors with scalar product zero are called orthogonal. (See the numerical sketch after this list.)

  REMARK: In many books the $t$ superscript is omitted, i.e. $x \cdot y$ is used for $x^t y$. With this slightly abusive convention the scalar product is commutative, i.e. $x \cdot y = y \cdot x$.

- Matrix, transpose matrix, matrix addition, multiplication of matrices by a scalar. Note that a vector is a special case of a matrix (an $n \times 1$ matrix).

- Matrix-vector and matrix-matrix multiplication (sizes must match). Neither is commutative.

- Basic algebraic manipulations with vectors and matrices (associativity, commutativity and distributivity for addition and scalar multiplication).

- Null vector and null matrix (or zero matrix), identity matrix.

$n \times 1$ vectors can be represented in the $n$-dimensional Euclidean space $R^n$ by arrows starting at the origin and ending at the point $(x_1, x_2, \dots, x_n)$, where $x = (x_1, x_2, \dots, x_n)^t$. Of course we can see this only for $n = 2, 3$, but that is merely our limitation: in almost all cases the geometric intuition based on the 2- or 3-dimensional picture remains valid in higher dimensions. It is usually worth making pictures.
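Here is a minimal numpy sketch (our addition for experimentation, not part of the original notes) contrasting the scalar product $x^t y$ with the outer product $x y^t$:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])

inner = x @ y                    # x^t y: a single number, here 32.0
outer = np.outer(x, y)           # x y^t: a 3x3 matrix

print(inner)                     # 32.0
print(np.isclose(inner, y @ x))  # True: the scalar product is symmetric
print(outer.shape)               # (3, 3)
```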

More generally, one can consider arrows pointing from any point $A = (a_1, a_2, \dots, a_n)$ to any other point $B = (b_1, b_2, \dots, b_n)$ of the $n$-dimensional space. Such an arrow is identified with the vector $(b_1 - a_1, b_2 - a_2, \dots, b_n - a_n)$; in other words, it is this vector with its starting point shifted to $A$. In most cases people talk about the vector from $A$ to $B$, or the vector $\vec{AB}$, which is a slight abuse of language for the arrow from $A$ to $B$, but it never causes any confusion.

- Geometry of vector addition and scalar multiplication (parallelogram rule).

- The length of a vector $x$ is $\|x\| = \sqrt{x^t x} = \sqrt{x_1^2 + \dots + x_n^2}$. A vector is called normalized if its length is 1. The angle $\theta$ between two vectors $x, y$ is defined via its cosine as
  $$\cos\theta = \frac{x^t y}{\|x\|\,\|y\|}.$$
  For $n = 2, 3$ these definitions coincide with the usual geometric notions of length and angle; in higher dimensional Euclidean spaces they are the basic definitions of the geometry of those spaces. (See the numerical sketch after this list.)

- Elementary geometry of matrix multiplication in the plane and in space: scaling, rotation, projection, reflection, on the level of [D] Chapter 4 and Chapter 12 of Salas-Hille.
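A small numpy sketch (again our addition) of the length and angle formulas:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

length_x = np.sqrt(x @ x)        # same value as np.linalg.norm(x)
cos_theta = (x @ y) / (np.linalg.norm(x) * np.linalg.norm(y))

print(length_x)                          # 1.0
print(np.degrees(np.arccos(cos_theta)))  # 45.0, as plane geometry predicts
```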

The following concepts are also supposedly known; nevertheless we list them to fix the terminology and notation.

Definition 2.1 The linear combination of the vectors $x_1, x_2, \dots, x_k \in R^n$ with the numbers (scalars) $c_1, c_2, \dots, c_k$ is the vector $c_1 x_1 + c_2 x_2 + \dots + c_k x_k$ in $R^n$. A vector $y \in R^n$ is said to be a linear combination of the vectors $x_1, x_2, \dots, x_k \in R^n$ if there exist numbers $c_1, c_2, \dots, c_k$ such that
$$y = c_1 x_1 + c_2 x_2 + \dots + c_k x_k. \qquad (2.1)$$
In this case we also say that $y$ depends linearly on the vectors $x_1, x_2, \dots, x_k$. It can happen that the numbers $c_1, c_2, \dots, c_k$ in this representation are not unique.

Definition 2.2 The span of the vectors $x_1, x_2, \dots, x_k \in R^n$ is the set of all possible linear combinations of these vectors, i.e. of all vectors of the form $c_1 x_1 + c_2 x_2 + \dots + c_k x_k$ as $c_1, c_2, \dots, c_k$ run independently through the real numbers.

Definition 2.3 The set of vectors $x_1, x_2, \dots, x_k \in R^n$ is linearly independent if $c_1 x_1 + c_2 x_2 + \dots + c_k x_k = 0$ implies $c_1 = c_2 = \dots = c_k = 0$. Otherwise the set is called linearly dependent.

REMARK: Linear independence or dependence is a property of a SET OF VECTORS! With a slight abuse of language we often say that the vectors $x_1, x_2, \dots, x_k$ are linearly dependent or independent, but keep in mind that this is a property of the ensemble of these vectors and not of the individual vectors. It can happen that $x_1, x_2, \dots, x_k$ is linearly independent, but adding one more vector $x_{k+1}$ makes the new set linearly dependent.

The following statements are equivalent definitions of linear independence:

Equivalent definition 2.4 The set of vectors $x_1, x_2, \dots, x_k \in R^n$ is linearly dependent if at least one of them is expressible as a linear combination of the rest.
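In computations, one common test for Definition 2.3 (our illustration, not the notes' method; the rigorous hand computation is Gaussian elimination, treated in the next chapter) is a rank check: the vectors are linearly independent exactly when the matrix having them as columns has rank $k$.

```python
import numpy as np

# Columns: the third is the sum of the first two, so the set is dependent.
X = np.column_stack([[1.0, 0.0, 0.0],
                     [1.0, 1.0, 0.0],
                     [2.0, 1.0, 0.0]])

k = X.shape[1]
print(np.linalg.matrix_rank(X) == k)   # False: linearly dependent
```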

Equivalent definition 2.5 The set of vectors $x_1, x_2, \dots, x_k \in R^n$ is linearly independent if every vector in the span of these vectors can be uniquely represented as a linear combination of $x_1, x_2, \dots, x_k$, i.e. the numbers $c_1, c_2, \dots, c_k$ in (2.1) are unique.

REMARK: The key word here is unique. Every vector in the span can be represented as a linear combination of the spanning vectors, but only linearly independent spanning vectors give rise to unique representations. In particular the vector $y = 0$ then has the unique representation with $c_1 = c_2 = \dots = c_k = 0$, which is exactly the first definition.

REMARK: It cannot happen that some vectors in the span of $x_1, x_2, \dots, x_k$ have a unique representation while others have several. Either ALL vectors in the span are uniquely represented (the case of linearly independent vectors) or ALL vectors have many (in fact infinitely many) representations (the case of linearly dependent vectors).

We already mentioned that linear dependence/independence is a property of the set of vectors. If you remove a vector from a linearly independent set, it remains linearly independent. Similarly, if you add a new element to a linearly dependent set, it remains linearly dependent. However, if you remove a vector from a linearly dependent set, both situations can occur: the smaller set could remain linearly dependent, but it could also become linearly independent (give examples for both (*)). Similarly, if you add a new vector to a linearly independent set, both situations can occur (again, give examples (*)).

The following fact is often useful.

Lemma 2.6 Let $x_1, x_2, \dots, x_k$ be nonzero, pairwise orthogonal vectors in $R^n$. Then they are linearly independent.

Proof: Suppose that some linear combination gives the zero vector:
$$c_1 x_1 + c_2 x_2 + \dots + c_k x_k = 0. \qquad (2.2)$$
Take the scalar product of this equation with the vector $x_1$:
$$x_1^t (c_1 x_1 + c_2 x_2 + \dots + c_k x_k) = x_1^t 0 = 0.$$
All the products $x_1^t x_j$ vanish by orthogonality, except for $j = 1$. Hence $c_1 \|x_1\|^2 = 0$, and since $x_1$ is nonzero, we must have $c_1 = 0$. Similarly, multiplying equation (2.2) by $x_2^t$ gives $c_2 = 0$, etc. Hence (2.2) forces all coefficients to be zero, i.e. the original vectors are linearly independent. □
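The proof contains a useful recipe: for nonzero pairwise orthogonal vectors, taking the scalar product with $x_j$ isolates the coefficient $c_j$, so $c_j = x_j^t y / \|x_j\|^2$ whenever $y = c_1 x_1 + \dots + c_k x_k$. A quick numerical check (our illustration, with made-up vectors):

```python
import numpy as np

x1 = np.array([1.0, 1.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0])
x3 = np.array([0.0, 0.0, 2.0])   # nonzero and pairwise orthogonal

y = 3.0 * x1 - 2.0 * x2 + 0.5 * x3

for xj in (x1, x2, x3):
    print((xj @ y) / (xj @ xj))  # recovers 3.0, -2.0, 0.5 in turn
```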

Definition 2.7 A subset $S$ of vectors in $R^n$ is called a linear subspace if it is closed under addition and scalar multiplication. This means that if $v, w \in S$, then $v + w \in S$ and also $\alpha v \in S$ for any number $\alpha$.

REMARK: In most cases we omit the word linear and refer to it simply as a subspace, or even just a space. Sometimes we add that it is a subspace of $R^n$.

REMARK: The linear span of any set of vectors $x_1, x_2, \dots, x_k \in R^n$ is a subspace (CHECK (*)).

Definition 2.8 A set of vectors $v_1, v_2, \dots, v_m$ in a given subspace $S$ of $R^n$ is called a basis of $S$ if $S = \mathrm{Span}\{v_1, v_2, \dots, v_m\}$ AND $v_1, v_2, \dots, v_m$ are linearly independent.

Notice that it is not at all clear that a basis exists. You might think this is an unnecessary subtlety, but as we explain below, it is exactly what allows us to think of subspaces as objects like lines, planes etc. The existence of a basis is actually the first nontrivial theorem of linear algebra, and we will prove it below.
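Since a span is a subspace, a natural computational question is whether a given vector belongs to it. A hedged numpy sketch (our illustration; the vectors are made up): we test membership in $\mathrm{Span}\{v_1, v_2\}$ by checking whether a least-squares solution actually reproduces $b$.

```python
import numpy as np

A = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0]])   # columns span a plane in R^3
b_in  = np.array([2.0, 3.0, 5.0])        # equals 2*col1 + 3*col2
b_out = np.array([0.0, 0.0, 1.0])        # off the plane

for b in (b_in, b_out):
    c = np.linalg.lstsq(A, b, rcond=None)[0]
    print(np.allclose(A @ c, b))         # True, then False
```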

Theorem 2.9 Any subspace $S$ of $R^n$ has a basis.

REMARK: A basis is the most economical set spanning a given subspace $S$, in the sense that it contains the fewest possible elements. In particular, if you remove any element from a basis, then it no longer spans $S$ (otherwise the original set would not have been linearly independent; CHECK (*)). If, on the contrary, you add a new element to a basis, then it stops being linearly independent (CHECK (*)). In fact you can use these properties to give equivalent definitions of a basis.

Equivalent definition 2.10 Any maximal set of linearly independent vectors $\{v_1, v_2, \dots, v_m\}$ in a given subspace $S$ of $R^n$ is a basis of $S$. Maximality means that if you add to this set any further vector from $S$, it stops being linearly independent.

Equivalent definition 2.11 Any minimal set of vectors $\{v_1, v_2, \dots, v_m\}$ that spans a given subspace $S$ of $R^n$ is a basis of $S$. Minimality means that if you remove any vector from this set, then it no longer spans $S$.

It is not completely trivial that these definitions are really equivalent to the original one. The key is the following lemma, which states that every linearly independent set can be extended to a basis by adding more vectors and, conversely, that any finite spanning set can be reduced to a basis by deleting some vectors. In particular it gives a good way to construct bases, and in particular it proves Theorem 2.9.

Lemma 2.12 (i) Let $\{v_1, v_2, \dots, v_m\}$ be a set of linearly independent vectors in a nonzero subspace $S$ of $R^n$. Then it can be extended to a basis of $S$ by adding further vectors, i.e. there exist vectors $v_{m+1}, \dots, v_k$ such that $\{v_1, v_2, \dots, v_k\}$ is a basis of $S$.

(ii) Let $\{v_1, v_2, \dots, v_m\}$ be a (finite) set of vectors that spans a nonzero subspace $S$ of $R^n$. Then it can be reduced to a basis by deleting certain elements from it.

We omit the detailed proof, but here is the idea (DO THE PROOF yourself (*)):

Part (i): If the set does not yet span $S$, then add to it any vector, call it $v_{m+1}$, from the complement of the span. First show that this new, bigger set is also linearly independent (assume it is not and argue by contradiction!). Then ask again whether the bigger set spans $S$. If yes, you have a basis. If not, adjoin another new vector, call it $v_{m+2}$, show that the new set is again linearly independent, etc. The procedure must stop by the time you have $n$ vectors, since otherwise you would have $n + 1$ linearly independent vectors in $R^n$, which is impossible. The rigorous proof of this last fact is not completely trivial, however; in fact we will need Gaussian elimination to see it (see Theorem 3.3).

Part (ii): If the given vectors are linearly independent, then we have a basis. If not, then
$$x_1 v_1 + x_2 v_2 + \dots + x_m v_m = 0$$
for some numbers $x_j$, not all of them zero. Suppose $x_j \neq 0$. Then $v_j$ can be expressed as a linear combination of the rest (CHECK (*)), hence one can remove $v_j$ from the set and the remaining vectors still span $S$ (CHECK!). Now keep reducing until what remains is independent. Since you started with finitely many vectors, this terminates, at worst once the set has been reduced to a single nonzero vector. □
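The reduction in part (ii) is easy to mimic numerically. A hedged sketch (our own helper, not an algorithm from the notes): keep each vector only if it enlarges the rank, i.e. if it is independent of those kept so far.

```python
import numpy as np

def reduce_to_basis(vectors):
    """Greedily extract a basis of the span from a finite spanning list."""
    basis = []
    for v in vectors:
        candidate = basis + [v]
        if np.linalg.matrix_rank(np.column_stack(candidate)) == len(candidate):
            basis = candidate            # v is independent of the kept ones
    return basis

spanning = [np.array([1.0, 0.0, 0.0]),
            np.array([2.0, 0.0, 0.0]),   # multiple of the first: dropped
            np.array([0.0, 1.0, 0.0])]
print(len(reduce_to_basis(spanning)))    # 2
```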

DELICATE REMARK: Note that each of the two constructions proves Theorem 2.9 on its own; however, each needed some extra information. If we want to use (i) to prove Theorem 2.9, then within the proof of (i) we needed Theorem 3.3 (to be proven later by Gaussian elimination). It seems simpler to use (ii) to prove Theorem 2.9, i.e. to start with a spanning set and reduce it to a basis. Apparently nothing extra was used, EXCEPT that you need a finite spanning set! An infinite spanning set always exists (just take all the vectors of $S$), but if you read the proof of (ii) above carefully, the finiteness of the set was used heavily (WHY? (*)). It seems completely obvious that every subspace $S$ of $R^n$ has a finite spanning set if you think of lines, planes etc. But do not forget that a subspace was defined only by a few properties (Definition 2.7), and a priori it is not clear that a subspace looks like a line or a plane. In fact it is exactly Theorem 3.3 which proves that every subspace is spanned by finitely many linearly independent vectors, and hence looks like a line, a plane, or a higher dimensional generalization of these. The conclusion is that we can prove Theorem 2.9 if EITHER we know Theorem 3.3 OR we deal with subspaces having a finite spanning set.

The following crucial property follows immediately from these definitions:

Theorem 2.13 Fix a basis $v_1, v_2, \dots, v_m$ of a given subspace $S$ of $R^n$. Then any vector $v \in S$ can be written as a linear combination of the basis vectors,
$$v = c_1 v_1 + c_2 v_2 + \dots + c_m v_m,$$
with some numbers $c_1, c_2, \dots, c_m$, and these numbers are uniquely determined. In this case we say that the vector $v$ is written in the basis $v_1, v_2, \dots, v_m$ with coefficients (or coordinates) $c_1, c_2, \dots, c_m$.

There are many different bases for a given subspace. However, the number of elements is the same in every basis. This is the second nontrivial theorem of linear algebra (for the proof see Proposition 2.4.20 of [HH]).

Theorem 2.14 Any two bases of a subspace $S$ of $R^n$ have the same number of elements, and this number is called the dimension of $S$.
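For a basis of the whole space $R^n$, the coordinates in Theorem 2.13 are obtained by solving a square linear system. A small numpy sketch (our illustration; the basis is made up):

```python
import numpy as np

B = np.column_stack([[1.0, 1.0],
                     [1.0, -1.0]])   # a basis of R^2, stored as columns
v = np.array([3.0, 1.0])

c = np.linalg.solve(B, v)            # the unique coordinates of v
print(c)                             # [2. 1.]
print(np.allclose(B @ c, v))         # True: v = 2*v1 + 1*v2
```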

EXAMPLE 1: The zero vector alone, $S = \{0\}$, and the full space $S = R^n$ are subspaces of $R^n$. The dimension of $S = \{0\}$ is zero; the dimension of $S = R^n$ is $n$. Sometimes these are called the trivial subspaces. The standard basis of $R^n$ consists of the columns of the identity matrix,
$$e_1 = (1, 0, \dots, 0)^t, \quad e_2 = (0, 1, \dots, 0)^t, \quad \dots, \quad e_n = (0, 0, \dots, 1)^t. \qquad (2.3)$$

EXAMPLE 2: For $n = 2$, apart from the trivial subspaces of $R^2$, there are the one dimensional subspaces. These are the straight lines passing through the origin, i.e. sets of the form $S = \{tv : t \in R\}$ for some nonzero vector $v \in R^2$. A line that does not go through the origin is NOT a subspace (it is not closed under addition; see the numerical check below).

EXAMPLE 3: For $n = 3$, apart from the trivial subspaces of $R^3$, there are one and two dimensional subspaces. Every straight line passing through the origin is a one dimensional subspace; these are of the form $S = \{tv : t \in R\}$ for some nonzero vector $v \in R^3$. Every plane passing through the origin is a two dimensional subspace. Such a plane can be represented as the span of two of its linearly independent vectors, i.e. it is of the form $S = \{tv + sw : t, s \in R\}$ for some linearly independent set $\{v, w\}$ of vectors in $R^3$. Naturally, these representations are not unique. Lines and planes not passing through the origin are NOT subspaces. However, they can be represented as $u + S$, where $S$ is a line or plane through the origin and $u \notin S$. These are called affine sets. (Sometimes people call them affine subspaces, but this is too much of an abuse of language, since they are not subspaces.)
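Here is the promised quick numerical check (our illustration) that a line missing the origin is not closed under addition:

```python
import numpy as np

def on_line(p):
    # the line {(t, t + 1) : t in R}, which avoids the origin
    return np.isclose(p[1], p[0] + 1.0)

p, q = np.array([0.0, 1.0]), np.array([1.0, 2.0])
print(on_line(p), on_line(q))   # True True
print(on_line(p + q))           # False: the sum (1, 3) leaves the line
```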

The dimension measures the size of a subspace: clearly a point (a zero dimensional space) is smaller than a line (a one dimensional space), which in turn is smaller than a plane, etc. We have:

Theorem 2.15 Let $S_1$ and $S_2$ be subspaces of $R^n$, and suppose that $S_1 \subseteq S_2$. Then $\dim S_1 \leq \dim S_2$. Moreover, if $S_1$ is strictly contained in $S_2$, i.e. $S_1 \subsetneq S_2$, then $\dim S_1 < \dim S_2$.

Proof: Start with a basis of $S_1$. These vectors form a linearly independent set in $S_2$, but they do not span the whole of $S_2$ unless $S_1 = S_2$. Hence one can add further vectors from $S_2$ to this set while keeping it linearly independent. In other words, a maximal linearly independent set in $S_2$ has at least as many elements as a basis of $S_1$, and strictly more when $S_1 \neq S_2$. □

Finally we introduce the concepts of orthogonal and orthonormal bases.

Definition 2.16 A basis $v_1, v_2, \dots, v_m$ of a subspace $S$ of $R^n$ is called orthogonal if the basis vectors are pairwise orthogonal, i.e. $v_i^t v_j = 0$ for all $i \neq j$. If, in addition, the vectors are normalized, i.e. $\|v_i\| = 1$, then the basis is called orthonormal.

A different and somewhat more formal treatment of this material can be found at http://www.math.gatech.edu/~carlen/1502/html/pdf/dim.pdf
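To close with a computational note (our addition, not from the notes): numpy's QR factorization produces an orthonormal basis for the span of the columns of a matrix, and the conditions of Definition 2.16 can be verified at once via $Q^t Q = I$.

```python
import numpy as np

A = np.column_stack([[1.0, 1.0, 0.0],
                     [1.0, 0.0, 1.0]])
Q, R = np.linalg.qr(A)                  # columns of Q: an orthonormal basis

print(Q.shape)                          # (3, 2)
print(np.allclose(Q.T @ Q, np.eye(2)))  # True: orthogonal and normalized
```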