Lectures on Linear Algebra for IT by Mgr. Tereza Kovářová, Ph.D. following content of lectures by Ing. Petr Beremlijski, Ph.D. Department of Applied Mathematics, VSB - TU Ostrava Czech Republic
5. Linear Independence and Basis
5.1 Dependent and Independent Sets of Vectors
5.2 Sufficient Condition for Independence of Functions
5.3 Basis of a Vector Space
5.4 Vector Coordinates
5.5 Use of Coordinates
5.1 Dependent and Independent Sets of Vectors

Definition 1. A nonempty finite set of vectors S = {v₁, …, vₖ} that are elements of a vector space V is called (linearly) independent if the equation

α₁v₁ + … + αₖvₖ = o  (∗)

has the unique solution α₁ = … = αₖ = 0. Whenever the set S = {v₁, …, vₖ} is (linearly) independent, we also say that the vectors v₁, …, vₖ are (linearly) independent. In case the equation (∗) has another solution, we call the set S (linearly) dependent, and the vectors v₁, …, vₖ are then (linearly) dependent as well.
5.1 Dependent and Independent Sets of Vectors

Geometric illustration of dependency for two-dimensional vectors of R². [Figure: two panels. Left: dependent vectors u, v with α₂v = −α₁u, so that α₁u + α₂v = o. Right: dependent vectors u, v, w with α₃w = −(α₁u + α₂v), so that α₁u + α₂v + α₃w = o.]
5.1 Dependent and Independent Sets of Vectors

Example 1. Let us consider the vectors v₁ = [2, 1, 0], v₂ = [1, −2, 5] and v₃ = [7, 1, 5]. Because 3v₁ + v₂ − v₃ = o, the vector set S = {v₁, v₂, v₃} is linearly dependent.

Example 2. The polynomials p₁(x) = 1 − x, p₂(x) = 5 + 3x − 2x² and p₃(x) = 1 + 3x − x² satisfy the equation 3p₁(x) − p₂(x) + 2p₃(x) = 0 for each x ∈ R. Therefore the polynomials p₁, p₂, p₃ form a linearly dependent set in P₃.

Example 3. For the vectors e₁ = [1, 0, 0], e₂ = [0, 1, 0] and e₃ = [0, 0, 1], the equation α₁e₁ + α₂e₂ + α₃e₃ = o has only the zero solution α₁ = 0, α₂ = 0, α₃ = 0. Therefore e₁, e₂, e₃ form a linearly independent set in R³.
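For arithmetic vectors, a dependency check like the one in Example 1 can be sketched numerically: the set is independent exactly when the matrix whose rows are the vectors has full rank. A minimal sketch using numpy (not part of the lecture):

```python
import numpy as np

# Rows are the vectors from Example 1: v1, v2, v3.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, -2.0, 5.0],
              [7.0, 1.0, 5.0]])

# The set is independent iff the rank equals the number of vectors.
rank = np.linalg.matrix_rank(A)
print(rank)  # prints 2: rank 2 < 3, so the set is dependent (v3 = 3*v1 + v2)

# The standard basis vectors of Example 3 give full rank 3.
E = np.eye(3)
print(np.linalg.matrix_rank(E))  # prints 3
```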
5.1 Dependent and Independent Sets of Vectors

Theorem 1. A finite set of nonzero vectors S = {v₁, …, vₘ} is linearly dependent if and only if there exists k ≥ 2 such that vₖ is a linear combination of v₁, …, vₖ₋₁.

Proof (dependence implies such k exists): Suppose S is a dependent vector set. Consider the sets S₁ = {v₁}, S₂ = {v₁, v₂}, …, Sₘ = {v₁, …, vₘ} and let Sₖ be the smallest dependent set among them, so that

α₁v₁ + … + αₖvₖ = o  (∗∗)

with some of the coefficients α₁, …, αₖ different from zero. Because S₁ is obviously independent (v₁ is nonzero), it must be that k ≥ 2. Also αₖ ≠ 0, otherwise Sₖ₋₁ would already be dependent. Therefore we can rewrite the equation (∗∗) using the vector space axioms as

vₖ = (−α₁/αₖ)v₁ + … + (−αₖ₋₁/αₖ)vₖ₋₁.
5.1 Dependent and Independent Sets of Vectors

Proof (such k exists implies dependence): Suppose that for some k with 2 ≤ k ≤ m we have vₖ = α₁v₁ + … + αₖ₋₁vₖ₋₁. Then

(−α₁)v₁ + … + (−αₖ₋₁)vₖ₋₁ + 1·vₖ + 0·vₖ₊₁ + … + 0·vₘ = o.

This means that at least the coefficient of vₖ, which equals 1, is different from zero, and so the set Sₘ = S is linearly dependent.
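Theorem 1 also suggests a procedure: scan the prefixes S₁, S₂, … and find the first vector that does not enlarge the span of its predecessors. A numerical sketch using numpy ranks (the helper name is our own, not from the lecture):

```python
import numpy as np

def first_dependent_index(vectors):
    """Return the first 1-based index k such that vectors[k-1] is a linear
    combination of the preceding vectors, or None if the set is independent."""
    A = np.array(vectors, dtype=float)
    prev_rank = 0
    for k in range(1, len(vectors) + 1):
        r = np.linalg.matrix_rank(A[:k])
        if r == prev_rank:  # adding v_k did not enlarge the span
            return k
        prev_rank = r
    return None

# Vectors from Example 1: v3 = 3*v1 + v2, so k = 3.
print(first_dependent_index([[2, 1, 0], [1, -2, 5], [7, 1, 5]]))  # prints 3
```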
5.2 Sufficient Condition for Independence of Functions

Let S = {f₁, …, fₖ} be a finite set of real functions from a vector space F. The set S is independent if and only if the equation

α₁f₁(x) + … + αₖfₖ(x) = 0 for all x ∈ R

has only the zero solution α₁ = … = αₖ = 0.

Sufficient condition: By substituting k different real numbers x₁, …, xₖ for x, we obtain a system of k linear equations with k variables α₁, …, αₖ:

α₁f₁(x₁) + … + αₖfₖ(x₁) = 0
⋮
α₁f₁(xₖ) + … + αₖfₖ(xₖ) = 0

If the coefficient matrix of this system is regular (invertible), then the only solution is α₁ = … = αₖ = 0, and so the set S is independent.
5.2 Sufficient Condition for Independence of Functions

Example 4. Are the power functions x, x² and x³ linearly independent?

Solution: We choose three values of x arbitrarily, for instance x₁ = 1, x₂ = 2 and x₃ = 3. By substituting these values into the functions f₁(x) = x, f₂(x) = x², f₃(x) = x³ we obtain the linear system:

α₁ + α₂ + α₃ = 0
2α₁ + 4α₂ + 8α₃ = 0
3α₁ + 9α₂ + 27α₃ = 0

To decide about the solution of the system, we transform the coefficient matrix into echelon form:

1  1  1           1  1  1           1  1  1
2  4  8     →     0  2  6     →     0  2  6
3  9  27          0  6  24          0  0  6

(first r₂ := r₂ − 2r₁ and r₃ := r₃ − 3r₁, then r₃ := r₃ − 3r₂)

Since the system has only the zero solution α₁ = α₂ = α₃ = 0 (the coefficient matrix is invertible), the functions x, x² and x³ are linearly independent.
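The check of Example 4 can be sketched numerically: build the coefficient matrix M with entries M[i][j] = fⱼ(xᵢ) and test whether its determinant is nonzero. A minimal sketch assuming numpy (not part of the lecture):

```python
import numpy as np

funcs = [lambda x: x, lambda x: x**2, lambda x: x**3]
points = [1.0, 2.0, 3.0]

# Coefficient matrix M[i][j] = f_j(x_i) of the linear system for the alphas.
M = np.array([[f(x) for f in funcs] for x in points])
det = np.linalg.det(M)
print(det)  # nonzero determinant => only the zero solution => independent
```

A nonzero determinant here gives independence; a zero determinant would be inconclusive, since the condition is only sufficient.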
5.3 Basis of a Vector Space

Definition 2. A finite set B of vectors in a vector space V is called a basis for V if B is a linearly independent set and each vector v ∈ V is a linear combination of vectors in B.

Note: The second condition of the above definition can also be rephrased as: the vector space V is the span of B, V = ⟨B⟩. The definition of a basis applies also to the case when V is a vector subspace of a vector space W, because any vector subspace is a vector space itself. Not every vector space has a basis in the sense of our definition. For instance, a finite set of real functions that would span all of F does not exist.
5.3 Basis of a Vector Space

Example 5. The vectors e₁ = [1, 0, 0], e₂ = [0, 1, 0], e₃ = [0, 0, 1] form a basis for V = R³. The vectors e₁, e₂, e₃ are linearly independent, and any vector v = [v₁, v₂, v₃] in V can be expressed as a linear combination v = v₁e₁ + v₂e₂ + v₃e₃. In general, the basis E = (e₁, …, eₙ) of this form is called the standard basis for Rⁿ, and the basis vectors form the columns (or rows) of the identity matrix Iₙ.

Example 6. The polynomials p₁(x) = 1 and p₂(x) = x form a basis for P₂. Certainly each polynomial p(x) = a₀ + a₁x can be written in the form p(x) = a₀p₁(x) + a₁p₂(x). To show that p₁, p₂ are linearly independent, suppose that a₀, a₁ satisfy a₀p₁(x) + a₁p₂(x) = 0 for all x ∈ R. From here a₀ + a₁x = 0. For x = 0 we get a₀ + a₁·0 = 0, and so a₀ = 0. For x = 1 we then get a₁·1 = 0, and so a₁ = 0. This proves that p₁ and p₂ are linearly independent.
5.4 Vector Coordinates

Definition 3. Suppose B = (b₁, …, bₙ) is an ordered basis for V and v is in V. The coordinates of v relative to the basis B (or the B-coordinates of v) are the weights (scalars) c₁, …, cₙ such that v = c₁b₁ + … + cₙbₙ.

Theorem 2. Let B = (b₁, …, bₙ) be an ordered basis for a vector space V. Suppose x₁, …, xₙ and y₁, …, yₙ are both B-coordinates of v ∈ V. Then x₁ = y₁, …, xₙ = yₙ.

Proof: Since B is a basis for V, we have v = x₁b₁ + … + xₙbₙ and also v = y₁b₁ + … + yₙbₙ. Then o = v + (−1)v = x₁b₁ + … + xₙbₙ + (−1)(y₁b₁ + … + yₙbₙ) = (x₁ − y₁)b₁ + … + (xₙ − yₙ)bₙ. Because the basis vectors are independent, it must be that x₁ = y₁, …, xₙ = yₙ.
5.4 Vector Coordinates

For each vector v ∈ V, the representation by coordinates relative to a given basis B is unique. If c₁, …, cₙ are the B-coordinates of v, then the vector in Rⁿ denoted [v]_B = [c₁, …, cₙ] is called the coordinate vector of v (relative to B).

Example 7. For any arithmetic vector v = [v₁, v₂, v₃], the coordinates relative to the standard basis E = (e₁, e₂, e₃) from Example 5 are v₁, v₂, v₃, because [v₁, v₂, v₃] = v₁[1, 0, 0] + v₂[0, 1, 0] + v₃[0, 0, 1]. The coordinate vector of v relative to E is [v]_E = [v₁, v₂, v₃].

Example 8. The coordinates of the polynomial p(x) = x + 2 relative to the basis P = (p₁, p₂), where p₁(x) = 1 and p₂(x) = x, from Example 6 are 2, 1, because p(x) = x + 2 = 2p₁(x) + 1p₂(x). The P-coordinate vector of p(x) is [p]_P = [2, 1].
5.5 Use of Coordinates

Coordinates are used to transform problems involving vectors from V into problems involving only arithmetic vectors from Rⁿ. Such a transformation corresponds to a one-to-one mapping of the elements of V onto Rⁿ. The mapping V ∋ v ↦ [v]_B ∈ Rⁿ is the coordinate mapping (determined by B).

Lemma 1. For any two vectors u, v ∈ V and a scalar α:
1. [u + v]_B = [u]_B + [v]_B
2. [αu]_B = α[u]_B

Proof: Suppose [u]_B = [u₁, …, uₙ] and [v]_B = [v₁, …, vₙ], where B = (b₁, …, bₙ) is a basis for V. This means u = u₁b₁ + … + uₙbₙ and v = v₁b₁ + … + vₙbₙ. Then u + v = u₁b₁ + … + uₙbₙ + v₁b₁ + … + vₙbₙ = (u₁ + v₁)b₁ + … + (uₙ + vₙ)bₙ and αu = αu₁b₁ + … + αuₙbₙ, from where [u + v]_B = [u]_B + [v]_B and [αu]_B = α[u]_B.
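Lemma 1 can be illustrated numerically: computing [v]_B amounts to solving the linear system with the basis vectors as columns, and the resulting coordinate mapping is linear. A minimal sketch assuming numpy; the basis below is chosen arbitrarily for illustration:

```python
import numpy as np

# Columns of B are the basis vectors b1 = [1,1,1], b2 = [0,1,1], b3 = [0,0,1]
# (an arbitrarily chosen, nonstandard basis of R^3).
B = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]]).T

def coords(v):
    # B-coordinates of v: solve c1*b1 + c2*b2 + c3*b3 = v.
    return np.linalg.solve(B, v)

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])

# Lemma 1: the coordinate mapping is linear.
print(np.allclose(coords(u + v), coords(u) + coords(v)))  # True
print(np.allclose(coords(2.0 * u), 2.0 * coords(u)))      # True
```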
5.5 Use of Coordinates

When solving problems involving linear combinations of vectors, for instance finding out whether a vector is a linear combination of some other vectors, or deciding whether a given set of vectors is linearly independent, we proceed as follows:
1. We choose a basis B for the given vector space such that the representations of all the vectors relative to B are easy to find.
2. We find the B-coordinate vectors of all the vectors occurring in the problem description.
3. We solve the problem obtained from the original assignment by replacing all the vectors with their B-coordinate vectors.

Note: The procedure described above assumes that we can find a suitable basis for the given vector space. This is not always possible.
5.5 Use of Coordinates

Example 9. For the polynomial p(x) = x² − 1 find the coordinates relative to the basis P = (p₁, p₂, p₃), where p₁(x) = 1, p₂(x) = x + 1, p₃(x) = x² + x + 1.

Solution: We choose the basis E = (e₁, e₂, e₃), where e₁(x) = 1, e₂(x) = x, e₃(x) = x² (E is the standard basis for P₃). Then we find the E-coordinate vectors of the polynomials p, p₁, p₂, p₃. They are [p]_E = [−1, 0, 1], [p₁]_E = [1, 0, 0], [p₂]_E = [1, 1, 0], [p₃]_E = [1, 1, 1]. Now we solve the linear system [p]_E = α₁[p₁]_E + α₂[p₂]_E + α₃[p₃]_E. Writing the equations for the corresponding entries we obtain

−1 = α₁ + α₂ + α₃
0 = α₂ + α₃
1 = α₃

The system has the unique solution α₁ = −1, α₂ = −1, α₃ = 1. It is easy to verify that p = −p₁ − p₂ + p₃.
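The system of Example 9 is exactly a matrix equation with the E-coordinate vectors of p₁, p₂, p₃ as columns, so it can be solved in one call. A minimal sketch assuming numpy:

```python
import numpy as np

# Columns are [p1]_E, [p2]_E, [p3]_E relative to the basis (1, x, x^2).
A = np.array([[1.0, 1.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
b = np.array([-1.0, 0.0, 1.0])  # [p]_E for p(x) = x^2 - 1

alpha = np.linalg.solve(A, b)
print(alpha)  # the P-coordinates of p: [-1. -1.  1.]
```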
5.5 Use of Coordinates

Example 10. Are the polynomials p₁(x) = x² + x + 1, p₂(x) = x² + 2x + 1, p₃(x) = x² + x + 2 linearly dependent or independent?

Solution: We choose the standard basis E = (e₁, e₂, e₃), where e₁(x) = 1, e₂(x) = x, e₃(x) = x². For the vectors p₁, p₂, p₃ we find the E-coordinate vectors. They are [p₁]_E = [1, 1, 1], [p₂]_E = [1, 2, 1], [p₃]_E = [2, 1, 1]. We solve the linear system α₁[p₁]_E + α₂[p₂]_E + α₃[p₃]_E = o. By entries the equations are

α₁ + α₂ + 2α₃ = 0
α₁ + 2α₂ + α₃ = 0
α₁ + α₂ + α₃ = 0

We continue by row reduction of the augmented matrix:

1  1  2 | 0          1  1  2 | 0
1  2  1 | 0    →     0  1 −1 | 0
1  1  1 | 0          0  0 −1 | 0

(r₂ := r₂ − r₁, r₃ := r₃ − r₁)

The system has the unique solution α₁ = 0, α₂ = 0, α₃ = 0. Therefore the polynomials p₁, p₂, p₃ are linearly independent.
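Example 10 reduces to the same rank test as before, applied to the coordinate vectors: the polynomials are independent exactly when the matrix of their E-coordinate vectors has full rank. A minimal sketch assuming numpy:

```python
import numpy as np

# Columns are the E-coordinate vectors [p1]_E, [p2]_E, [p3]_E.
A = np.array([[1.0, 1.0, 2.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 1.0]])

# Full rank <=> the homogeneous system has only the zero solution.
print(np.linalg.matrix_rank(A))  # prints 3: the polynomials are independent
```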