Algebra II. Paulius Drungilas and Jonas Jankauskas

Contents

1. Quadratic forms. What is a quadratic form? Change of variables. Equivalence of quadratic forms. Canonical form. Normal form. Positive definite quadratic forms. Sylvester's criterion. Exercises.
2. Euclidean space. Euclidean space. Component and projection. Gram-Schmidt orthogonalization process. Orthogonal complement. Finding the orthogonal complement. Finding the projection. Exercises.
3. Linear maps. What is a linear map? What is the matrix of a linear map? Change of basis. Important properties of matrices. Exercises.
4. Eigenvalues and eigenvectors. What are eigenvalues and eigenvectors? How to find eigenvalues? How to find eigenvectors? Exercises.
References

1. Quadratic forms

What is a quadratic form? Let k be a field and let X := (x_1, x_2, ..., x_n) be the vector (1 x n matrix) of independent variables x_1, x_2, ..., x_n. We say that a polynomial f(x_1, x_2, ..., x_n) in k[x_1, x_2, ..., x_n] is a quadratic form if there exists an n x n symmetric matrix A = (a_ij), a_ij in k, such that

  f(x_1, x_2, ..., x_n) = X A X^t = Σ_{i=1}^{n} Σ_{j=1}^{n} a_ij x_i x_j = Σ_{i=1}^{n} a_ii x_i^2 + 2 Σ_{i=1}^{n-1} Σ_{j=i+1}^{n} a_ij x_i x_j.

Hence a quadratic form is a homogeneous polynomial of degree two. We also say that A is the matrix of the quadratic form f(x_1, x_2, ..., x_n). The rank of A is called the rank of the quadratic form f(x_1, x_2, ..., x_n).

Example 1. The matrix of the quadratic form f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 - x_3^2 + 4x_1x_2 - 3x_1x_3 is

  A = (  1     2   -3/2
         2     2    0
        -3/2   0   -1  ),

since

  f(x_1, x_2, x_3) = X A X^t = (x_1  x_2  x_3) A (x_1  x_2  x_3)^t.

Change of variables. The linear transformation

  x_1 = c_11 y_1 + c_12 y_2 + ... + c_1n y_n
  x_2 = c_21 y_1 + c_22 y_2 + ... + c_2n y_n
  ...
  x_n = c_n1 y_1 + c_n2 y_2 + ... + c_nn y_n,    (1.1)

c_ij in k, is called a linear change of variables over the field k. In what follows we shall say "change of variables" instead of "linear change of variables over the field k". The matrix C = (c_ij) is called the matrix of the change of variables (1.1). The change of variables (1.1) can then be rewritten as X = Y C^t, where Y = (y_1, y_2, ..., y_n). We say that the change of variables (1.1) is nonsingular if its matrix C is nonsingular, i.e. if det(C) ≠ 0. In what follows we consider only nonsingular changes of variables, so that "change of variables" means a nonsingular change of variables, unless stated otherwise. The composition of two changes of variables X = Y C_1^t and Y = Z C_2^t is again a change of variables, X = Z C_2^t C_1^t = Z (C_1 C_2)^t, whose matrix is C_1 C_2.

Let f(x_1, x_2, ..., x_n) be a quadratic form with matrix A, and consider a change of variables X = Y C. Then

  f(Y C) = (Y C) A (Y C)^t = Y (C A C^t) Y^t.

Hence the matrix of the quadratic form f(Y C) is C A C^t.

Equivalence of quadratic forms. Let X = (x_1, x_2, ..., x_n) and Y = (y_1, y_2, ..., y_n). We say that quadratic forms f(x_1, x_2, ..., x_n) and g(y_1, y_2, ..., y_n) are equivalent if there exists a change of variables X = Y C such that f(Y C) = g(Y).

Proposition 2. Equivalent quadratic forms have the same rank.

A quadratic form f(x_1, x_2, ..., x_n) is called canonical if its matrix is diagonal, i.e. if

  f(x_1, x_2, ..., x_n) = a_1 x_1^2 + a_2 x_2^2 + ... + a_n x_n^2.

For example, the quadratic form f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 - 5x_3^2 is canonical, whereas g(y_1, y_2, y_3) = 2y_1^2 - y_2^2 + y_1y_2 is not.

Theorem 3. Every quadratic form over a field k of characteristic char(k) ≠ 2 is equivalent to some canonical quadratic form.

Canonical form. We say that a quadratic form g(y_1, y_2, ..., y_n) is a canonical expression of a quadratic form f(x_1, x_2, ..., x_n) if the forms f and g are equivalent and g is canonical. In general the canonical expression of a given quadratic form is not uniquely determined: if g is a canonical expression of a quadratic form f, then for any c in k \ {0} the form c·g is also a canonical expression of f.
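The defining identity f(x_1, ..., x_n) = X A X^t is easy to spot-check numerically. The sketch below (using numpy, which is not part of the notes; the helper names are ours) evaluates both sides for the data of Example 1, with the coefficients as read above:

```python
import numpy as np

# Matrix of the quadratic form of Example 1:
# f(x1, x2, x3) = x1^2 + 2x2^2 - x3^2 + 4x1x2 - 3x1x3
A = np.array([[ 1.0, 2.0, -1.5],
              [ 2.0, 2.0,  0.0],
              [-1.5, 0.0, -1.0]])

def f(x1, x2, x3):
    # the polynomial itself
    return x1**2 + 2*x2**2 - x3**2 + 4*x1*x2 - 3*x1*x3

for X in [np.array([1.0, 2.0, 3.0]), np.array([-2.0, 0.5, 1.0])]:
    # X A X^t for a row vector X
    assert abs(X @ A @ X - f(*X)) < 1e-9
```

Note that the off-diagonal entries of A are half the coefficients of the mixed terms; this is exactly why the symmetric matrix of a quadratic form is unique.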

Example 4. We will find a canonical expression g(y_1, y_2, y_3) of the quadratic form

  f(x_1, x_2, x_3) = x_1^2 - x_2^2 - 11x_3^2 - 2x_1x_2 + 4x_1x_3 + 8x_2x_3

and a change of variables X = Y C such that g(Y) = f(Y C). We find the form g by a procedure known as Lagrange's reduction, which consists essentially of repeated completing of the square. First we collect all the terms with x_1 and complete the resulting expression to a square:

  f(x_1, x_2, x_3) = (x_1^2 - 2x_1x_2 + 4x_1x_3) - x_2^2 - 11x_3^2 + 8x_2x_3
    = (x_1^2 + x_2^2 + 4x_3^2 - 2x_1x_2 + 4x_1x_3 - 4x_2x_3) - x_2^2 - 4x_3^2 + 4x_2x_3 - x_2^2 - 11x_3^2 + 8x_2x_3
    = (x_1 - x_2 + 2x_3)^2 - 2x_2^2 - 15x_3^2 + 12x_2x_3.

The quadratic form -2x_2^2 - 15x_3^2 + 12x_2x_3 involves only the two variables x_2 and x_3 and does not depend on x_1. Now we repeat the above procedure for this quadratic form:

  f(x_1, x_2, x_3) = (x_1 - x_2 + 2x_3)^2 - 2(x_2^2 - 6x_2x_3) - 15x_3^2
    = (x_1 - x_2 + 2x_3)^2 - 2(x_2^2 - 6x_2x_3 + 9x_3^2 - 9x_3^2) - 15x_3^2
    = (x_1 - x_2 + 2x_3)^2 - 2(x_2 - 3x_3)^2 + 3x_3^2.

Putting

  y_1 = x_1 - x_2 + 2x_3
  y_2 = x_2 - 3x_3
  y_3 = x_3

we obtain the change of variables X = Y C, given by

  x_1 = y_1 + y_2 + y_3
  x_2 = y_2 + 3y_3
  x_3 = y_3,

which transforms the quadratic form f(x_1, x_2, x_3) into its canonical expression g(y_1, y_2, y_3) = y_1^2 - 2y_2^2 + 3y_3^2.
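Lagrange's reduction can be verified mechanically: the matrix of g must equal C A C^t, where A is the matrix of f and C is the matrix of the change of variables X = Y C (row i of C holding the coefficients of y_i). A numerical sketch for Example 4, with the matrices as read above:

```python
import numpy as np

# Matrix of f(x1,x2,x3) = x1^2 - x2^2 - 11x3^2 - 2x1x2 + 4x1x3 + 8x2x3
A = np.array([[ 1.0, -1.0,  2.0],
              [-1.0, -1.0,  4.0],
              [ 2.0,  4.0, -11.0]])

# Change of variables X = YC: x1 = y1 + y2 + y3, x2 = y2 + 3y3, x3 = y3
C = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0],
              [1.0, 3.0, 1.0]])

B = C @ A @ C.T   # matrix of g(Y) = f(YC)
# B should be the diagonal matrix of the canonical expression y1^2 - 2y2^2 + 3y3^2
assert np.allclose(B, np.diag([1.0, -2.0, 3.0]))
```

The same check applies to any change of variables produced by the reduction, so it is a convenient way to catch arithmetic slips when completing squares by hand.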

Example 5. We will find a canonical expression g(y_1, y_2, y_3) of the quadratic form f(x_1, x_2, x_3) = x_1x_2 + 4x_1x_3 - 8x_2x_3 and a change of variables X = Y C such that g(Y) = f(Y C). The given quadratic form contains no squares of variables, therefore we first consider the auxiliary change of variables

  x_1 = z_1 + z_2
  x_2 = z_1 - z_2    (1.2)
  x_3 = z_3.

Then

  f(z_1 + z_2, z_1 - z_2, z_3) = (z_1 + z_2)(z_1 - z_2) + 4(z_1 + z_2)z_3 - 8(z_1 - z_2)z_3 = z_1^2 - z_2^2 - 4z_1z_3 + 12z_2z_3.

Now the quadratic form z_1^2 - z_2^2 - 4z_1z_3 + 12z_2z_3 has a square z_1^2, and therefore we can proceed as in Example 4:

  f(z_1 + z_2, z_1 - z_2, z_3) = z_1^2 - z_2^2 - 4z_1z_3 + 12z_2z_3
    = (z_1^2 - 4z_1z_3 + 4z_3^2 - 4z_3^2) - z_2^2 + 12z_2z_3
    = (z_1 - 2z_3)^2 - (z_2^2 - 12z_2z_3) - 4z_3^2
    = (z_1 - 2z_3)^2 - (z_2 - 6z_3)^2 + 32z_3^2.

Putting

  y_1 = z_1 - 2z_3
  y_2 = z_2 - 6z_3
  y_3 = z_3

we obtain the change of variables

  z_1 = y_1 + 2y_3
  z_2 = y_2 + 6y_3    (1.3)
  z_3 = y_3,

which transforms the quadratic form f(z_1 + z_2, z_1 - z_2, z_3) into its canonical expression g(y_1, y_2, y_3) = y_1^2 - y_2^2 + 32y_3^2. Finally, from (1.2) and (1.3) we

obtain the change of variables X = Y C, given by

  x_1 = y_1 + y_2 + 8y_3
  x_2 = y_1 - y_2 - 4y_3
  x_3 = y_3,

which transforms f(x_1, x_2, x_3) into its canonical expression g(y_1, y_2, y_3) = y_1^2 - y_2^2 + 32y_3^2.

Normal form. Consider a quadratic form f(x_1, x_2, ..., x_n) over the field k. If k = R (resp. k = C), then we say that f(x_1, x_2, ..., x_n) is a real (resp. complex) quadratic form. A complex canonical quadratic form is called normal if all its coefficients belong to {0, 1}. A real canonical quadratic form is called normal if all its coefficients belong to {-1, 0, 1}. We say that a quadratic form (either real or complex) g(y_1, y_2, ..., y_n) is a normal expression of a quadratic form f(x_1, x_2, ..., x_n) if the forms f and g are equivalent and g is normal.

Theorem 6. Every complex quadratic form is equivalent to some normal quadratic form. Moreover, two complex quadratic forms are equivalent if and only if they have the same rank.

A normal expression of a real quadratic form of rank r is a sum of r squares of distinct variables with coefficients ±1.

Theorem 7. Every real quadratic form is equivalent to some normal quadratic form.

Example 8. We will find a normal expression h(z_1, z_2, z_3) of the real quadratic form f(x_1, x_2, x_3) = x_1^2 - x_2^2 - 11x_3^2 - 2x_1x_2 + 4x_1x_3 + 8x_2x_3 and a change of variables X = Z C such that h(Z) = f(Z C). (Here X = (x_1, x_2, x_3) and Z = (z_1, z_2, z_3).) In Example 4 we obtained that the change of variables

  x_1 = y_1 + y_2 + y_3
  x_2 = y_2 + 3y_3    (1.4)
  x_3 = y_3

transforms the form f(x_1, x_2, x_3) into its canonical expression g(y_1, y_2, y_3) = y_1^2 - 2y_2^2 + 3y_3^2. Note that

  g(y_1, y_2, y_3) = y_1^2 - (√2·y_2)^2 + (√3·y_3)^2.

Hence, putting

  z_1 = y_1
  z_2 = √2·y_2
  z_3 = √3·y_3,

we obtain the change of variables

  y_1 = z_1
  y_2 = (1/√2)·z_2    (1.5)
  y_3 = (1/√3)·z_3,

which transforms the quadratic form g(y_1, y_2, y_3) into its normal expression h(z_1, z_2, z_3) = z_1^2 - z_2^2 + z_3^2. Finally, from (1.4) and (1.5) we obtain the change of variables X = Z C, given by

  x_1 = z_1 + (1/√2)·z_2 + (1/√3)·z_3
  x_2 = (1/√2)·z_2 + √3·z_3
  x_3 = (1/√3)·z_3,

which transforms f(x_1, x_2, x_3) into its normal expression h(z_1, z_2, z_3) = z_1^2 - z_2^2 + z_3^2.

Theorem 9 (Sylvester's law of inertia). Let f(x_1, x_2, ..., x_n) be an arbitrary real quadratic form. Then any two normal expressions of f have the same number of positive squares of variables.

Let g(y_1, y_2, ..., y_n) be a normal expression of a real quadratic form f(x_1, x_2, ..., x_n). Denote by p(f) (resp. q(f)) the number of positive (resp. negative) squares of variables of the form g. The number p(f) (resp. q(f)) is called the positive index of inertia of f (resp. the negative index of

inertia of f), and the number s(f) := p(f) - q(f) is called the signature of f. Note that the sum p(f) + q(f) equals the rank of f.

Theorem 10. Two real quadratic forms are equivalent if and only if they have the same rank and their signatures coincide.

Positive definite quadratic forms. We say that a real quadratic form f(x_1, x_2, ..., x_n) is positive definite if the inequality f(x_1, x_2, ..., x_n) > 0 holds for any (x_1, x_2, ..., x_n) in R^n \ {(0, 0, ..., 0)}. Similarly, we say that f(x_1, x_2, ..., x_n) is negative definite if the inequality f(x_1, x_2, ..., x_n) < 0 holds for any (x_1, x_2, ..., x_n) in R^n \ {(0, 0, ..., 0)}. Note that a real quadratic form f(x_1, x_2, ..., x_n) is negative definite if and only if the form -f(x_1, x_2, ..., x_n) is positive definite.

Theorem 11. A real quadratic form f(x_1, x_2, ..., x_n) is positive definite (resp. negative definite) if and only if p(f) = n (resp. q(f) = n).

Sylvester's criterion. The leading principal minors Δ_1, Δ_2, ..., Δ_n of a symmetric n x n matrix A = (a_ij) are defined as

  Δ_1 := a_11,  Δ_2 := det( a_11 a_12 ; a_21 a_22 ),  ...,  Δ_n := det(A).

Theorem 12 (Sylvester's criterion). A real quadratic form f(x_1, x_2, ..., x_n) is positive definite if and only if all the leading principal minors of its matrix are positive, i.e. if Δ_j > 0 for j = 1, 2, ..., n.

A real quadratic form f(x_1, x_2, ..., x_n) is negative definite if and only if the leading principal minors of its matrix satisfy (-1)^j Δ_j > 0, j = 1, 2, ..., n.

Example 13. We will show that the quadratic form f(x_1, x_2, x_3) = x_1^2 + 3x_2^2 + ⋯ - 2x_1x_2 + 4x_1x_3 - 20x_2x_3 is positive definite. The leading principal minors of the matrix A of f(x_1, x_2, x_3) are

  Δ_1 = 1 > 0,  Δ_2 = det( 1 -1 ; -1 3 ) = 2 > 0,  Δ_3 = det(A) > 0.

Therefore, by Sylvester's criterion, the quadratic form f(x_1, x_2, x_3) is positive definite.

Exercises.

Exercise 1. Find a canonical expression g(Y) of the given quadratic form f(X) and a change of variables X = Y C such that g(Y) = f(Y C).
a) f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 + 6x_3^2 - 2x_1x_2 + 4x_1x_3 - 6x_2x_3;
b) f(x_1, x_2, x_3) = x_1^2 + 3x_2^2 + 23x_3^2 - 2x_1x_2 + 4x_1x_3 - 16x_2x_3;
c) f(x_1, x_2, x_3) = x_1^2 + 3x_2^2 + 22x_3^2 - 2x_1x_2 + 4x_1x_3 - 16x_2x_3;
d) f(x_1, x_2, x_3) = x_1^2 - 2x_2^2 - 27x_3^2 + 2x_1x_2 - 4x_1x_3 + 14x_2x_3;
e) f(x_1, x_2, x_3) = 2x_1^2 + 11x_2^2 + 94x_3^2 + 8x_1x_2 + 12x_1x_3 - 6x_2x_3;
f) f(x_1, x_2, x_3, x_4) = x_1x_2 + x_3x_4;
g) f(x_1, x_2, x_3) = x_1x_2 + x_2x_3.

Answer:
a) f_K(z_1, z_2, z_3) = z_1^2 + z_2^2 + z_3^2;  x_1 = z_1 + z_2 - z_3, x_2 = z_2 + z_3, x_3 = z_3;
b) f_K(z_1, z_2, z_3) = z_1^2 + 2z_2^2 + z_3^2;  x_1 = z_1 + z_2 + z_3, x_2 = z_2 + 3z_3, x_3 = z_3;
c) f_K(z_1, z_2, z_3) = z_1^2 + 2z_2^2;  x_1 = z_1 + z_2 + z_3, x_2 = z_2 + 3z_3, x_3 = z_3;
d) f_K(z_1, z_2, z_3) = z_1^2 - 3z_2^2 - 4z_3^2;  x_1 = z_1 - z_2 - z_3, x_2 = z_2 + 3z_3, x_3 = z_3;
e) f_K(z_1, z_2, z_3) = 2z_1^2 + 3z_2^2 + z_3^2;  x_1 = z_1 - 2z_2 - 13z_3, x_2 = z_2 + 5z_3, x_3 = z_3;
f) f_K(z_1, z_2, z_3, z_4) = z_1^2 - z_2^2 + z_3^2 - z_4^2;  x_1 = z_1 + z_2, x_2 = z_1 - z_2, x_3 = z_3 + z_4, x_4 = z_3 - z_4;
g) f_K(z_1, z_2, z_3) = z_1^2 - z_2^2;  x_1 = z_1 - z_2 - z_3, x_2 = z_1 + z_2, x_3 = z_3.

Exercise 2. Find a normal expression of each of the quadratic forms in Exercise 1 and the appropriate changes of variables.

Exercise 3. Are the following quadratic forms equivalent?
a) f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 + 6x_3^2 - 2x_1x_2 + 4x_1x_3 - 6x_2x_3,
   g(y_1, y_2, y_3) = y_1^2 + 3y_2^2 + 23y_3^2 - 2y_1y_2 + 4y_1y_3 - 16y_2y_3;
b) f(x_1, x_2, x_3) = x_1^2 + ⋯ - 4x_1x_2 + 6x_1x_3 + 6x_2x_3,
   g(y_1, y_2, y_3) = 2y_1^2 + ⋯ - 4y_1y_2 + ⋯ + 2y_2y_3;
c) f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 + 6x_3^2 - 2x_1x_2 + 4x_1x_3 - 6x_2x_3,
   g(y_1, y_2, y_3) = y_1^2 + ⋯ - 2y_1y_2 + 8y_1y_3 + ⋯;
d) f(x_1, x_2, x_3) = x_1^2 + ⋯ - 9x_3^2 - 2x_1x_2 + 8x_1x_3 - 26x_2x_3,
   g(y_1, y_2, y_3) = 3y_1^2 + ⋯ - 6y_1y_2 + ⋯.
Answer: a) yes; b) yes; c) no; d) yes.

Exercise 4. Is the following quadratic form positive definite?
a) f(x_1, x_2, x_3) = x_1^2 + 2x_2^2 + 6x_3^2 - 2x_1x_2 + 4x_1x_3 - 6x_2x_3;
b) f(x_1, x_2, x_3) = x_1^2 + 3x_2^2 + 23x_3^2 - 2x_1x_2 + 4x_1x_3 - 16x_2x_3;
c) f(x_1, x_2, x_3) = x_1^2 + 3x_2^2 + 22x_3^2 - 2x_1x_2 + 4x_1x_3 - 16x_2x_3;
d) f(x_1, x_2, x_3) = x_1^2 - 2x_2^2 - 27x_3^2 + 2x_1x_2 - 4x_1x_3 + 14x_2x_3;
e) f(x_1, x_2, x_3) = 2x_1^2 + 11x_2^2 + 94x_3^2 + 8x_1x_2 + 12x_1x_3 - 6x_2x_3;
f) f(x_1, x_2, x_3, x_4) = x_1x_2 + x_3x_4.
Answer: a) yes; b) yes; c) no; d) no; e) yes; f) no.
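Sylvester's criterion and the signature lend themselves to a mechanical check. The sketch below (numpy; the helper names are ours, and the matrix is that of Exercise 4 a) as read above) computes the leading principal minors Δ_j and obtains the signature from the signs of the eigenvalues, which by the law of inertia agree with the signs of any canonical expression:

```python
import numpy as np

def leading_minors(A):
    # Delta_j = determinant of the upper-left j x j block
    return [np.linalg.det(A[:j, :j]) for j in range(1, A.shape[0] + 1)]

def is_positive_definite(A):
    # Sylvester's criterion (Theorem 12)
    return all(d > 0 for d in leading_minors(A))

def signature(A):
    # s(f) = p(f) - q(f), read off the eigenvalue signs of the symmetric matrix A
    ev = np.linalg.eigvalsh(A)
    return int((ev > 1e-9).sum()) - int((ev < -1e-9).sum())

# Exercise 4 a): f = x1^2 + 2x2^2 + 6x3^2 - 2x1x2 + 4x1x3 - 6x2x3
A = np.array([[ 1.0, -1.0,  2.0],
              [-1.0,  2.0, -3.0],
              [ 2.0, -3.0,  6.0]])
assert is_positive_definite(A)   # answer a): yes
assert signature(A) == 3         # p(f) = 3, q(f) = 0
```

By Theorem 10, two symmetric matrices of the same size represent equivalent real forms exactly when their ranks and signatures agree, so the same two helpers settle Exercise 3 as well.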

2. Euclidean space

Euclidean space. Let V be a vector space over the field of real numbers R. A map ⟨·,·⟩ : V x V → R is called an inner product on the space V if it satisfies the following conditions for all vectors u, v, v_1, v_2 in V and any real number a in R:
1) ⟨v_1 + v_2, u⟩ = ⟨v_1, u⟩ + ⟨v_2, u⟩;
2) ⟨v, u⟩ = ⟨u, v⟩;
3) ⟨a·v, u⟩ = a·⟨v, u⟩;
4) ⟨v, v⟩ ≥ 0;
5) ⟨v, v⟩ = 0 if and only if v = 0.
If ⟨·,·⟩ : V x V → R is an inner product on the space V, then the pair (V, ⟨·,·⟩) is called a Euclidean space.

Example 5. Let u, v in R^n, u = (α_1, α_2, ..., α_n), v = (β_1, β_2, ..., β_n). Then the map ⟨·,·⟩ : R^n x R^n → R defined by
  ⟨u, v⟩ = α_1β_1 + α_2β_2 + ... + α_nβ_n
is an inner product on the vector space R^n.

Example 6. The ring of polynomials R[x] is a vector space over R. The map ⟨·,·⟩ : R[x] x R[x] → R defined by
  ⟨f(x), g(x)⟩ = ∫_0^1 f(x)·g(x) dx,  f(x), g(x) in R[x],
is an inner product on the space R[x].

Example 7. Let V := R[x] and define the map ⟨·,·⟩ : V x V → R by
  ⟨f(x), g(x)⟩ = ∫_{-∞}^{+∞} f(x)·g(x)·e^{-x^2} dx,
where f(x), g(x) in R[x]. Then the pair (R[x], ⟨·,·⟩) is a Euclidean space.

Example 8. Denote by C([0, 1]) the set of all continuous real-valued functions on the interval [0, 1]. Then V := C([0, 1]) is a vector space over R. Define the map ⟨·,·⟩ : V x V → R by
  ⟨f(x), g(x)⟩ = ∫_0^1 f(x)·g(x) dx,

where f(x), g(x) in V. Then the pair (C([0, 1]), ⟨·,·⟩) is a Euclidean space.

Example 9. Denote by M_n(R) the set of n x n matrices with real entries. Then M_n(R) is a vector space over R, and the map ⟨·,·⟩ : M_n(R) x M_n(R) → R defined by
  ⟨A, B⟩ = Tr(AB^t),  A, B in M_n(R),
is an inner product on the space M_n(R). Here B^t denotes the transpose of B, and Tr(A) stands for the trace of A, i.e. if A = (a_ij) then Tr(A) = Σ_i a_ii.

Proposition 10. Suppose that V is a finite-dimensional vector space over R. Then there exists an inner product on the space V.

Example 11. Let (V, ⟨·,·⟩) be a Euclidean space and denote by O the zero vector of V. Then ⟨O, v⟩ = 0 for any v in V. Indeed, the first condition in the definition of an inner product implies
  ⟨O, v⟩ = ⟨O + O, v⟩ = ⟨O, v⟩ + ⟨O, v⟩,
and therefore ⟨O, v⟩ = 0.

Theorem 12 (Cauchy-Bunyakovsky-Schwarz inequality). Suppose that (V, ⟨·,·⟩) is a Euclidean space. Then for all vectors u, v in V,
  ⟨u, v⟩^2 ≤ ⟨u, u⟩ · ⟨v, v⟩.
Moreover, equality holds if and only if the vectors u and v are linearly dependent.

Example 13. Let (R^n, ⟨·,·⟩) be the Euclidean space of Example 5. The Cauchy-Bunyakovsky-Schwarz inequality implies
  ( Σ_{i=1}^{n} α_iβ_i )^2 ≤ ( Σ_{i=1}^{n} α_i^2 ) · ( Σ_{i=1}^{n} β_i^2 )
for any α_i, β_i in R. Equality holds if and only if there exists t in R such that either (α_1, α_2, ..., α_n) = (tβ_1, tβ_2, ..., tβ_n) or (β_1, β_2, ..., β_n) = (tα_1, tα_2, ..., tα_n).
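The inner-product axioms and the Cauchy-Bunyakovsky-Schwarz inequality are easy to test numerically for the trace inner product of Example 9. A sketch (numpy; the function name `inner` is ours):

```python
import numpy as np

def inner(A, B):
    # <A, B> = Tr(A B^t) on M_n(R)
    return np.trace(A @ B.T)

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

# symmetry and positivity (axioms 2 and 4)
assert np.isclose(inner(A, B), inner(B, A))
assert inner(A, A) > 0

# Cauchy-Bunyakovsky-Schwarz: <A,B>^2 <= <A,A> <B,B>
assert inner(A, B) ** 2 <= inner(A, A) * inner(B, B)

# equality for linearly dependent "vectors", here A and 2A
assert np.isclose(inner(A, 2 * A) ** 2, inner(A, A) * inner(2 * A, 2 * A))
```

Note that ⟨A, A⟩ = Tr(AA^t) is the sum of the squares of all entries of A, which is why positivity holds.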

Example 14. Let (R[x], ⟨·,·⟩) be the Euclidean space of Example 6 and let f(x), g(x) in R[x]. The Cauchy-Bunyakovsky-Schwarz inequality implies
  ( ∫_0^1 f(x)g(x) dx )^2 ≤ ( ∫_0^1 f(x)^2 dx ) · ( ∫_0^1 g(x)^2 dx ).
Equality holds if and only if there exists t in R such that either f(x) = t·g(x) or g(x) = t·f(x).

Example 15. Let (R[x], ⟨·,·⟩) be the Euclidean space of Example 7 and let f(x), g(x) in R[x]. The Cauchy-Bunyakovsky-Schwarz inequality implies
  ( ∫_{-∞}^{+∞} f(x)g(x)e^{-x^2} dx )^2 ≤ ( ∫_{-∞}^{+∞} f(x)^2 e^{-x^2} dx ) · ( ∫_{-∞}^{+∞} g(x)^2 e^{-x^2} dx ).
Equality holds if and only if there exists t in R such that either f(x) = t·g(x) or g(x) = t·f(x).

Example 16. Let (C([0, 1]), ⟨·,·⟩) be the Euclidean space of Example 8 and let f(x), g(x) in C([0, 1]). The Cauchy-Bunyakovsky-Schwarz inequality implies
  ( ∫_0^1 f(x)g(x) dx )^2 ≤ ( ∫_0^1 f(x)^2 dx ) · ( ∫_0^1 g(x)^2 dx ).
Equality holds if and only if there exists t in R such that either f(x) = t·g(x) or g(x) = t·f(x).

Example 17. Let (M_n(R), ⟨·,·⟩) be the Euclidean space of Example 9 and let A, B in M_n(R). The Cauchy-Bunyakovsky-Schwarz inequality implies
  ( Tr(AB^t) )^2 ≤ Tr(AA^t) · Tr(BB^t).
Equality holds if and only if there exists t in R such that either A = t·B or B = t·A.

Let (V, ⟨·,·⟩) be a Euclidean space and let v in V. The number ‖v‖ := √⟨v, v⟩ is called the length or the norm of v. The map ‖·‖ : V → R has the following three properties:
1) For every vector v in V, ‖v‖ ≥ 0, and equality holds if and only if v = O.
2) For any v in V and a in R, ‖av‖ = |a|·‖v‖.
3) For any u, v in V, ‖u + v‖ ≤ ‖u‖ + ‖v‖.

The third property is called the triangle inequality. A vector v in V is called a unit vector if ‖v‖ = 1. If v in V and v ≠ O, then v/‖v‖ is a unit vector. We define vectors u, v in V to be orthogonal, or perpendicular, and write u ⊥ v, if their inner product ⟨u, v⟩ is zero. A vector system {v_1, v_2, ..., v_n} of V is called orthogonal if its elements are mutually perpendicular, i.e. ⟨v_i, v_j⟩ = 0 whenever i ≠ j. If in addition each vector of the system has length 1, then the system is called orthonormal.

Component and projection. Let u be a nonzero vector of a Euclidean space (V, ⟨·,·⟩). (Then ‖u‖ > 0.) For any v in V there exists a unique number c such that the vector v - cu is perpendicular to u. Indeed, we have
  v - cu ⊥ u  ⟺  ⟨v - cu, u⟩ = 0  ⟺  c = ⟨v, u⟩/⟨u, u⟩ = ⟨v, u⟩/‖u‖^2.
We call c the component of v along u. The vector cu is called the projection of v along u.

Proposition 18. Let {v_1, v_2, ..., v_n} be an orthogonal system of nonzero vectors of a Euclidean space (V, ⟨·,·⟩) and let c_i be the component of v in V along v_i. Then the vector
  v - c_1v_1 - c_2v_2 - ... - c_nv_n
is perpendicular to each v_i.

The next theorem shows that c_1v_1 + c_2v_2 + ... + c_nv_n gives the closest approximation of v by a linear combination of v_1, v_2, ..., v_n.

Theorem 19. Let {v_1, v_2, ..., v_n} be an orthogonal system of nonzero vectors of a Euclidean space (V, ⟨·,·⟩) and let c_i be the component of v in V along v_i. Let a_1, a_2, ..., a_n be arbitrary numbers. Then
  ‖ v - Σ_{i=1}^{n} c_iv_i ‖ ≤ ‖ v - Σ_{i=1}^{n} a_iv_i ‖.

Theorem 20 (Bessel's inequality). Let {v_1, v_2, ..., v_n} be an orthonormal system of vectors of a Euclidean space (V, ⟨·,·⟩) and let c_i be the component of v in V along v_i. Then
  Σ_{i=1}^{n} c_i^2 ≤ ‖v‖^2.
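The component formula c = ⟨v, u⟩/⟨u, u⟩ translates directly into code. The sketch below (numpy; the data and helper name are ours) checks that v - cu is indeed perpendicular to u, and illustrates Bessel's inequality for an orthonormal pair in R^3:

```python
import numpy as np

def component(v, u):
    # component of v along u: c = <v, u> / <u, u>
    return np.dot(v, u) / np.dot(u, u)

v = np.array([3.0, 1.0, 2.0])
u = np.array([1.0, 2.0, 2.0])

c = component(v, u)           # here c = 9/9 = 1
assert np.isclose(np.dot(v - c * u, u), 0.0)   # v - cu is perpendicular to u

# Bessel: for the orthonormal system {e1, e2}, c1^2 + c2^2 <= ||v||^2
e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
c1, c2 = component(v, e1), component(v, e2)
assert c1**2 + c2**2 <= np.dot(v, v)
```

Bessel's inequality becomes an equality precisely when v lies in the span of the orthonormal system, which is the situation of Exercise 9 below.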

Gram-Schmidt orthogonalization process. Denote by L(v_1, v_2, ..., v_n) the subspace of the vector space V generated by the vector system {v_1, v_2, ..., v_n} of V. Then
  L(v_1, v_2, ..., v_n) = {α_1v_1 + α_2v_2 + ... + α_nv_n : α_1, α_2, ..., α_n in R}.

Theorem 21. Suppose that (V, ⟨·,·⟩) is a Euclidean space and let {v_1, v_2, ..., v_n} be a vector system of V which is linearly independent over R. Then there exists an orthonormal vector system {u_1, u_2, ..., u_n} such that for every j in {1, 2, ..., n} we have L(v_1, v_2, ..., v_j) = L(u_1, u_2, ..., u_j).

Corollary 22. Let (V, ⟨·,·⟩) be a finite-dimensional Euclidean space with V ≠ {O}. Then V has an orthogonal basis.

The method of finding the orthonormal vector system {u_1, u_2, ..., u_n} of Theorem 21 is known as the Gram-Schmidt orthogonalization process. This procedure can be described explicitly. First we construct an orthogonal vector system {v'_1, v'_2, ..., v'_n} as follows:

  v'_1 = v_1,
  v'_2 = v_2 - (⟨v_2, v'_1⟩/⟨v'_1, v'_1⟩)·v'_1,
  v'_3 = v_3 - (⟨v_3, v'_2⟩/⟨v'_2, v'_2⟩)·v'_2 - (⟨v_3, v'_1⟩/⟨v'_1, v'_1⟩)·v'_1,    (2.1)
  ...
  v'_n = v_n - (⟨v_n, v'_{n-1}⟩/⟨v'_{n-1}, v'_{n-1}⟩)·v'_{n-1} - ... - (⟨v_n, v'_1⟩/⟨v'_1, v'_1⟩)·v'_1.

Then the vector system {u_1 := v'_1/‖v'_1‖, u_2 := v'_2/‖v'_2‖, ..., u_n := v'_n/‖v'_n‖} is orthonormal and satisfies the statement of Theorem 21.

Example 23. Orthogonalize the vector system v_1 = (1, 2, 1), v_2 = (3, 4, 1), v_3 = (4, 7, 0). Let v'_1 := v_1 and
  v'_2 = v_2 - (⟨v_2, v'_1⟩/⟨v'_1, v'_1⟩)·v'_1.
In other words, we subtract from v_2 its projection along v'_1. Hence the vector v'_2 is perpendicular to v'_1, by Proposition 18. We find v'_2 = (1, 0, -1).

Now we subtract from v_3 its projection along v'_1 and v'_2:
  v'_3 = v_3 - (⟨v_3, v'_2⟩/⟨v'_2, v'_2⟩)·v'_2 - (⟨v_3, v'_1⟩/⟨v'_1, v'_1⟩)·v'_1.
The vector v'_3 is perpendicular to v'_1 and v'_2, by Proposition 18. We find v'_3 = (-1, 1, -1). Now the vector system {v'_1, v'_2, v'_3} is orthogonal and satisfies the statement of Theorem 21, i.e.
  L(v_1) = L(v'_1),  L(v_1, v_2) = L(v'_1, v'_2),  L(v_1, v_2, v_3) = L(v'_1, v'_2, v'_3).
If we wish to have an orthonormal vector system, we divide these vectors by their lengths:
  u_1 := v'_1/‖v'_1‖ = (1/√6)·(1, 2, 1),
  u_2 := v'_2/‖v'_2‖ = (1/√2)·(1, 0, -1),
  u_3 := v'_3/‖v'_3‖ = (1/√3)·(-1, 1, -1).

Orthogonal complement. Let (V, ⟨·,·⟩) be a Euclidean space and let W be a subspace of V. Denote by W^⊥ the set of all vectors of V which are perpendicular to every vector of W, i.e.
  W^⊥ = {u in V : u ⊥ v for every v in W}.
Then W^⊥ is a subspace of V, called the orthogonal complement of W.

Theorem 24. Let W be a subspace of a Euclidean space (V, ⟨·,·⟩). Then V is the direct sum of W and the orthogonal complement W^⊥. In other words,
  W ∩ W^⊥ = {O}  and  dim W + dim W^⊥ = dim V.    (2.2)
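The orthogonalization of Example 23 can be reproduced by a direct implementation of (2.1). A sketch (numpy; the function name is ours, and the input vectors are those of Example 23 as read above):

```python
import numpy as np

def gram_schmidt(vectors):
    # returns the orthogonal (not yet normalized) system v'_1, ..., v'_n of (2.1)
    ortho = []
    for v in vectors:
        w = v.astype(float)
        for u in ortho:
            # subtract the projection of v along the already-built u
            w = w - (np.dot(v, u) / np.dot(u, u)) * u
        ortho.append(w)
    return ortho

v1 = np.array([1, 2, 1])
v2 = np.array([3, 4, 1])
v3 = np.array([4, 7, 0])
w1, w2, w3 = gram_schmidt([v1, v2, v3])

assert np.allclose(w2, [1, 0, -1])     # v'_2 of Example 23
assert np.allclose(w3, [-1, 1, -1])    # v'_3 of Example 23
# mutual orthogonality
assert abs(np.dot(w1, w2)) < 1e-9 and abs(np.dot(w1, w3)) < 1e-9 and abs(np.dot(w2, w3)) < 1e-9
```

Dividing each w_i by its norm then yields the orthonormal system u_1, u_2, u_3 of the example.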

Finding the orthogonal complement. Let W be a subspace of V and let v in V. Since V is the direct sum of W and W^⊥, there exist unique vectors u in W and w in W^⊥ such that v = u + w. The vector u is called the projection of v onto the subspace W. Similarly, the vector w is called the perpendicular of v to the subspace W. Note that the vector w is the projection of v onto the orthogonal complement W^⊥.

Suppose that W = L(v_1, v_2, ..., v_m) is a subspace of V and we want to find the orthogonal complement W^⊥. There exist vectors v_{m+1}, ..., v_n such that {v_1, v_2, ..., v_n} is a basis of V. Orthogonalizing the system {v_1, v_2, ..., v_n} we obtain an orthonormal vector system {u_1, u_2, ..., u_n} which satisfies the statement of Theorem 21. Then {u_{m+1}, u_{m+2}, ..., u_n} is an orthonormal basis of W^⊥, i.e. W^⊥ = L(u_{m+1}, u_{m+2}, ..., u_n).

Example 25. Consider the usual Euclidean space R^4. Find a basis of the orthogonal complement W^⊥ of the subspace W generated by the vectors v_1 = (1, 4, 5, -2) and v_2 = (2, 7, 1, 3). One could find a basis of W^⊥ by taking a basis of V which contains the vectors v_1 and v_2 and then orthogonalizing it. However, we will do it in a different way. Indeed, a vector v = (x_1, x_2, x_3, x_4) in R^4 lies in W^⊥ if and only if it is perpendicular to both vectors v_1 and v_2 which generate W, i.e. if and only if ⟨v, v_1⟩ = 0 and ⟨v, v_2⟩ = 0. Hence the orthogonal complement W^⊥ coincides with the set of solutions of the system of linear equations

  x_1 + 4x_2 + 5x_3 - 2x_4 = 0
  2x_1 + 7x_2 + x_3 + 3x_4 = 0.    (2.3)

We find that x_1 = 31x_3 - 26x_4, x_2 = -9x_3 + 7x_4. Hence the set of solutions of the system (2.3) is
  {(31x_3 - 26x_4, -9x_3 + 7x_4, x_3, x_4) : x_3, x_4 in R}.
Substituting x_3 = 1 and x_4 = 0 we obtain the vector u_1 = (31, -9, 1, 0) in W^⊥, and substituting x_3 = 0 and x_4 = 1 we obtain the vector u_2 = (-26, 7, 0, 1) in W^⊥. On the other hand, it follows from (2.2) that dim W^⊥ = dim R^4 - dim W = 4 - 2 = 2. Hence the system {u_1, u_2} is a basis of W^⊥.

Finding the projection. Let W be a subspace of a Euclidean space (V, ⟨·,·⟩). Suppose that {u_1, u_2, ..., u_m} is an orthogonal basis of W and {u_{m+1}, u_{m+2}, ..., u_n} is an orthogonal basis of W^⊥. Let v be a vector of V. Then the projection of v onto the subspace W is
  α_1u_1 + α_2u_2 + ... + α_mu_m,  where α_j = ⟨v, u_j⟩/⟨u_j, u_j⟩, j = 1, 2, ..., m,
and the perpendicular of v to the subspace W is
  α_{m+1}u_{m+1} + α_{m+2}u_{m+2} + ... + α_nu_n,  where α_j = ⟨v, u_j⟩/⟨u_j, u_j⟩, j = m+1, m+2, ..., n.

Example 26. Find the projection and the perpendicular of the vector v = (2, -3, 3, -3) to the subspace W = L((1, -1, 2, 3), (-1, 3, 1, 5)) of the Euclidean space R^4. Denote by u the projection of v onto W. Then u in W, and therefore
  u = x·(1, -1, 2, 3) + y·(-1, 3, 1, 5)
for some x, y in R. Since
  v - u = (2 - x + y, -3 + x - 3y, 3 - 2x - y, -3 - 3x - 5y)
is the perpendicular of v to W, this vector is orthogonal to both vectors (1, -1, 2, 3) and (-1, 3, 1, 5), i.e. ⟨v - u, (1, -1, 2, 3)⟩ = 0 and ⟨v - u, (-1, 3, 1, 5)⟩ = 0.

This leads to the system of linear equations
  15x + 13y = 2
  13x + 36y = -23.
We find x = 1, y = -1. Therefore the projection of v onto the subspace W is u = (2, -4, 1, -2) and the perpendicular of v to W is v - u = (0, 1, 2, -1).

Exercises.

Exercise 1. Orthogonalize the following vector systems:
a) v_1 = (2, 1, 3), v_2 = (5, 3, 5), v_3 = (-4, 4, 6);
b) v_1 = (1, -1, 2, 1), v_2 = (0, -6, 1, -1), v_3 = (7, 10, 5, 0);
c) v_1 = (2, 2, 1, 4), v_2 = (4, 5, 1, 14), v_3 = (5, 8, 5, 9);
d) v_1 = (1, 3, 4, 2), v_2 = (5, 1, 5, 1), v_3 = (5, 13, 5, 3), v_4 = (6, 8, 8, 10).
Answer:
a) u_1 = (2, 1, 3), u_2 = (1, 1, -1), u_3 = (-4, 5, 1);
b) u_1 = (1, -1, 2, 1), u_2 = (1, 5, 1, 2), u_3 = (4, 1, 1, -5);
c) u_1 = (2, 2, 1, 4), u_2 = (2, 1, 2, 2), u_3 = (7, 8, 2, 1);
d) u_1 = (1, 3, 4, 2), u_2 = (4, 2, 1, 1), u_3 = (1, 3, 1, 3), u_4 = (5, 3, 7, 7).

Exercise 2. Extend the following vector systems to an orthonormal basis of the Euclidean space R^n:
a) v_1 = (1/(3√3))·(1, 1, 5), v_2 = (1/√14)·(2, 3, -1);
b) v_1 = (1/3)·(0, 1, 2, 2), v_2 = (1/(3√3))·(3, -4, 1, 1);
c) v_1 = (1/6)·(5, 3, 1, 1), v_2 = (1/6)·(-1, -1, 5, 3).
Answer:
a) u_1 = (1/(3√3))·(1, 1, 5), u_2 = (1/√14)·(2, 3, -1), u_3 = (1/(3√42))·(-16, 11, 1);
b) u_1 = (1/3)·(0, 1, 2, 2), u_2 = (1/(3√3))·(3, -4, 1, 1), u_3 = (1/√2)·(0, 0, 1, -1), u_4 = (1/(3√6))·(-6, -4, 1, 1);
c) u_1 = (1/6)·(5, 3, 1, 1), u_2 = (1/6)·(-1, -1, 5, 3), u_3 = (1/6)·(-3, 5, 1, -1), u_4 = (1/6)·(-1, 1, -3, 5).

Exercise 3. Consider the Euclidean space defined in Example 8, which consists of all continuous real-valued functions on the interval [0, 1]. Let W be the subspace generated by the two functions f(x) and g(x) such that f(x) = x, g(x) = x^2. Find an orthonormal basis of W.

Exercise 4. Consider the Euclidean space defined in Example 8. Let W be the subspace generated by the three functions 1, x and x^2. Find an orthonormal basis of W.

Exercise 5. Find a basis of the orthogonal complement W^⊥ of a subspace W of the Euclidean space R^4:
a) W = L((1, 4, 5, -2), (2, 7, 1, 3));
b) W = L((1, 3, 5, 7), (2, 5, 3, 4), (3, 7, 2, 0));
c) W = L((2, 2, 5, 3), (3, 4, 1, 2), (5, 8, 13, 12)).
Answer: a) {(31, -9, 1, 0), (-26, 7, 0, 1)}; b) {(-241, 103, 1, 9)}; c) {(22, 17, 2, 0), (-16, 13, 0, 2)}.

Exercise 6. Consider the Euclidean space defined in Example 9, which consists of all n x n matrices with real entries. Describe the orthogonal complement of the subspace of diagonal matrices. What is the dimension of this complement?

Exercise 7. Find the projection and the perpendicular of a vector v to a subspace W of the Euclidean space R^4:
a) v = (8, 2, 7, 9), W = L((4, 5, -1, 3), (-1, 2, 7, 4));
b) v = (3, 2, 3, 3), W = L((1, 0, 1, 0), (0, 0, 1, 1), (2, 0, 0, 1));
c) v = (2, 0, 1, 1), W = L((1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 4), (0, 0, 0, 1)).
Answer:
a) the projection is (3, 7, 6, 7) and the perpendicular is (5, -5, 1, 2);
b) the projection is (3, 0, 3, 3) and the perpendicular is (0, 2, 0, 0);
c) the projection is (2, 0, 1, 1) and the perpendicular is (0, 0, 0, 0).

Exercise 8. Find a basis of the orthogonal complement W^⊥ of the subspace

  W = { (x_1, x_2, x_3, x_4) :  2x_1 - 3x_2 + 4x_3 - 4x_4 = 0,
                                3x_1 - x_2 + 11x_3 - 13x_4 = 0,
                                4x_1 + x_2 + 18x_3 - 23x_4 = 0 }

of the Euclidean space R^4.
Answer: {(2, -3, 4, -4), (3, -1, 11, -13), (4, 1, 18, -23)}.

Exercise 9. Let (V, ⟨·,·⟩) be a finite-dimensional Euclidean space and let {v_1, v_2, ..., v_n} be an orthonormal system of vectors in V. Assume that for every v in V we have
  Σ_{i=1}^{n} ⟨v, v_i⟩^2 = ‖v‖^2.
Show that {v_1, v_2, ..., v_n} is a basis of V.

Exercise 10. Let (V, ⟨·,·⟩) be a Euclidean space. Prove the parallelogram law: for any vectors u, v in V,
  ‖u + v‖^2 + ‖u - v‖^2 = 2·(‖u‖^2 + ‖v‖^2).
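Both Example 25 and Example 26 can be verified numerically. The sketch below (numpy; the variable names are ours, and the vectors are those of the examples as read above) checks the complement basis by direct orthogonality, and recomputes the projection of Example 26 by solving the same 2 x 2 system, often called the normal equations:

```python
import numpy as np

# Example 25: basis of the orthogonal complement of W = L(v1, v2) in R^4
v1 = np.array([1.0, 4.0, 5.0, -2.0])
v2 = np.array([2.0, 7.0, 1.0, 3.0])
u1 = np.array([31.0, -9.0, 1.0, 0.0])
u2 = np.array([-26.0, 7.0, 0.0, 1.0])
for u in (u1, u2):
    assert abs(np.dot(u, v1)) < 1e-9 and abs(np.dot(u, v2)) < 1e-9

# Example 26: projection of v onto W = L(w1, w2)
w1 = np.array([1.0, -1.0, 2.0, 3.0])
w2 = np.array([-1.0, 3.0, 1.0, 5.0])
v = np.array([2.0, -3.0, 3.0, -3.0])
G = np.array([[np.dot(w1, w1), np.dot(w1, w2)],
              [np.dot(w2, w1), np.dot(w2, w2)]])   # Gram matrix [[15, 13], [13, 36]]
rhs = np.array([np.dot(v, w1), np.dot(v, w2)])     # [2, -23]
x, y = np.linalg.solve(G, rhs)                     # x = 1, y = -1
proj = x * w1 + y * w2
perp = v - proj
assert np.allclose(proj, [2.0, -4.0, 1.0, -2.0])
assert abs(np.dot(perp, w1)) < 1e-9 and abs(np.dot(perp, w2)) < 1e-9
```

The Gram-matrix approach works for a generating system that is not orthogonal, which is exactly why Example 26 does not use the component formula directly.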

3. Linear maps

What is a linear map? Let (V, +) and (W, +) be two (possibly distinct) vector spaces over the field k. A map (function) f : V → W is said to be linear if f satisfies two requirements:
1) f(αv) = αf(v) for all v in V, α in k;
2) f(u + v) = f(u) + f(v) for all vectors u, v in V.
In other words, a linear map preserves the linear operations (the addition of vectors and the multiplication by scalars) of the vector spaces.

Example 11. Check whether the map f : R^2 → R^2, which maps the vector v = (x_1, x_2) in R^2 into
  f(v) = (2x_1 - x_2, 3x_1 + x_2),
is linear.

Solution. We shall check both properties. Let α in R be a scalar. Then αv = (αx_1, αx_2). By the definition of f,
  f(αv) = (2αx_1 - αx_2, 3αx_1 + αx_2) = α(2x_1 - x_2, 3x_1 + x_2) = αf(v).
Hence the first requirement is met. Now we check the second. Let u = (x_1, x_2) and v = (y_1, y_2) be two vectors in R^2. Then u + v = (x_1 + y_1, x_2 + y_2). Using the definition of f, we get
  f(u + v) = (2(x_1 + y_1) - (x_2 + y_2), 3(x_1 + y_1) + (x_2 + y_2)) = (2x_1 - x_2, 3x_1 + x_2) + (2y_1 - y_2, 3y_1 + y_2) = f(u) + f(v).
Thus f is a linear map from V = R^2 to W = R^2.

Example 12. We consider the set V = C[0, 1] of continuous real functions defined on the interval [0, 1] as a vector space over R. We define the map f which takes the vector v = v(t) and maps it into the real number
  f(v) = ∫_0^1 v(t) dt.
Check whether this map is a linear map from C[0, 1] to R.

Solution. Let α be a real scalar and let u(t), v(t) be two continuous functions defined on [0, 1]. By the properties of the definite integral, one has
  f(αv) = ∫_0^1 αv(t) dt = α ∫_0^1 v(t) dt = αf(v),
  f(u + v) = ∫_0^1 (u(t) + v(t)) dt = ∫_0^1 u(t) dt + ∫_0^1 v(t) dt = f(u) + f(v).
Hence f is a linear map from V = C[0, 1] to W = R.

Of course, not every map f : V → W is linear.

Example 13. Check whether the function f(x) = x^2 is a linear map from R to R.

Solution. In order to show that the map is not linear, it suffices to prove that at least one of the two properties in the definition of a linear map is not satisfied. We will check both. First, let x ≠ 0 and α ≠ 0, 1. Then
  f(αx) = α^2x^2 ≠ αx^2 = αf(x).
Secondly, let x, y be non-zero real numbers. Then
  f(x + y) = (x + y)^2 = x^2 + 2xy + y^2 ≠ x^2 + y^2 = f(x) + f(y).
Thus neither of the two linearity properties holds.

What is the matrix of a linear map? Suppose that V and W have finite dimensions, say dim_k V = m, dim_k W = n. First, let us pick bases for these vector spaces: let v_1, v_2, ..., v_m be a basis of V, and let w_1, w_2, ..., w_n be a basis of W. Let f : V → W be a linear map. Express the images of the vectors v_1, v_2, ..., v_m under f in the basis of W:
  f(v_1) = α_11 w_1 + α_12 w_2 + ... + α_1n w_n
  f(v_2) = α_21 w_1 + α_22 w_2 + ... + α_2n w_n
  ...
  f(v_m) = α_m1 w_1 + α_m2 w_2 + ... + α_mn w_n.

Now copy the coordinates α_ij into a table of size m x n. The matrix we obtain this way is called the matrix of the linear map f (in the respective bases of V and W):

  A = ( α_11 α_12 ... α_1n
        α_21 α_22 ... α_2n
        ...
        α_m1 α_m2 ... α_mn ).

Given the matrix of f, one can find the image f(v) of any vector v in V. Suppose that v = β_1v_1 + β_2v_2 + ... + β_mv_m, so that the coordinates of v in the basis v_1, v_2, ..., v_m are v = (β_1, β_2, ..., β_m). Then the coordinates of the image
  f(v) = γ_1w_1 + γ_2w_2 + ... + γ_nw_n,
or f(v) = (γ_1, γ_2, ..., γ_n) in the basis w_1, w_2, ..., w_n, can be computed easily using matrix multiplication:
  f(v) = vA = (β_1, β_2, ..., β_m) A = (γ_1, γ_2, ..., γ_n).

Example 14. A linear map f maps the vectors v_1, v_2, v_3, which form a basis of a three-dimensional space V, into a two-dimensional vector space W :
  f(v_1) = -w_1 + 4w_2,  f(v_2) = 5w_1 + 3w_2,  f(v_3) = 2w_1 - 5w_2,

where w_1 and w_2 form a basis of W. Write the matrix of the linear map f and find the image f(v) of the vector v = 7v_1 - 2v_2 - v_3.

Solution. We copy the coordinates of the images f(v_1), f(v_2), f(v_3) in W into a 3 x 2 matrix:

  A = ( -1  4
         5  3
         2 -5 ).

The coordinates of v in V in the basis v_1, v_2, v_3 are v = (7, -2, -1). Hence,
  f(v) = vA = (7, -2, -1) A = (-19, 27).
Thus f(v) = -19w_1 + 27w_2.

Remark 15. If the vector spaces V and W coincide (V = W, m = n), then the linear map f : V → V is called a linear transformation of V. The matrix of a linear transformation (in a basis v_1 = w_1, v_2 = w_2, ..., v_n = w_n) is called its transformation matrix. A transformation matrix is a square matrix of size n x n.

Example 16. Write the transformation matrix of the linear transformation f : R^3 → R^3,
  f(x_1, x_2, x_3) = (x_1 - x_3, 2x_2 + x_3, 4x_1 - x_2),
in the standard basis e_1, e_2, e_3 and find the image of the vector v = (1, 1, 1).

Solution. The standard basis of R^3 is e_1 = (1, 0, 0), e_2 = (0, 1, 0), e_3 = (0, 0, 1). Thus
  f(e_1) = (1, 0, 4) = e_1 + 0e_2 + 4e_3;
  f(e_2) = (0, 2, -1) = 0e_1 + 2e_2 - e_3;
  f(e_3) = (-1, 1, 0) = -e_1 + e_2 + 0e_3.

Hence the transformation matrix is

    A = (  1  0  4 )
        (  0  2 -1 )
        ( -1  1  0 )

If we plug the coordinates of v into f, we get f(1, 1, 1) = (0, 3, 3). We get the same result by computing f(v) = vA.

Example 17. Find the matrix of the linear map f : V → V which maps the vectors u1 = (1, 2, 3), u2 = (-1, -1, 5), u3 = (2, 5, 5) into the vectors v1 = (1, 0, 1), v2 = (0, 1, -1), v3 = (-1, 1, 0), respectively (the coordinates of the vectors are given in some fixed basis of V).

Solution. In this example we do not know the basis of V. However, we do not need it, since the coordinates are given. Since dim_k V = 3, the size of the transformation matrix must be 3 × 3. Let

    A = ( α11 α12 α13 )
        ( α21 α22 α23 )
        ( α31 α32 α33 )

where the αij are unknown coefficients. Multiplying the vectors u1, u2, u3 by the j-th column of the matrix A, we obtain the j-th coordinates of the vectors v1, v2, v3. Hence we obtain three systems of linear equations:

     α11 + 2α21 + 3α31 =  1      α12 + 2α22 + 3α32 = 0      α13 + 2α23 + 3α33 =  1
    -α11 -  α21 + 5α31 =  0     -α12 -  α22 + 5α32 = 1     -α13 -  α23 + 5α33 = -1
    2α11 + 5α21 + 5α31 = -1     2α12 + 5α22 + 5α32 = 1     2α13 + 5α23 + 5α33 =  0

Solving these three systems separately would be a heroic endeavour. However, one important simplification can be made. Let us observe that the coefficients on the left-hand side are the same in all three systems. Only the constants

on the right-hand side are different. We will use this fact to our advantage by solving all three systems simultaneously. Write all the coefficients into one augmented matrix:

    (  1  2  3 |  1  0  1 )
    ( -1 -1  5 |  0  1 -1 )
    (  2  5  5 | -1  1  0 )

The rows of the left block are simply the vectors u1, u2, u3; the rows of the right block are the vectors v1, v2, v3. We perform Gaussian elimination on the rows to reduce the left block to the identity matrix. The solutions of the three linear systems, that is, the columns of A, then appear in the right block. Hence the right block of the reduced matrix is the required transformation matrix A.

Remark 18. The method applied in this solution can be used to solve the more general matrix equation AX = B, where A and B are given matrices and X is an unknown matrix: row-reduce the augmented matrix (A | B) until the left block becomes the identity; the right block is then X = A^(-1)B. We have already used this method to compute inverse matrices (the case B = I).
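The elimination procedure of Remark 18 is easy to sketch in code. Below is a minimal Python version (the function name `solve_matrix_eq` and the sample matrices are our own, not from the notes): it reduces the augmented matrix (A | B) to (I | X) using exact rational arithmetic, so that the right block is X = A^(-1)B.

```python
from fractions import Fraction

def solve_matrix_eq(A, B):
    """Solve the matrix equation A X = B by reducing (A | B) to (I | X)."""
    n = len(A)
    # Build the augmented matrix, using exact rational arithmetic.
    M = [[Fraction(x) for x in A[i] + B[i]] for i in range(n)]
    for col in range(n):
        # Find a row with a nonzero pivot and swap it into place.
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        # Scale the pivot row so that the pivot equals 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Eliminate the pivot column from all other rows.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    # The left block is now the identity; the right block is X.
    return [row[n:] for row in M]

# Computing an inverse matrix is the special case B = I:
A = [[2, 1], [1, 1]]
X = solve_matrix_eq(A, [[1, 0], [0, 1]])
print(X)  # the inverse of A
```

Row-reducing (A | I) in this way is exactly how inverse matrices were computed earlier in the course.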

Change of basis. Let f : V → W be a linear map. Suppose that a new basis v'1, v'2, ..., v'm has been picked in the vector space V, while another new basis w'1, w'2, ..., w'n has been chosen for the space W. Then the new matrix A' of the linear map f can be computed as follows:

    A' = T A R^(-1),

where A is the matrix of f in the old bases, T is the transition matrix from the basis v1, v2, ..., vm to the basis v'1, v'2, ..., v'm in the space V, and R is the transition matrix from the basis w1, w2, ..., wn to the basis w'1, w'2, ..., w'n in the space W.

As a special case of this formula, for a linear transformation f (i.e. V = W, and the two bases coincide) one has T = R, and the formula simplifies to

    A' = T A T^(-1).

The matrix T A T^(-1) is called conjugate to A.

Example 19. The matrix A is the matrix of a linear transformation f : V → V in the basis v1, v2, v3. Find the transformation matrix in the new basis u1 = 2v1 + v2, u2 = v1 + v2, u3 = v3.

Solution. The transition matrix T from the basis v1, v2, v3 to the basis u1, u2, u3, and its inverse, are

    T = ( 2 1 0 )      T^(-1) = (  1 -1 0 )
        ( 1 1 0 )               ( -1  2 0 )
        ( 0 0 1 )               (  0  0 1 )

By the conjugate matrix formula, the transformation matrix in the new basis u1, u2, u3 is A' = T A T^(-1).
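The change-of-basis formula can be checked numerically. The Python sketch below (the entries of A are a sample of our own, since only T is fixed by Example 19) computes A' = T A T^(-1) with the transition matrix of Example 19 and verifies two expected facts: T T^(-1) = I, and conjugation preserves the trace.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

T    = [[2, 1, 0], [1, 1, 0], [0, 0, 1]]    # transition matrix of Example 19
Tinv = [[1, -1, 0], [-1, 2, 0], [0, 0, 1]]  # its inverse
A    = [[1, 0, 2], [0, 3, 0], [1, 1, 1]]    # sample matrix of f in the old basis (ours)

A_new = matmul(matmul(T, A), Tinv)          # matrix of f in the basis u1, u2, u3

# Sanity checks: T * Tinv is the identity, and conjugate matrices share the trace.
assert matmul(T, Tinv) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
assert sum(A_new[i][i] for i in range(3)) == sum(A[i][i] for i in range(3))
```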

Important properties of matrices. Let f and g be linear maps from V to W with respective matrices A and B (in some fixed bases of V and W). Then, for any scalar α ∈ k, the matrices of the linear maps αf and f + g are αA and A + B, respectively.

Next, suppose that the linear maps f : U → V and g : V → W have matrices A and B. Then the matrix of the composite map g(f(v)) is equal to the product AB of the matrices A and B. The identity matrix I_n is the matrix of the identity transformation f(v) = v.

A linear transformation f : V → V is said to be non-degenerate if the inverse map f^(-1) exists. The transformation f is invertible if and only if the matrix A is non-degenerate, i.e. det A ≠ 0. In that case the inverse matrix A^(-1) is the matrix of the inverse transformation f^(-1).

Example 20. Linear transformations f and g of the space R^2 have matrices A and B, respectively. Determine whether the linear transformation h(v) = g(f^(-1)(v)) - 3g(v) + 2v exists, and compute the matrix C of the transformation h.

Solution. Since det A = 1 ≠ 0, the transformation f is non-degenerate and the inverse transformation f^(-1) exists. Replacing the transformations in the formula by their matrices (and writing v = v I_2), we find

    C = A^(-1) B - 3B + 2 I_2.
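The matrix arithmetic of Example 20 can be reproduced directly. In the Python sketch below the entries of A and B are our own sample (the text only fixes det A = 1); the point is the recipe C = A^(-1)B - 3B + 2I_2.

```python
def matmul(X, Y):
    """Multiply two matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A    = [[2, 1], [1, 1]]       # sample entries with det A = 1, so f is invertible
B    = [[1, 2], [0, 1]]       # sample matrix of g
Ainv = [[1, -1], [-1, 2]]     # inverse of A (det A = 1, so this is the adjugate)
I2   = [[1, 0], [0, 1]]

# C = A^{-1} B - 3 B + 2 I_2, mirroring h(v) = g(f^{-1}(v)) - 3 g(v) + 2 v.
AB = matmul(Ainv, B)
C = [[AB[i][j] - 3 * B[i][j] + 2 * I2[i][j] for j in range(2)] for i in range(2)]
print(C)
```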

Exercises.

Exercise 1. Check whether the given map f : V → W is linear:
a) V = W = R^2, f(x1, x2) = (x1 - x2, 3x1 + 4x2);
b) V = W = R^2, f(x1, x2) = (x2, x1 + 5);
c) V = C[0, 1], W = R, f(v) = ∫_0^1 v^2(t) dt;
d) V = C[0, 1], W = R, f(v) = ∫_0^1 v(t^3) cos t dt;
e) V = W = R[x], f(v) = (x^3 + 1)·v;
f) V = W = R[x], f(v) = x·v.

Answer: a) Yes; b) No; c) No; d) Yes; e) Yes; f) Yes.

Exercise 2. Find the matrices of the linear maps f : V → W in the given bases, and compute the image f(v) of the given vector v ∈ V:
a) f(v1) = w1 + w2 - w3, f(v2) = w1 - w2 - w3, f(v3) = w1 + w3, where v1, v2, v3 is a basis of V and w1, w2, w3 is a basis of W; v = v1 - v2 - v3;
b) V = R^3, W = R^2, f(x1, x2, x3) = (x1 + x2 - x3, x1 - x2 - x3) in the standard bases; v = (2, 0, 1);
c) V = R^2, W = R^3, f(x1, x2) = (4x1 + 5x2, 3x1 - 2x2, x1 + x2); V has the basis v1 = (-1, 1), v2 = (2, 1), W has the basis e1, e2, e3; v = (1, 1);
d) V = R4[x], W = R3[x], f(v) = v' (v' denotes the derivative of a polynomial v), in the standard basis 1, x, ..., x^n of Rn[x]; v = 1 + x + x^2 + x^4;
e) V = R2[x], W = R3[x], f(v) = ∫_0^x v(t) dt in the standard bases; v = x + x^2;
f) V = W = M2(R), f(v) = Bv, in the standard basis e1, e2, e3, e4, where B = (1, 3; -1, 7) and e1 = (1, 0; 0, 0), e2 = (0, 1; 0, 0), e3 = (0, 0; 1, 0), e4 = (0, 0; 0, 1).

Answer:

a) A = (1, 1, -1; 1, -1, -1; 1, 0, 1), f(v) = -w1 + 2w2 - w3;
b) A = (1, 1; 1, -1; -1, -1), f(v) = (1, 1);
c) A = (1, -5, 0; 13, 4, 3), f(v) = (9, 1, 2);
d) A = (0, 0, 0, 0; 1, 0, 0, 0; 0, 2, 0, 0; 0, 0, 3, 0; 0, 0, 0, 4), f(v) = 1 + 2x + 4x^3;
e) A = (0, 1, 0, 0; 0, 0, 1/2, 0; 0, 0, 0, 1/3), f(v) = x^2/2 + x^3/3;
f) A = (1, 0, -1, 0; 0, 1, 0, -1; 3, 0, 7, 0; 0, 3, 0, 7).

Exercise 3. Find the matrix of a linear transformation f : V → V which maps the vectors v1, v2, ..., vn into the vectors f(v1), f(v2), ..., f(vn):
a) v1 = (1, 3), v2 = (-2, 5), f(v1) = (-2, 1), f(v2) = (2, 1);
b) v1 = (1, 0, 1), v2 = (-1, 1, 0), v3 = (1, 1, 1), f(v1) = (-2, 1, 0), f(v2) = (0, 1, -1), f(v3) = (-1, 1, 1);
c) v1 = (1, 1, 2), v2 = (1, 2, 3), v3 = (1, 2, 4), f(v1) = (3, 1, 0), f(v2) = (2, 4, 1), f(v3) = (1, 2, 0);

d) v1 = (1, 2, 1, 1), v2 = (2, 3, 2, 3), v3 = (1, 2, 0, 2), v4 = (1, 3, 2, 0), f(v1) = (1, 1, 0, 2), f(v2) = (0, 2, 0, 1), f(v3) = (1, 0, 1, 1), f(v4) = (2, 0, 0, 1).

Answer: in each case the matrix A is found as in Example 17: row-reduce the augmented matrix whose left block has rows v1, ..., vn and whose right block has rows f(v1), ..., f(vn); when the left block becomes the identity, the right block is A.

Exercise 4. Given the matrix A of a linear map f : V → W in the old bases, find the matrix of f in the new bases:
a) V = W = R^2, A = (3, 1; 0, 1) in the old basis v1, v2; new basis u1 = 3v1 + 4v2, u2 = 2v1 + 3v2;
b) V = W = R^2, A = (2, 0; 1, 3) in the old basis v1, v2; new basis u1 = v1 + 3v2, u2 = 3v1 + 8v2;
c) V = W = R^3, with A given in the old basis v1, v2, v3; new basis u1 = v1, u2 = v2 - 5v3, u3 = 4v1 + v2 - 6v3;
d) V = W = R^3, with A given in the old basis v1, v2, v3; new basis u1 = v1 - 3v2 + v3, u2 = 2v1 + 7v2, u3 = v1 + v2 + 8v3;

e) V = R^3, W = R^2, with A given in the standard bases; new basis for V: v1 = (1, 4, 0), v2 = (-5, 19, 0), v3 = (-1, 1, 1); new basis for W: w1 = (3, 2), w2 = (1, 1);
f) V = R^2, W = R^3, with A given in the standard bases; new basis for V: v1 = (5, 2), v2 = (2, 1); new basis for W: w1 = (1, 0, 6), w2 = (0, 1, 1), w3 = (2, 0, 11).

Answer:
a) T^(-1) = (3, -4; -2, 3); T A T^(-1) = (13, -15; 8, -9);
b) T^(-1) = (-8, 3; 3, -1); T A T^(-1) = (-13, 6; -40, 18);
c) T^(-1) = (1, 0, 0; 20, 6, -5; 4, 1, -1);
d) computed in the same way from T with rows (1, -3, 1), (2, 7, 0), (1, 1, 8);
e) R^(-1) = (1, -2; -1, 3);

f) R^(-1) = (-11, 0, 6; -2, 1, 1; 2, 0, -1).

Exercise 5. Linear maps f : R^2 → R^2, g : R^2 → R^2 and h : R^3 → R^2 have matrices A, B and C in the standard bases of R^2 and R^3. Find which of the linear maps given below exist, and compute their matrices in the standard bases:
a) f(v) + g^(-1)(v), v ∈ R^2;
b) g(f(v)) - v, v ∈ R^2;
c) (f + g)^(-1)(v), v ∈ R^2;
d) f^(-1)(g(v) - v), v ∈ R^2;
e) h(g(v)) + 2v, v ∈ R^2;
f) h^(-1)(f(v)), v ∈ R^2;
g) f^(-1)(h(v)), v ∈ R^3.

Answer:
a) No, since g^(-1) does not exist;
b) Yes; the matrix is AB - I_2;
c) Yes; the matrix is (A + B)^(-1);
d) Yes; the matrix is (B - I_2) A^(-1);

e) No: the image g(R^2) of R^2 does not lie in the domain of definition of h, which is R^3; in matrix terms, B and C cannot be multiplied because their sizes are not consistent;
f) No, since h^(-1) does not exist (the matrix C is not a square matrix);
g) Yes; the matrix is C A^(-1).
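The existence questions in Exercise 5 reduce to two mechanical checks on matrices: compatible sizes for products, and a nonzero determinant for inverses. A small Python illustration (the concrete entries are our own; only the shapes match the exercise):

```python
def can_multiply(X, Y):
    """X (m x n) times Y (n x p) is defined iff the inner sizes agree."""
    return len(X[0]) == len(Y)

def det2(M):
    """Determinant of a 2 x 2 matrix."""
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[1, 2], [0, 1]]          # 2 x 2, sample entries with det A != 0
B = [[1, 1], [1, 1]]          # 2 x 2, singular sample (det B = 0)
C = [[1, 0], [0, 1], [1, 1]]  # 3 x 2, the shape of h : R^3 -> R^2

assert det2(A) != 0            # f^(-1) exists, since det A != 0
assert det2(B) == 0            # g^(-1) does not exist for a singular B
assert can_multiply(C, A)      # g): C * A^(-1) is defined, so f^(-1)(h(v)) exists
assert not can_multiply(B, C)  # e): B * C is undefined, so h(g(v)) does not exist
```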

4. Eigenvalues and eigenvectors

What is eigenvalue and eigenvector? Let (V, +) be a vector space over the field k, and suppose that f : V → V is a linear transformation of V. An element λ ∈ k is called an eigenvalue of the transformation f if there exists a non-zero vector v ∈ V, v ≠ O, such that the image f(v) is parallel to v with scale factor λ:

    f(v) = λv.

A vector v ≠ O with this property is called an eigenvector of the linear transformation f corresponding to the eigenvalue λ.

Example 6. Let V = R and f(v) = 3v for all v ∈ R. Then every non-zero vector v ∈ R is an eigenvector corresponding to the eigenvalue λ = 3.

Example 7. Let V = R^2 and f(v) = v for all v ∈ R^2. Every non-zero vector v ∈ R^2 is an eigenvector of the transformation f with corresponding eigenvalue λ = 1. The transformation matrix of f is A = I_2 (in any basis of V); hence f(v) = v I_2 = v.

Example 8. Let V = R^2, and suppose that the transformation matrix of f in a basis v1, v2 of V is

    A = ( 2 0 )
        ( 0 1 )

Then v1 is an eigenvector corresponding to the eigenvalue λ = 2, while v2 is an eigenvector corresponding to the eigenvalue λ = 1. To see this, recall that the coordinates of v1 in the basis v1, v2 are v1 = (1, 0). Hence

    f(v1) = v1 A = (1, 0) A = (2, 0) = 2v1.

Similarly, the coordinates of the vector v2 in the basis v1, v2 are v2 = (0, 1); thus

    f(v2) = v2 A = (0, 1) A = (0, 1) = v2.

Example 9. Let V = R^2, and suppose that A is the transformation matrix of f in the standard basis e1, e2. We check that the vector v1 = (1, 2) is an eigenvector corresponding to the eigenvalue λ = 1, and that v2 = (1, 1) is an eigenvector corresponding to the eigenvalue λ = 2:

    f(v1) = v1 A = 1 · v1,    f(v2) = v2 A = 2 · v2.

How to find eigenvalues? Suppose that V is a finite-dimensional vector space, dim_k V = n. The first step is to pick a basis of V and to write the transformation matrix of the linear transformation f : V → V in this basis:

    A = ( α11 α12 ... α1n )
        ( α21 α22 ... α2n )
        ( ................ )
        ( αn1 αn2 ... αnn )

(On how to write the matrix of a linear transformation, see the chapter on linear maps from the previous lecture.) The second step is to expand the determinant

    φ_A(t) = det(A - t I_n) = | α11 - t  α12      ...  α1n     |
                              | α21      α22 - t  ...  α2n     |
                              | .............................. |
                              | αn1      αn2      ...  αnn - t |

in powers of t. The polynomial we get this way is called the characteristic polynomial of the linear transformation:

    φ_A(t) = a0 - a1 t + a2 t^2 - ... + (-1)^(n-1) a_(n-1) t^(n-1) + (-1)^n t^n.

The third step is to compute the roots λ1, λ2, ..., λn of the polynomial φ_A(t), multiple roots being repeated according to their multiplicity:

    φ_A(t) = (λ1 - t)(λ2 - t) ... (λn - t).

These roots are the eigenvalues of the transformation f.

Remark 10. The characteristic polynomial φ_A(t) and the eigenvalues λ1, ..., λn of a linear transformation are invariant under a change of basis: they do not change if we pick another basis of V.

How to find eigenvectors? Suppose that λ ∈ k is an eigenvalue of the linear transformation f. Let v = (β1, β2, ..., βn) be the coordinates of an eigenvector v corresponding to the eigenvalue λ. Since f(v) = vA and f(v) = λv, we obtain vA = λv. We write this equation as

    v (A - λ I_n) = O.

The eigenvectors v lie in the kernel of the matrix B = A - λ I_n:

    (β1, β2, ..., βn) (A - λ I_n) = (0, 0, ..., 0).

To find them, we solve for the coordinates β1, ..., βn in the system of linear equations

    (α11 - λ)β1 + α21 β2 + ... + αn1 βn = 0
    α12 β1 + (α22 - λ)β2 + ... + αn2 βn = 0
    .....................................
    α1n β1 + α2n β2 + ... + (αnn - λ)βn = 0.

The number of linearly independent solutions (eigenvectors) is equal to the defect (nullity) of the matrix B. We repeat this process to find linearly independent eigenvectors for all the different eigenvalues λ of the linear transformation f.

Example 11. Find the eigenvalues and corresponding eigenvectors of the linear transformation of R^2 whose transformation matrix in the standard basis is

    A = ( 10 -12 )
        (  9 -11 )

Solution. We compute the characteristic polynomial of the transformation:

    φ_A(t) = | 10 - t   -12    | = (10 - t)(-11 - t) - (-12) · 9 = t^2 + t - 2.
             |  9      -11 - t |

We factor the characteristic polynomial and find its roots:

    φ_A(t) = t^2 + t - 2 = (t - 1)(t + 2).

The eigenvalues of the linear transformation are λ1 = 1, λ2 = -2.

Now we calculate the eigenvectors. Let v = (β1, β2) be an eigenvector corresponding to the eigenvalue λ1 = 1. We write the matrix equation

    (β1, β2) ( 10 - 1   -12     ) = (0, 0),   i.e.   (β1, β2) ( 9 -12 ) = (0, 0).
             (  9      -11 - 1  )                             ( 9 -12 )

After computing the product of the matrices, we get the system of linear equations

      9β1 +  9β2 = 0
    -12β1 - 12β2 = 0.

The solutions are β1 = t, β2 = -t, t ∈ R. We choose t = 1 and obtain the eigenvector v1 = (1, -1).

Next, let v2 = (β1, β2) be an eigenvector corresponding to the eigenvalue λ2 = -2. We write the equation

    (β1, β2) ( 10 - (-2)   -12       ) = (0, 0),   i.e.   (β1, β2) ( 12 -12 ) = (0, 0).
             (  9         -11 - (-2) )                             (  9  -9 )

We get the system

     12β1 + 9β2 = 0
    -12β1 - 9β2 = 0.

The solutions are β1 = 3t, β2 = -4t, t ∈ R. Choosing t = 1, we get v2 = (3, -4).

Answer: λ1 = 1, λ2 = -2; v1 = (1, -1), v2 = (3, -4).

Example 12. Find the eigenvalues and corresponding eigenvectors of the linear transformation of R^3 whose matrix in the standard basis is

    A = ( -3  0  4 )
        ( -2  1  2 )
        ( -2  0  3 )

Solution. We compute the characteristic polynomial of the matrix (the quick way is to expand the determinant along the second column):

    φ_A(t) = det(A - t I_3) = | -3 - t    0     4     |
                              | -2      1 - t   2     |
                              | -2        0     3 - t |

    = (1 - t) · | -3 - t   4     | = (1 - t)((-3 - t)(3 - t) + 8) = (1 - t)(t^2 - 1).
                | -2       3 - t |

Hence the characteristic polynomial is φ_A(t) = (1 - t)(t^2 - 1) = -(1 - t)^2 (1 + t), and the eigenvalues are λ1 = λ2 = 1, λ3 = -1. The eigenvalue 1 has multiplicity 2.

Let us find the corresponding eigenvectors v = (β1, β2, β3). The matrix equation is

    (β1, β2, β3) (A - I_3) = (0, 0, 0).

The matrix

    B = A - I_3 = ( -4  0  4 )
                  ( -2  0  2 )
                  ( -2  0  2 )

has rank rg(B) = 1 and defect def(B) = 2. Hence the kernel of B is spanned by two linearly independent vectors. From the matrix equation we get the system of linear equations

    -4β1 - 2β2 - 2β3 = 0
               0 = 0
     4β1 + 2β2 + 2β3 = 0,

which is equivalent to the single equation 2β1 + β2 + β3 = 0. The solutions are

    (β1, β2, β3) = (t + s, -2t, -2s) = t (1, -2, 0) + s (1, 0, -2),   t, s ∈ R.

We take the two linearly independent solutions v1 = (1, -2, 0) and v2 = (1, 0, -2), which span the kernel of B. These are the eigenvectors corresponding to the eigenvalue λ1 = λ2 = 1.

Next we find the eigenvectors corresponding to the eigenvalue λ3 = -1. Let v = (β1, β2, β3). The matrix equation is

    (β1, β2, β3) (A + I_3) = (0, 0, 0).

The matrix

    B = A + I_3 = ( -2  0  4 )
                  ( -2  2  2 )
                  ( -2  0  4 )

has rank 2 and defect 1. Hence the solutions of the system of equations

    -2β1 - 2β2 - 2β3 = 0
                 2β2 = 0
     4β1 + 2β2 + 4β3 = 0,

namely

    (β1, β2, β3) = (t, 0, -t) = t (1, 0, -1),   t ∈ R,

are generated by one linearly independent vector. Choosing t = 1, we get v3 = (1, 0, -1).

Answer: λ1 = λ2 = 1, λ3 = -1; v1 = (1, -2, 0), v2 = (1, 0, -2), v3 = (1, 0, -1).

Exercises.

Exercise 1. Find the eigenvalues and corresponding eigenvectors of the linear transformations with the following matrices in the standard basis of R^n:
a) A = (5, 2; -3, 0);  b) A = (8, 9; -4, -4);  c) A = (0, 1; -1, 0);
d) A = 4 5 4,  e) A = 0 1 0,  f) A = 1 0 1,

g) A = , h) A = ,

Answers:
a) λ1 = 2, λ2 = 3; v1 = (1, 1), v2 = (3, 2).
b) λ1 = λ2 = 2; v1 = (2, 3).
c) λ1 = i, λ2 = -i; v1 = (i, 1), v2 = (1, i).
d) λ1 = -1, λ2 = λ3 = 1; v1 = (2, 1, 1), v2 = (0, 1, 0), v3 = (-1, 0, 1).
e) λ1 = -1, λ2 = λ3 = 1; v1 = (2, 1, 1), v2 = (0, 1, 0), v3 = (1, 0, 1).
f) λ1 = λ2 = λ3 = 1; v1 = (1, 1, 1).
g) λ1 = λ2 = 1, λ3 = λ4 = 2; v1 = (1, 1, 2, 0), v3 = (0, 0, 1, 0).
h) λ1 = i, λ2 = -i, λ3 = λ4 = 3; v1 = (1, 1 + 2i, i, i), v2 = (1, 1 - 2i, -i, -i), v3 = (0, 1, 1, 0).
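The whole procedure of this chapter, characteristic polynomial, eigenvalues, eigenvectors, can be checked in a few lines of Python. Below we use the 2 x 2 matrix of Example 11 (the helper names are our own); note the row-vector convention v·A = λv used throughout the notes.

```python
def eigenvalues_2x2(A):
    """Roots of phi_A(t) = t^2 - tr(A) t + det(A) for a 2 x 2 matrix A."""
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = tr * tr - 4 * det
    r = disc ** 0.5          # assumes real eigenvalues (disc >= 0)
    return sorted([(tr - r) / 2, (tr + r) / 2])

def is_left_eigenvector(v, A, lam):
    """Check v A = lam v in the row-vector convention of the notes."""
    vA = [v[0] * A[0][j] + v[1] * A[1][j] for j in range(2)]
    return vA == [lam * x for x in v]

A = [[10, -12], [9, -11]]                   # the matrix of Example 11
print(eigenvalues_2x2(A))                   # the eigenvalues 1 and -2

assert is_left_eigenvector([1, -1], A, 1)   # v1 of Example 11
assert is_left_eigenvector([3, -4], A, -2)  # v2 of Example 11
```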


More information

Linear Algebra Primer

Linear Algebra Primer Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................

More information

Linear Models Review

Linear Models Review Linear Models Review Vectors in IR n will be written as ordered n-tuples which are understood to be column vectors, or n 1 matrices. A vector variable will be indicted with bold face, and the prime sign

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,

More information

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold:

Inner products. Theorem (basic properties): Given vectors u, v, w in an inner product space V, and a scalar k, the following properties hold: Inner products Definition: An inner product on a real vector space V is an operation (function) that assigns to each pair of vectors ( u, v) in V a scalar u, v satisfying the following axioms: 1. u, v

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

Supplementary Notes on Linear Algebra

Supplementary Notes on Linear Algebra Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you

More information

A Primer in Econometric Theory

A Primer in Econometric Theory A Primer in Econometric Theory Lecture 1: Vector Spaces John Stachurski Lectures by Akshay Shanker May 5, 2017 1/104 Overview Linear algebra is an important foundation for mathematics and, in particular,

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015

Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 Midterm for Introduction to Numerical Analysis I, AMSC/CMSC 466, on 10/29/2015 The test lasts 1 hour and 15 minutes. No documents are allowed. The use of a calculator, cell phone or other equivalent electronic

More information

Inner Product Spaces

Inner Product Spaces Inner Product Spaces Introduction Recall in the lecture on vector spaces that geometric vectors (i.e. vectors in two and three-dimensional Cartesian space have the properties of addition, subtraction,

More information

SOLUTION KEY TO THE LINEAR ALGEBRA FINAL EXAM 1 2 ( 2) ( 1) c a = 1 0

SOLUTION KEY TO THE LINEAR ALGEBRA FINAL EXAM 1 2 ( 2) ( 1) c a = 1 0 SOLUTION KEY TO THE LINEAR ALGEBRA FINAL EXAM () We find a least squares solution to ( ) ( ) A x = y or 0 0 a b = c 4 0 0. 0 The normal equation is A T A x = A T y = y or 5 0 0 0 0 0 a b = 5 9. 0 0 4 7

More information

NOTES on LINEAR ALGEBRA 1

NOTES on LINEAR ALGEBRA 1 School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura

More information

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure. Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

More information

EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants

EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. 1. Determinants EXERCISES ON DETERMINANTS, EIGENVALUES AND EIGENVECTORS. Determinants Ex... Let A = 0 4 4 2 0 and B = 0 3 0. (a) Compute 0 0 0 0 A. (b) Compute det(2a 2 B), det(4a + B), det(2(a 3 B 2 )). 0 t Ex..2. For

More information

MATH Spring 2011 Sample problems for Test 2: Solutions

MATH Spring 2011 Sample problems for Test 2: Solutions MATH 304 505 Spring 011 Sample problems for Test : Solutions Any problem may be altered or replaced by a different one! Problem 1 (15 pts) Let M, (R) denote the vector space of matrices with real entries

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

ELE/MCE 503 Linear Algebra Facts Fall 2018

ELE/MCE 503 Linear Algebra Facts Fall 2018 ELE/MCE 503 Linear Algebra Facts Fall 2018 Fact N.1 A set of vectors is linearly independent if and only if none of the vectors in the set can be written as a linear combination of the others. Fact N.2

More information

Definitions for Quizzes

Definitions for Quizzes Definitions for Quizzes Italicized text (or something close to it) will be given to you. Plain text is (an example of) what you should write as a definition. [Bracketed text will not be given, nor does

More information

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

More information

1 Inner Product and Orthogonality

1 Inner Product and Orthogonality CSCI 4/Fall 6/Vora/GWU/Orthogonality and Norms Inner Product and Orthogonality Definition : The inner product of two vectors x and y, x x x =.., y =. x n y y... y n is denoted x, y : Note that n x, y =

More information

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax =

(a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? Solution: dim N(A) 1, since rank(a) 3. Ax = . (5 points) (a) If A is a 3 by 4 matrix, what does this tell us about its nullspace? dim N(A), since rank(a) 3. (b) If we also know that Ax = has no solution, what do we know about the rank of A? C(A)

More information

Answer Keys For Math 225 Final Review Problem

Answer Keys For Math 225 Final Review Problem Answer Keys For Math Final Review Problem () For each of the following maps T, Determine whether T is a linear transformation. If T is a linear transformation, determine whether T is -, onto and/or bijective.

More information

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM

LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

MATH 304 Linear Algebra Lecture 18: Orthogonal projection (continued). Least squares problems. Normed vector spaces.

MATH 304 Linear Algebra Lecture 18: Orthogonal projection (continued). Least squares problems. Normed vector spaces. MATH 304 Linear Algebra Lecture 18: Orthogonal projection (continued). Least squares problems. Normed vector spaces. Orthogonality Definition 1. Vectors x,y R n are said to be orthogonal (denoted x y)

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as

Here each term has degree 2 (the sum of exponents is 2 for all summands). A quadratic form of three variables looks as Reading [SB], Ch. 16.1-16.3, p. 375-393 1 Quadratic Forms A quadratic function f : R R has the form f(x) = a x. Generalization of this notion to two variables is the quadratic form Q(x 1, x ) = a 11 x

More information

Contents. Appendix D (Inner Product Spaces) W-51. Index W-63

Contents. Appendix D (Inner Product Spaces) W-51. Index W-63 Contents Appendix D (Inner Product Spaces W-5 Index W-63 Inner city space W-49 W-5 Chapter : Appendix D Inner Product Spaces The inner product, taken of any two vectors in an arbitrary vector space, generalizes

More information

Common-Knowledge / Cheat Sheet

Common-Knowledge / Cheat Sheet CSE 521: Design and Analysis of Algorithms I Fall 2018 Common-Knowledge / Cheat Sheet 1 Randomized Algorithm Expectation: For a random variable X with domain, the discrete set S, E [X] = s S P [X = s]

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

5 Linear Transformations

5 Linear Transformations Lecture 13 5 Linear Transformations 5.1 Basic Definitions and Examples We have already come across with the notion of linear transformations on euclidean spaces. We shall now see that this notion readily

More information

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix

MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix MODULE 8 Topics: Null space, range, column space, row space and rank of a matrix Definition: Let L : V 1 V 2 be a linear operator. The null space N (L) of L is the subspace of V 1 defined by N (L) = {x

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

Chapter Two Elements of Linear Algebra

Chapter Two Elements of Linear Algebra Chapter Two Elements of Linear Algebra Previously, in chapter one, we have considered single first order differential equations involving a single unknown function. In the next chapter we will begin to

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

More information

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL

MATH 31 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MATH 3 - ADDITIONAL PRACTICE PROBLEMS FOR FINAL MAIN TOPICS FOR THE FINAL EXAM:. Vectors. Dot product. Cross product. Geometric applications. 2. Row reduction. Null space, column space, row space, left

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information