Supplementary Notes on Linear Algebra


Mariusz Wodzicki

May 3, 2015


1 Vector spaces

1.1 Coordinatization of a vector space

Given a basis B = {b₁, …, bₙ} in a vector space V, any vector v ∈ V can be represented as a linear combination

    v = β₁b₁ + ⋯ + βₙbₙ    (1)

and this representation is unique, i.e., there is only one sequence of coefficients β₁, …, βₙ for which (1) holds.

The correspondence between vectors in V and the coefficients in the expansion (1) defines n real-valued functions on V,

    bᵢ∨ : V → R,    bᵢ∨(v) ≝ βᵢ    (i = 1, …, n).    (2)

If

    v′ = β₁′b₁ + ⋯ + βₙ′bₙ    (3)

is another vector, then

    v + v′ = (β₁ + β₁′)b₁ + ⋯ + (βₙ + βₙ′)bₙ,

which shows that

    bᵢ∨(v + v′) = βᵢ + βᵢ′ = bᵢ∨(v) + bᵢ∨(v′)    (i = 1, …, n).

Similarly, for any number α, one has

    αv = (αβ₁)b₁ + ⋯ + (αβₙ)bₙ,

which shows that

    bᵢ∨(αv) = αbᵢ∨(v)    (i = 1, …, n).
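The uniqueness of the expansion (1) and the linearity of the coordinate functionals can be checked numerically. Below is a small NumPy sketch; the particular basis of R³ is an illustrative choice, not one taken from the notes. The coordinates [v]_B are obtained by solving the linear system whose matrix has the basis vectors as its columns.

```python
import numpy as np

# An illustrative basis of R^3, stored as the columns of a matrix.
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])

v = np.array([2.0, 3.0, 4.0])

# The coordinates beta_1, ..., beta_n of v in basis B are the unique
# solution of B @ beta = v; uniqueness reflects linear independence.
beta = np.linalg.solve(B, v)

# Linearity of the coordinate functionals, cf. (2):
v2 = np.array([1.0, 0.0, 1.0])
beta2 = np.linalg.solve(B, v2)
assert np.allclose(np.linalg.solve(B, v + v2), beta + beta2)
assert np.allclose(np.linalg.solve(B, 5.0 * v), 5.0 * beta)
```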

In other words, each function bᵢ∨ : V → R is a linear transformation from the vector space V to the one-dimensional vector space R, and the correspondence

    v ↦ [v]_B ≝ (β₁, …, βₙ)^τ    (4)

is a linear transformation from V to the n-dimensional vector space of column vectors Rⁿ.

The coordinatization isomorphism of V with Rⁿ — The kernel of (4) is {0}, since the vectors b₁, …, bₙ are linearly independent. The range of (4) is the whole of Rⁿ, since the vectors b₁, …, bₙ span V. Thus, the correspondence v ↦ [v]_B identifies V with the vector space Rⁿ. We shall refer to (4) as the coordinatization of the vector space V in basis B.

1.2 The dual space V∨

Linear transformations V → R are referred to as (linear) functionals on V (they are also called linear forms on V). Linear functionals form a vector space of their own, which is called the dual of V. We will denote it V∨ (pronounce it "V dual" or "V check").

An example: the trace of a matrix — The trace of an n×n matrix A = [αᵢⱼ] is the sum of the diagonal entries,

    tr A ≝ α₁₁ + ⋯ + αₙₙ.    (5)

The correspondence A ↦ tr A is a linear functional on the vector space Matₙ(R) of n×n matrices.

Exercise 1 Calculate both tr AB and tr BA and show that

    tr AB = tr BA,    (6)

where

    A = [ α₁₁ … α₁ₙ ]
        [  ⋮       ⋮ ]    (7)
        [ αₘ₁ … αₘₙ ]

denotes an arbitrary m×n matrix and

    B = [ β₁₁ … β₁ₘ ]
        [  ⋮       ⋮ ]    (8)
        [ βₙ₁ … βₙₘ ]

denotes an arbitrary n×m matrix.

An example: the dual of the space of m×n matrices — For any n×m matrix B, as in (8), let us consider the linear functional on the space of m×n matrices

    φ_B : A ↦ tr AB    (A ∈ Mat_mn(R)).    (9)

Exercise 2 Calculate φ_B(B^τ), where B^τ denotes the transpose of B, and show that it vanishes if and only if B = 0. Deduce that φ_B = 0 if and only if B = 0.

The correspondence

    φ : Mat_nm(R) → Mat_mn(R)∨,    B ↦ φ_B,    (10)

is a natural linear transformation from the space of n×m matrices into the dual of the space of m×n matrices. In view of Exercise 2 it is injective.

By considering bases in V∨, in the next sections we will show that the dimension of V∨ equals the dimension of V if the latter is finite. In particular, this will imply that

    dim Mat_mn(R)∨ = dim Mat_mn(R).

Since transposition of matrices, A ↦ A^τ (A ∈ Mat_mn(R)), is an isomorphism of vector spaces, it will follow that

    dim Mat_nm(R) = dim Mat_mn(R)∨.

A corollary of this is that the dual space Mat_mn(R)∨ is naturally identified with the vector space of n×m matrices, via the identification (10).
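The identity of Exercise 1 can be spot-checked numerically for rectangular matrices of compatible shapes (the particular shapes and random entries below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))   # an arbitrary m x n matrix, cf. (7)
B = rng.standard_normal((5, 3))   # an arbitrary n x m matrix, cf. (8)

# tr AB is the sum over i, j of alpha_ij * beta_ji;
# the same double sum also computes tr BA, hence (6).
lhs = np.trace(A @ B)
rhs = np.trace(B @ A)
assert abs(lhs - rhs) < 1e-12
```

Note that AB is 3×3 while BA is 5×5; the two products are different matrices with equal traces.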

1.2.5 The duality between the spaces of row and column vectors — In particular, the space of column vectors Rⁿ = Mat_n1(R) is naturally identified with the dual of the space of row vectors Mat_1n(R) and, vice versa, the space of row vectors Mat_1n(R) is naturally identified with the dual of the space of column vectors Rⁿ = Mat_n1(R).

1.2.6 The coordinate functionals — The coordinatization isomorphism of V with Rⁿ is made up of n coordinate functionals, cf. (2). They span V∨. Indeed, given a linear functional φ : V → R, let

    α₁ ≝ φ(b₁), …, αₙ ≝ φ(bₙ).

Then, for any vector v ∈ V, one has

    φ(v) = φ(β₁b₁ + ⋯ + βₙbₙ)
         = β₁φ(b₁) + ⋯ + βₙφ(bₙ)
         = β₁α₁ + ⋯ + βₙαₙ
         = α₁β₁ + ⋯ + αₙβₙ
         = α₁b₁∨(v) + ⋯ + αₙbₙ∨(v)
         = (α₁b₁∨ + ⋯ + αₙbₙ∨)(v),

which shows that the linear functional φ is a linear combination of the functionals b₁∨, …, bₙ∨:

    φ = α₁b₁∨ + ⋯ + αₙbₙ∨.

1.2.7 The coordinate functionals b₁∨, …, bₙ∨ are linearly independent. Indeed, if a linear combination α₁b₁∨ + ⋯ + αₙbₙ∨ is the zero functional, then its values on v = b₁, …, bₙ are all zero. But those values are α₁, …, αₙ, since

    bᵢ∨(bⱼ) = { 1 if i = j
              { 0 if i ≠ j.    (11)

1.2.8 The dual basis B∨ — Thus, B∨ ≝ {b₁∨, …, bₙ∨} forms a basis of the dual space. Note that

    dim V∨ = dim V.

1.3 Scalar products

1.3.1 Bilinear pairings — A function of two vector arguments

    ⟨ , ⟩ : V × V → R    (12)

is said to be a bilinear pairing on V if it is a linear functional in each argument. (Bilinear pairings are also called bilinear forms on V.)

1.3.2 We say that the bilinear pairing is nondegenerate if, for any nonzero vector v ∈ V, there exists v′ ∈ V such that ⟨v, v′⟩ ≠ 0.

1.3.3 We say that the bilinear pairing is symmetric if, for any vectors v, v′ ∈ V, one has ⟨v, v′⟩ = ⟨v′, v⟩.

1.3.4 Orthogonality — We say that vectors v and v′ are orthogonal if ⟨v, v′⟩ = 0. We denote this fact by v ⊥ v′.

1.3.5 If X is a subset of V, the set of vectors orthogonal to every element of X is denoted

    X⊥ ≝ {v ∈ V | v ⊥ x for all x ∈ X}.    (13)

Exercise 3 Show that X⊥ is a vector subspace of V and

    X ⊆ (X⊥)⊥.    (14)

1.3.6 We say that the bilinear pairing is positively defined if, for any vector v ∈ V, one has ⟨v, v⟩ ≥ 0.

Theorem 1.1 (The Cauchy–Schwarz Inequality) Let ⟨ , ⟩ be a positively defined symmetric bilinear pairing on a vector space V. Then, for any vectors v, v′ ∈ V, one has the following inequality:

    ⟨v, v′⟩² ≤ ⟨v, v⟩⟨v′, v′⟩.    (15)

We shall demonstrate (15) by considering the second-degree polynomial

    p(t) ≝ ⟨tv + v′, tv + v′⟩ = ⟨v, v⟩t² + (⟨v, v′⟩ + ⟨v′, v⟩)t + ⟨v′, v′⟩ = at² + bt + c,

where

    a = ⟨v, v⟩,  b = 2⟨v, v′⟩  and  c = ⟨v′, v′⟩.

In view of the hypothesis, p(t) ≥ 0 for all real numbers t. This is equivalent to the inequality b² ≤ 4ac, which yields inequality (15).

An immediate corollary of the Cauchy–Schwarz Inequality is that a symmetric bilinear pairing is nondegenerate and positively defined if and only if ⟨v, v⟩ > 0 for any nonzero vector in V.

Scalar products — A nondegenerate positively defined symmetric bilinear pairing on V is called a scalar product.

Exercise 4 Show that a set of nonzero vectors {v₁, …, vₙ}, mutually orthogonal with respect to some scalar product on V, is linearly independent. (Hint: for a linear combination representing the zero vector, α₁v₁ + ⋯ + αₙvₙ = 0, calculate the scalar product of both sides with each vᵢ.)
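The discriminant argument behind Theorem 1.1 can be illustrated numerically. The sketch below builds a positively defined symmetric pairing ⟨x, y⟩ = x^τ Q y with Q = M^τ M for a random M (an illustrative construction; such Q anticipates the matrix description of pairings given later in the notes):

```python
import numpy as np

rng = np.random.default_rng(1)

# A positively defined symmetric pairing on R^4: <x, y> = x^T Q y, Q = M^T M.
M = rng.standard_normal((4, 4))
Q = M.T @ M

def pair(x, y):
    return x @ Q @ y

v, w = rng.standard_normal(4), rng.standard_normal(4)

# Cauchy-Schwarz, cf. (15):
assert pair(v, w) ** 2 <= pair(v, v) * pair(w, w) + 1e-9

# The polynomial p(t) = <tv + w, tv + w> is nonnegative for all t,
# which is what forces b^2 <= 4ac in the proof.
for t in np.linspace(-10.0, 10.0, 21):
    assert pair(t * v + w, t * v + w) >= -1e-9
```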

The associated norm — For any scalar product, the functional

    v ↦ ‖v‖ ≝ √⟨v, v⟩    (16)

is called the associated norm. Using the norm notation, we can rewrite the Cauchy–Schwarz Inequality as

    |⟨v, v′⟩| ≤ ‖v‖ ‖v′‖.    (17)

The Triangle Inequality — Note that

    ‖v + v′‖² = ‖v‖² + 2⟨v, v′⟩ + ‖v′‖²,

while

    (‖v‖ + ‖v′‖)² = ‖v‖² + 2‖v‖‖v′‖ + ‖v′‖².

In view of the Cauchy–Schwarz Inequality, the bottom expression is not less than the top expression. Equivalently,

    ‖v + v′‖ ≤ ‖v‖ + ‖v′‖    (18)

for any pair of vectors v and v′ in V. This is known as the Triangle Inequality.

The associated norm also satisfies the following two conditions:

    ‖αv‖ = |α| ‖v‖    (19)

for any real number α and any vector v ∈ V, and

    ‖v‖ > 0    (20)

for any nonzero vector v ∈ V.

Norms on a vector space — Any function

    ‖ ‖ : V → [0, ∞)

that satisfies the Triangle Inequality (18) and conditions (19) and (20) is called a norm on V.

Polarization Formula — In terms of the associated norm, the scalar product is expressed by means of the identity

    ⟨v, v′⟩ = ½(‖v + v′‖² − ‖v‖² − ‖v′‖²),    (21)

known as the Polarization Formula. If a norm on a vector space V is associated with a scalar product, then the right-hand side of (21) must depend on v linearly. If it does not, then that norm is not associated with a scalar product.

Quadratic forms — A function q : V → R is called a quadratic form if the pairing assigning the number

    ⟨v, v′⟩ ≝ ½(q(v + v′) − q(v) − q(v′))    (22)

to a pair of vectors v and v′ in V is bilinear. Note that the pairing defined by (22) is symmetric. Vice versa, for any symmetric bilinear pairing ⟨ , ⟩, the function

    q(v) ≝ ⟨v, v⟩    (v ∈ V)    (23)

is a quadratic form on V.

We obtain a natural one-to-one correspondence between symmetric bilinear pairings and quadratic forms on V:

    {symmetric bilinear pairings ⟨ , ⟩ : V × V → R} ↔ {quadratic forms q : V → R}.    (24)

Nondegenerate symmetric pairings correspond to nondegenerate quadratic forms, i.e., the ones that satisfy q(v) = 0 if and only if v = 0.
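Here is a quick numerical check of the Polarization Formula (21) for the norm associated with the standard dot product (the vectors are random illustrative data):

```python
import numpy as np

rng = np.random.default_rng(2)
v, w = rng.standard_normal(5), rng.standard_normal(5)

def norm(x):
    # the norm associated with the dot product, cf. (16)
    return np.sqrt(x @ x)

# Polarization Formula (21): <v, w> = (||v + w||^2 - ||v||^2 - ||w||^2) / 2.
recovered = 0.5 * (norm(v + w) ** 2 - norm(v) ** 2 - norm(w) ** 2)
assert abs(recovered - v @ w) < 1e-12
```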

Positively defined symmetric bilinear pairings correspond to positively defined quadratic forms, i.e., the ones that satisfy q(v) ≥ 0 (v ∈ V).

An example: the dot product — Given a basis B = {b₁, …, bₙ} in V, the dot product v ·_B v′ of two vectors (1) and (3) is defined as

    v ·_B v′ ≝ β₁β₁′ + ⋯ + βₙβₙ′.    (25)

It is the only scalar product on V for which B is orthonormal, i.e.,

    ⟨bᵢ, bⱼ⟩ = { 1 if i = j
              { 0 if i ≠ j.    (26)

In the special case of V = Rⁿ and B being the standard basis

    e₁ = (1, 0, …, 0)^τ, …, eₙ = (0, …, 0, 1)^τ,    (27)

we obtain the dot product on Rⁿ.

An example: the ℓᵖ-norms on Rⁿ — For a positive number p > 0, consider the following functional on Rⁿ:

    x ↦ ‖x‖ₚ ≝ (|x₁|ᵖ + ⋯ + |xₙ|ᵖ)^(1/p).    (28)

This functional satisfies the Triangle Inequality if and only if p ≥ 1. For any p > 0, it satisfies the other two properties of a norm. Only for p = 2 is the right-hand side of the Polarization Formula linear in v. In that case, the scalar product is the dot product on Rⁿ, and the ℓ²-norm is known as the Euclidean norm. The vector space Rⁿ equipped with the ℓ²-norm is referred to as the n-dimensional Euclidean space.
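The claim that (28) satisfies the Triangle Inequality only for p ≥ 1 can be seen already on the standard basis vectors of R²: for p = 1/2 one gets ‖x + y‖_p = 4 while ‖x‖_p + ‖y‖_p = 2.

```python
import numpy as np

def lp(x, p):
    # the l_p functional of (28)
    return (np.abs(x) ** p).sum() ** (1.0 / p)

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])

# For p >= 1 the triangle inequality holds ...
assert lp(x + y, 2) <= lp(x, 2) + lp(y, 2) + 1e-12
assert lp(x + y, 1) <= lp(x, 1) + lp(y, 1) + 1e-12
# ... but for p = 1/2 it fails: ||x + y||_p = 4 > 2 = ||x||_p + ||y||_p.
assert lp(x + y, 0.5) > lp(x, 0.5) + lp(y, 0.5)
```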

An example: the Killing scalar product — Consider the bilinear pairing on the vector space Mat_mn(R) of m×n matrices

    ⟨A, B⟩ ≝ tr A^τB.    (29)

The pairing is known under the name of the Killing form and plays a very important role in Representation Theory.

Exercise 5 Calculate tr A^τA and show that tr A^τA > 0 for all nonzero m×n matrices. Explain why tr A^τB = tr B^τA.

Note that the Killing scalar product on Mat_n1(R) = Rⁿ coincides with the standard dot product on Rⁿ.

Isometries — A linear transformation T : V → V′ between vector spaces equipped with bilinear pairings ⟨ , ⟩ and, respectively, ⟨ , ⟩′, is called an isometry if it preserves the value of the pairing, i.e., if

    ⟨Tv₁, Tv₂⟩′ = ⟨v₁, v₂⟩

for any pair of vectors v₁ and v₂ in V.

The coordinatization isomorphism [ ]_B : V → Rⁿ is an isometry between V equipped with the dot product ·_B and Rⁿ equipped with the standard dot product:

    [v₁]_B · [v₂]_B = v₁ ·_B v₂.    (30)

Description of bilinear pairings on a vector space with a basis — Given a basis B = {b₁, …, bₙ} and an arbitrary bilinear pairing (12), let us consider the n×n matrix Q = [qᵢⱼ] where

    qᵢⱼ ≝ ⟨bᵢ, bⱼ⟩    (1 ≤ i, j ≤ n).    (31)
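The Killing pairing (29) and the claims of Exercise 5 can be checked numerically; note that tr A^τB is just the dot product of the entries of A and B read as long vectors (the random matrices are illustrative data):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((2, 3))

# <A, B> = tr(A^T B) equals the entrywise dot product sum_ij a_ij * b_ij.
killing = np.trace(A.T @ B)
assert abs(killing - (A * B).sum()) < 1e-12

# Positivity on nonzero matrices: tr(A^T A) is the sum of squares of entries.
assert np.trace(A.T @ A) > 0

# Symmetry: tr(A^T B) = tr(B^T A).
assert abs(killing - np.trace(B.T @ A)) < 1e-12
```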

For a pair of vectors, as in (1) and (3), one has

    ⟨v, v′⟩ = ⟨β₁b₁ + ⋯ + βₙbₙ, β₁′b₁ + ⋯ + βₙ′bₙ⟩
            = Σ_{1≤i,j≤n} βᵢ⟨bᵢ, bⱼ⟩βⱼ′
            = Σ_{1≤i,j≤n} βᵢqᵢⱼβⱼ′    (32)
            = [v]_B · (Q[v′]_B)    (33)
            = (Q^τ[v]_B) · [v′]_B,    (34)

where Q^τ denotes the transpose matrix. If we denote by ·_Q the bilinear pairing on Rⁿ given by

    x ·_Q y ≝ x^τQy,    (35)

then (32) can be rewritten as

    ⟨v, v′⟩ = [v]_B ·_Q [v′]_B.    (36)

We obtained a description of all bilinear pairings on a vector space with a chosen basis. They are in one-to-one correspondence with n×n matrices. In particular, on Rⁿ every bilinear pairing is of the form (35) for a unique n×n matrix Q.

Nondegenerate pairings correspond to invertible matrices. Symmetric pairings correspond to symmetric matrices. Pairings for which the chosen basis is orthogonal correspond to diagonal matrices.

Positively defined pairings correspond to matrices Q such that, for any x ∈ Rⁿ,

    x^τQx ≥ 0.

Symplectic forms — Antisymmetric pairings, i.e., the ones satisfying

    ⟨v′, v⟩ = −⟨v, v′⟩    (v, v′ ∈ V),

correspond to antisymmetric matrices Q, i.e., matrices whose transpose Q^τ equals −Q. Nondegenerate antisymmetric bilinear pairings are called symplectic forms. Vector spaces equipped with symplectic forms play a fundamental role in Physics, especially Mechanics, and in modern Mathematics.

An example: the standard symplectic form on R²ˡ — For vectors

    x = (α₁, …, α_l, β₁, …, β_l)^τ  and  x′ = (α₁′, …, α_l′, β₁′, …, β_l′)^τ,

the formula

    ⟨x, x′⟩_sympl ≝ (α₁β₁′ + ⋯ + α_lβ_l′) − (β₁α₁′ + ⋯ + β_lα_l′)    (37)

defines the standard symplectic form on R²ˡ. This bilinear pairing corresponds to the 2l×2l block matrix

    Q = [ 0    I_l ]
        [ −I_l  0  ],

where I_l denotes the l×l identity matrix.
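The standard symplectic form (37), its block matrix, and its antisymmetry can be verified numerically (random vectors as illustrative data):

```python
import numpy as np

ell = 2
# The matrix of the standard symplectic form on R^{2l}: Q = [[0, I], [-I, 0]].
I = np.eye(ell)
Z = np.zeros((ell, ell))
Q = np.block([[Z, I], [-I, Z]])

rng = np.random.default_rng(4)
x, y = rng.standard_normal(2 * ell), rng.standard_normal(2 * ell)

def sympl(x, y):
    # formula (37): (alpha . beta') - (beta . alpha')
    a, b = x[:ell], x[ell:]
    ap, bp = y[:ell], y[ell:]
    return a @ bp - b @ ap

assert abs(sympl(x, y) - x @ Q @ y) < 1e-12   # the pairing is x^T Q y
assert abs(sympl(x, y) + sympl(y, x)) < 1e-12 # antisymmetry
assert abs(sympl(x, x)) < 1e-12               # hence <x, x> = 0 for every x
```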

Note that the symplectic form (37) on R² coincides with the determinant of a 2×2 matrix, viewed as a bilinear function of its columns.

One can equip any even-dimensional vector space V with a symplectic form, e.g., by coordinatizing V and using the standard symplectic form (37). Symplectic forms do not exist on vector spaces of odd dimension.

The coordinatization isomorphism is an isometry — We observe that the coordinatization isomorphism is an isometry between V equipped with a pairing ⟨ , ⟩ and Rⁿ equipped with the pairing ·_Q defined in (35).

1.4 Linear transformations V → V∨

Given a bilinear pairing ⟨ , ⟩ : V × V → R, the correspondence

    v ↦ v∨ ∈ V∨,    (38)

where v∨ is the linear functional

    v∨ ≝ ⟨v, ⟩,    i.e., v∨(v′) = ⟨v, v′⟩    (v′ ∈ V),

defines a linear transformation

    ∨ : V → V∨.    (39)

Vice versa, any linear transformation Φ : V → V∨ produces a bilinear pairing

    ⟨v, v′⟩ ≝ (Φ(v))(v′).    (40)

1.4.3 The two correspondences are mutually inverse. We thus obtain a natural one-to-one correspondence between bilinear pairings on V and linear transformations V → V∨:

    {bilinear pairings on V} ↔ {linear transformations V → V∨}.    (41)

If ⟨ , ⟩ = ·_B is the dot product associated with a basis B in V, then the bᵢ∨ in the sense of (38) coincide with the coordinate functionals introduced in Section 1.2.6. Note that the linear transformation V → V∨ that corresponds to the scalar product ·_B sends the basis vectors b₁, …, bₙ in V to the basis vectors b₁∨, …, bₙ∨ in V∨.

The pairing is nondegenerate if and only if the kernel of the induced transformation V → V∨ is zero. For finite-dimensional spaces the latter occurs precisely when V → V∨ is an isomorphism of vector spaces. Thus, for finite-dimensional spaces we obtain a natural one-to-one correspondence

    {nondegenerate bilinear pairings on V} ↔ {isomorphisms V → V∨}.    (42)

The second dual space (V∨)∨ — For a general vector space there are no natural nonzero linear transformations V → V∨. Each such transformation is equivalent to equipping V with a bilinear pairing (12). There is, however, a natural transformation from V to the dual of its own dual:

    v ↦ v̂ ∈ (V∨)∨,    (43)

where the linear functional v̂ on V∨ is defined as

    v̂(φ) ≝ φ(v)    (φ ∈ V∨).    (44)

The value of v̂ on φ is declared to be the value of the functional φ on the vector v.

Exercise 6 Calculate (v + v′)ˆ(φ) and (αv)ˆ(φ) and show that the correspondence

    ˆ : V → (V∨)∨    (45)

is a linear transformation.

Exercise 7 Show that for any nonzero vector v, there exists a linear functional φ such that φ(v) ≠ 0. Deduce from this that v̂ = 0 if and only if v = 0. (Hint: consider a basis B in which b₁ = v.)

For a finite-dimensional vector space we established in Section 1.2.8 that V and its dual V∨ have the same dimension. In particular, V∨ and (V∨)∨ also have the same dimension. Combining this with the fact that the transformation (45) is, according to Exercise 7, injective, we infer that it is an isomorphism. Thus, a finite-dimensional vector space V naturally identifies with its own second dual.

It follows that the coordinate functionals ψ₁∨, …, ψₙ∨, for any basis {ψ₁, …, ψₙ} in the dual space V∨, are necessarily of the form

    b̂₁, …, b̂ₙ

for some basis B = {b₁, …, bₙ} in V.

Let us calculate the value of ψᵢ on an arbitrary vector v expressed in basis B, cf. (1):

    ψᵢ(v) = ψᵢ(β₁b₁ + ⋯ + βₙbₙ)
          = β₁ψᵢ(b₁) + ⋯ + βₙψᵢ(bₙ)
          = β₁b̂₁(ψᵢ) + ⋯ + βₙb̂ₙ(ψᵢ)
          = β₁ψ₁∨(ψᵢ) + ⋯ + βₙψₙ∨(ψᵢ)
          = βᵢ.

We have, of course, bᵢ∨(v) = βᵢ. Thus, the functionals ψᵢ and bᵢ∨ are equal.

We demonstrated that every basis in the dual space V∨ of a finite-dimensional space consists of the coordinate functionals B∨ = {b₁∨, …, bₙ∨} of a unique basis B = {b₁, …, bₙ} in V.

We also demonstrated that the coordinate functionals b₁∨∨, …, bₙ∨∨ of the dual basis B∨ identify with the vectors of the original basis in V via the natural identification (43) of V with (V∨)∨:

    b₁∨∨ = b̂₁, …, bₙ∨∨ = b̂ₙ.    (46)

2 Linear transformations

2.1 The adjoint linear transformation

The dual linear transformation — A linear transformation T : V → W induces a linear transformation between the dual spaces, with the source and the target exchanging places:

    T∨ : W∨ → V∨,    T∨(ψ) ≝ ψ ∘ T    (ψ ∈ W∨).    (47)

As you see, T∨ sends a linear functional ψ : W → R on W to its composition with T,

    V —T→ W —ψ→ R,

yielding a linear functional on V. The dependence on ψ is linear, which just reflects the fact that composition of linear transformations is distributive with respect to addition of linear transformations and commutes with multiplication by scalars:

    (ψ + ψ′) ∘ T = ψ ∘ T + ψ′ ∘ T,    (αψ) ∘ T = α(ψ ∘ T),

so T∨ is a linear transformation from the dual of W to the dual of V.

The kernel of T∨ — The kernel of T∨ is the set of functionals ψ on W such that ψ ∘ T = 0. These are the functionals that vanish on the range T(V) of T:

    ker T∨ = {ψ ∈ W∨ | T(V) ⊆ ker ψ}.    (48)

Suppose that both V and W are equipped with scalar products. We shall denote the scalar product on V by ⟨ , ⟩ and the one on W by ⟨ , ⟩′. In the diagram

    V —T→ W
    ∨↓      ↓∨
    V∨ ←T∨— W∨

the vertical arrows are the isomorphisms associated with the corresponding scalar products,

    v ↦ v∨ = ⟨v, ⟩,    w ↦ w∨ = ⟨w, ⟩′.

By composing ∨ with T∨ and then with the inverse of ∨, we obtain the linear transformation T* : W → V making the following diagram commutative:

    V ←T*— W
    ∨↓        ↓∨    (49)
    V∨ ←T∨— W∨

It is called the adjoint of T. Unlike T∨, the adjoint is defined on the original spaces, not their duals, but its construction depends on the choice of scalar products on the source and on the target of T.

Note that T* takes a vector w ∈ W to (T*w)∨ = ⟨T*w, ⟩, while T∨ takes w∨ to w∨ ∘ T = ⟨w, T( )⟩′. The commutativity of diagram (49) means that the following linear functionals on V are equal:

    ⟨T*w, ⟩ = ⟨w, T( )⟩′,

i.e., that, for any vector v ∈ V, one has

    ⟨T*w, v⟩ = ⟨w, Tv⟩′.    (50)

This last identity is how the adjoint transformation is usually defined, but then one has to demonstrate that the desired linear transformation T* : W → V exists and is unique. The way we define T* is free of this inconvenience.

2.1.7 The kernel of T* — We shall record a few immediate consequences of identity (50). In view of the bilinear pairing ⟨ , ⟩ being nondegenerate, the vector T*w is zero if and only if

    0 = ⟨T*w, v⟩ = ⟨w, Tv⟩′ for all v ∈ V,

i.e., if and only if w is orthogonal to the range of T. Thus,

    ker T* = T(V)⊥.    (51)

The range of T* — A vector v ∈ V is orthogonal to the range of T* if and only if ⟨w, Tv⟩′ = 0 for all w ∈ W. In view of ⟨ , ⟩′ being nondegenerate, this is equivalent to Tv = 0. Thus,

    T*(W)⊥ = ker T or, equivalently, T*(W) ⊆ (ker T)⊥.    (52)

In a later section we shall establish that for finite-dimensional subspaces U ⊆ V one has (U⊥)⊥ = U, hence

    T*(W) = (ker T)⊥,    (53)

since we assume both V and W to be finite-dimensional.

For any vector v ∈ V in a vector space with a scalar product we defined in Section 1.4 the associated linear functional v∨. In fact, v∨ is the adjoint (!) of the linear transformation R → V sending a real number α to αv. Isn't this remarkable? Mathematics is full of such beautiful miracles; one only needs patience and readiness to discover them (the lazy ones are deprived of many joys, including the spectacular joys of such discoveries). Note that vectors in V are here naturally identified with linear transformations from R to V.

2.2 Linear transformations and bases

Let T : V → W be a linear transformation. Suppose that V is equipped with a basis B = {b₁, …, bₙ} while W is equipped with a basis C = {c₁, …, c_m}. The image of each basis vector bⱼ under T is a linear combination of basis vectors in W:

    Tbⱼ = α₁ⱼc₁ + ⋯ + α_mⱼc_m.    (54)

If v is an arbitrary vector (1) in V, then

    Tv = T(β₁b₁ + ⋯ + βₙbₙ)
       = β₁Tb₁ + ⋯ + βₙTbₙ
       = β₁(α₁₁c₁ + ⋯ + α_m₁c_m) + ⋯ + βₙ(α₁ₙc₁ + ⋯ + α_mₙc_m)
       = Σ_{1≤i≤m} ( Σ_{1≤j≤n} αᵢⱼβⱼ ) cᵢ.

In other words, the column of coefficients representing Tv in basis C equals A times the column of coefficients representing v in basis B:

    [Tv]_C = A[v]_B.    (55)

Here A denotes the m×n matrix with columns

    [Tb₁]_C, …, [Tbₙ]_C.

A commutative diagram of linear transformations — Identity (55) expresses the fact that the following diagram of linear transformations commutes:

    Rⁿ —L_A→ R^m
    [ ]_B↑        ↑[ ]_C    (56)
    V ——T——→ W

which means that the composition of the top arrow with the left arrow produces the same transformation as the composition of the right arrow

with the bottom arrow. Here L_A denotes left multiplication by matrix A, and the vertical arrows are the coordinatization isomorphisms of V with Rⁿ and of W with R^m, induced by basis B in V and basis C in W, respectively.

The matrix A depends on T and on the choice of bases in both the source and the target spaces of T. When necessary, we shall indicate this dependence by employing the notation A_T(C, B).¹

If S : W → X is another linear transformation and D = {d₁, …, d_l} is a basis in X, then the commutativity of the diagram

    Rⁿ —L_{A_T(C,B)}→ R^m —L_{A_S(D,C)}→ R^l
    [ ]_B↑               ↑[ ]_C              ↑[ ]_D    (57)
    V ———T———→ W ———S———→ X

means that

    A_S(D, C)A_T(C, B) = A_{S∘T}(D, B).    (58)

Thus, we obtain a one-to-one correspondence

    {linear transformations T : V → W} ↔ {m×n matrices}    (59)

where m is the dimension of the target space and n is the dimension of the source space.

¹ Warning to my Math 54 students: in my lectures I have been using a slightly different notation, A_T(B, C).
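Identity (55) can be verified numerically. In the sketch below the bases B and C are random invertible matrices whose columns play the roles of b₁, …, bₙ and c₁, …, c_m (illustrative choices), and T acts on standard coordinates; column j of A is [Tbⱼ]_C, i.e., the solution of Cx = Tbⱼ.

```python
import numpy as np

rng = np.random.default_rng(5)

# Columns of B form a basis of V = R^3; columns of C form a basis of W = R^2.
B = rng.standard_normal((3, 3))
C = rng.standard_normal((2, 2))
T = rng.standard_normal((2, 3))   # T acting on standard coordinates

# A = A_T(C, B): column j is [T b_j]_C = C^{-1} (T b_j).
A = np.linalg.solve(C, T @ B)

v = rng.standard_normal(3)
v_B = np.linalg.solve(B, v)         # [v]_B
Tv_C = np.linalg.solve(C, T @ v)    # [Tv]_C

# Identity (55): [Tv]_C = A [v]_B.
assert np.allclose(Tv_C, A @ v_B)
```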

This correspondence is not natural: it depends on the chosen bases in V and W. It is, however, compatible with the operations on the spaces of linear transformations and the corresponding spaces of matrices; see (58) and

    A_{T+T′}(C, B) = A_T(C, B) + A_{T′}(C, B),    A_{αT}(C, B) = αA_T(C, B).    (60)

The pair of identities above means that the correspondence T ↦ A_T(C, B) establishes an isomorphism between the vector space Lin(V, W) of linear transformations from V to W and the vector space Mat_mn(R) of m×n matrices.

The case of T : Rⁿ → R^m — In the special case when V = Rⁿ and W = R^m and the bases chosen are the standard bases, the coordinatization isomorphisms are the identity transformations and the commutative diagram becomes

    Rⁿ —L_A→ R^m
    ‖            ‖    (61)
    Rⁿ ——T—→ R^m

meaning that every linear transformation Rⁿ → R^m is of the form L_A for a unique m×n matrix. The columns of A are the images of the standard basis vectors e₁, …, eₙ ∈ Rⁿ under T:

    Te₁, …, Teₙ.

The effect of a change of basis on the matrix A_T(C, B) — Given another pair of bases B′ and C′ in V and W, respectively, we obtain a commutative diagram

    Rⁿ —L_A→ R^m
    [ ]_B↑        ↑[ ]_C
    V ——T——→ W    (62)
    [ ]_B′↓       ↓[ ]_C′
    Rⁿ —L_A′→ R^m

and, therefore, also the diagram

    Rⁿ ————L_A———→ R^m
    [ ]_B′∘([ ]_B)⁻¹↓        ↓[ ]_C′∘([ ]_C)⁻¹    (63)
    Rⁿ ————L_A′——→ R^m

The change-of-coordinates matrices — The transformation [ ]_B′ ∘ ([ ]_B)⁻¹ sends the standard vectors e₁, …, eₙ to the vectors b₁, …, bₙ and then represents them in basis B′ as columns of numbers. It acts, therefore, as multiplication by the n×n matrix with columns

    [b₁]_B′, …, [bₙ]_B′.    (64)

Lay denotes this matrix P_{B′←B} and calls it the change-of-coordinates matrix from B to B′.

The inverse of the change-of-coordinates matrix P_{B′←B} is, by definition, P_{B←B′}.

The change-of-coordinates formula — The commutativity of diagram (63) translates into the formula

    A_T(C′, B′) = P_{C′←C} A_T(C, B) P_{B←B′}.    (65)

2.3 The matrix of the dual transformation

Every linear functional ψ : R^m → R is of the form ψ = L_α, where α = [α₁ … α_m] is a 1×m matrix, i.e., a row of m numbers. Therefore the dual of the transformation T = L_A : Rⁿ → R^m sends ψ to the composite

    ψ ∘ T = L_α ∘ L_A = L_{αA}.

If we identify (R^m)∨ and (Rⁿ)∨ with the spaces of row vectors Mat_1m(R) and, respectively, Mat_1n(R), then (L_A)∨ becomes right multiplication by matrix A:

    (L_A)∨(α) = αA.    (66)

The coordinatization of the dual space (Rⁿ)∨ is the composite of the identification (Rⁿ)∨ ≃ Mat_1n(R) and transposition

    τ : Mat_1n(R) → Mat_n1(R) = Rⁿ.

Therefore, the coordinatization of (R^m)∨ and (Rⁿ)∨ identifies (L_A)∨ with left multiplication by the transpose matrix A^τ:

    y ↦ A^τy    (y ∈ R^m).    (67)
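The change-of-coordinates formula (65) can be spot-checked numerically. Here the four bases are random invertible matrices whose columns are the basis vectors (illustrative choices), and the change-of-coordinates matrices are computed directly from their definition:

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 3, 2
B, Bp = rng.standard_normal((n, n)), rng.standard_normal((n, n))  # bases B, B' of V
C, Cp = rng.standard_normal((m, m)), rng.standard_normal((m, m))  # bases C, C' of W
T = rng.standard_normal((m, n))  # T in standard coordinates

def mat_of_T(Cb, Bb):
    # A_T(C, B): column j is [T b_j]_C
    return np.linalg.solve(Cb, T @ Bb)

# Change-of-coordinates matrices, cf. (64): columns of P_{C'<-C} are [c_j]_{C'}.
P_Cp_C = np.linalg.solve(Cp, C)   # P_{C'<-C}
P_B_Bp = np.linalg.solve(B, Bp)   # P_{B<-B'}

# Formula (65): A_T(C', B') = P_{C'<-C} A_T(C, B) P_{B<-B'}.
assert np.allclose(mat_of_T(Cp, Bp), P_Cp_C @ mat_of_T(C, B) @ P_B_Bp)
```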

27 2.3.4 The inverse to the coordinatization of (R n ) is the isomorphism associated with the dot product on R n. : R n (R n ) (68) Let V be a vector space with a basis B. The composite of (68) and the dual of the coordinatization isomorphism [ ] B, ([ ] B ) : (R n ) V, sends the standard basis vectors e i in R n to the basis vectors b i in V. This means that the inverse transformation is the coordinatization of V in the dual basis B, ( [ ] B = ([ ] B ) ( ) ) 1. (69) In particular, if T : V W is a linear transformation with matrix A in bases B in V and C in W, then the matrix of its dual is the transpose matrix A τ, T : W V 2.4 The rank of a linear transformation A T (B, C ) = A T (C, B) τ. (70) The rank of T : V W is the dimension of its range, i.e., the vector subspace T(V) of W consisting of the values of T. Exercise 8 Suppose that vectors Tv 1,..., Tv l are linearly independent. Show that vectors v 1,..., v l are linearly independent. (Hint: show that if v 1,..., v l are linearly dependent, then Tv 1,..., Tv l are linearly dependent.) 27

28 2.4.2 The matrix representation of a linear transformation, as described by identity (55), or by the commutative diagram (56), can be also expressed in the following form T = α ij c i b j, (71) 1 i m 1 j n where b 1,..., b n are the coordinate functionals on V Each term c i b j is a linear transformation whose range is the one-dimensional subspace of W spanned by c i. Its rank is thus 1 and the same is true when we multiply c i b j by a nonzero coefficient α ij. Thus, the matrix representation of a linear transformation represents it as the sum of rank 1 transformations whose number equals the number of nonzero entries in matrix A. This number does not exceed mn and often is equal to it Let us note that the identity operator id V on a vector space V of dimension n has obviously rank n and it also has the following representation as the sum of n rank 1 operators, id V = b 1 b b n b n, (72) where {b 1,..., b n } is an arbitrary basis of V. This is nothing more than observing that the identity operator is represented by the identity n n matrix in every basis If a linear transformation T : V W can be expressed as the sum of d rank 1 transformations, then the rank of T does not exceed d because the range of T is spanned by d nonzero vectors belonging to the one-dimensional ranges of those rank 1 transformations. 28

This means that a transformation of rank r cannot be represented as the sum of fewer than r rank 1 transformations. We shall now show that, in fact, any linear transformation of rank r can be represented as the sum of exactly r transformations of rank 1.

Indeed, let {c₁, …, c_r} be any basis in the range of T. Let

    τ₁ ≝ T∨(c₁∨), …, τ_r ≝ T∨(c_r∨).

Then

    (c₁τ₁ + ⋯ + c_rτ_r)(v) = c₁τ₁(v) + ⋯ + c_rτ_r(v)
                            = c₁c₁∨(Tv) + ⋯ + c_rc_r∨(Tv)
                            = (c₁c₁∨ + ⋯ + c_rc_r∨)(Tv)
                            = id_{Range of T}(Tv)
                            = Tv.

We demonstrated that T has the following representation as the sum of r rank 1 linear transformations:

    T = c₁τ₁ + ⋯ + c_rτ_r.    (73)

We shall now demonstrate that for any linear transformation T it is possible to find such bases B in V and C in W that the functionals τ₁, …, τ_r are the coordinate functionals of B:

    τ₁ = b₁∨, …, τ_r = b_r∨.

Indeed, by performing row and column operations it is possible to represent any m×n matrix A of rank r as the product

    A = PJQ,    (74)

where P is an invertible m×m matrix, Q is an invertible n×n matrix, and J is the m×n matrix having the r×r identity matrix in its

left top corner and zeros everywhere else:

    J = [ I_r  0 ]
        [ 0    0 ].    (75)

Matrix J is the reduced column echelon form of the reduced row echelon form of A.

If A is the matrix of T in some bases B′ and C′, then, according to the change-of-coordinates formula (65), J will become the matrix of T in the bases B and C such that P = P_{C′←C} and Q = P_{B←B′}. Saying that A_T(C, B) = J means that T has the following representation as the sum of r rank 1 transformations:

    T = c₁b₁∨ + ⋯ + c_rb_r∨,    (76)

and this is the simplest such representation of T.

2.5 The row rank versus the column rank of a matrix

Given an m×n matrix A, its row rank is the dimension of the vector space spanned by the rows of A,

    Row A ≝ lin span{A₁, …, A_m},    (77)

while the column rank is the dimension of the vector space spanned by the columns of A,

    Col A ≝ lin span{A¹, …, Aⁿ}.    (78)
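A factorization A = PJQ as in (74)–(75) can be produced numerically. The sketch below takes a shortcut that differs from the row/column reduction used in the text: it pads a rank factorization A = XY with random columns and rows, which yields invertible P and Q with probability 1 (the shapes and random data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(7)
m, n, r = 4, 5, 2
X = rng.standard_normal((m, r))   # full column rank (almost surely)
Y = rng.standard_normal((r, n))   # full row rank (almost surely)
A = X @ Y                         # an m x n matrix of rank r

# J: the m x n matrix with the r x r identity in the top-left corner, cf. (75).
J = np.zeros((m, n))
J[:r, :r] = np.eye(r)

# Pad X to an invertible P and Y to an invertible Q.  Then P J Q picks out
# the first r columns of P times the first r rows of Q, i.e. X Y = A.
P = np.column_stack([X, rng.standard_normal((m, m - r))])
Q = np.vstack([Y, rng.standard_normal((n - r, n))])
assert np.linalg.matrix_rank(P) == m and np.linalg.matrix_rank(Q) == n

assert np.allclose(A, P @ J @ Q)          # A = P J Q, cf. (74)
assert np.linalg.matrix_rank(A) == r
```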

31 Note that Row A is the range of right multiplication by A, R A : Mat 1m (R) Mat 1n (R), whereas Col A is the range of left multiplication by A, L A : Mat n1 (R) Mat m1 (R) For any linear transformation T : V W and any isomorphisms S : W W and U : V V, the range of T coincides with the range of T U and is isomorphic to the range of S T (the isomorphism is provided by S). It follows that dim Range (S T U) = dim Range T. (79) By applying this to T being either L A or R A, we obtain that both the row and the column ranks remain constant when one multiplies the matrix on either side by an invertible matrix. Thus, the row and the column ranks of A = PJQ equal the row and, respectively, the column ranks of matrix J, cf. (74) (75). The two ranks are equal for matrix J, hence they are equal for the original matrix A. 31
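The equality of the two ranks, and their invariance under multiplication by invertible matrices, can be illustrated numerically (the matrix below has an engineered rank deficiency; random factors are invertible with probability 1):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 6))
A[3] = A[0] + A[1]        # force a dependent row: the rank drops to 3

row_rank = np.linalg.matrix_rank(A)      # dim Row A
col_rank = np.linalg.matrix_rank(A.T)    # dim Col A, computed as dim Row A^T
assert row_rank == col_rank == 3

# Multiplying on either side by an invertible matrix preserves both ranks,
# cf. (79); P and Q are invertible with probability 1.
P = rng.standard_normal((4, 4))
Q = rng.standard_normal((6, 6))
assert np.linalg.matrix_rank(P @ A @ Q) == 3
```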

32 3 Linear operators on V 3.1 Projections Linear transformations T : V V are usually referred to as linear operators on a vector space V An operator P on a vector space V is said to be a projection onto a subspace W along a subspace W if its range is W, its kernel is W, and Idempotent operators An operator T is said to be idempotent if P(v) = v for all v W. (80) T T = T. (81) Any projection is idempotent. Indeed, P(v) belongs to W, for any vector v, and therefore P ( P(v) ) = P(v), in view of (80) For an idempotent operator T, denote by W its kernel and by W its range. Any vector w in W is of the form w = T(v) for some v V. Hence T(w) = T(T(v)) = T(v) = w, i.e., T is a projection onto its range along its kernel The complementary projection If P is a projection, then P id V P is an idempotent operator too: P P = (id V P) (id V P) = id V 2P + P P = id V 2P + P = P. 32

33 Moreover, and P (v) = 0 if and only if P(v) = v P (v) = v if and only if P(v) = 0, so P is a projection onto the kernel of P along the range of P Note that P P = 0 = P P. (82) It is customary to say about two projections satisfying (82) that they are orthogonal to each other The operator P is referred to as the complementary projection. Its range is W and its kernel is W The associated direct sum decomposition of V Since any vector v V is the sum id V = P + P, v = w + w (83) for some w W and w W (namely, w = P(v) and w = P (v)), and such representation is unique. Indeed, if is another representation, then v = w 1 + w 1 w 1 = P(v) = w and w 1 = P (v) = w. When any vector in a vector space V can be represented as the sum of vectors (83) from two subspaces W and W, and that representation is unique, we say that V is the direct sum of subspaces W and W. Note that the intersection of W with W contains only the zero vector. 33
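A concrete projection illustrates idempotence, the complementary projection P′ = id_V − P, their mutual orthogonality (82), and the direct sum decomposition (83). The plane W and the line W′ below are illustrative choices, not taken from the text:

```python
import numpy as np

# A projection on R^3 onto the plane W spanned by w1, w2, along the line
# W' spanned by w3 (illustrative subspaces).
w1 = np.array([1.0, 0.0, 0.0])
w2 = np.array([0.0, 1.0, 0.0])
w3 = np.array([1.0, 1.0, 1.0])
S = np.column_stack([w1, w2, w3])

# In the basis {w1, w2, w3}, P keeps the W-components and kills the W'-component.
P = S @ np.diag([1.0, 1.0, 0.0]) @ np.linalg.inv(S)
Pc = np.eye(3) - P                      # the complementary projection P'

assert np.allclose(P @ P, P)            # P is idempotent, cf. (81)
assert np.allclose(Pc @ Pc, Pc)         # so is P' = id - P
assert np.allclose(P @ Pc, 0)           # P P' = 0 = P' P, cf. (82)
v = np.array([2.0, 3.0, 5.0])
assert np.allclose(P @ v + Pc @ v, v)   # v = w + w', cf. (83)
```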

More generally, when any vector in a vector space V can be represented as the sum of vectors

    v = w₁ + ⋯ + w_l    (84)

from subspaces W₁, …, W_l of V, and the representation (84) is unique, then we say that V is the direct sum of the subspaces W₁, …, W_l. In this case, we shall refer to the vectors w₁ ∈ W₁, …, w_l ∈ W_l as the component vectors of the vector v.

The associated projections — Let Wᵢ′ be the set of vectors v in V whose i-th component vector is zero. It is a vector subspace of V. If we rewrite (84) as v = wᵢ + wᵢ′, where wᵢ′ is the sum of all vectors but wᵢ on the right-hand side of (84), then we see that V is the direct sum of Wᵢ and Wᵢ′. Let Pᵢ be the corresponding projection onto Wᵢ with kernel Wᵢ′. Since the range of Pⱼ is contained in the kernel of Pᵢ when i ≠ j, the associated projections are orthogonal to each other:

    PᵢPⱼ = 0 if i ≠ j.

The fact that every vector in V has a representation (84) means that

    id_V = P₁ + ⋯ + P_l.    (85)

Bases compatible with a direct sum decomposition — Choosing a basis Bᵢ = {b₁⁽ⁱ⁾, …, b_{nᵢ}⁽ⁱ⁾} in each Wᵢ allows one to represent each component vector wᵢ as a unique linear combination

    wᵢ = α₁⁽ⁱ⁾b₁⁽ⁱ⁾ + ⋯ + α_{nᵢ}⁽ⁱ⁾b_{nᵢ}⁽ⁱ⁾    (86)

of vectors in Bᵢ, where nᵢ denotes the dimension of Wᵢ. By substituting these linear combinations into (84), we obtain a representation of any vector in V as a linear combination of the set B assembled by taking the union of the component bases B₁, …, B_l. Such a representation of v as a linear combination of vectors from B is unique in view of the uniqueness of the representation (84) and of the representations (86). In particular, B is a basis of V.

In particular, the dimension of V is the sum of the dimensions of the component subspaces:
n = n_1 + ... + n_l. (87)

Any basis B = {b_1, ..., b_n} of a vector space V is compatible with several direct sum decompositions of V. For example, let us partition {1, ..., n} into two disjoint subsets I and J, and let V_I and, respectively, V_J be the subspaces spanned by the basis vectors b_i with i belonging to I or, respectively, to J. Then V is the direct sum of V_I and V_J.

Another example: let V_i be the one-dimensional subspace spanned by the single vector b_i. Then V is the direct sum of the one-dimensional subspaces V_1, ..., V_n.

3.2 Case study: a linear transformation T : V → W

Given a basis {c_1, ..., c_r} of the range of a linear transformation T : V → W, let us choose vectors b_1, ..., b_r in V such that
Tb_1 = c_1, ..., Tb_r = c_r.
They are linearly independent. Indeed, if
α_1 b_1 + ... + α_r b_r = 0,
then
α_1 c_1 + ... + α_r c_r = T(α_1 b_1 + ... + α_r b_r) = 0,
and the linear independence of c_1, ..., c_r implies that all α_i are zero. This argument, in fact, shows that no nonzero linear combination of the vectors b_1, ..., b_r belongs to the kernel of T.

3.2.2

Let us denote the linear span of the vectors b_1, ..., b_r by V′ and the kernel of T by V_0. Given a vector v ∈ V, let us represent Tv in the basis {c_1, ..., c_r},
Tv = α_1 c_1 + ... + α_r c_r.
Then v′ = α_1 b_1 + ... + α_r b_r belongs to V′, and
T(v − v′) = (α_1 c_1 + ... + α_r c_r) − T(α_1 b_1 + ... + α_r b_r) = 0
means that v − v′ belongs to V_0.

Thus V is the direct sum of V′ and V_0. In particular, we demonstrated that
dim V = dim T(V) + dim ker T (88)
because V′ is spanned by the r linearly independent vectors b_1, ..., b_r, and r is the dimension of the range of T. We also showed that for any basis {c_1, ..., c_r} of the range of T, there exists a basis {b_1, ..., b_n} of V such that (76) holds.

3.3 Linear operators on a space with a chosen basis

Given a basis B of V, the matrix A_T(B, B) will be referred to as the matrix of T in the basis B. We shall denote it A_T(B).

For two operators S and T and a number α, one has
A_{S+T}(B) = A_S(B) + A_T(B), A_{αT}(B) = α A_T(B), (89)
and
A_{S∘T}(B) = A_S(B) A_T(B). (90)
These are special cases of the identities (60) and (58).
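Formula (88) is the rank-nullity theorem, and it is easy to witness numerically; the matrix below is an assumed example chosen to be rank-deficient.

```python
import numpy as np

# A numerical illustration (assumed example) of formula (88):
# dim V = dim T(V) + dim ker T, for T = L_A with a rank-deficient A.
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # second row = 2 * first row
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)    # dim of the range T(V)
nullity = A.shape[1] - rank        # dim of the kernel of T

assert rank + nullity == A.shape[1]   # (88) with dim V = 3
```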

3.3.3 Polynomial functions of an operator

Let
p(t) = α_0 + α_1 t + ... + α_d t^d (91)
be a polynomial. By substituting an operator T in place of the symbol t, we obtain an operator p(T) on V. The identities (89) and (90) imply that the matrix of p(T) is that polynomial applied to the matrix of T:
A_{p(T)}(B) = p(A_T(B)). (92)

3.4 Invariant subspaces of a linear operator

A subspace W of V is said to be invariant under an operator T if T(W) ⊆ W, i.e., if Tw belongs to W for every vector w ∈ W. The intersection
W_1 ∩ ... ∩ W_l
and the span
W_1 + ... + W_l
of any family W_1, ..., W_l of invariant subspaces are invariant. The whole space V is the largest invariant subspace of V, and the zero space {0}, containing only the zero vector, is the smallest.

Linear operators on one-dimensional spaces

Every operator on a one-dimensional space V is of the form
v ↦ αv (v ∈ V)
for a unique real number α. Indeed, if v ≠ 0, then v spans V and therefore Tv is a multiple αv for some number α. The coefficient α does not depend on v since, for v′ = α′v, one has
T(v′) = T(α′v) = α′ T(v) = α′αv = αα′v = αv′.
One-dimensional spaces have no nontrivial subspaces, so, obviously, operators on one-dimensional spaces have no nontrivial invariant subspaces.
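Identity (92) can be checked directly: applying a polynomial to a matrix means summing scaled matrix powers. The matrix and polynomial below are arbitrary assumed examples.

```python
import numpy as np

# Illustration of (92): applying p(t) = 1 + 2t + t^2 to an operator
# given by a matrix A (both p and A are assumed examples).
A = np.array([[0.0, 1.0],
              [2.0, 3.0]])
coeffs = [1.0, 2.0, 1.0]     # alpha_0, alpha_1, alpha_2 in (91)

p_of_A = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(coeffs))

# Since p(t) = (1 + t)^2 here, p(A) must equal (I + A)^2.
assert np.allclose(p_of_A, (np.eye(2) + A) @ (np.eye(2) + A))
```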

3.5 An example: rotation in R^2

The rotation of the two-dimensional space R^2 through an angle θ,
T = L_A, where A = [ cos θ  −sin θ ]
                   [ sin θ   cos θ ], (93)
has no nontrivial invariant subspaces when θ is not an integer multiple of π.

3.6 One-dimensional invariant subspaces and eigenvalues

The restriction of T to any of its invariant subspaces W induces an operator on W. If W is one-dimensional, T acts on W as multiplication by a certain number λ. The nonzero vectors of one-dimensional invariant subspaces are called eigenvectors of T with eigenvalue λ. The set of all eigenvectors with λ as their eigenvalue, together with the zero vector, forms an invariant subspace called the eigensubspace of T corresponding to the eigenvalue λ. It will be denoted V_λ.

3.7

By definition, V_λ is the largest subspace of V on which T acts as multiplication by λ.

3.8 The subspace spanned by all eigenvectors

Denote by V′ the subspace of V spanned by all the eigenvectors of T. We shall assume in what follows that V is finite-dimensional. In this case, there are finitely many distinct eigenvalues λ_1, ..., λ_k and
V′ = V_{λ_1} + ... + V_{λ_k}. (94)
We shall show that any vector v in V′ has a unique representation
v = v_1 + ... + v_k (v_1 ∈ V_{λ_1}, ..., v_k ∈ V_{λ_k}).
Indeed, if
v = v′_1 + ... + v′_k (v′_1 ∈ V_{λ_1}, ..., v′_k ∈ V_{λ_k})
is another representation, then
(v_1 − v′_1) + ... + (v_k − v′_k) = 0. (95)
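The absence of one-dimensional invariant subspaces for a generic rotation shows up numerically as the absence of real eigenvalues; a quick check of (93), with an arbitrarily chosen angle:

```python
import numpy as np

# The rotation matrix (93): for an angle that is not a multiple of pi,
# its eigenvalues cos(theta) +/- i sin(theta) are not real, reflecting
# the absence of one-dimensional invariant subspaces.
theta = 0.7                       # assumed sample angle, not a multiple of pi
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigenvalues = np.linalg.eigvals(A)
assert np.all(np.abs(eigenvalues.imag) > 1e-12)   # no real eigenvalues
```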

By applying T^i, for i = 0, 1, ..., k−1, to both sides of (95), we obtain the system of equalities
(v_1 − v′_1) + ... + (v_k − v′_k) = 0
λ_1 (v_1 − v′_1) + ... + λ_k (v_k − v′_k) = 0
λ_1^2 (v_1 − v′_1) + ... + λ_k^2 (v_k − v′_k) = 0
... (96)
λ_1^{k−1} (v_1 − v′_1) + ... + λ_k^{k−1} (v_k − v′_k) = 0.

Consider the vector space V^k whose elements are columns of k vectors in V. The first k equalities in (96) together express the fact that the column of vectors
[ v_1 − v′_1 ]
[    ...     ] (97)
[ v_k − v′_k ]
is annihilated by the operator on V^k that multiplies a column of vectors by the square matrix
[ 1           1          ...  1          ]
[ λ_1         λ_2        ...  λ_k        ]
[ λ_1^2       λ_2^2      ...  λ_k^2      ] (98)
[ ...                                    ]
[ λ_1^{k−1}   λ_2^{k−1}  ...  λ_k^{k−1}  ].
This matrix is known as the Vandermonde matrix. Its determinant equals
∏_{1 ≤ i < j ≤ k} (λ_j − λ_i) ≠ 0. (99)
In particular, the operator in question is invertible; hence the column of vectors (97) is the zero vector in V^k, i.e., all of its components are zero.

We demonstrated that the subspace V′ spanned by all the eigenvectors of T is the direct sum of the eigenspaces V_{λ_1}, ..., V_{λ_k}.

The restriction of T to V′ has a very simple matrix A in any basis compatible with the decomposition of V′ into the direct sum of the eigenspaces. It is the diagonal matrix having on its diagonal λ_1 repeated n_1 times, then λ_2 repeated n_2 times, and so on. Here n_j = dim V_{λ_j} denotes the dimension of the space of eigenvectors with eigenvalue λ_j.
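The determinant formula (99) is easy to verify numerically for any choice of distinct values; the eigenvalues below are assumed sample data.

```python
import numpy as np
from itertools import combinations

# Checking the Vandermonde determinant formula (99) for sample distinct
# eigenvalues (the values themselves are assumed for illustration).
lams = np.array([1.0, 2.0, 5.0, -3.0])
V = np.vander(lams, increasing=True).T     # rows 1, lam, lam^2, lam^3, as in (98)

det = np.linalg.det(V)
product = np.prod([lams[j] - lams[i]
                   for i, j in combinations(range(len(lams)), 2)])

assert np.isclose(det, product)            # (99): nonzero for distinct lambdas
assert not np.isclose(det, 0.0)
```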

3.8.3 Diagonalizable operators

We say that an operator T is diagonalizable if there is a basis B of V such that the associated matrix A_T(B) is diagonal. Any such basis consists of nonzero eigenvectors of T. It follows that T is diagonalizable if and only if V′ = V, i.e., if V is spanned by the eigenvectors of T. The latter happens precisely when
dim V = dim V_{λ_1} + ... + dim V_{λ_k}. (100)
Since each dim V_{λ_j} ≥ 1, we infer that any operator having n = dim V distinct eigenvalues is automatically diagonalizable. In this case each eigenspace is one-dimensional, i.e., each eigenvector is unique up to a nonzero multiple.

Exercise 9 Describe the eigenvalues and eigenspaces of a projection P and explain why every projection is diagonalizable.
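The criterion above can be seen at work numerically: a matrix with distinct eigenvalues is diagonalized by its eigenvector basis. The matrix is an assumed example.

```python
import numpy as np

# An operator with n = 2 distinct eigenvalues is diagonalizable: in the
# eigenvector basis B (columns of the matrix B below) the matrix of T
# is diagonal.  (A is an arbitrary assumed example.)
A = np.array([[2.0, 1.0],
              [0.0, 5.0]])              # eigenvalues 2 and 5, distinct

eigenvalues, B = np.linalg.eig(A)       # columns of B are eigenvectors
D = np.linalg.inv(B) @ A @ B            # the matrix of T in the basis B

assert np.allclose(D, np.diag(eigenvalues))
```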

4 Linear operators on a space with a chosen scalar product

4.1 Orthogonal projections

Suppose that W is a subspace equipped with a basis {c_1, ..., c_r} orthogonal with respect to a scalar product ⟨ , ⟩ on V. This means that
c_i ⊥ c_j (1 ≤ i ≠ j ≤ r). (101)
Consider the operator
P = (1/⟨c_1, c_1⟩) c_1 c_1^∨ + ... + (1/⟨c_r, c_r⟩) c_r c_r^∨ (102)
where the functionals c_i^∨ ∈ V^∨ are, as in Section 1.4.1, defined in terms of the scalar product:
c_i^∨(v) = ⟨c_i, v⟩ (v ∈ V).

The range of P is contained in W. In view of (101),
Pc_j = Σ_{1 ≤ i ≤ r} (⟨c_i, c_j⟩ / ⟨c_i, c_i⟩) c_i = (⟨c_j, c_j⟩ / ⟨c_j, c_j⟩) c_j = c_j.
It follows that Pv = v for each v ∈ W. In particular, W is the range of P and P is a projection onto W. The kernel of P consists of the vectors
v − Pv
where v is any vector of V.

Exercise 10 Show that (v − Pv) ⊥ c_j for each 1 ≤ j ≤ r.

Any vector orthogonal to W is annihilated by P, as follows immediately from the definition of P. Thus, the kernel of P coincides with W^⊥.
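With the standard dot product on R^3, formula (102) becomes a sum of rank-one matrices c_i c_i^T / ⟨c_i, c_i⟩. A small sketch with an assumed orthogonal pair c_1, c_2:

```python
import numpy as np

# Formula (102) for the standard dot product on R^3: the orthogonal
# projection onto W = span{c1, c2}, where c1, c2 is an orthogonal
# (not orthonormal) pair chosen for illustration.
c1 = np.array([1.0,  1.0, 0.0])
c2 = np.array([1.0, -1.0, 0.0])
assert np.isclose(c1 @ c2, 0.0)             # (101): the basis is orthogonal

P = np.outer(c1, c1) / (c1 @ c1) + np.outer(c2, c2) / (c2 @ c2)

v = np.array([3.0, 4.0, 5.0])
assert np.allclose(P @ c1, c1)              # P fixes vectors of W
assert np.allclose((v - P @ v) @ c1, 0.0)   # v - Pv is orthogonal to W
assert np.allclose((v - P @ v) @ c2, 0.0)   # (Exercise 10)
```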

4.1.3 The orthogonal complement of a subspace

We demonstrated that V is the direct sum of W and W^⊥. The latter is called the orthogonal complement of W (in V). The operator defined by (102) is the projection onto W along W^⊥. Its definition depends on the choice of an orthogonal basis in W, yet the operator itself does not. We call it the orthogonal projection onto W.

If V is finite-dimensional, then W^⊥ has a finite basis. By applying the above argument to W^⊥ instead of W, we deduce that V is the direct sum of W^⊥ and (W^⊥)^⊥. Thus we have two direct sum decompositions of V: as the sum of W and W^⊥, and as the sum of W^⊥ and (W^⊥)^⊥. But W ⊆ (W^⊥)^⊥, see Exercise 3. The uniqueness of the representation v = w + w′, where w ∈ W and w′ ∈ W^⊥, then implies that every vector of (W^⊥)^⊥ belongs to W. This means that
(W^⊥)^⊥ = W. (103)

4.2 Gram-Schmidt orthogonalization

Any collection of nonzero vectors w_1, ..., w_r spanning a vector subspace W can be transformed into an orthogonal basis of W. Consider the nondecreasing sequence of vector subspaces
W_1 ⊆ W_2 ⊆ ... ⊆ W_r = W, (104)
where W_i = linspan{w_1, ..., w_i}. Let P_i denote the orthogonal projection onto W_i. Then
u_1 = w_1, u_2 = w_2 − P_1 w_2, ..., u_r = w_r − P_{r−1} w_r (105)
defines a sequence of mutually orthogonal vectors such that
linspan{u_1, ..., u_i} = linspan{w_1, ..., w_i} = W_i.
Note that each u_i is the orthogonal projection of w_i onto W_{i−1}^⊥, and it is zero precisely when W_i = W_{i−1}.
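The recipe (105), with P_{i−1} expanded via the previously computed u_j as in the next paragraph of the notes, can be transcribed almost literally into code. This is a sketch for the standard dot product; the input vectors are assumed sample data.

```python
import numpy as np

# A direct transcription of (105): u_i = w_i - P_{i-1} w_i, where P_{i-1}
# projects onto the span of the previously computed u_1, ..., u_{i-1}.
def gram_schmidt(vectors):
    orthogonal = []
    for w in vectors:
        u = w.astype(float).copy()
        for prev in orthogonal:       # subtract the projection onto each u_j
            u -= (prev @ w) / (prev @ prev) * prev
        if not np.allclose(u, 0.0):   # drop u_i = 0 (the case W_i = W_{i-1})
            orthogonal.append(u)
    return orthogonal

ws = [np.array([1.0, 1.0, 0.0]),
      np.array([2.0, 2.0, 0.0]),     # dependent: its u_i is zero, so it is dropped
      np.array([1.0, 0.0, 1.0])]
us = gram_schmidt(ws)

assert len(us) == 2                  # an orthogonal basis of the 2-dim span
assert np.isclose(us[0] @ us[1], 0.0)
```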

4.2.2

In order to calculate u_i we need the projection operator P_{i−1}, but that one is provided by the previously calculated vectors u_1, ..., u_{i−1} with the help of the formula
P_{i−1} = (1/⟨u_1, u_1⟩) u_1 u_1^∨ + ... + (1/⟨u_{i−1}, u_{i−1}⟩) u_{i−1} u_{i−1}^∨ (106)
(in this formula we skip the terms corresponding to those u_j that are zero). Thus, the process of calculating the sequence u_1, ..., u_r becomes a simple algorithm for which we only need the original sequence of vectors w_1, ..., w_r. By removing those u_i that are zero, we obtain an orthogonal basis for W.

The algorithm is known as the Gram-Schmidt orthogonalization of a sequence of nonzero vectors. It produces, in particular, an orthogonal basis from any basis of V. If the original basis is orthogonal, its Gram-Schmidt orthogonalization coincides with it.

4.3 Adjoint operators

Given operators S and T on V, let us consider the linear pairings that associate to a pair of vectors v and v′ the numbers
⟨Sv, v′⟩ and ⟨v, Tv′⟩, respectively. (107)
Our task is to determine their matrices in a given basis B. Below, Q is the matrix of ⟨ , ⟩ in the basis B, cf. (31). To simplify notation, A_S, A_T and [ ] stand for A_S(B), A_T(B) and [ ]_B, respectively. According to (36), one has
⟨Sv, v′⟩ = [Sv]^τ Q [v′] = (A_S [v])^τ Q [v′] = [v]^τ (A_S^τ Q) [v′]
and
⟨v, Tv′⟩ = [v]^τ Q [Tv′] = [v]^τ Q (A_T [v′]) = [v]^τ (Q A_T) [v′].

4.3.2 The matrix of the adjoint operator

We say that S and T are adjoint to each other with respect to the bilinear pairing ⟨ , ⟩ if the two pairings introduced above coincide:
⟨Sv, v′⟩ = ⟨v, Tv′⟩ (v, v′ ∈ V).
Our calculations express this in terms of the matrices of the corresponding bilinear pairings in the basis B:
A_S^τ Q = Q A_T (108)
or, equivalently,
Q^τ A_S = A_T^τ Q^τ. (109)

If ⟨ , ⟩ is nondegenerate, then Q is invertible and
A_S = (Q^τ)^{−1} A_T^τ Q^τ. (110)

If ⟨ , ⟩ is a scalar product, then Q is symmetric and (110) becomes
A_S = Q^{−1} A_T^τ Q (111)
or, using the adjoint operator notation T^∨,
A_{T^∨} = Q^{−1} A_T^τ Q. (112)

The basis is orthonormal if and only if Q is the identity matrix. In this case, the matrix of the adjoint operator is simply the transpose of the matrix of T:
A_{T^∨} = (A_T)^τ. (113)
Note that (113) agrees with the fact that (A^τ x) · y = x · (Ay).
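Formula (112) can be verified directly on sample data: with respect to the scalar product with matrix Q, the adjoint of L_A has matrix Q^{−1} A^τ Q. Both Q and A below are assumed examples (Q symmetric positive definite).

```python
import numpy as np

# Verifying (112) numerically: w.r.t. the scalar product <v, v'> = v^T Q v',
# the adjoint of T = L_A has matrix Q^{-1} A^T Q.
# (Q and A are assumed examples; Q is symmetric positive definite.)
Q = np.array([[2.0, 1.0],
              [1.0, 3.0]])
A = np.array([[0.0, 1.0],
              [4.0, 2.0]])

A_adj = np.linalg.inv(Q) @ A.T @ Q

# <Av, v'> = <v, A_adj v'> for sample vectors
v = np.array([1.0, -2.0])
v_prime = np.array([3.0, 0.5])
assert np.isclose((A @ v) @ Q @ v_prime, v @ Q @ (A_adj @ v_prime))
```

When Q is the identity, A_adj reduces to A.T, which is exactly (113).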

4.3.6 Selfadjoint operators

An operator T on a vector space equipped with a scalar product ⟨ , ⟩ is said to be selfadjoint if T^∨ = T. Equivalently,
⟨Tv, v′⟩ = ⟨v, Tv′⟩ (v, v′ ∈ V). (114)
An operator is selfadjoint if and only if its matrix A_T in any orthonormal basis is symmetric.

Exercise 11 Show that the orthogonal projection P onto a subspace W is selfadjoint.

Exercise 12 Let W be an invariant subspace of a selfadjoint operator T. Show that its orthogonal complement W^⊥ is also an invariant subspace.

Exercise 13 Let v and v′ be eigenvectors of a selfadjoint operator T with eigenvalues λ ≠ λ′. Show that v ⊥ v′.

Any selfadjoint operator on a finite-dimensional nonzero vector space has at least one eigenvalue.

Let us accept this fact and see what follows from it. The vector subspace V′ spanned by all the eigenvectors, cf. (94), is obviously invariant. Since (V′)^⊥ is also invariant and T restricted to (V′)^⊥ is a selfadjoint operator on (V′)^⊥, it has a nonzero eigenvector in (V′)^⊥ as long as (V′)^⊥ is nonzero. But all such eigenvectors belong to V′. Thus, (V′)^⊥ is zero and V′ = V.

We deduced that V is the direct sum of the orthogonal eigenspaces of the selfadjoint operator T. By choosing an orthogonal basis in each eigenspace, we arrive at the following fundamentally important result.

Theorem 4.1 (Spectral Theorem) Any selfadjoint operator T is diagonalizable in a basis consisting of orthonormal eigenvectors. More precisely, T can be represented as
T = Σ_{1 ≤ i ≤ n} λ_i u_i u_i^∨ (115)
where {u_1, ..., u_n} is an orthonormal basis of V consisting of eigenvectors u_i with eigenvalues λ_i. Here each eigenvalue is encountered as many times as the dimension of the eigenspace V_{λ_i}.

Conversely, any diagonalizable operator is selfadjoint with respect to the dot product ⟨ , ⟩_B, where B denotes any basis for which the matrix A_T(B) is diagonal. It follows that an operator is diagonalizable if and only if it is selfadjoint with respect to at least one scalar product.
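In an orthonormal basis, a selfadjoint operator is a symmetric matrix, and (115) becomes the familiar decomposition A = Σ λ_i u_i u_i^T. A numerical check with an assumed symmetric matrix:

```python
import numpy as np

# The decomposition (115) for a symmetric matrix, i.e. a selfadjoint
# operator written in an orthonormal basis: A = sum_i lambda_i u_i u_i^T.
# (A is an assumed example.)
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])

lams, U = np.linalg.eigh(A)        # orthonormal eigenvectors as columns of U
reconstruction = sum(lam * np.outer(U[:, i], U[:, i])
                     for i, lam in enumerate(lams))

assert np.allclose(U.T @ U, np.eye(3))   # the eigenbasis is orthonormal
assert np.allclose(reconstruction, A)    # (115) holds
```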


More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

Introduction to Linear Algebra, Second Edition, Serge Lange

Introduction to Linear Algebra, Second Edition, Serge Lange Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.

More information

MTH 2032 SemesterII

MTH 2032 SemesterII MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Topic 1: Matrix diagonalization

Topic 1: Matrix diagonalization Topic : Matrix diagonalization Review of Matrices and Determinants Definition A matrix is a rectangular array of real numbers a a a m a A = a a m a n a n a nm The matrix is said to be of order n m if it

More information

Practice Exam. 2x 1 + 4x 2 + 2x 3 = 4 x 1 + 2x 2 + 3x 3 = 1 2x 1 + 3x 2 + 4x 3 = 5

Practice Exam. 2x 1 + 4x 2 + 2x 3 = 4 x 1 + 2x 2 + 3x 3 = 1 2x 1 + 3x 2 + 4x 3 = 5 Practice Exam. Solve the linear system using an augmented matrix. State whether the solution is unique, there are no solutions or whether there are infinitely many solutions. If the solution is unique,

More information

Math Linear Algebra II. 1. Inner Products and Norms

Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

NORMS ON SPACE OF MATRICES

NORMS ON SPACE OF MATRICES NORMS ON SPACE OF MATRICES. Operator Norms on Space of linear maps Let A be an n n real matrix and x 0 be a vector in R n. We would like to use the Picard iteration method to solve for the following system

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

a(b + c) = ab + ac a, b, c k

a(b + c) = ab + ac a, b, c k Lecture 2. The Categories of Vector Spaces PCMI Summer 2015 Undergraduate Lectures on Flag Varieties Lecture 2. We discuss the categories of vector spaces and linear maps. Since vector spaces are always

More information

SYLLABUS. 1 Linear maps and matrices

SYLLABUS. 1 Linear maps and matrices Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Definition (T -invariant subspace) Example. Example

Definition (T -invariant subspace) Example. Example Eigenvalues, Eigenvectors, Similarity, and Diagonalization We now turn our attention to linear transformations of the form T : V V. To better understand the effect of T on the vector space V, we begin

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

WOMP 2001: LINEAR ALGEBRA. 1. Vector spaces

WOMP 2001: LINEAR ALGEBRA. 1. Vector spaces WOMP 2001: LINEAR ALGEBRA DAN GROSSMAN Reference Roman, S Advanced Linear Algebra, GTM #135 (Not very good) Let k be a field, eg, R, Q, C, F q, K(t), 1 Vector spaces Definition A vector space over k is

More information

Solutions to Final Exam

Solutions to Final Exam Solutions to Final Exam. Let A be a 3 5 matrix. Let b be a nonzero 5-vector. Assume that the nullity of A is. (a) What is the rank of A? 3 (b) Are the rows of A linearly independent? (c) Are the columns

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Exercise Sheet 1.

Exercise Sheet 1. Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers.

5.) For each of the given sets of vectors, determine whether or not the set spans R 3. Give reasons for your answers. Linear Algebra - Test File - Spring Test # For problems - consider the following system of equations. x + y - z = x + y + 4z = x + y + 6z =.) Solve the system without using your calculator..) Find the

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis.

Vector spaces. DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis. Vector spaces DS-GA 1013 / MATH-GA 2824 Optimization-based Data Analysis http://www.cims.nyu.edu/~cfgranda/pages/obda_fall17/index.html Carlos Fernandez-Granda Vector space Consists of: A set V A scalar

More information

A PRIMER ON SESQUILINEAR FORMS

A PRIMER ON SESQUILINEAR FORMS A PRIMER ON SESQUILINEAR FORMS BRIAN OSSERMAN This is an alternative presentation of most of the material from 8., 8.2, 8.3, 8.4, 8.5 and 8.8 of Artin s book. Any terminology (such as sesquilinear form

More information

Math 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam

Math 18, Linear Algebra, Lecture C00, Spring 2017 Review and Practice Problems for Final Exam Math 8, Linear Algebra, Lecture C, Spring 7 Review and Practice Problems for Final Exam. The augmentedmatrix of a linear system has been transformed by row operations into 5 4 8. Determine if the system

More information

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03

Page 52. Lecture 3: Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 2008/10/03 Date Given: 2008/10/03 Page 5 Lecture : Inner Product Spaces Dual Spaces, Dirac Notation, and Adjoints Date Revised: 008/10/0 Date Given: 008/10/0 Inner Product Spaces: Definitions Section. Mathematical Preliminaries: Inner

More information

PRACTICE FINAL EXAM. why. If they are dependent, exhibit a linear dependence relation among them.

PRACTICE FINAL EXAM. why. If they are dependent, exhibit a linear dependence relation among them. Prof A Suciu MTH U37 LINEAR ALGEBRA Spring 2005 PRACTICE FINAL EXAM Are the following vectors independent or dependent? If they are independent, say why If they are dependent, exhibit a linear dependence

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

MATH 221, Spring Homework 10 Solutions

MATH 221, Spring Homework 10 Solutions MATH 22, Spring 28 - Homework Solutions Due Tuesday, May Section 52 Page 279, Problem 2: 4 λ A λi = and the characteristic polynomial is det(a λi) = ( 4 λ)( λ) ( )(6) = λ 6 λ 2 +λ+2 The solutions to the

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION

REVIEW FOR EXAM III SIMILARITY AND DIAGONALIZATION REVIEW FOR EXAM III The exam covers sections 4.4, the portions of 4. on systems of differential equations and on Markov chains, and..4. SIMILARITY AND DIAGONALIZATION. Two matrices A and B are similar

More information