Notes on Linear Algebra


Jay R. Walton
Department of Mathematics
Texas A&M University
October 14, 2014

1 Introduction

Linear algebra provides the foundational setting for the study of multivariable mathematics, which in turn is the bedrock upon which most modern theories of mathematical physics rest, including classical mechanics (rigid body mechanics), continuum mechanics (the mechanics of deformable material bodies), relativistic mechanics, quantum mechanics, etc. At the heart of linear algebra is the notion of a (linear) vector space, an abstract mathematical structure introduced to make rigorous the classical, intuitive concept of vectors as physical quantities possessing the two attributes of length and direction. In these brief notes, vector spaces and linear transformations are introduced in a three-step presentation. First they are studied as algebraic objects and a few important consequences of the concept of linearity are explored. Next the algebraic structure is augmented by a topological structure (via a metric or a norm, for example), providing a convenient framework for extending key concepts from the calculus to the vector space setting and thereby a rigorous framework for studying nonlinear, multivariable functions between vector spaces. Finally, an inner product structure for vector spaces is introduced in order to define the geometric notion of angle (and, as a special case, the key concept of orthogonality), augmenting the notion of length or distance provided previously by a norm or a metric.

(Copyright 2014 by Jay R. Walton. All rights reserved.)

2 Vector Space

The central object in the study of linear algebra is a Vector Space, defined here as follows.

Definition 1 A Vector Space over a field F (while in general F can be any field, in these notes it will denote either the real numbers $\mathbb{R}$ or the complex numbers $\mathbb{C}$) is a set, V, whose elements are called vectors, endowed with two binary operations, called vector addition and scalar multiplication, subject to the following axioms.

1. If $a, b \in V$, then their sum $a + b$ is also in V.
2. Vector addition is commutative and associative, i.e. $a + b = b + a$ for all $a, b \in V$ and $a + (b + c) = (a + b) + c$ for all $a, b, c \in V$.
3. There exists a zero element, 0, in V satisfying $0 + a = a$ for all $a \in V$.
4. Every element $a \in V$ possesses a (unique) additive inverse, $-a$, satisfying $a + (-a) = 0$.
5. If $a \in V$ and $\alpha \in F$, then $\alpha a \in V$.
6. For all $a \in V$, $1a = a$.
7. If $a, b \in V$ and $\alpha, \beta \in F$, then $\alpha(a+b) = \alpha a + \alpha b$, $(\alpha+\beta)a = \alpha a + \beta a$, and $\alpha(\beta a) = (\alpha\beta)a$.

(The symbol + has been used to denote both the addition of vectors and the addition of scalars. This should not generate any confusion since the meaning of + should be clear from the context in which it is used.)

Remark 1 It follows readily from these axioms that for any $a \in V$, $0a = 0$ and $(-1)a = -a$. (Why?)

Remark 2 The above axioms are sufficient to conclude that for any collection of vectors $a_1, \ldots, a_N \in V$ and scalars $\alpha_1, \ldots, \alpha_N \in F$, the Linear Combination $\alpha_1 a_1 + \cdots + \alpha_N a_N$ is an unambiguously defined vector in V. (Why?)

Example 1 The prototype examples of vector spaces over the real numbers $\mathbb{R}$ and the complex numbers $\mathbb{C}$ are the Euclidean spaces $\mathbb{R}^N$ and $\mathbb{C}^N$ of N-tuples of real or complex numbers. Specifically, the elements $a \in \mathbb{R}^N$ are defined by
$$a := \begin{pmatrix} a_1 \\ \vdots \\ a_N \end{pmatrix}$$
where $a_1, \ldots, a_N \in \mathbb{R}$. Vector addition and scalar multiplication are defined componentwise, that is
$$\alpha a + \beta b = \alpha \begin{pmatrix} a_1 \\ \vdots \\ a_N \end{pmatrix} + \beta \begin{pmatrix} b_1 \\ \vdots \\ b_N \end{pmatrix} = \begin{pmatrix} \alpha a_1 + \beta b_1 \\ \vdots \\ \alpha a_N + \beta b_N \end{pmatrix}.$$
Analogous relations hold for the complex case.

Example 2 The vector space $\mathbb{R}^\infty$ is defined as the set of all (infinite) sequences, a, of real numbers, that is, $a \in \mathbb{R}^\infty$ means
$$a = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \end{pmatrix}$$
with vector addition and scalar multiplication done componentwise.
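To make Example 1 concrete, here is a minimal numpy sketch (an added illustration; the notes themselves contain no code) of componentwise vector addition and scalar multiplication in $\mathbb{R}^3$:

```python
import numpy as np

# Vectors in R^3, represented as numpy arrays.
a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
alpha, beta = 2.0, -1.0

# The linear combination alpha*a + beta*b is formed componentwise.
c = alpha * a + beta * b
print(c)  # [-2. -1.  0.], i.e. (2*1-4, 2*2-5, 2*3-6)
```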

Definition 2 A subset U of a vector space V is called a Subspace provided it is closed with respect to vector addition and scalar multiplication (and hence is also a vector space).

Example 3 One shows easily that $\mathbb{R}^N$ is a subspace of $\mathbb{R}^M$ whenever $N \le M$. (Why?)

Example 4 The vector space $\mathbb{R}^\infty_f$ is the set of all infinite sequences of real numbers with only finitely many non-zero components. It follows readily that $\mathbb{R}^\infty_f$ is a subspace of $\mathbb{R}^\infty$. (Why?)

Definition 3 Let A be a subset of a vector space V. Then the Span of A, denoted span(A), is defined to be the set of all (finite) linear combinations formed from the vectors in A. A subset $A \subset V$ is called a Spanning Set for V if span(A) = V. A spanning set $A \subset V$ is called a Minimal Spanning Set if no proper subset of A is a spanning set for V.

Remark 3 One readily shows that span(A) is a subspace of V. (How?)

2.1 Linear Independence and Dependence

The most important concepts associated with vector spaces are Linear Independence and Linear Dependence.

Definition 4 A set of vectors $A \subset V$ is said to be Linearly Independent if given any vectors $a_1, \ldots, a_N \in A$, the only way to write the zero vector as a linear combination of these vectors is with all coefficients zero, that is, $\alpha_1 a_1 + \cdots + \alpha_N a_N = 0$ if and only if $\alpha_1 = \cdots = \alpha_N = 0$.

Remark 4 If a set of vectors A is linearly independent and $B \subset A$, then B is also linearly independent. (Why?)

Remark 5 A minimal spanning set is easily seen to be linearly independent. Indeed, suppose A is a minimal spanning set and suppose it is not linearly independent. Then there exists a subset $\{a_1, \ldots, a_N\}$ of A such that $a_N$ can be written as a linear combination of $a_1, \ldots, a_{N-1}$. It follows that $A \setminus \{a_N\}$ (set difference) is a proper subset of A that spans V, contradicting the assumption that A is a minimal spanning set.

Definition 5 If $a \ne 0$, then span({a}) is called a Line in V. If {a, b} is linearly independent, then span({a, b}) is called a Plane in V.

Definition 6 A set of vectors $A \subset V$ is said to be Linearly Dependent if there exist vectors $a_1, \ldots, a_N \in A$ and scalars $\alpha_1, \ldots, \alpha_N \in F$, not all zero, with $\alpha_1 a_1 + \cdots + \alpha_N a_N = 0$.
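As an added illustration of Definition 4 in $\mathbb{R}^M$ (this computational test is not in the notes): vectors are linearly independent exactly when the matrix having them as columns has rank equal to the number of vectors. A minimal numpy sketch:

```python
import numpy as np

def linearly_independent(vectors):
    """Return True if the given 1-D arrays are linearly independent.

    Stacks the vectors as columns of a matrix and compares its rank to
    the number of vectors; rank deficiency signals a dependence relation.
    """
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

print(linearly_independent([np.array([1., 0.]), np.array([0., 1.])]))  # True
print(linearly_independent([np.array([1., 2.]), np.array([2., 4.])]))  # False: a2 = 2*a1
```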

Remark 6 A set of vectors that contains the zero vector is linearly dependent. (Why?)

Remark 7 If the set of vectors A is linearly dependent and $A \subset B$, then B is also linearly dependent. (Why?)

Remark 8 If $A = \{a_1, \ldots, a_N\} \subset V$ is linearly dependent, then one of the $a_j$ can be written as a non-trivial (not all scalar coefficients zero) linear combination of the others, that is, one of the $a_j$ is in the span of the others.

2.2 Basis

The notion of Basis for a vector space is intimately connected with spanning sets of minimal cardinality. Here attention is restricted to vector spaces with spanning sets containing only finitely many vectors.

Definition 7 A vector space V is called Finite Dimensional provided it has a spanning set with finitely many vectors.

Theorem 1 Suppose V is finite dimensional with minimal spanning set $A = \{a_1, \ldots, a_N\}$. Let B be another minimal spanning set for V. Then #(B) = N, where #(B) denotes the cardinality of the set B.

Proof: It suffices to assume that $\#(B) \ge N$. (Why?) Then choose $\tilde{B} := \{b_1, \ldots, b_N\} \subset B$. Clearly, $\tilde{B}$ is linearly independent. (Why?) Since A is spanning, there exist scalars $\alpha_{ij}$ such that $b_i = \sum_{j=1}^N \alpha_{ij} a_j$. Then the matrix $L := [\alpha_{ij}]$ is invertible. (Why?) Suppose $L^{-1} = [\beta_{ij}]$. Then $a_i = \sum_{j=1}^N \beta_{ij} b_j$, that is, the set $\{a_1, \ldots, a_N\}$ is in the span of $\tilde{B}$. But since A spans V, it follows that $\tilde{B}$ must also span V. But since B was assumed to be a minimal spanning set, it must be that $\tilde{B} = B$. (Why?) Hence, #(B) = N. $\square$ (The symbol $\square$ is used in these notes to indicate the end of a proof.)

This theorem allows one to define unambiguously the notion of dimension for a finite dimensional vector space.

Definition 8 Let V be a finite dimensional vector space. Then its Dimension, denoted dim(V), is defined to be the cardinality of any minimal spanning set.

Definition 9 A Basis for a finite dimensional vector space is defined to be any minimal spanning set.

It follows that given a basis $B = \{b_1, \ldots, b_N\}$ for a finite dimensional vector space V, every vector $a \in V$ can be written uniquely as a linear combination of the base vectors, that is
$$a = \sum_{j=1}^N \alpha_j b_j \qquad (1)$$
with the coefficients $\alpha_1, \ldots, \alpha_N$ being uniquely defined. These coefficients are called the coordinates or components of a with respect to the basis B. It is often convenient to collect

these components into an N-tuple denoted
$$[a]_B := \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_N \end{pmatrix}.$$

Notational Convention: It is useful to introduce simplifying notation for sums of the sort (1). Specifically, in the Einstein Summation Convention, expressions in which an index is repeated are to be interpreted as being summed over the range of the index. Thus (1) reduces to
$$a = \sum_{j=1}^N \alpha_j b_j = \alpha_j b_j.$$
With the Einstein summation convention in effect, if one wants to write an expression like $\alpha_j b_j$ for a particular, but unspecified, value of j, then one must write
$$\alpha_j b_j \quad \text{(no sum)}.$$

Example 5 In the Euclidean vector space $\mathbb{R}^N$, it is customary to define the Natural Basis $N = \{e_1, \ldots, e_N\}$ to be
$$e_1 := \begin{pmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad e_2 := \begin{pmatrix} 0 \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \quad \ldots, \quad e_N := \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{pmatrix}.$$
Finding components of vectors in $\mathbb{R}^N$ with respect to the natural basis is then immediate, that is,
$$a = \begin{pmatrix} a_1 \\ \vdots \\ a_N \end{pmatrix} \in \mathbb{R}^N \implies [a]_N = a \quad \text{and} \quad a = a_j e_j.$$

In general, finding the components of a vector with respect to a prescribed basis usually involves solving a linear system of equations.
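In $\mathbb{R}^N$ that computation is a single linear solve. A minimal numpy sketch (an added illustration with a made-up basis), preceding the general example worked out next:

```python
import numpy as np

# A basis of R^2 given as the columns of B (assumed invertible).
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
a = np.array([3.0, 2.0])

# Components of a with respect to the basis: solve B @ alpha = a.
alpha = np.linalg.solve(B, a)
print(alpha)                 # [1. 2.]
print(B @ alpha)             # reproduces a: [3. 2.]
```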

For example, suppose a basis $B = \{b_1, \ldots, b_N\}$ in $\mathbb{R}^N$ is given as
$$b_1 := \begin{pmatrix} b_{11} \\ \vdots \\ b_{N1} \end{pmatrix}, \quad \ldots, \quad b_N := \begin{pmatrix} b_{1N} \\ \vdots \\ b_{NN} \end{pmatrix}.$$
Then to find the components
$$[a]_B = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_N \end{pmatrix}$$
of a vector
$$a := \begin{pmatrix} a_1 \\ \vdots \\ a_N \end{pmatrix}$$
with respect to B, one solves the linear system of equations
$$a_i = b_{ij} \alpha_j, \quad i = 1, \ldots, N,$$
which in expanded form is
$$\begin{aligned}
a_1 &= b_{11}\alpha_1 + \cdots + b_{1N}\alpha_N \\
&\;\;\vdots \\
a_N &= b_{N1}\alpha_1 + \cdots + b_{NN}\alpha_N.
\end{aligned}$$

2.3 Change of Basis

Let $B = \{b_1, \ldots, b_N\}$ and $\tilde{B} = \{\tilde{b}_1, \ldots, \tilde{b}_N\}$ be two bases for the vector space V. The question considered here is how to change components of vectors with respect to the basis B into components with respect to the basis $\tilde{B}$. To answer the question, one needs to specify some relation between the two sets of base vectors. Suppose, for example, one can write the base vectors $\tilde{b}_j$ as linear combinations of the base vectors $b_j$, that is, one knows the components of each N-tuple $[\tilde{b}_j]_B$, $j = 1, \ldots, N$. To be more specific, suppose
$$\tilde{b}_j = t_{ij} b_i \quad \text{for } j = 1, \ldots, N. \qquad (2)$$
Form the matrix $T := [t_{ij}]$, called the Transition Matrix between the bases B and $\tilde{B}$. Then T is invertible. (Why?) Let its inverse have components $T^{-1} = [s_{ij}]$. By definition of matrix inverse, one sees that $s_{ik} t_{kj} = \delta_{ij}$, where the Kronecker symbol $\delta_{ij}$ is defined as
$$\delta_{ij} := \begin{cases} 1 & \text{when } i = j, \\ 0 & \text{when } i \ne j. \end{cases}$$
Suppose now that a vector a has components
$$[a]_B = \begin{pmatrix} \alpha_1 \\ \vdots \\ \alpha_N \end{pmatrix}.$$
Then one shows readily (How?) that the components of a with respect to the basis $\tilde{B}$ are given by
$$[a]_{\tilde{B}} = \begin{pmatrix} \tilde{\alpha}_1 \\ \vdots \\ \tilde{\alpha}_N \end{pmatrix} \quad \text{with} \quad \tilde{\alpha}_i = s_{ij} \alpha_j.$$
In matrix notation this change of basis formula becomes
$$[a]_{\tilde{B}} = T^{-1} [a]_B. \qquad (3)$$
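A small numpy check of (3) (an added illustration; the vectors are made up): build T column by column from the $[\tilde{b}_j]_B$ per (2), then verify that $T^{-1}[a]_B$ gives coordinates that recombine the new basis into a.

```python
import numpy as np

# Work in R^2 with B the natural basis, so [a]_B = a.
b1t = np.array([1.0, 1.0])   # new basis vector b~_1 expressed in B
b2t = np.array([0.0, 1.0])   # new basis vector b~_2 expressed in B

# Transition matrix: column j of T holds the B-components of b~_j, eq. (2).
T = np.column_stack([b1t, b2t])

a = np.array([2.0, 5.0])
a_tilde = np.linalg.solve(T, a)       # [a]_B~ = T^{-1} [a]_B, eq. (3)
print(a_tilde)                        # [2. 3.]

# Check: the new coordinates recombine the new basis vectors into a.
print(a_tilde[0] * b1t + a_tilde[1] * b2t)   # [2. 5.]
```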

3 Linear Transformations

Linear transformations are at the heart of the subject of linear algebra. Important in their own right, they also are the principal tool for understanding nonlinear functions between subsets of vector spaces. They will be studied here first from an algebraic perspective. Subsequently, they will be studied geometrically.

Definition 10 A linear transformation, $L : V \to U$, between the vector spaces V and U is a function satisfying
$$L(\alpha a + \beta b) = \alpha La + \beta Lb \quad \text{for every } \alpha, \beta \in F \text{ and } a, b \in V. \qquad (4)$$
(In analysis it is customary to make a notational distinction between the name of a function and the values the function takes. Thus, for example, one might denote a function by f and the values the function takes by f(x). For a linear transformation, L, it is customary to denote its values by La. This notation is suggestive of matrix multiplication.)

Remark: The symbol + on the left hand side of (4) denotes vector addition in V whereas on the right hand side of (4) it denotes vector addition in U.

Remark: From the definition (4) it follows immediately that once a linear transformation has been specified on a basis for the domain space, its values are determined on all of the domain space through linearity.

Two important subspaces associated with a linear transformation are its Range and Nullspace. Consider a linear transformation
$$L : V \to U. \qquad (5)$$

Definition 11 The Range of a linear transformation (5) is the subset of U, denoted Ran(L), defined by
$$\mathrm{Ran}(L) := \{u \in U \mid \text{there exists } v \in V \text{ such that } u = Lv\}.$$

Definition 12 The Nullspace of a linear transformation (5) is the subset of V, denoted Nul(L), defined by
$$\mathrm{Nul}(L) := \{v \in V \mid Lv = 0\}.$$

Remark 9 It is an easy exercise to show that Ran(L) is a subspace of U and Nul(L) is a subspace of V. (How?)

Example 6 Let A be an $M \times N$ matrix
$$A := \begin{pmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & & \vdots \\ a_{M1} & \cdots & a_{MN} \end{pmatrix} = (a_1 \; \cdots \; a_N)$$
where $a_1, \ldots, a_N$ denote the column vectors of A, that is, A is viewed as consisting of N column vectors from $\mathbb{R}^M$.

Definition 13 The Column Rank of A is defined to be the dimension of the subspace of $\mathbb{R}^M$ spanned by its column vectors.

One can define a linear transformation $\mathcal{A} : \mathbb{R}^N \to \mathbb{R}^M$ by
$$\mathcal{A}v := Av \quad \text{(matrix multiplication)}. \qquad (6)$$
It is readily shown that the range of $\mathcal{A}$ is the span of the column vectors of A,
$$\mathrm{Ran}(\mathcal{A}) = \mathrm{span}\{a_1, \ldots, a_N\},$$
and the nullspace of $\mathcal{A}$ is the set of N-tuples
$$v = \begin{pmatrix} v_1 \\ \vdots \\ v_N \end{pmatrix} \in \mathbb{R}^N$$
satisfying $v_1 a_1 + \cdots + v_N a_N = 0$. It follows that $\dim(\mathrm{Ran}(\mathcal{A}))$ equals the number of linearly independent column vectors of A. One can also view the matrix A as consisting of M row vectors from $\mathbb{R}^N$ and define its row rank by

Definition 14 The Row Rank of A is defined to be the dimension of the subspace of $\mathbb{R}^N$ spanned by its row vectors.

A fundamental result about matrices is

Theorem 2 Let A be an $M \times N$ matrix. Then
$$\text{Row Rank of } A = \text{Column Rank of } A. \qquad (7)$$
Proof: Postponed.

A closely related result is

Theorem 3 Let A be an $M \times N$ matrix and $\mathcal{A}$ the linear transformation (6). Then
$$\dim(\mathrm{Ran}(\mathcal{A})) + \dim(\mathrm{Nul}(\mathcal{A})) = \dim(\mathbb{R}^N) = N. \qquad (8)$$

The result (8) is a special case of the more general

Theorem 4 Let $L : V \to U$ be a linear transformation. Then
$$\dim(\mathrm{Ran}(L)) + \dim(\mathrm{Nul}(L)) = \dim(V). \qquad (9)$$

The proof of this last theorem is facilitated by introducing the notion of Direct Sum Decomposition of a vector space V.
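A numerical spot-check of the rank-nullity identity (8) (an added illustration; a random 4×6 matrix generically has rank 4, so its nullspace is 2-dimensional). The trailing rows of the SVD factor give a basis for the nullspace:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 6
A = rng.standard_normal((M, N))

rank = np.linalg.matrix_rank(A)

# Rows of Vt beyond the rank give an orthonormal basis for Nul(A).
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]

print(rank, null_basis.shape[0])            # 4 2
print(rank + null_basis.shape[0] == N)      # True: eq. (8)
print(np.allclose(A @ null_basis.T, 0.0))   # True: these vectors are annihilated
```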

Definition 15 Let V be a finite dimensional vector space. A Direct Sum Decomposition of V, denoted $V = U \oplus W$, consists of a pair of subspaces, U, W of V with the property that every vector $v \in V$ can be written uniquely as a sum $v = u + w$ with $u \in U$ and $w \in W$.

Remark 10 It follows easily from the definition of direct sum decomposition that $V = U \oplus W \implies \dim(V) = \dim(U) + \dim(W)$. (Why?)

Example 7 Two non-colinear lines through the origin form a direct sum decomposition of $\mathbb{R}^2$ (Why?), and a plane and a non-coincident line through the origin form a direct sum decomposition of $\mathbb{R}^3$ (Why?).

Proof of Theorem 4: Let $\{f_1, \ldots, f_n\}$ be a basis for Nul(L) where $n = \dim(\mathrm{Nul}(L))$. Fill out this set to form a basis for V, $\{g_1, \ldots, g_{N-n}, f_1, \ldots, f_n\}$ (How?). Define the subspace $\tilde{V} := \mathrm{span}(\{g_1, \ldots, g_{N-n}\})$. Then $\tilde{V}$ and Nul(L) form a direct sum decomposition of V (Why?), that is, $V = \tilde{V} \oplus \mathrm{Nul}(L)$. Moreover, one can easily show that L restricted to $\tilde{V}$ is a one-to-one transformation onto Ran(L) (Why?). It follows that $\{Lg_1, \ldots, Lg_{N-n}\}$ forms a basis for Ran(L) (Why?) and hence that $\dim(\mathrm{Ran}(L)) = \dim(\tilde{V}) = N - n$ (Why?), from which one concludes (9). $\square$

3.1 Matrices and Linear Transformations

Let $L : V \to U$ be a linear transformation and let $B := \{b_1, \ldots, b_N\}$ be a basis for V and $C := \{c_1, \ldots, c_M\}$ a basis for U. Then the action of the linear transformation L can be carried out via matrix multiplication of coordinates in the following manner. Given $v \in V$, define $u := Lv$. Then the Matrix of L with respect to the bases B and C is defined to be the unique matrix $L = [L]_{B,C}$ satisfying
$$[u]_C = L[v]_B \quad \text{(matrix multiplication)}$$
or in more expanded form
$$[Lv]_C = [L]_{B,C}[v]_B. \qquad (10)$$
To see how to construct $[L]_{B,C}$, first construct the N column vectors (M-tuples) $l_j := [Lb_j]_C$, that is, the coordinates with respect to the basis C in U of the image vectors $Lb_j$ of the base vectors in B. Then, one easily shows that defining
$$[L]_{B,C} := [l_1 \; \ldots \; l_N] \quad \text{(matrix whose columns are } l_1, \ldots, l_N\text{)} \qquad (11)$$
gives the unique $M \times N$ matrix satisfying (10). Indeed, if $v \in V$ has components $[v]_B = [v_j]$, that is $v = v_j b_j$, then
$$[Lv]_C = [v_j Lb_j]_C = [l_1 \; \ldots \; l_N][v]_B$$
as required to prove (11).

Remark 11 If U = V and B = C, then the matrix of L with respect to B is denoted $[L]_B$ and is constructed by using the B-coordinates of the vectors $Lb_j$, $[Lb_j]_B$, as its columns.

Example 8 The identity transformation $I : V \to V$ ($Iv = v$ for all $v \in V$) has
$$[I]_B = [\delta_{ij}]$$
where $\delta_{ij}$ is the Kronecker symbol. (Why?)
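A minimal numpy sketch of the column-by-column construction (11) (an added illustration with made-up bases): the j-th column of $[L]_{B,C}$ is the C-coordinate vector of $Lb_j$.

```python
import numpy as np

# L acts on R^2; its matrix in the natural basis is A_std.
A_std = np.array([[2.0, 1.0],
                  [0.0, 3.0]])

B = np.array([[1.0, 1.0],      # domain basis vectors b_1, b_2 as columns
              [0.0, 1.0]])
C = np.array([[1.0, 0.0],      # codomain basis vectors c_1, c_2 as columns
              [1.0, 1.0]])

# Column j of [L]_{B,C} is [L b_j]_C, i.e. solve C x = L b_j, eq. (11).
L_BC = np.column_stack([np.linalg.solve(C, A_std @ B[:, j]) for j in range(2)])

# Check eq. (10): [Lv]_C == [L]_{B,C} [v]_B for an arbitrary v.
v = np.array([1.0, 2.0])
lhs = np.linalg.solve(C, A_std @ v)       # [Lv]_C
rhs = L_BC @ np.linalg.solve(B, v)        # [L]_{B,C} [v]_B
print(np.allclose(lhs, rhs))              # True
```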

3.2 Change of Basis

Suppose $L : V \to V$ and $B = \{b_1, \ldots, b_N\}$ and $\tilde{B} = \{\tilde{b}_1, \ldots, \tilde{b}_N\}$ are two bases for V. The question of interest here is: How are the matrices $[L]_B$ and $[L]_{\tilde{B}}$ related? The question is easily answered by using the change of basis formula (3). From the definition of $[L]_B$ and $[L]_{\tilde{B}}$ one has
$$[Lv]_B = [L]_B[v]_B \quad \text{and} \quad [Lv]_{\tilde{B}} = [L]_{\tilde{B}}[v]_{\tilde{B}}.$$
It follows from (3) that
$$[Lv]_{\tilde{B}} = T^{-1}[Lv]_B = T^{-1}[L]_B[v]_B = T^{-1}[L]_B T[v]_{\tilde{B}},$$
from which one concludes
$$[L]_{\tilde{B}} = T^{-1}[L]_B T. \qquad (12)$$

Remark 12 Square matrices A and B satisfying
$$A = T^{-1}BT$$
for some (non-singular) matrix T are said to be Similar. Thus, matrices related to a given linear transformation through a change of basis are similar, and conversely, similar matrices correspond to the same linear transformation under a change of basis.

3.3 Trace and Determinant of a Linear Transformation

Consider linear transformations on a given finite dimensional vector space V, i.e. $L : V \to V$. Then its Trace is defined by

Definition 16 The trace of L, denoted tr(L), is defined by
$$\mathrm{tr}(L) := \mathrm{tr}([L]_B)$$
for any matrix representation of L.

To see that this notion of trace of a linear transformation is well (unambiguously) defined, let $[L]_B$ and $[L]_{\tilde{B}}$ denote two different matrix representations of L. Then they must be similar matrices, i.e. there exists a non-singular matrix T such that $[L]_{\tilde{B}} = T^{-1}[L]_B T$. But then from the elementary property of the trace of a square matrix, tr(AB) = tr(BA) for any two square matrices A, B (Why true?), one has
$$\mathrm{tr}([L]_{\tilde{B}}) = \mathrm{tr}(T^{-1}[L]_B T) = \mathrm{tr}(T T^{-1}[L]_B) = \mathrm{tr}([L]_B).$$
In similar fashion, one can define the Determinant of a linear transformation on V.

Definition 17 The determinant of a linear transformation L on a finite dimensional vector space V is defined by
$$\det(L) := \det([L]_B)$$
where $[L]_B$ is any matrix representation of L.

That this notion of the determinant of a linear transformation is well defined can be seen from an argument similar to that above for the trace. The relevant property of the determinant of square matrices needed in the argument is $\det(AB) = \det(A)\det(B)$. (Why true?)
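A numpy spot-check (an added illustration) that trace and determinant are unchanged under the similarity transformation (12):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((3, 3))          # a matrix representation of some L
T = rng.standard_normal((3, 3))          # a (generically non-singular) change of basis

L_sim = np.linalg.inv(T) @ L @ T         # similar matrix, eq. (12)

print(np.isclose(np.trace(L), np.trace(L_sim)))            # True: trace invariant
print(np.isclose(np.linalg.det(L), np.linalg.det(L_sim)))  # True: det invariant
```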

4 Metric, Norm and Inner Product

In this section, the algebraic structure for vector spaces presented above is augmented through the inclusion of geometric structure. Perhaps the most basic geometric notion is that of distance or length. A convenient way to introduce a notion of distance into a vector space is through use of a Metric.

Definition 18 A Metric on a vector space V is a real valued function of two variables $d(\cdot,\cdot) : V \times V \to \mathbb{R}$ satisfying the following axioms. For any $a, b, c \in V$,
$$\begin{aligned}
& d(a, b) \ge 0, \text{ with } d(a, b) = 0 \iff a = b && \text{(Positivity)} \\
& d(a, b) = d(b, a) && \text{(Symmetry)} \qquad (13)\\
& d(a, b) \le d(a, c) + d(c, b). && \text{(Triangle Inequality)}
\end{aligned}$$

Remark 13 Given a metric on a vector space, one can define the important concept of a metric ball. Specifically, an (open) metric ball, B(c, r), centered at c and of radius r is defined to be the set
$$B(c, r) := \{v \in V \mid d(c, v) < r\}.$$
One can also define a notion of metric convergent sequence as follows. A sequence $\{a_j, j = 1, 2, \ldots\} \subset V$ is said to be metric convergent to $a \in V$ provided
$$\lim_{j \to \infty} d(a_j, a) = 0.$$
An equivalent definition in terms of metric balls states that for all $\epsilon > 0$, there exists $N > 0$ such that $a_j \in B(a, \epsilon)$ for all $j > N$.

An important class of metrics on a vector space comes from the notion of a Norm.

Definition 19 A Norm on a vector space is a non-negative, real valued function $\|\cdot\| : V \to \mathbb{R}$ satisfying the following axioms. For all $a, b \in V$ and $\alpha \in F$,
$$\begin{aligned}
& \|a\| \ge 0, \text{ with } \|a\| = 0 \iff a = 0 && \text{(Positivity)} \\
& \|\alpha a\| = |\alpha| \, \|a\| && \text{(Homogeneity)} \qquad (14)\\
& \|a + b\| \le \|a\| + \|b\|. && \text{(Triangle Inequality)}
\end{aligned}$$

Remark 14 A norm has an associated metric defined by $d(a, b) := \|a - b\|$.

An important class of norms on a vector space comes from the notion of Inner Product. Here a distinction must be made between real and complex vector spaces.

Definition 20 An inner product on a real vector space is a positive definite, symmetric bilinear form $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{R}$ satisfying the following axioms. For all $a, b, c \in V$ and $\alpha \in \mathbb{R}$,
$$\begin{aligned}
& \langle a, a \rangle \ge 0 \text{ with } \langle a, a \rangle = 0 \iff a = 0 && \text{(Positivity)} \\
& \langle a, b \rangle = \langle b, a \rangle && \text{(Symmetry)} \\
& \langle \alpha a, b \rangle = \alpha \langle a, b \rangle && \text{(Homogeneity)} \qquad (15)\\
& \langle a + c, b \rangle = \langle a, b \rangle + \langle c, b \rangle. && \text{(Additivity)}
\end{aligned}$$

Definition 21 An inner product on a complex vector space is a positive definite, conjugate-symmetric form $\langle\cdot,\cdot\rangle : V \times V \to \mathbb{C}$ satisfying the following axioms. For all $a, b, c \in V$ and $\alpha \in \mathbb{C}$,
$$\begin{aligned}
& \langle a, a \rangle \ge 0 \text{ with } \langle a, a \rangle = 0 \iff a = 0 && \text{(Positivity)} \\
& \langle a, b \rangle = \overline{\langle b, a \rangle} && \text{(Conjugate-Symmetry)} \qquad (16)\\
& \langle \alpha a, b \rangle = \alpha \langle a, b \rangle && \text{(Homogeneity)} \\
& \langle a + c, b \rangle = \langle a, b \rangle + \langle c, b \rangle && \text{(Additivity)}
\end{aligned}$$
where in (16), $\overline{\langle b, a \rangle}$ denotes the complex conjugate of $\langle b, a \rangle$. It is evident from the definition that a complex inner product is linear in the first variable but conjugate linear in the second. In particular, for all $a, b \in V$ and $\alpha \in \mathbb{C}$, $\langle \alpha a, b \rangle = \alpha \langle a, b \rangle$ whereas $\langle a, \alpha b \rangle = \bar{\alpha} \langle a, b \rangle$.

Remark 15 Given an inner product on either a real or complex vector space, one can define an associated norm by
$$\|a\| := \sqrt{\langle a, a \rangle}.$$
(Why does this define a norm?)

Remark 16 In $\mathbb{R}^N$, the usual inner product, defined by
$$\langle a, b \rangle := a_1 b_1 + \cdots + a_N b_N = a_j b_j \quad \text{where } a = \begin{pmatrix} a_1 \\ \vdots \\ a_N \end{pmatrix}, \; b = \begin{pmatrix} b_1 \\ \vdots \\ b_N \end{pmatrix},$$
is often denoted by $a \cdot b$, the so-called dot product. In $\mathbb{C}^N$, the usual inner product is given by
$$\langle a, b \rangle = a \cdot b = a_1 \bar{b}_1 + \cdots + a_N \bar{b}_N$$
where now $a_j, b_j \in \mathbb{C}$ and the over-bar denotes complex conjugation.

Example 9 Let A be a positive definite, symmetric, real, $N \times N$ matrix. (Recall that symmetric means that $A^T = A$, where $A^T$ denotes the transpose matrix, and positive definite means $a \cdot (Aa) > 0$ for all non-zero $a \in \mathbb{R}^N$.) Then A defines an inner product on $\mathbb{R}^N$ through
$$\langle a, b \rangle_A := a \cdot (Ab).$$

Example 10 An important class of norms on $\mathbb{R}^N$ and $\mathbb{C}^N$ are the p-norms, defined for $p \ge 1$ by
$$\|a\|_p := \left( \sum_{j=1}^N |a_j|^p \right)^{1/p}. \qquad (17)$$
One must show that (17) satisfies the axioms (14) for a norm. The positivity and homogeneity axioms from (14) are straightforward to show, but the triangle inequality is not. A proof that (17) satisfies the triangle inequality in (14) can be found in the supplemental Notes on Inequalities. The special case p = 2,
$$\|a + b\|_2 \le \|a\|_2 + \|b\|_2$$
for any $a, b \in \mathbb{R}^N$, is the famous Minkowski Inequality.

Example 11 The $\infty$-norm is defined by
$$\|a\|_\infty := \max\{|a_1|, \ldots, |a_N|\}. \qquad (18)$$
The task of showing that (18) satisfies the axioms (14) for a norm is left as an exercise. An important observation about the p-norms is that
$$\lim_{p \to \infty} \|a\|_p = \|a\|_\infty,$$
whose proof is also left as an exercise. That the $\infty$-norm is the limit of the p-norms as $p \to \infty$ is what motivates the definition (18).
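A short numpy illustration (added; not from the notes) of (17), (18), and the limit $\|a\|_p \to \|a\|_\infty$:

```python
import numpy as np

a = np.array([3.0, -4.0, 1.0])

# p-norms for increasing p approach the infinity-norm (max absolute entry).
for p in [1, 2, 4, 16, 64]:
    print(p, np.linalg.norm(a, p))

print("inf", np.linalg.norm(a, np.inf))   # 4.0, the limit as p -> infinity
```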

Remark 17 The 2-norm is very special among the class of p-norms, being the only one coming from an inner product. The 2-norm is also called the Euclidean Norm since it comes from the usual inner product on the Euclidean space $\mathbb{R}^N$.

Remark 18 In a real vector space $(V, \langle\cdot,\cdot\rangle)$, one can define a generalized notion of angle between vectors. Specifically, given non-zero $a, b \in V$, one defines the angle, $\theta$, between a and b to be the unique number $0 \le \theta \le \pi$ satisfying
$$\cos\theta = \frac{\langle a, b \rangle}{\|a\| \, \|b\|}.$$
As a special case, two vectors $a, b \in V$ are said to be Orthogonal (in symbols $a \perp b$) if
$$\langle a, b \rangle = 0. \qquad (19)$$

Remark 19 Analytically or topologically, all norms on a finite dimensional vector space are equivalent, in the sense that if a sequence is convergent with respect to one norm, it is convergent with respect to all norms on the space. This observation follows directly from the following important

Theorem 5 Let V be a finite dimensional vector space and let $\|\cdot\|_I$ and $\|\cdot\|_{II}$ be two norms on V. Then there exist positive constants m, M such that for all $v \in V$
$$m\|v\|_I \le \|v\|_{II} \le M\|v\|_I.$$
Proof: The proof of this result is left as an exercise.

While all norms on a finite dimensional vector space might be topologically equivalent, they are not geometrically equivalent. In particular, the shape of the (metric) unit ball can vary greatly for different norms. That fact is illustrated in the following

Exercise 1 Sketch the unit ball in $\mathbb{R}^2$ for the p-norms, $1 \le p \le \infty$.

4.1 Orthonormal Basis

In a finite dimensional inner product space $(V, \langle\cdot,\cdot\rangle)$, an important class of bases consists of the Orthonormal Bases, defined as follows.

Definition 22 In an inner product space $(V, \langle\cdot,\cdot\rangle)$, a basis $B = \{b_1, \ldots, b_N\}$ is called Orthonormal provided
$$\langle b_i, b_j \rangle = \delta_{ij} \quad \text{for } i, j = 1, \ldots, N. \qquad (20)$$
Thus, an orthonormal basis consists of pairwise orthogonal unit vectors. Finding coordinates of arbitrary vectors with respect to such a basis is as convenient as it is for the natural basis in $F^N$. Indeed, what makes the natural basis so useful is that it is an orthonormal basis with respect to the usual inner product on $\mathbb{R}^N$.

Remark 20 Suppose $B = \{b_1, \ldots, b_N\}$ is an orthonormal basis for $(V, \langle\cdot,\cdot\rangle)$ and $a \in V$. Then one easily deduces that
$$a = a_1 b_1 + \cdots + a_N b_N \implies a_j = \langle a, b_j \rangle,$$
as can be seen by taking the inner product of the first equation with each base vector $b_j$. Moreover, one has
$$\langle a, c \rangle = [a]_B \cdot [c]_B \qquad (21)$$
where the right hand side is the usual dot-product of N-tuples in $\mathbb{R}^N$,
$$[a]_B \cdot [c]_B = a_1 c_1 + \cdots + a_N c_N.$$
Thus, computing the inner product of vectors using components with respect to an orthonormal basis uses the familiar formula for the dot-product on $\mathbb{R}^N$ with respect to the natural basis.

In similar fashion one can derive a simple formula for the components of the matrix representation of a linear transformation on an inner product space with respect to an orthonormal basis. Specifically, suppose $L : V \to V$ and $B = \{b_1, \ldots, b_N\}$ is an orthonormal basis. Then it is straightforward to show that
$$L := [L]_B = [l_{ij}] \quad \text{with} \quad l_{ij} = \langle b_i, Lb_j \rangle. \qquad (22)$$

One way to derive (22) is to first recall the formula
$$l_{ij} = e_i \cdot (Le_j) \qquad (23)$$
for the components of an $N \times N$ matrix, $L := [l_{ij}]$, where $\{e_1, \ldots, e_N\}$ is the natural basis for $\mathbb{R}^N$. It follows from (21) that for all $a, c \in V$
$$\langle a, Lc \rangle = [a]_B \cdot ([Lc]_B) = [a]_B \cdot ([L]_B[c]_B). \qquad (24)$$
Now letting $a = b_i$, $c = b_j$ and noting that $[b_j]_B = e_j$ for all $j = 1, \ldots, N$, one deduces (22) from (23), (24).

4.1.1 General Bases and the Metric Tensor

More generally, one would like to be able to conveniently compute the inner product of two vectors using coordinates with respect to some general (not necessarily orthonormal) basis. Suppose $B := \{f_1, \ldots, f_N\}$ is a basis for $(V, \langle\cdot,\cdot\rangle)$ and let $a, b \in V$ with coordinates $[a]_B = [a_j]$, $[b]_B = [b_j]$, respectively. Then one has
$$\langle a, b \rangle = \Big\langle \sum_{j=1}^N a_j f_j, \sum_{k=1}^N b_k f_k \Big\rangle = \sum_{j,k=1}^N a_j b_k \langle f_j, f_k \rangle = [a]_B \cdot (F[b]_B)$$
where the matrix
$$F := [\langle f_i, f_j \rangle] = [f_{ij}] \qquad (25)$$
is called the Metric Tensor associated with the basis B. It follows that the familiar formula (21) for the dot-product in $\mathbb{R}^N$ holds in general if and only if the basis B is orthonormal. The matrix F is called a metric tensor because it is needed to compute lengths and angles using coordinates with respect to the basis B. In particular, if $v \in V$ has coordinates $[v]_B = [v_j]$, then $\|v\|$ (the length of v) is given by
$$\|v\|^2 = \langle v, v \rangle = f_{ij} v_i v_j.$$

4.1.2 The Gram-Schmidt Orthogonalization Procedure

Given any basis $B = \{f_1, \ldots, f_N\}$ for an inner product space $(V, \langle\cdot,\cdot\rangle)$, there is a simple algorithm, called the Gram-Schmidt Orthogonalization Procedure, for constructing from B an orthonormal basis $\tilde{B} = \{e_1, \ldots, e_N\}$ satisfying
$$\mathrm{span}\{f_1, \ldots, f_K\} = \mathrm{span}\{e_1, \ldots, e_K\} \quad \text{for each } 1 \le K \le N. \qquad (26)$$
The algorithm proceeds inductively as follows (a code sketch appears at the end of Section 4.2 below).

Step 1: Define $e_1 := f_1 / \|f_1\|$, where $\|\cdot\|$ denotes the norm associated with the inner product $\langle\cdot,\cdot\rangle$.

Step 2: Subtract from $f_2$ its component in the direction of $e_1$ and then normalize. More specifically, define
$$g_2 := f_2 - \langle f_2, e_1 \rangle e_1$$

and then define
$$e_2 := \frac{g_2}{\|g_2\|}.$$
It is easily seen that $\{e_1, e_2\}$ is an orthonormal pair satisfying (26) for K = 1, 2.

Inductive Step: Suppose orthonormal vectors $\{e_1, \ldots, e_J\}$ have been constructed satisfying (26) for $1 \le K \le J$ with $J < N$. Define
$$g_{J+1} := f_{J+1} - \langle f_{J+1}, e_1 \rangle e_1 - \cdots - \langle f_{J+1}, e_J \rangle e_J$$
and then define
$$e_{J+1} := \frac{g_{J+1}}{\|g_{J+1}\|}.$$
One easily sees that $\{e_1, \ldots, e_{J+1}\}$ is an orthonormal set satisfying (26) for $1 \le K \le J+1$, thereby completing the inductive step.

4.2 Reciprocal Basis

Let $(V, \langle\cdot,\cdot\rangle)$ be a finite dimensional inner product space with general basis $B = \{f_1, \ldots, f_N\}$. Then B has an associated basis, called its Reciprocal or Dual Basis, defined through

Definition 23 Given a basis $B = \{f_1, \ldots, f_N\}$ on the finite dimensional inner product space $(V, \langle\cdot,\cdot\rangle)$, there is a unique Reciprocal Basis $B^* := \{f^1, \ldots, f^N\}$ satisfying
$$\langle f^i, f_j \rangle = \delta_{ij} \quad \text{for } i, j = 1, \ldots, N. \qquad (27)$$
Thus, every vector $v \in V$ has the two representations
$$v = \sum_{i=1}^N v^i f_i \quad \text{(Contravariant Expansion)} \qquad (28)$$
$$v = \sum_{i=1}^N v_i f^i. \quad \text{(Covariant Expansion)} \qquad (29)$$
The N-tuples $[v]_B = [v^i]$ and $[v]_{B^*} = [v_i]$ are called the Contravariant Coordinates and Covariant Coordinates of v, respectively. It follows immediately from (27), (28), (29) that the contravariant and covariant coordinates of v can be conveniently computed via the formulae
$$v^i = \langle v, f^i \rangle \quad \text{and} \quad v_i = \langle v, f_i \rangle. \qquad (30)$$

Exercise 2 Let $B = \{f_1, f_2\}$ be any basis for the Cartesian plane $\mathbb{R}^2$. Show how the reciprocal basis $B^* = \{f^1, f^2\}$ can be constructed graphically from B.

Exercise 3 Let $B = \{f_1, f_2, f_3\}$ be any basis for the Cartesian space $\mathbb{R}^3$. Show that the reciprocal basis $B^* = \{f^1, f^2, f^3\}$ is given by
$$f^1 = \frac{f_2 \times f_3}{f_1 \cdot (f_2 \times f_3)}, \quad f^2 = \frac{f_1 \times f_3}{f_2 \cdot (f_1 \times f_3)}, \quad f^3 = \frac{f_1 \times f_2}{f_3 \cdot (f_1 \times f_2)}$$
where $a \times b$ denotes the vector cross product of a and b.
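The code sketch promised in Section 4.1.2: a minimal numpy implementation of Gram-Schmidt for the usual dot product on $\mathbb{R}^N$, plus a check of the cross-product formula of Exercise 3 (an added illustration; the notes state both results abstractly).

```python
import numpy as np

def gram_schmidt(F):
    """Orthonormalize the columns of F (assumed linearly independent) with
    respect to the dot product: subtract projections onto the earlier e's,
    then normalize, exactly as in Steps 1, 2 and the inductive step."""
    E = np.zeros_like(F)
    for j in range(F.shape[1]):
        g = F[:, j].copy()
        for i in range(j):
            g -= (F[:, j] @ E[:, i]) * E[:, i]
        E[:, j] = g / np.linalg.norm(g)
    return E

F = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
E = gram_schmidt(F)
print(np.allclose(E.T @ E, np.eye(3)))    # True: orthonormal columns

# Reciprocal basis of Exercise 3: f^1 = (f_2 x f_3) / (f_1 . (f_2 x f_3)).
f1, f2, f3 = F.T
r1 = np.cross(f2, f3) / (f1 @ np.cross(f2, f3))
print(np.allclose([r1 @ f1, r1 @ f2, r1 @ f3], [1.0, 0.0, 0.0]))  # eq. (27)
```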

5 Linear Transformations Revisited

The theory of linear transformations between finite dimensional vector spaces is revisited in this section. After some general considerations, several important special classes of linear transformations will be studied from both an algebraic and a geometric perspective. The first notion introduced is the vector space Lin(V, U) of linear transformations between two vector spaces.

5.1 The Vector Space Lin(V, U)

Definition 24 Given two vector spaces, V, U, Lin(V, U) denotes the collection of all linear transformations between V and U. It is a vector space under the operations of addition and scalar multiplication of linear transformations defined as follows. Given two linear transformations $L_1, L_2 : V \to U$, their sum, $(L_1 + L_2)$, is defined to be the linear transformation satisfying
$$(L_1 + L_2)v := L_1 v + L_2 v \quad \text{for all } v \in V.$$
Moreover, scalar multiplication of a linear transformation is defined by
$$(\alpha L)v := \alpha Lv \quad \text{for all } \alpha \in F \text{ and } v \in V.$$
For the special case V = U, one denotes the vector space of linear transformations on V by Lin(V).

5.1.1 Adjoint Transformation

Let $L \in \mathrm{Lin}(V, U)$ be a linear transformation between two inner product spaces $(V, \langle\cdot,\cdot\rangle)$ and $(U, \langle\cdot,\cdot\rangle)$. The Adjoint Transformation, $L^*$, is defined to be the unique transformation in Lin(U, V) satisfying
$$\langle Lv, u \rangle = \langle v, L^* u \rangle \quad \text{for all } v \in V \text{ and } u \in U. \qquad (31)$$

Remark 21 The right hand side of (31) is the inner product on V whereas the left hand side is the inner product on U.

Remark 22 This definition of adjoint makes sense due to the

Theorem 6 (Riesz Representation Theorem) Suppose $\ell : V \to F$ is a linear, scalar valued transformation on the inner product space $(V, \langle\cdot,\cdot\rangle)$. Then there exists a unique vector $a \in V$ such that
$$\ell v = \langle v, a \rangle \quad \text{for all } v \in V.$$
Note that for fixed $u \in U$, $\ell v := \langle Lv, u \rangle$ is a scalar valued linear transformation on V. By the Riesz Representation Theorem, there exists a unique vector $a \in V$ such that $\ell v = \langle v, a \rangle$ for all $v \in V$. One now defines $L^* u := a \in V$.

Example 12 Suppose $V = \mathbb{R}^N$, $U = \mathbb{R}^M$ and $L \in M_{M,N}(\mathbb{R})$, the space of all $M \times N$ (real) matrices. Then $L^* = L^T$, the transpose of L. (Why?) On the other hand, if $V = \mathbb{C}^N$, $U = \mathbb{C}^M$ and $L \in M_{M,N}(\mathbb{C})$, the space of all $M \times N$ (complex) matrices, then $L^* = \bar{L}^T$, the conjugate transpose of L. (Why?)

Remark 23 Consider the special case V = U (over $F = \mathbb{R}$) and let $B := \{f_1, \ldots, f_N\}$ be any basis for V. Then one can show that for any $A \in \mathrm{Lin}(V)$
$$[A^*]_B = F^{-1}[A]_B^T F \qquad (32)$$
where the matrix F is given by $F := [f_{ij}]$ with $f_{ij} := \langle f_i, f_j \rangle$. The proof of (32) is left as an exercise.

5.1.2 Two Natural Norms for Lin(V, U)

One can define a natural inner product on Lin(V) by
$$L_1 \cdot L_2 := \mathrm{tr}(L_1^* L_2) \qquad (33)$$
where $L_1^*$ denotes the adjoint of $L_1$. (Why does this satisfy the axioms (15) or (16)?)

Remark 24 In the case $V = \mathbb{R}^N$ with $L_1$ and $L_2$ given by $N \times N$ matrices $L_1 = [l^{(1)}_{ij}]$, $L_2 = [l^{(2)}_{ij}]$, respectively, one readily shows (How?) that
$$L_1 \cdot L_2 = \sum_{i,j=1}^N l^{(1)}_{ij} l^{(2)}_{ij}.$$
One can readily show that (33) also defines an inner product on Lin(V, U) where V and U are any two finite dimensional vector spaces. (How?) The norm associated to this inner product, called the trace norm, is then defined by
$$\|L\|_{tr} := \sqrt{L \cdot L} = \sqrt{\mathrm{tr}(L^* L)}. \qquad (34)$$
Another natural norm on Lin(V, U), where $(V, \|\cdot\|)$ and $(U, \|\cdot\|)$ are normed vector spaces, called the operator norm, is defined by
$$\|L\|_{op} := \sup_{\|v\|=1} \|Lv\|. \qquad (35)$$
Question: What is the geometric interpretation of (35)?

An important relationship between the trace norm and operator norm is given by

Theorem 7 Let $(V, \langle\cdot,\cdot\rangle)$ and $(U, \langle\cdot,\cdot\rangle)$ be two finite dimensional, inner product spaces. Then for all $L \in \mathrm{Lin}(V, U)$,
$$\|L\|_{op} \le \|L\|_{tr} \le \sqrt{N}\,\|L\|_{op}. \qquad (36)$$
Proof: The proof uses ideas to be introduced in the upcoming study of spectral theory. To see how spectral theory plays a role in proving (36), first notice that
$$\|L\|_{op} = \sup_{\|v\|=1} \|Lv\| = \sup_{\|v\|=1} \sqrt{\langle Lv, Lv \rangle} = \sup_{\|v\|=1} \sqrt{\langle v, L^*Lv \rangle}.$$
Next observe that $L^*L$ is a self-adjoint, positive linear transformation in Lin(V). As part of spectral theory for self-adjoint operators on a finite dimensional inner product space, it is shown that
$$\sup_{\|v\|=1} \langle v, L^*Lv \rangle = \max_{j=1,\ldots,N} |\lambda_j|^2$$
where $\{|\lambda_j|^2, j = 1, \ldots, N\}$ are the (non-negative) eigenvalues of $L^*L$. Thus, one has
$$\|L\|_{op} = \max_{j=1,\ldots,N} |\lambda_j|. \qquad (37)$$
On the other hand, spectral theory will show that for the trace norm, one has
$$\|L\|_{tr}^2 = \mathrm{tr}(L^*L) = \sum_{j=1}^N |\lambda_j|^2. \qquad (38)$$
Appealing to the obvious inequalities
$$\max_{j=1,\ldots,N} |\lambda_j|^2 \le \sum_{j=1}^N |\lambda_j|^2 \le N \max_{j=1,\ldots,N} |\lambda_j|^2,$$
one readily deduces (36) from (37), (38). $\square$

5.1.3 Elementary Tensor Product

Among the most useful linear transformations is the class of Elementary Tensor Products, which can be used as the basic building blocks of all other linear transformations. First consider the case of Lin(V) where $(V, \langle\cdot,\cdot\rangle)$ is an inner product space.

Definition 25 Let $a, b \in V$. Then the elementary tensor product of a and b, denoted $a \otimes b$, is defined to be the linear transformation on V satisfying
$$(a \otimes b)v := \langle v, b \rangle a \quad \text{for all } v \in V. \qquad (39)$$

Remark 25 It follows immediately from the definition that $\mathrm{Ran}(a \otimes b) = \mathrm{span}(a)$ and $\mathrm{Nul}(a \otimes b) = b^\perp$, where $b^\perp$ denotes the orthogonal complement of b, i.e. the (N−1)-dimensional subspace of all vectors orthogonal to b (where dim(V) = N). The elementary tensor products are thus the rank one linear transformations in Lin(V).

Remark 26 The following identities can also be derived easily from the definition of elementary tensor product. Given any $a, b, c, d \in V$ and $A \in \mathrm{Lin}(V)$,
$$\begin{aligned}
(a \otimes b)^* &= b \otimes a \\
(a \otimes b) \cdot (c \otimes d) &= \langle a, c \rangle \langle b, d \rangle \\
A \cdot (a \otimes b) &= \langle a, Ab \rangle \\
(a \otimes b)(c \otimes d) &= \langle b, c \rangle \, a \otimes d \\
A(a \otimes b) &= (Aa) \otimes b \\
(a \otimes b)A &= a \otimes (A^* b) \\
\mathrm{tr}(a \otimes b) &= \langle a, b \rangle.
\end{aligned}$$

Example 13 When $V = \mathbb{R}^N$ and $a = (a_i)$, $b = (b_i)$, then the matrix of $a \otimes b$ with respect to the natural basis on $\mathbb{R}^N$ is
$$[a \otimes b]_N = [a_i b_j]. \quad \text{(Why?)} \qquad (40)$$

Remark 27 The result (40) has a natural generalization to a general finite dimensional inner product space $(V, \langle\cdot,\cdot\rangle)$. Let $B := \{e_1, \ldots, e_N\}$ be an orthonormal basis for V and $a, b \in V$. Then
$$[a \otimes b]_B = [a_i b_j] \qquad (41)$$
where $[a]_B = [a_i]$ and $[b]_B = [b_i]$. Hence, the formula (40) generalizes to the matrix representation of an elementary tensor product with respect to any orthonormal basis for V. More generally, if $B = \{f_1, \ldots, f_N\}$ is any (not necessarily orthonormal) basis for V, then
$$[a \otimes b]_B = [a_i b_j] F \quad \text{(matrix product)}$$
where F is the metric tensor associated with the basis B defined by (25). Verification of this last formula is left as an exercise.

The notion of elementary tensor product is readily generalized to the setting of linear transformations between two inner product spaces $(V, \langle\cdot,\cdot\rangle)$ and $(U, \langle\cdot,\cdot\rangle)$ as follows.

Definition 26 Let $a \in U$ and $b \in V$. Then the elementary tensor product $a \otimes b$ is defined to be the linear transformation in Lin(V, U) satisfying
$$(a \otimes b)v := \langle v, b \rangle a \quad \text{for all } v \in V.$$

5.1.4 A Natural Orthonormal Basis for Lin(V, U)

Using the elementary tensor product, one can construct a natural orthonormal basis for Lin(V, U). First consider the special case of Lin(V). Suppose $(V, \langle\cdot,\cdot\rangle)$ is an inner product space with orthonormal basis $B := \{e_1, \ldots, e_N\}$. Then one easily shows that the set
$$\tilde{B} := \{e_i \otimes e_j, \; i, j = 1, \ldots, N\}$$
provides an orthonormal basis for Lin(V) equipped with the inner product (33). It follows immediately that $\dim(\mathrm{Lin}(V)) = N^2$.
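In $\mathbb{R}^N$ with the dot product, $a \otimes b$ is just the outer product matrix $[a_i b_j]$ of (40). A small numpy check (an added illustration) of the defining relation (39) and two identities from Remark 26:

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])
v = np.array([2.0, -1.0, 1.0])

T = np.outer(a, b)                       # [a_i b_j], the matrix of a (x) b, eq. (40)

print(np.allclose(T @ v, (v @ b) * a))   # eq. (39): (a (x) b) v = <v, b> a
print(np.allclose(T.T, np.outer(b, a)))  # (a (x) b)* = b (x) a
print(np.isclose(np.trace(T), a @ b))    # tr(a (x) b) = <a, b>
print(np.linalg.matrix_rank(T))          # 1: a rank-one transformation
```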

Remark 28 If $L \in \mathrm{Lin}(V)$, then
$$L = l_{ij} \, e_i \otimes e_j \quad \text{with} \quad l_{ij} = L \cdot (e_i \otimes e_j) = \langle e_i, Le_j \rangle. \quad \text{(Why?)} \qquad (42)$$
More generally, suppose $B_U = \{b_1, \ldots, b_M\}$ is an orthonormal basis for U and $B_V = \{d_1, \ldots, d_N\}$ is an orthonormal basis for V. Then a natural orthonormal basis for Lin(V, U) (with respect to the trace inner product on Lin(V, U)) is
$$\{b_i \otimes d_j \mid i = 1, \ldots, M, \; j = 1, \ldots, N\}.$$
Indeed, that these tensor products are pairwise orthogonal and have trace norm one follows from
$$\begin{aligned}
\langle b_i \otimes d_j, b_k \otimes d_l \rangle &= \mathrm{tr}((b_i \otimes d_j)^*(b_k \otimes d_l)) \\
&= \mathrm{tr}((d_j \otimes b_i)(b_k \otimes d_l)) \\
&= \langle b_i, b_k \rangle \, \mathrm{tr}(d_j \otimes d_l) \\
&= \langle b_i, b_k \rangle \langle d_j, d_l \rangle = \delta_{ik}\delta_{jl}.
\end{aligned}$$
The generalization of (42) for $L \in \mathrm{Lin}(V, U)$ is
$$L = l_{ij} \, b_i \otimes d_j \quad \text{with} \quad l_{ij} := L \cdot (b_i \otimes d_j) = \langle b_i, Ld_j \rangle. \quad \text{(Why?)}$$

Remark 29 Suppose $B := \{e_1, \ldots, e_N\}$ is an orthonormal basis for V. Then every $L \in \mathrm{Lin}(V)$ has the associated matrix $[\hat{l}_{ij}]$ defined through
$$L = \sum_{i,j=1}^N \hat{l}_{ij} \, e_i \otimes e_j.$$
On the other hand, L also has the associated matrix representation $[L]_B = [l_{ij}]$, defined through the relation $[Lv]_B = [L]_B[v]_B$ which must hold for all $v \in V$, with $l_{ij} = \langle e_i, Le_j \rangle$. It is straightforward to show (How?) that these two matrices are equal, i.e. $[L]_B = [l_{ij}] = [\hat{l}_{ij}]$.

5.1.5 General Tensor Product Bases for Lin(V) and Their Duals

Let $B = \{f_1, \ldots, f_N\}$ be a general basis for $(V, \langle\cdot,\cdot\rangle)$ with associated reciprocal basis $B^* = \{f^1, \ldots, f^N\}$. Then Lin(V) has four associated tensor product bases and corresponding reciprocal bases given by
$$\{f_i \otimes f_j\} \quad\quad \{f^i \otimes f^j\} \qquad (43)$$
$$\{f^i \otimes f^j\} \quad\quad \{f_i \otimes f_j\} \qquad (44)$$
$$\{f^i \otimes f_j\} \quad\quad \{f_i \otimes f^j\} \qquad (45)$$
$$\{f_i \otimes f^j\} \quad\quad \{f^i \otimes f_j\} \qquad (46)$$

where $i, j = 1, \ldots, N$ in each set. Every $L \in \mathrm{Lin}(V)$ has four matrix representations defined through
$$L = \sum_{i,j=1}^N \hat{l}^{ij} \, f_i \otimes f_j \quad \text{(Pure Contravariant)} \qquad (47)$$
$$\phantom{L} = \sum_{i,j=1}^N \hat{l}_{ij} \, f^i \otimes f^j \quad \text{(Pure Covariant)} \qquad (48)$$
$$\phantom{L} = \sum_{i,j=1}^N \hat{l}^i{}_j \, f_i \otimes f^j \quad \text{(Mixed Contravariant-Covariant)} \qquad (49)$$
$$\phantom{L} = \sum_{i,j=1}^N \hat{l}_i{}^j \, f^i \otimes f_j. \quad \text{(Mixed Covariant-Contravariant)} \qquad (50)$$
It follows easily (How?) that one can compute these various matrix components of L from the formulae
$$\hat{l}^{ij} = \langle f^i, Lf^j \rangle, \quad \hat{l}_{ij} = \langle f_i, Lf_j \rangle, \quad \hat{l}^i{}_j = \langle f^i, Lf_j \rangle, \quad \hat{l}_i{}^j = \langle f_i, Lf^j \rangle.$$

Exercise 4 Let L = I, the identity transformation on V. Then I has the four matrix representations (47), (48), (49), (50) with
$$\hat{l}^{ij} = \langle f^i, f^j \rangle = f^{ij}, \quad \hat{l}_{ij} = \langle f_i, f_j \rangle = f_{ij}, \quad \hat{l}^i{}_j = \langle f^i, f_j \rangle = \delta^i_j, \quad \hat{l}_i{}^j = \langle f_i, f^j \rangle = \delta_i^j$$
where $\delta^i_j$ denotes the usual Kronecker symbol. Thus, the matrix for the identity transformation when using a general (non-orthonormal) basis has the accustomed form only using a mixed covariant and contravariant representation.

Remark 30 Equivalent Matrix Representations. Given a general basis $B = \{f_1, \ldots, f_N\}$ and its dual basis $B^* = \{f^1, \ldots, f^N\}$, one also has the four matrix representations of a linear transformation $L \in \mathrm{Lin}(V)$ defined through
$$[w]_B = [Lv]_B = [L]_{(B,B)}[v]_B$$
$$[w]_{B^*} = [Lv]_{B^*} = [L]_{(B^*,B^*)}[v]_{B^*}$$
$$[w]_B = [Lv]_B = [L]_{(B,B^*)}[v]_{B^*}$$
$$[w]_{B^*} = [Lv]_{B^*} = [L]_{(B^*,B)}[v]_B$$

which in component form becomes
$$w^i = l^i{}_j v^j, \quad w_i = l_i{}^j v_j, \quad w^i = l^{ij} v_j, \quad w_i = l_{ij} v^j.$$
One now readily shows that $\hat{l}^{ij} = l^{ij}$, $\hat{l}_{ij} = l_{ij}$, $\hat{l}^i{}_j = l^i{}_j$ and $\hat{l}_i{}^j = l_i{}^j$. Indeed, consider the following calculation:
$$\begin{aligned}
Lv &= \sum_{i,j} \hat{l}^{ij} \, f_i \otimes f_j \Big( \sum_k v_k f^k \Big) = \sum_{i,j} \hat{l}^{ij} f_i \, v_k \langle f^k, f_j \rangle \\
&= \sum_{i,j} \hat{l}^{ij} f_i \, v_k \delta^k_j = \sum_i \Big( \sum_j \hat{l}^{ij} v_j \Big) f_i = \sum_i w^i f_i
\end{aligned}$$
where $w^i = \hat{l}^{ij} v_j$. It now follows from the definition of $l^{ij}$ that $\hat{l}^{ij} = l^{ij}$. The remaining identities can be deduced by similar reasoning. Henceforth, the matrices $[\hat{l}^{ij}]$, $[l^{ij}]$, etc., will be used interchangeably.

Remark 31 Raising and Lowering Indices. The metric tensor $F = [f_{ij}] = [\langle f_i, f_j \rangle]$ and its dual counterpart $[f^{ij}] = [\langle f^i, f^j \rangle]$ can be used to switch between covariant and contravariant component forms of vectors and linear transformations (and indeed, tensors of all orders). This process is called raising and lowering indices in classical tensor algebra. Thus, for example, if $v = v^i f_i = v_i f^i$, then
$$v^i = \langle v, f^i \rangle = \Big\langle \sum_k v_k f^k, f^i \Big\rangle = f^{ik} v_k.$$
Thus, $[f^{ij}]$ is used to transform the covariant components $[v_i]$ of a vector v into the contravariant components $[v^i]$, a process called raising the index. By similar reasoning one shows that $v_i = f_{ik} v^k$, that is, the metric tensor $F = [f_{ij}]$ is used to transform the contravariant components of a vector into the covariant components, a process called lowering the index.

Analogous raising/lowering index formulas hold for linear transformations. In particular, suppose $w = Lv$ and consider the string of identities
$$Lv = \sum_{i,j} l^{ij} \, f_i \otimes f_j \Big( \sum_k v^k f_k \Big) = \sum_{i,j} l^{ij} f_i \, v^k \langle f_j, f_k \rangle = \sum_i f_i \Big( \sum_k l^{ik} f_{kj} \Big) v^j.$$
From the definition of $l^i{}_j$, it follows that
$$l^i{}_j = l^{ik} f_{kj}.$$
Thus, the tensor $[f_{ij}]$ can be used to lower the column index of $[l^{ij}]$ by right multiplication of matrices. Similarly, $[f^{ij}]$ can be used to raise the column index by right multiplication. Moreover, they can be used to lower and raise the row index of the matrices $[l^{ij}]$, as illustrated by the following string of identities:
$$L = \sum_{i,j} l^{ij} \, f_i \otimes f_j = \sum_{i,j} l^{ij} (f_{iq} f^q) \otimes f_j = \sum_{q,j} \Big( \sum_i f_{iq} l^{ij} \Big) f^q \otimes f_j,$$
which from the definition of the matrix $[l_q{}^j]$ implies that
$$l_i{}^j = f_{ik} l^{kj}. \qquad (51)$$
The following identities are readily obtained by similar arguments:
$$l_{ij} = f_{ik} \, l^k{}_j = l_i{}^k f_{kj}, \quad l^{ij} = f^{ik} \, l_k{}^j = l^i{}_k f^{kj}, \quad \text{etc.}$$
Applying (51) to the identity transformation, for which $l^i{}_j = \delta^i_j$, $l_i{}^j = \delta_i^j$, $l^{ij} = f^{ij}$ and $l_{ij} = f_{ij}$, one sees that
$$\delta_i^j = f_{ik} f^{kj}.$$
It follows that the metric tensor $[f_{ij}]$ and the matrix $[f^{ij}]$ are inverses of one another, i.e.
$$[f^{ij}] = [f_{ij}]^{-1}.$$
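A concrete check of raising and lowering (an added illustration in $\mathbb{R}^2$ with the dot product, where the reciprocal vectors are the columns of the inverse transpose of the basis matrix):

```python
import numpy as np

B = np.array([[1.0, 1.0],     # columns are the basis vectors f_1, f_2
              [0.0, 2.0]])
F = B.T @ B                   # metric tensor f_ij = <f_i, f_j>, eq. (25)
R = np.linalg.inv(B).T        # columns are the reciprocal vectors f^i

v = np.array([3.0, 1.0])
v_contra = R.T @ v            # v^i = <v, f^i>, eq. (30)
v_co = B.T @ v                # v_i = <v, f_i>, eq. (30)

# Lowering the index: v_i = f_ik v^k; raising: v^i = f^{ik} v_k.
print(np.allclose(v_co, F @ v_contra))                  # True
print(np.allclose(v_contra, np.linalg.inv(F) @ v_co))   # True
```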

5.1.6 Eigenvalues, Eigenvectors and Eigenspaces

Invariance principles are central tools in most areas of mathematics and they come in many forms. In linear algebra, the most important invariance principle resides in the notion of an Eigenvector.

Definition 27 Let V be a vector space and $L \in \mathrm{Lin}(V)$ a linear transformation on V. Then a non-zero vector $v \in V$ is called an Eigenvector and $\lambda \in F$ its associated Eigenvalue provided
$$Lv = \lambda v. \qquad (52)$$
From the definition (52) it follows that $L(\mathrm{span}(v)) \subset \mathrm{span}(v)$, that is, the line spanned by the eigenvector v is invariant under L in that L maps the line back into itself. The set of all eigenvectors corresponding to a given eigenvalue $\lambda$ (with the addition of the zero vector) forms a subspace of V (Why?), denoted $E(\lambda)$, called the Eigenspace corresponding to the eigenvalue $\lambda$. One readily shows (How?) from the definition (52) that eigenvectors corresponding to distinct eigenvalues of a given linear transformation $L \in \mathrm{Lin}(V)$ are independent, from which one concludes
$$E(\lambda_1) \cap E(\lambda_2) = \{0\} \quad \text{if } \lambda_1 \ne \lambda_2. \qquad (53)$$
Re-writing the definition (52) as
$$(L - \lambda I)v = 0,$$
one sees that v is an eigenvector with eigenvalue $\lambda$ if and only if $v \in \mathrm{Nul}(L - \lambda I)$. It follows that $\lambda$ is an eigenvalue for L if and only if
$$\det(L - \lambda I) = 0.$$
Recalling that $\det(L - \lambda I)$ is a polynomial of degree N, one defines the Characteristic Polynomial $p_L(\lambda)$ of the transformation L to be
$$p_L(\lambda) := \det(L - \lambda I) = (-\lambda)^N + i_1(L)(-\lambda)^{N-1} + \cdots + i_{N-1}(L)(-\lambda) + i_N(L). \qquad (54)$$
The coefficients $I_L := \{i_1(L), \ldots, i_N(L)\}$ of the characteristic polynomial are called the Principal Invariants of the transformation L because they are invariant under similarity transformation. More specifically, let $T \in \mathrm{Lin}(V)$ be non-singular. Then the characteristic polynomial for $T^{-1}LT$ is identical to that for L, as can be seen from the calculation
$$p_{T^{-1}LT}(\lambda) = \det(T^{-1}LT - \lambda I) = \det(T^{-1}LT - \lambda T^{-1}T) = \det(T^{-1}(L - \lambda I)T) = \det(L - \lambda I) = p_L(\lambda).$$
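A numpy spot-check (an added illustration) of the eigenvalue definition (52) and of the similarity invariance of the characteristic polynomial, tested through its roots:

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.standard_normal((3, 3))
T = rng.standard_normal((3, 3))

# Eigenpairs: L v = lambda v, eq. (52).
lam, V = np.linalg.eig(L)
print(np.allclose(L @ V[:, 0], lam[0] * V[:, 0]))   # True

# Similar matrices have identical eigenvalues, hence identical p_L, eq. (54).
lam_sim = np.linalg.eigvals(np.linalg.inv(T) @ L @ T)
print(np.allclose(np.sort_complex(lam), np.sort_complex(lam_sim)))  # True
```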

An explanation of why the elements of $I_L$ are called the Principal Invariants of L must wait until the upcoming section on spectral theory. The principal invariants $i_1(L)$ and $i_N(L)$ have simple forms. Indeed one can readily show (How?) that
$$i_1(L) = \mathrm{tr}(L) \quad \text{and} \quad i_N(L) = \det(L).$$
Moreover, if the characteristic polynomial has the factorization
$$p_L(\lambda) = \prod_{j=1}^N (\lambda_j - \lambda),$$
then one also has that
$$i_1(L) = \mathrm{tr}(L) = \lambda_1 + \cdots + \lambda_N \quad \text{and} \quad i_N(L) = \det(L) = \lambda_1 \cdots \lambda_N.$$

5.2 Classes of Linear Transformations

An understanding of general linear transformations in Lin(V) can be gleaned from a study of special classes of transformations and representation theorems expressing general linear transformations as sums or products of transformations from the special classes. The discussion is framed within the setting of a finite dimensional, inner product vector space. The first class of transformations introduced are the projections.

5.2.1 Projections

Definition 28 A linear transformation P on $(V, \langle\cdot,\cdot\rangle)$ is called a Projection provided
$$P^2 = P. \qquad (55)$$

Remark 32 It follows readily from the definition (55) that P is a projection if and only if $I - P$ is a projection. Indeed, assume P is a projection; then
$$(I - P)^2 = I - 2P + P^2 = I - 2P + P = I - P,$$
which shows that $I - P$ is a projection. The reverse implication follows by a similar argument, since if $I - P$ is a projection, then one need merely write $P = I - (I - P)$ and apply the above calculation.

Remark 33 The geometry of projections is illuminated by the observation that there is a simple relationship between their range and nullspace. In particular, one can show easily from the definition (55) that
$$\mathrm{Ran}(P) = \mathrm{Nul}(I - P) \quad \text{and} \quad \mathrm{Nul}(P) = \mathrm{Ran}(I - P). \qquad (56)$$

For example, to prove that $\mathrm{Ran}(P) = \mathrm{Nul}(I - P)$, let $v \in \mathrm{Ran}(P)$. Then $v = Pu$ for some $u \in V$ and
$$(I - P)v = (I - P)Pu = Pu - P^2 u = Pu - Pu = 0.$$
It follows that $v \in \mathrm{Nul}(I - P)$ and hence that
$$\mathrm{Ran}(P) \subset \mathrm{Nul}(I - P). \qquad (57)$$
Conversely, if $v \in \mathrm{Nul}(I - P)$, then $(I - P)v = 0$, or equivalently $v = Pv$. Thus, $v \in \mathrm{Ran}(P)$, from which it follows that
$$\mathrm{Nul}(I - P) \subset \mathrm{Ran}(P). \qquad (58)$$
From (57), (58) one concludes that $\mathrm{Ran}(P) = \mathrm{Nul}(I - P)$ as required. The second equality in (56) follows similarly.

Remark 34 Another important observation concerning projections is that V is a direct sum of the subspaces Ran(P) and Nul(P),
$$V = \mathrm{Ran}(P) \oplus \mathrm{Nul}(P). \qquad (59)$$
To see (59), one merely notes that for any $v \in V$,
$$v = Pv + (I - P)v, \quad \text{with } Pv \in \mathrm{Ran}(P) \text{ and } (I - P)v \in \mathrm{Nul}(P).$$
This fact and (9) show (59). Because of (59) one often refers to a projection P as projecting V onto Ran(P) along Nul(P).

An important subclass of projections are the Perpendicular Projections.

Definition 29 A projection P is called a Perpendicular Projection provided $\mathrm{Ran}(P) \perp \mathrm{Nul}(P)$.

A non-obvious fact about perpendicular projections is the

Theorem 8 A projection P is a perpendicular projection if and only if P is self-adjoint, i.e. $P^* = P$.

Proof: First assume P is self-adjoint and let $v \in \mathrm{Ran}(P)$ and $w \in \mathrm{Nul}(P)$. Then $v = Pu$ for some $u \in V$ and
$$\langle v, w \rangle = \langle Pu, w \rangle = \langle u, P^* w \rangle = \langle u, Pw \rangle = \langle u, 0 \rangle = 0,$$
from which it follows that $v \perp w$ and hence that $\mathrm{Ran}(P) \perp \mathrm{Nul}(P)$.

Conversely, suppose P is a perpendicular projection. One must show that P is self-adjoint, i.e. that
$$\langle Pv, u \rangle = \langle v, Pu \rangle \quad \text{for all } v, u \in V. \qquad (60)$$
To that end, notice that
$$\langle Pv, u \rangle = \langle Pv, Pu + (I - P)u \rangle = \langle Pv, Pu \rangle + \langle Pv, (I - P)u \rangle = \langle Pv, Pu \rangle. \quad \text{(Why?)}$$
Since $\langle Pv, Pu \rangle$ is symmetric in v and u, the same must be true for $\langle Pv, u \rangle$, from which one concludes (60). $\square$

Question: Among the class of elementary tensor products, which (if any) are projections? Which are perpendicular projections?

Answer: An elementary tensor product $a \otimes b$ is a projection if and only if $\langle a, b \rangle = 1$. It is a perpendicular projection if and only if $b = a$ and $\|a\| = 1$. (Why?) Thus, if a is a unit vector, then $a \otimes a$ is the perpendicular projection onto the line spanned by a. More generally, if a and b are perpendicular unit vectors, then $a \otimes a + b \otimes b$ is a perpendicular projection onto the plane (2-dimensional subspace) spanned by a and b. (Exercise.) Generalizing further, if $a_1, \ldots, a_K$ are pairwise orthogonal unit vectors, then $a_1 \otimes a_1 + \cdots + a_K \otimes a_K$ is a perpendicular projection onto $\mathrm{span}(a_1, \ldots, a_K)$.

5.2.2 Diagonalizable Transformations

A linear transformation L on a finite dimensional vector space V is called Diagonalizable provided it has a matrix representation that is diagonal, that is, there is a basis B for V such that
$$[L]_B = \mathrm{diag}(l_1, \ldots, l_N). \qquad (61)$$
Notation: An $N \times N$ diagonal matrix will be denoted $A = \mathrm{diag}(a_1, \ldots, a_N)$ where $A = [a_{ij}]$ with $a_{11} = a_1, \ldots, a_{NN} = a_N$ and $a_{ij} = 0$ for $i \ne j$.

The fundamental result concerning diagonalizable transformations is

Theorem 9 A linear transformation L on V is diagonalizable if and only if V has a basis consisting entirely of eigenvectors of L.

Proof: Assume first that V has a basis $B = \{f_1, \ldots, f_N\}$ consisting of eigenvectors of L, that is, $Lf_j = \lambda_j f_j$ for $j = 1, \ldots, N$. Recall that the column N-tuples comprising the matrix $[L]_B$ are given by $[Lf_j]_B$, the coordinates with respect to B of the vectors $Lf_j$ for $j = 1, \ldots, N$. But
$$[Lf_1]_B = [\lambda_1 f_1]_B = \begin{pmatrix} \lambda_1 \\ 0 \\ \vdots \\ 0 \end{pmatrix}, \quad [Lf_2]_B = [\lambda_2 f_2]_B = \begin{pmatrix} 0 \\ \lambda_2 \\ \vdots \\ 0 \end{pmatrix}, \quad \text{etc.}$$
It now follows immediately that $[L]_B = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$. (Why?)

Conversely, suppose L is diagonalizable, that is, there exists a basis $B = \{f_1, \ldots, f_N\}$ for V such that $[L]_B = \mathrm{diag}(\lambda_1, \ldots, \lambda_N)$. But then for each $j = 1, \ldots, N$,
$$[Lf_j]_B = [L]_B[f_j]_B = \mathrm{diag}(\lambda_1, \ldots, \lambda_N) \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix} (j\text{th component} = 1) = [\lambda_j f_j]_B.$$
It follows that for each $j = 1, \ldots, N$, $Lf_j = \lambda_j f_j$ (Why?), and hence that $B = \{f_1, \ldots, f_N\}$ is a basis of eigenvectors of L. $\square$
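A numpy sketch (added) combining the Answer above with Theorem 9: a perpendicular projection built from orthonormal vectors, then diagonalized by a basis of its eigenvectors.

```python
import numpy as np

# Perpendicular projection onto span{a, b} for orthonormal a, b (see the Answer).
a = np.array([1.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0])
P = np.outer(a, a) + np.outer(b, b)

print(np.allclose(P @ P, P))    # projection: P^2 = P, eq. (55)
print(np.allclose(P.T, P))      # self-adjoint, hence perpendicular (Theorem 8)

# Theorem 9: an eigenvector basis diagonalizes P.
lam, S = np.linalg.eigh(P)      # eigendecomposition of a symmetric matrix
print(lam)                      # [0. 1. 1.]: eigenvalues of a perpendicular projection
print(np.allclose(np.linalg.inv(S) @ P @ S, np.diag(lam)))  # True: [P]_B is diagonal
```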

5.2.3 Orthogonal and Unitary Transformations

Let $(V, \langle\cdot,\cdot\rangle)$ be a finite dimensional, inner product vector space. The class of Orthogonal Transformations, in the case of a vector space over $\mathbb{R}$, and the class of Unitary Transformations, in the case of a vector space over $\mathbb{C}$, is defined by

Definition 30 A transformation $Q \in \mathrm{Lin}(V)$ is called Orthogonal (real case) or Unitary (complex case) provided
$$Q^{-1} = Q^* \quad \text{or equivalently} \quad I = Q^* Q. \qquad (62)$$
That is, the inverse of Q equals the adjoint of Q.

Remark 35 It follows immediately from (62) that orthogonal transformations preserve the inner product on V and hence leave lengths and angles invariant. Specifically, one has
$$\langle Qa, Qb \rangle = \langle a, Q^* Qb \rangle = \langle a, Q^{-1} Qb \rangle = \langle a, b \rangle. \qquad (63)$$
In particular, from (63) it follows that
$$\|Qa\|^2 = \langle Qa, Qa \rangle = \langle a, a \rangle = \|a\|^2,$$
showing that Q preserves lengths.

Remark 36 The class of orthogonal transformations will be denoted Orth. One easily shows that
$$\det(Q) = \pm 1. \quad \text{(Why?)}$$
It is useful to define the subclasses
$$\mathrm{Orth}^+ := \{Q \in \mathrm{Orth} \mid \det(Q) = 1\} \quad \text{and} \quad \mathrm{Orth}^- := \{Q \in \mathrm{Orth} \mid \det(Q) = -1\}.$$

Example 14 If V is the Euclidean plane $\mathbb{R}^2$, then $\mathrm{Orth}^+(\mathbb{R}^2)$ is the set of (rigid) Rotations, i.e. given $Q \in \mathrm{Orth}^+(\mathbb{R}^2)$, there exists a $0 \le \theta < 2\pi$ such that
$$Q = \begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}. \qquad (64)$$
Also, the transformation
$$R_M := \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$$
is in $\mathrm{Orth}^-(\mathbb{R}^2)$. (Why?) (How does $R_M$ act geometrically?) More generally,
$$\mathrm{Orth}^-(\mathbb{R}^2) = \{R \in \mathrm{Orth}(\mathbb{R}^2) \mid R = R_M Q \text{ for some } Q \in \mathrm{Orth}^+(\mathbb{R}^2)\}. \quad \text{(Why?)}$$

Example 15 If V is the Euclidean space $\mathbb{R}^3$, then $\mathrm{Orth}^+$ is called the set of Proper Orthogonal Transformations or the set of Rotations. To motivate calling $Q \in \mathrm{Orth}^+(\mathbb{R}^3)$ a rotation, one can prove that Q leaves some line through the origin (the axis of rotation) fixed and acts as a plane rotation of the form (64) on the plane orthogonal to that axis.
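A final numpy check (an added illustration) that the rotation (64) is orthogonal, has determinant +1, and preserves lengths as in (63):

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # eq. (64), a rotation in Orth+(R^2)

print(np.allclose(Q.T @ Q, np.eye(2)))            # Q* Q = I, eq. (62)
print(np.isclose(np.linalg.det(Q), 1.0))          # det Q = +1

v = np.array([3.0, 4.0])
print(np.isclose(np.linalg.norm(Q @ v), np.linalg.norm(v)))  # lengths preserved, (63)
```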


Math Linear Algebra II. 1. Inner Products and Norms Math 342 - Linear Algebra II Notes 1. Inner Products and Norms One knows from a basic introduction to vectors in R n Math 254 at OSU) that the length of a vector x = x 1 x 2... x n ) T R n, denoted x,

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

2. Every linear system with the same number of equations as unknowns has a unique solution.

2. Every linear system with the same number of equations as unknowns has a unique solution. 1. For matrices A, B, C, A + B = A + C if and only if A = B. 2. Every linear system with the same number of equations as unknowns has a unique solution. 3. Every linear system with the same number of equations

More information

Lecture notes: Applied linear algebra Part 1. Version 2

Lecture notes: Applied linear algebra Part 1. Version 2 Lecture notes: Applied linear algebra Part 1. Version 2 Michael Karow Berlin University of Technology karow@math.tu-berlin.de October 2, 2008 1 Notation, basic notions and facts 1.1 Subspaces, range and

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION

MATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether

More information

Linear Algebra Review. Vectors

Linear Algebra Review. Vectors Linear Algebra Review 9/4/7 Linear Algebra Review By Tim K. Marks UCSD Borrows heavily from: Jana Kosecka http://cs.gmu.edu/~kosecka/cs682.html Virginia de Sa (UCSD) Cogsci 8F Linear Algebra review Vectors

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm

University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm University of Colorado at Denver Mathematics Department Applied Linear Algebra Preliminary Exam With Solutions 16 January 2009, 10:00 am 2:00 pm Name: The proctor will let you read the following conditions

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information

Math Linear Algebra Final Exam Review Sheet

Math Linear Algebra Final Exam Review Sheet Math 15-1 Linear Algebra Final Exam Review Sheet Vector Operations Vector addition is a component-wise operation. Two vectors v and w may be added together as long as they contain the same number n of

More information

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook.

Math 443 Differential Geometry Spring Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Math 443 Differential Geometry Spring 2013 Handout 3: Bilinear and Quadratic Forms This handout should be read just before Chapter 4 of the textbook. Endomorphisms of a Vector Space This handout discusses

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

Introduction to Matrix Algebra

Introduction to Matrix Algebra Introduction to Matrix Algebra August 18, 2010 1 Vectors 1.1 Notations A p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the line. When p

More information

MATH 235. Final ANSWERS May 5, 2015

MATH 235. Final ANSWERS May 5, 2015 MATH 235 Final ANSWERS May 5, 25. ( points) Fix positive integers m, n and consider the vector space V of all m n matrices with entries in the real numbers R. (a) Find the dimension of V and prove your

More information

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

More information

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you

More information

Mathematical Methods wk 1: Vectors

Mathematical Methods wk 1: Vectors Mathematical Methods wk : Vectors John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Mathematical Methods wk 1: Vectors

Mathematical Methods wk 1: Vectors Mathematical Methods wk : Vectors John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS

MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS MA 1B ANALYTIC - HOMEWORK SET 7 SOLUTIONS 1. (7 pts)[apostol IV.8., 13, 14] (.) Let A be an n n matrix with characteristic polynomial f(λ). Prove (by induction) that the coefficient of λ n 1 in f(λ) is

More information

SUMMARY OF MATH 1600

SUMMARY OF MATH 1600 SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You

More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

Notes on Linear Algebra

Notes on Linear Algebra 1 Notes on Linear Algebra Jean Walrand August 2005 I INTRODUCTION Linear Algebra is the theory of linear transformations Applications abound in estimation control and Markov chains You should be familiar

More information

235 Final exam review questions

235 Final exam review questions 5 Final exam review questions Paul Hacking December 4, 0 () Let A be an n n matrix and T : R n R n, T (x) = Ax the linear transformation with matrix A. What does it mean to say that a vector v R n is an

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Boolean Inner-Product Spaces and Boolean Matrices

Boolean Inner-Product Spaces and Boolean Matrices Boolean Inner-Product Spaces and Boolean Matrices Stan Gudder Department of Mathematics, University of Denver, Denver CO 80208 Frédéric Latrémolière Department of Mathematics, University of Denver, Denver

More information

LINEAR ALGEBRA MICHAEL PENKAVA

LINEAR ALGEBRA MICHAEL PENKAVA LINEAR ALGEBRA MICHAEL PENKAVA 1. Linear Maps Definition 1.1. If V and W are vector spaces over the same field K, then a map λ : V W is called a linear map if it satisfies the two conditions below: (1)

More information

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016 Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

The Spectral Theorem for normal linear maps

The Spectral Theorem for normal linear maps MAT067 University of California, Davis Winter 2007 The Spectral Theorem for normal linear maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (March 14, 2007) In this section we come back to the question

More information

Basic Elements of Linear Algebra

Basic Elements of Linear Algebra A Basic Review of Linear Algebra Nick West nickwest@stanfordedu September 16, 2010 Part I Basic Elements of Linear Algebra Although the subject of linear algebra is much broader than just vectors and matrices,

More information

CS 143 Linear Algebra Review

CS 143 Linear Algebra Review CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see

More information

Lecture 7: Positive Semidefinite Matrices

Lecture 7: Positive Semidefinite Matrices Lecture 7: Positive Semidefinite Matrices Rajat Mittal IIT Kanpur The main aim of this lecture note is to prepare your background for semidefinite programming. We have already seen some linear algebra.

More information

Math 302 Outcome Statements Winter 2013

Math 302 Outcome Statements Winter 2013 Math 302 Outcome Statements Winter 2013 1 Rectangular Space Coordinates; Vectors in the Three-Dimensional Space (a) Cartesian coordinates of a point (b) sphere (c) symmetry about a point, a line, and a

More information

Online Exercises for Linear Algebra XM511

Online Exercises for Linear Algebra XM511 This document lists the online exercises for XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Lecture 02 ( 1.1) Online Exercises for Linear Algebra XM511 1) The matrix [3 2

More information

Inner Product Spaces

Inner Product Spaces Inner Product Spaces Introduction Recall in the lecture on vector spaces that geometric vectors (i.e. vectors in two and three-dimensional Cartesian space have the properties of addition, subtraction,

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

(v, w) = arccos( < v, w >

(v, w) = arccos( < v, w > MA322 Sathaye Notes on Inner Products Notes on Chapter 6 Inner product. Given a real vector space V, an inner product is defined to be a bilinear map F : V V R such that the following holds: For all v

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

PRACTICE PROBLEMS FOR THE FINAL

PRACTICE PROBLEMS FOR THE FINAL PRACTICE PROBLEMS FOR THE FINAL Here are a slew of practice problems for the final culled from old exams:. Let P be the vector space of polynomials of degree at most. Let B = {, (t ), t + t }. (a) Show

More information

ELEMENTS OF MATRIX ALGEBRA

ELEMENTS OF MATRIX ALGEBRA ELEMENTS OF MATRIX ALGEBRA CHUNG-MING KUAN Department of Finance National Taiwan University September 09, 2009 c Chung-Ming Kuan, 1996, 2001, 2009 E-mail: ckuan@ntuedutw; URL: homepagentuedutw/ ckuan CONTENTS

More information

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x

BASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,

More information

MATHEMATICS 217 NOTES

MATHEMATICS 217 NOTES MATHEMATICS 27 NOTES PART I THE JORDAN CANONICAL FORM The characteristic polynomial of an n n matrix A is the polynomial χ A (λ) = det(λi A), a monic polynomial of degree n; a monic polynomial in the variable

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

Elementary maths for GMT

Elementary maths for GMT Elementary maths for GMT Linear Algebra Part 2: Matrices, Elimination and Determinant m n matrices The system of m linear equations in n variables x 1, x 2,, x n a 11 x 1 + a 12 x 2 + + a 1n x n = b 1

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

2 Determinants The Determinant of a Matrix Properties of Determinants Cramer s Rule Vector Spaces 17

2 Determinants The Determinant of a Matrix Properties of Determinants Cramer s Rule Vector Spaces 17 Contents 1 Matrices and Systems of Equations 2 11 Systems of Linear Equations 2 12 Row Echelon Form 3 13 Matrix Algebra 5 14 Elementary Matrices 8 15 Partitioned Matrices 10 2 Determinants 12 21 The Determinant

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms

08a. Operators on Hilbert spaces. 1. Boundedness, continuity, operator norms (February 24, 2017) 08a. Operators on Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/real/notes 2016-17/08a-ops

More information

LINEAR ALGEBRA QUESTION BANK

LINEAR ALGEBRA QUESTION BANK LINEAR ALGEBRA QUESTION BANK () ( points total) Circle True or False: TRUE / FALSE: If A is any n n matrix, and I n is the n n identity matrix, then I n A = AI n = A. TRUE / FALSE: If A, B are n n matrices,

More information

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS

LINEAR ALGEBRA 1, 2012-I PARTIAL EXAM 3 SOLUTIONS TO PRACTICE PROBLEMS LINEAR ALGEBRA, -I PARTIAL EXAM SOLUTIONS TO PRACTICE PROBLEMS Problem (a) For each of the two matrices below, (i) determine whether it is diagonalizable, (ii) determine whether it is orthogonally diagonalizable,

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

Eigenvalues and Eigenvectors A =

Eigenvalues and Eigenvectors A = Eigenvalues and Eigenvectors Definition 0 Let A R n n be an n n real matrix A number λ R is a real eigenvalue of A if there exists a nonzero vector v R n such that A v = λ v The vector v is called an eigenvector

More information

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2 Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

LINEAR ALGEBRA SUMMARY SHEET.

LINEAR ALGEBRA SUMMARY SHEET. LINEAR ALGEBRA SUMMARY SHEET RADON ROSBOROUGH https://intuitiveexplanationscom/linear-algebra-summary-sheet/ This document is a concise collection of many of the important theorems of linear algebra, organized

More information

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination

Math 102, Winter Final Exam Review. Chapter 1. Matrices and Gaussian Elimination Math 0, Winter 07 Final Exam Review Chapter. Matrices and Gaussian Elimination { x + x =,. Different forms of a system of linear equations. Example: The x + 4x = 4. [ ] [ ] [ ] vector form (or the column

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

5 Compact linear operators

5 Compact linear operators 5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Linear Algebra- Final Exam Review

Linear Algebra- Final Exam Review Linear Algebra- Final Exam Review. Let A be invertible. Show that, if v, v, v 3 are linearly independent vectors, so are Av, Av, Av 3. NOTE: It should be clear from your answer that you know the definition.

More information

Chapter 6: Orthogonality

Chapter 6: Orthogonality Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

More information

Matrices A brief introduction

Matrices A brief introduction Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino Semester 1, 2014-15 B. Bona (DAUIN) Matrices Semester 1, 2014-15 1 / 41 Definitions Definition A matrix is a set of N real or complex

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45

homogeneous 71 hyperplane 10 hyperplane 34 hyperplane 69 identity map 171 identity map 186 identity map 206 identity matrix 110 identity matrix 45 address 12 adjoint matrix 118 alternating 112 alternating 203 angle 159 angle 33 angle 60 area 120 associative 180 augmented matrix 11 axes 5 Axiom of Choice 153 basis 178 basis 210 basis 74 basis test

More information