MA20216: Algebra 2A. Notes by Fran Burstall. Corrections by:
Corrections by: Callum Kemp, Carlos Galeano Rios, Kate Powell, Tobias Beith, Krunoslav Lehman Pavasovic, Dan Corbie, Phaidra Anastasiadou, Louise Hannon, Vlad Brebeanu, Lauren Godfrey, Elizabeth Crowley, James Green, Reuben Russell, Ross Trigoll, Emerald Dilworth, George Milton, Caitlin Ray, Liberty Curtis, Harry Todd, Daniel Ng, Papageorgiou Dimosthenis, Kerry Finch, Daniel Dodd, Solla Tapping, Alex Walker, Charlie Hadfield, Sam Cortinhas, Rob Brown.
Contents

1 Linear algebra: concepts and examples
  1.1 Vector spaces
  1.2 Subspaces
  1.3 Bases
    1.3.1 Standard bases
    1.3.2 Useful facts
  1.4 Linear maps
    1.4.1 Vector spaces of linear maps
    1.4.2 Linear maps and matrices
    1.4.3 Extension by linearity
    1.4.4 The rank-nullity theorem

2 Sums and quotients
  2.1 Sums of subspaces
  2.2 Direct sums
    2.2.1 Direct sums and projections
    2.2.2 Induction from two summands
    2.2.3 Direct sums and bases
    2.2.4 Complements
  2.3 Quotients

3 Inner product spaces
  3.1 Inner products
    3.1.1 Definition and examples
    3.1.2 Cauchy–Schwarz inequality
  3.2 Orthogonality
    Orthonormal bases
    Orthogonal complements and orthogonal projection

4 Linear operators on inner product spaces
  4.1 Linear operators and their adjoints
    Linear operators and matrices
    Adjoints
    Linear isometries
  The spectral theorem
    Eigenvalues and eigenvectors
    Invariant subspaces and adjoints
    The spectral theorem for normal operators
    The spectral theorem for real self-adjoint operators
    The spectral theorem for symmetric and Hermitian matrices
  Singular value decomposition

5 Duality
  5.1 Dual spaces
  5.2 Solution sets and annihilators
  5.3 Transposes

6 Bilinearity
  6.1 Bilinear maps
  6.2 Bilinear forms and quadratic forms
    Bilinear forms and matrices
    Symmetric bilinear forms
    Quadratic forms
  6.3 Classification of symmetric bilinear and quadratic forms
Chapter 1: Linear algebra: concepts and examples

Let us warm up by revising some of the key ideas from Algebra 1B. Along the way, we will see some new examples and prove a couple of new results.

1.1 Vector spaces

Recall from Algebra 1B, 2.1:

Definition. A vector space V over a field F is a set V with two operations:

addition V × V → V : (v, w) ↦ v + w, with respect to which V is an abelian group:
- v + w = w + v, for all v, w ∈ V;
- u + (v + w) = (u + v) + w, for all u, v, w ∈ V;
- there is a zero element 0 ∈ V for which v + 0 = v = 0 + v, for all v ∈ V;
- each element v ∈ V has an additive inverse −v ∈ V for which v + (−v) = 0 = (−v) + v;

scalar multiplication F × V → V : (λ, v) ↦ λv such that
- (λ + µ)v = λv + µv, for all v ∈ V, λ, µ ∈ F;
- λ(v + w) = λv + λw, for all v, w ∈ V, λ ∈ F;
- (λµ)v = λ(µv), for all v ∈ V, λ, µ ∈ F;
- 1v = v, for all v ∈ V.

We call the elements of F scalars and those of V vectors.

Examples.
1. Take V = F, the field itself, with addition and scalar multiplication the field addition and multiplication.
2. F^n, the n-fold Cartesian product of F with itself, with component-wise addition and scalar multiplication:

(λ1, ..., λn) + (µ1, ..., µn) := (λ1 + µ1, ..., λn + µn)
λ(λ1, ..., λn) := (λλ1, ..., λλn).
3. Let M_{m×n}(F) denote the set of m by n matrices (thus m rows and n columns) with entries in F. This is a vector space under entry-wise addition and scalar multiplication. Special cases are the vector spaces of column vectors M_{n×1}(F) and row vectors M_{1×n}(F). In computations, we often identify F^n with M_{n×1}(F) by associating x = (x1, ..., xn) ∈ F^n with the column vector with entries x1, ..., xn.

4. Here is a very general example: let I be any set and V a vector space. Recall that V^I denotes the set {f : I → V} of all maps from I to V. I claim that V^I is a vector space under pointwise addition and scalar multiplication. That is, for f, g : I → V and λ ∈ F, we define

(f + g)(i) := f(i) + g(i)
(λf)(i) := λ(f(i)),

for all i ∈ I. The zero element is just the constant zero function: 0(i) := 0, and the additive inverses are defined pointwise also: (−f)(i) := −(f(i)).

Exercise.¹ Prove the claim! That is, show that V^I is a vector space under pointwise addition and scalar multiplication.

Remark. For suitable I, this last example captures many familiar vector spaces. For example:
- We identify F^n with F^{1,...,n} by associating (x1, ..., xn) ∈ F^n with the map (i ↦ xi).
- Similarly, we identify M_{m×n}(F) with F^{{1,...,m}×{1,...,n}} by associating the matrix A with the map (i, j) ↦ A_ij.
- R^N is the set of real sequences {(a_n)_{n∈N} : a_n ∈ R} that played such a starring role in Analysis 1.

1.2 Subspaces

Definition. A vector (or linear) subspace of a vector space V over F is a non-empty subset U ⊆ V which is closed under addition and scalar multiplication: whenever u, u1, u2 ∈ U and λ ∈ F, then u1 + u2 ∈ U and λu ∈ U. In this case, we write U ≤ V. Say that U is trivial if U = {0} and proper if U ≠ V.

Of course, U is now a vector space in its own right using the addition and scalar multiplication of V.

Exercise.² A non-empty subset U ⊆ V is a subspace if and only if, for all u1, u2 ∈ U and λ ∈ F, u1 + λu2 ∈ U. This gives an efficient recipe for checking when a subset is a subspace.
¹ Question 4 on sheet 1.
² Question 1 on sheet 1.
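As a quick computational illustration of the one-step subspace test (my own sketch, not part of the original notes): take U = {(x, 2x) : x ∈ R} ≤ R² and spot-check that u1 + λu2 stays in U for a few samples. A finite check is of course only evidence, not a proof.

```python
# One-step subspace test, illustrated for U = {(x, 2x)} in R^2.
def in_U(v):
    return v[1] == 2 * v[0]

# a few (u1, u2, lambda) samples with u1, u2 in U
samples = [((1, 2), (3, 6), 5), ((-2, -4), (0, 0), 7)]
ok = all(in_U((u1[0] + lam * u2[0], u1[1] + lam * u2[1]))
         for u1, u2, lam in samples)
print(ok)  # True
```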
Examples. A good way to see that something is a vector space is to see that it is a subspace of some V^I. That way, there is no need to verify all the tedious axioms (associativity, distributivity and so on).

1. The set c := {real convergent sequences} ≤ R^N and so is a vector space. This is part of the content of the Algebra of Limits Theorem in Analysis 1.
2. Let [a, b] ⊆ R be an interval and set C⁰[a, b] := {f : [a, b] → R | f is continuous}, the set of continuous functions. Then C⁰[a, b] ≤ R^[a,b]. This is most of the Algebra of Continuous Functions Theorem from Analysis 1.

1.3 Bases

Definitions. Let v1, ..., vn be a list of vectors in a vector space V.
1. The span of v1, ..., vn is span{v1, ..., vn} := {λ1 v1 + ... + λn vn | λi ∈ F, 1 ≤ i ≤ n} ≤ V.
2. v1, ..., vn span V (or are a spanning list for V) if span{v1, ..., vn} = V.
3. v1, ..., vn are linearly independent if, whenever λ1 v1 + ... + λn vn = 0, then each λi = 0, 1 ≤ i ≤ n, and linearly dependent otherwise.
4. v1, ..., vn is a basis for V if they are linearly independent and span V.

Definition. A vector space is finite-dimensional if it admits a finite list of vectors as basis and infinite-dimensional otherwise. If V is finite-dimensional, the dimension of V, dim V, is the number of vectors in a (any) basis of V.

Terminology. Let v1, ..., vn be a list of vectors.
1. A vector of the form λ1 v1 + ... + λn vn is called a linear combination of the vi.
2. An equation of the form λ1 v1 + ... + λn vn = 0 is called a linear relation on the vi.

Recall:

Proposition 1.1 (Algebra 1B, Chapter 2, Proposition 4). v1, ..., vn is a basis for V if and only if any v ∈ V can be written in the form

v = λ1 v1 + ... + λn vn   (1.1)

for unique λ1, ..., λn ∈ F. In this case, (λ1, ..., λn) is called the coordinate vector of v with respect to v1, ..., vn.

1.3.1 Standard bases

In general, finite-dimensional vector spaces have many bases and there is no good reason to prefer any particular one. However, some lucky vector spaces come equipped with a natural basis.

Proposition 1.2.
For I a set and i ∈ I, define e_i ∈ F^I by

e_i(j) = 1 if i = j, and e_i(j) = 0 if i ≠ j,

for all j ∈ I. If I is finite then (e_i)_{i∈I} is a basis, called the standard basis, of F^I. In particular, dim F^I = |I|.
Proof. For f ∈ F^I, we observe that f = Σ_{i∈I} f(i) e_i. Indeed, for j ∈ I,

(Σ_{i∈I} f(i) e_i)(j) = Σ_{i∈I} f(i) e_i(j) = Σ_{i≠j} f(i)·0 + f(j)·1 = f(j).

In particular, the (e_i)_{i∈I} span. For linear independence, suppose that Σ_{i∈I} λ_i e_i = 0 and evaluate both sides at j ∈ I to get λ_j = 0. □

Examples. Identify F^n with F^{1,...,n} and then e_i = (0, ..., 1, ..., 0) with a single 1 in the i-th place. Similarly, the vector space of column vectors has a standard basis with e_i the column vector with a single 1 in the i-th row and zeros elsewhere. Finally, identifying M_{m×n}(F) with F^{{1,...,m}×{1,...,n}} yields the standard basis (e_{(i,j)})_{i,j} of M_{m×n}(F) where e_{(i,j)} differs from the zero matrix by a single 1 in the i-th row and j-th column.

1.3.2 Useful facts

A very useful fact about bases that we shall use many times was proved in Algebra 1B:

Proposition 1.3 (Algebra 1B, Chapter 3, Theorem 6(b)). Any linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis.

Here is another helpful result:

Lemma 1.4 (Algebra 1B, Chapter 3, Theorem 5). Let V be a finite-dimensional vector space and U ≤ V. Then dim U ≤ dim V with equality if and only if U = V.

1.4 Linear maps

Definitions. A map φ : V → W of vector spaces over F is a linear map (or, in older books, linear transformation) if

φ(v + w) = φ(v) + φ(w)
φ(λv) = λφ(v),

for all v, w ∈ V, λ ∈ F.

The kernel of φ is ker φ := {v ∈ V | φ(v) = 0} ≤ V. The image of φ is im φ := {φ(v) | v ∈ V} ≤ W.
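The expansion f = Σ_{i∈I} f(i) e_i from the proof of Proposition 1.2 can be checked concretely. Here is a small Python sketch (my own illustration, not from the notes), encoding elements of F^I as dicts {i : f(i)} for a finite I:

```python
# Elements of F^I as dicts; e(i) is the standard basis vector e_i.
I = [1, 2, 3]

def e(i):
    return {j: (1 if i == j else 0) for j in I}

def add(f, g):
    return {j: f[j] + g[j] for j in I}

def scale(lam, f):
    return {j: lam * f[j] for j in I}

f = {1: 5, 2: -2, 3: 7}
# rebuild f as sum_i f(i) e_i, exactly as in the proof of Proposition 1.2
expansion = {j: 0 for j in I}
for i in I:
    expansion = add(expansion, scale(f[i], e(i)))
print(expansion == f)  # True
```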
Remark. φ is linear if and only if φ(v + λw) = φ(v) + λφ(w), for all v, w ∈ V, λ ∈ F, which has the virtue of being only one thing to prove.

Examples.
1. A ∈ M_{m×n}(F) determines a linear map φ_A : F^n → F^m by φ_A(x) = y where, for 1 ≤ i ≤ m, y_i = Σ_{j=1}^n A_ij x_j. Otherwise said, y is given by matrix multiplication: y = Ax.
2. For any vector space V, the identity map id_V : V → V is linear.
3. If φ : V → W and ψ : W → U are linear then so is ψ ∘ φ : V → U.
4. Recall that c is the vector space of convergent sequences. The map lim : c → R, (a_n)_{n∈N} ↦ lim_{n→∞} a_n, is linear thanks to the Algebra of Limits Theorem in Analysis 1.
5. Integration ∫_a^b : C⁰[a, b] → R, f ↦ ∫_a^b f, is also linear.

Definition. A linear map φ : V → W is a (linear) isomorphism if there is a linear map ψ : W → V such that ψ ∘ φ = id_V and φ ∘ ψ = id_W. If there is an isomorphism V → W, say that V and W are isomorphic and write V ≅ W.

In Algebra 1B, we saw:

Lemma 1.5. φ : V → W is an isomorphism if and only if φ is a linear bijection (and then ψ = φ⁻¹).

1.4.1 Vector spaces of linear maps

Notation. For vector spaces V, W over F, denote by L_F(V, W) (or simply L(V, W)) the set {φ : V → W | φ is linear} of linear maps from V to W.

Theorem 1.6 (Linearity is a linear condition). L(V, W) is a vector space under pointwise addition and scalar multiplication. Otherwise said, L(V, W) ≤ W^V.

Proof. It is enough to show that L(V, W) is a vector subspace of W^V, that is, is non-empty and closed under addition and scalar multiplication.

First observe that the zero map 0 : v ↦ 0_W is linear: 0(v + λw) = 0 = 0 + λ0 = 0(v) + λ0(w). In particular, L(V, W) is non-empty.

Now let φ, ψ ∈ L(V, W) and show that φ + ψ is linear:

(φ + ψ)(v + λw) = φ(v + λw) + ψ(v + λw)
               = φ(v) + λφ(w) + ψ(v) + λψ(w)
               = (φ(v) + ψ(v)) + λ(φ(w) + ψ(w))
               = (φ + ψ)(v) + λ(φ + ψ)(w),

for all v, w ∈ V, λ ∈ F. Here the first and last equalities are just the definition of pointwise addition while the middle equalities come from the linearity of φ, ψ and the vector space axioms of W.
Similarly, it is a simple exercise to see that if µ ∈ F and φ ∈ L(V, W) then µφ is also linear. □
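The map φ_A of Example 1 can be exercised numerically. The sketch below (an editorial illustration, not from the notes) checks the one-condition linearity test for φ_A and, anticipating the dictionary between maps and matrices, that composing two such maps matches the matrix product:

```python
# phi_A(x) = Ax via explicit sums-of-products.
def mat_vec(A, x):
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def mat_mul(B, A):
    return [[sum(B[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(B))]

A = [[1, 2, 0],
     [0, 1, 3]]                  # phi_A : F^3 -> F^2
B = [[1, 1],
     [0, 2]]                     # phi_B : F^2 -> F^2
v, w, lam = [1, 0, 2], [3, 1, 1], 5

# linearity: phi_A(v + lam*w) == phi_A(v) + lam*phi_A(w)
lhs = mat_vec(A, [a + lam * b for a, b in zip(v, w)])
rhs = [a + lam * b for a, b in zip(mat_vec(A, v), mat_vec(A, w))]
print(lhs == rhs)  # True

# composition matches matrix multiplication: phi_B o phi_A = phi_{BA}
print(mat_vec(B, mat_vec(A, v)) == mat_vec(mat_mul(B, A), v))  # True
```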
1.4.2 Linear maps and matrices

Recall from Algebra 1B, 2.5:

Definition. Let V, W be finite-dimensional vector spaces over F with bases B : v1, ..., vn and B′ : w1, ..., wm respectively. Let φ ∈ L(V, W). The matrix of φ with respect to B, B′ is the matrix A = (A_ij) ∈ M_{m×n}(F) defined by

φ(v_j) = Σ_{i=1}^m A_ij w_i,   (1.2)

for all 1 ≤ j ≤ n. In the special case where V = W and B = B′, we call A the matrix of φ with respect to B.

Thus the recipe for computing A is: expand φ(v_j) in terms of w1, ..., wm to get the j-th column of A. Equivalently, φ(x1 v1 + ... + xn vn) = y1 w1 + ... + ym wm where y = Ax.

There is a fancy way to say all this: recall that a basis B : v1, ..., vn of V gives rise to a linear isomorphism φ_B : F^n → V via

φ_B(λ1, ..., λn) = Σ_{i=1}^n λ_i v_i.   (1.3)

Now the relation between φ and A is that φ = φ_{B′} ∘ φ_A ∘ φ_B⁻¹ or, equivalently, φ_{B′} ∘ φ_A = φ ∘ φ_B, so that the following diagram commutes:

        φ
   V ------→ W
   ↑          ↑
  φ_B        φ_{B′}
   |          |
  F^n -----→ F^m
        φ_A

(The assertion that such a diagram commutes is simply that the two maps one builds by following the arrows in two different ways coincide. However, the diagram also helps us keep track of where the various maps go!)

The map φ ↦ A is a linear isomorphism L(V, W) ≅ M_{m×n}(F) which also plays well with composition and matrix multiplication: if U is a third vector space with basis B″ and ψ ∈ L(W, U) has matrix B with respect to B′, B″ then ψ ∘ φ has matrix BA with respect to B, B″. This gives us a compelling dictionary between linear maps and matrices.

1.4.3 Extension by linearity

A linear map of a finite-dimensional vector space is completely determined by its action on a basis. More precisely:

Proposition 1.7 (Extension by linearity). Let V, W be vector spaces over F. Let v1, ..., vn be a basis of V and w1, ..., wn any vectors in W. Then there is a unique φ ∈ L(V, W) such that

φ(v_i) = w_i, 1 ≤ i ≤ n.   (1.4)
Proof. We need to prove that such a φ exists and that there is only one. We prove existence first.

Let v ∈ V. By Proposition 1.1, we know there are unique λ1, ..., λn ∈ F for which v = λ1 v1 + ... + λn vn, and so we define φ(v) to be the only thing it could be:

φ(v) := λ1 w1 + ... + λn wn.

Let us show that this φ does the job. First, with λi = 1 and λj = 0 for j ≠ i, we see that φ(vi) = Σ_{j≠i} 0·wj + 1·wi = wi, so that (1.4) holds. Now let us see that φ is linear: let v, w ∈ V with

v = λ1 v1 + ... + λn vn
w = µ1 v1 + ... + µn vn.

Then, for λ ∈ F,

v + λw = (λ1 + λµ1) v1 + ... + (λn + λµn) vn,

whence

φ(v + λw) = (λ1 + λµ1) w1 + ... + (λn + λµn) wn
          = (λ1 w1 + ... + λn wn) + λ(µ1 w1 + ... + µn wn)
          = φ(v) + λφ(w).

For uniqueness, suppose that φ, φ′ ∈ L(V, W) both satisfy (1.4). Let v ∈ V and write v = λ1 v1 + ... + λn vn. Then

φ(v) = λ1 φ(v1) + ... + λn φ(vn) = λ1 w1 + ... + λn wn = λ1 φ′(v1) + ... + λn φ′(vn) = φ′(v),

where the first and last equalities come from the linearity of φ, φ′ and the middle two from (1.4) for first φ and then φ′. We conclude that φ = φ′ and we are done. □

Remark. In the context of Proposition 1.7, φ is an isomorphism if and only if w1, ..., wn is a basis for W (exercise³!).

1.4.4 The rank-nullity theorem

Easily the most important result in Algebra 1B is the famous Rank-nullity theorem:

Theorem 1.8 (Rank-nullity). Let φ : V → W be linear with V finite-dimensional. Then

dim im φ + dim ker φ = dim V.

Using this, together with the observation that φ is injective if and only if ker φ = {0}, we saw in Algebra 1B:

Proposition 1.9. Let φ : V → W be linear with V, W finite-dimensional vector spaces of the same dimension: dim V = dim W. Then the following are equivalent:

³ This is question 2 on exercise sheet 2.
1. φ is injective.
2. φ is surjective.
3. φ is an isomorphism.

Remark. Proposition 1.9 is flat-out false for infinite-dimensional V, W. For example: let S : R^N → R^N be the shift operator:

S((a0, a1, ...)) := (a1, a2, ...).

We readily check that:
- S is linear;
- S surjects;
- S is not injective. For example: S((1, 0, 0, ...)) = 0.
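Rank-nullity can be watched in action for a map φ_A. The following sketch (my own illustration, not part of the notes) computes rank A = dim im φ_A by Gaussian elimination over the rationals and exhibits dim V − rank A = 2 independent kernel vectors, found by back-substitution:

```python
from fractions import Fraction

def rank(A):
    # row-reduce an exact copy of A and count the pivots
    M = [[Fraction(x) for x in row] for row in A]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

A = [[1, 2, 3, 4],
     [2, 4, 6, 8],      # twice the first row
     [0, 1, 1, 0]]      # phi_A : Q^4 -> Q^3
rk = rank(A)
print(rk)  # 2, so rank-nullity predicts dim ker phi_A = 4 - 2 = 2

kernel = [(-1, -1, 1, 0), (-4, 0, 0, 1)]  # two independent kernel vectors
for x in kernel:
    print([sum(row[j] * x[j] for j in range(4)) for row in A])  # [0, 0, 0]
```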
Chapter 2: Sums and quotients

We will discuss various ways of building new vector spaces out of old ones.

Convention. In this chapter, all vector spaces are over the same field F unless we say otherwise.

2.1 Sums of subspaces

Definition. Let V1, ..., Vk ≤ V. The sum V1 + ... + Vk is the set

V1 + ... + Vk := {v1 + ... + vk | vi ∈ Vi, 1 ≤ i ≤ k}.

V1 + ... + Vk is the smallest subspace of V that contains each Vi. More precisely:

Proposition 2.1. Let V1, ..., Vk ≤ V. Then
(1) V1 + ... + Vk ≤ V.
(2) If W ≤ V and V1, ..., Vk ≤ W then V1, ..., Vk ≤ V1 + ... + Vk ≤ W.

Proof. It suffices to prove (2) since (1) then follows by taking W = V.

For (2), first note that V1 + ... + Vk is a subset of W: if vi ∈ Vi then vi ∈ W, so that v1 + ... + vk ∈ W since W is closed under addition.

Now observe that each Vi ⊆ V1 + ... + Vk since we can write any vi ∈ Vi as 0 + ... + vi + ... + 0 ∈ V1 + ... + Vk. In particular, 0 ∈ V1 + ... + Vk.

Finally, we show that V1 + ... + Vk is a subspace. If v1 + ... + vk, w1 + ... + wk ∈ V1 + ... + Vk, with vi, wi ∈ Vi for all i, and λ ∈ F, then

(v1 + ... + vk) + λ(w1 + ... + wk) = (v1 + λw1) + ... + (vk + λwk) ∈ V1 + ... + Vk

since each vi + λwi ∈ Vi. □

Remark. The union ⋃_{i=1}^k Vi is almost never a subspace of V, so we use sums as a substitute for unions in Linear Algebra.

2.2 Direct sums

Let V1, ..., Vk ≤ V. Any v ∈ V1 + ... + Vk can be written v = v1 + ... + vk, with each vi ∈ Vi. We distinguish the case where the vi are unique.
Definition. Let V1, ..., Vk ≤ V. The sum V1 + ... + Vk is direct if each v ∈ V1 + ... + Vk can be written v = v1 + ... + vk in only one way, that is, for unique vi ∈ Vi, 1 ≤ i ≤ k. In this case, we write V1 ⊕ ... ⊕ Vk instead of V1 + ... + Vk.

Figure 2.1: R² = V1 ⊕ V2

Example. Define V1, V2 ≤ F³ by

V1 = {(x1, x2, 0) | x1, x2 ∈ F}
V2 = {(0, 0, x3) | x3 ∈ F}.

Then F³ = V1 ⊕ V2.

When is a sum direct? We consider the case of two summands first, where there is a very simple answer.

Proposition 2.2. Let V1, V2 ≤ V. Then V1 + V2 is direct if and only if V1 ∩ V2 = {0}.

Proof. First suppose that V1 + V2 is direct and let v ∈ V1 ∩ V2. Then we can write v in two ways: v = v + 0 = 0 + v, with v viewed first as an element of V1 and then as an element of V2. The uniqueness of the decomposition now forces v = 0.

For the converse, suppose that V1 ∩ V2 = {0} and that v ∈ V1 + V2 can be written v = v1 + v2 = w1 + w2 with vi, wi ∈ Vi, i = 1, 2. Then

v1 − w1 = w2 − v2

with the left hand side in V1, the right in V2, and so both in V1 ∩ V2, from which we immediately get vi = wi, i = 1, 2, so that V1 + V2 is direct. □

The special case V = V1 + V2 is important and deserves some terminology:

Definition. Let V1, V2 ≤ V. V is the (internal) direct sum of V1 and V2 if V = V1 ⊕ V2. In this case, say that V2 is a complement of V1 (and V1 is a complement of V2).

Warning. This notion of the complement of the subspace V1 has nothing at all to do with the set-theoretic complement V \ V1, which is never a subspace.

Remarks.
1. From Proposition 2.2, we see that V = V1 ⊕ V2 if and only if V = V1 + V2 and V1 ∩ V2 = {0}. Many people take these latter properties as the definition of internal direct sum.
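Directness can be seen concretely in a toy model over the two-element field F₂ (my own illustration, not from the notes): the sum V1 + V2 is direct exactly when decompositions are unique, which for finite subspaces means |V1 + V2| = |V1|·|V2|.

```python
# Subspaces of F_2^3, listed exhaustively; addition is mod 2.
def add(v, w):
    return tuple((a + b) % 2 for a, b in zip(v, w))

V1 = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]  # span{e1, e2}
V2 = [(0, 0, 0), (0, 0, 1)]                        # span{e3}
sums = {add(v, w) for v in V1 for w in V2}
print(len(sums) == len(V1) * len(V2))  # True: V1 + V2 is direct

V3 = [(0, 0, 0), (1, 0, 0)]                        # span{e1}, meets V1
sums2 = {add(v, w) for v in V1 for w in V3}
print(len(sums2) == len(V1) * len(V3))  # False: V1 and V3 intersect
```

This matches Proposition 2.2: V1 ∩ V2 = {0} in the first case, while V1 ∩ V3 ≠ {0} in the second.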
Figure 2.2: R³ as a direct sum of a line and a plane

2. There is a related notion of external direct sum that we will not discuss.

When there are many summands, the condition that a sum be direct is a little more involved:

Proposition 2.3. Let V1, ..., Vk ≤ V, k ≥ 2. Then the sum V1 + ... + Vk is direct if and only if, for each 1 ≤ i ≤ k, Vi ∩ (Σ_{j≠i} Vj) = {0}.

Proof. This is an exercise in imitating the proof of Proposition 2.2. □

Remark. This is a much stronger condition than simply asking that each Vi ∩ Vj = {0}, for i ≠ j.

2.2.1 Direct sums and projections

Definition. Let V be a vector space. A linear map π : V → V is a projection if π ∘ π = π.

Exercise.¹ If π is a projection, V = ker π ⊕ im π.

In fact, all direct sums arise this way:

Proposition 2.4. Let V1, V2 ≤ V with V = V1 ⊕ V2. Then there are projections π1, π2 : V → V such that:
(a) im πi = Vi, i = 1, 2;
(b) ker π1 = V2, ker π2 = V1;
(c) v = π1(v) + π2(v), for all v ∈ V. Otherwise said, id_V = π1 + π2.

Proof. Item (c) tells us how to define the πi so we do this first: for v ∈ V, we have that v = v1 + v2 for unique vi ∈ Vi, i = 1, 2. So we define

πi(v) := vi,

for i = 1, 2.

Our first task is to prove that the πi are linear: for v, w ∈ V and λ ∈ F, we have

v = π1(v) + π2(v)
w = π1(w) + π2(w)
v + λw = π1(v + λw) + π2(v + λw).

¹ Question 3 on sheet 2.
However, the first two equalities also give

v + λw = (π1(v) + λπ1(w)) + (π2(v) + λπ2(w)),

so the uniqueness in the third equality gives

πi(v + λw) = πi(v) + λπi(w),

i = 1, 2, so that both πi are linear.

By definition, im πi ⊆ Vi. For the converse, note that, for v1 ∈ V1, we have v1 = v1 + 0, with 0 ∈ V2, so that π1(v1) = v1. In particular, V1 ⊆ im π1 so that im π1 = V1. Moreover, taking v1 = π1(v), we get π1(π1(v)) = π1(v), for any v ∈ V, so that π1 is a projection. Similarly π2 is a projection and (a) holds.

Finally, v = π1(v) + π2(v) ∈ ker π1 if and only if v = π2(v), or, as we have just seen, v ∈ V2. Thus ker π1 = V2 and similarly ker π2 = V1, settling (b). □

As a corollary, we see that dimensions add in direct sums:

Proposition 2.5. Let V = V1 ⊕ V2 with V finite-dimensional. Then

dim V = dim V1 + dim V2.

Proof. We apply the rank-nullity theorem to π1:

dim V = dim im π1 + dim ker π1 = dim V1 + dim V2. □

2.2.2 Induction from two summands

A convenient way to analyse direct sums with many summands is to induct from the two summand case. For this, we need:

Lemma 2.6. Let V1, ..., Vk ≤ V. Then V1 + ... + Vk is direct if and only if V1 + ... + V_{k−1} is direct and (V1 + ... + V_{k−1}) + Vk (two summands) is direct.

Proof. Suppose first that V1 + ... + Vk is direct. Then any v ∈ V1 + ... + V_{k−1} can be written v = v1 + ... + v_{k−1} for unique vi ∈ Vi, 1 ≤ i ≤ k−1, so that V1 + ... + V_{k−1} is direct. Moreover, any v ∈ (V1 + ... + V_{k−1}) + Vk = V1 + ... + Vk can be written v = v1 + ... + vk = (v1 + ... + v_{k−1}) + vk with, in particular, unique vk ∈ Vk, so that (V1 + ... + V_{k−1}) + Vk is direct.

For the converse, suppose that V1 + ... + V_{k−1} and (V1 + ... + V_{k−1}) + Vk are both direct. Then any v ∈ V1 + ... + Vk can be written v = w + vk for unique w ∈ V1 + ... + V_{k−1} and vk ∈ Vk. Also, there is a unique way to write w as w = v1 + ... + v_{k−1} with vi ∈ Vi, 1 ≤ i ≤ k−1. Putting this together, we get v = v1 + ... + vk for unique vi ∈ Vi, 1 ≤ i ≤ k, so that V1 + ... + Vk is direct. □
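The projections of Proposition 2.4 can be written down explicitly for a small example (an editorial sketch, not from the notes): take R² = V1 ⊕ V2 with V1 = span{(1, 0)} and V2 = span{(1, 1)}, so (x, y) = (x − y, 0) + (y, y) is the unique decomposition.

```python
def p1(v):
    x, y = v
    return (x - y, 0)   # the V1-component of v

def p2(v):
    x, y = v
    return (y, y)       # the V2-component of v

v = (3, 5)
print(p1(p1(v)) == p1(v), p2(p2(v)) == p2(v))           # True True: pi o pi = pi
print(tuple(a + b for a, b in zip(p1(v), p2(v))) == v)  # True: id = pi1 + pi2
```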
Here is a sample application:

Corollary 2.7. Let V1, ..., Vk ≤ V be subspaces of a finite-dimensional vector space V with V1 + ... + Vk direct. Then

dim(V1 ⊕ ... ⊕ Vk) = dim V1 + ... + dim Vk.

Proof. We induct on k using Proposition 2.5 and Lemma 2.6 in the induction step. In more detail: the induction hypothesis is that the formula holds for k summands. The base case reads dim V1 = dim V1, which trivially holds. For the induction step, suppose that the formula holds for any k−1 summands. Then, if V1 + ... + Vk is direct, Lemma 2.6 says that V1 + ... + V_{k−1} is direct and then the induction hypothesis says that

dim(V1 ⊕ ... ⊕ V_{k−1}) = dim V1 + ... + dim V_{k−1}.

Now Lemma 2.6 says that

V1 ⊕ ... ⊕ Vk = (V1 ⊕ ... ⊕ V_{k−1}) ⊕ Vk

so that Proposition 2.5 applies to give

dim(V1 ⊕ ... ⊕ Vk) = (dim V1 + ... + dim V_{k−1}) + dim Vk. □

2.2.3 Direct sums and bases

Proposition 2.5 suggests that there is a relation between the bases of V1, V2 and the basis of V1 ⊕ V2. This is indeed the case:

Proposition 2.8. Let V1, V2 ≤ V be finite-dimensional subspaces with bases B1 : v1, ..., vk and B2 : w1, ..., wl. Then V1 + V2 is direct if and only if the concatenation² B1 ∪ B2 : v1, ..., vk, w1, ..., wl is a basis of V1 + V2.

Proof. Clearly B1 ∪ B2 spans V1 + V2 and so will be a basis exactly when it is linearly independent.

Suppose that V1 + V2 is direct and that we have a linear relation Σ_{i=1}^k λi vi + Σ_{j=1}^l µj wj = 0. Then

Σ_{i=1}^k λi vi = −Σ_{j=1}^l µj wj ∈ V1 ∩ V2,

which last is the zero subspace by Proposition 2.2. Thus

Σ_{i=1}^k λi vi = Σ_{j=1}^l µj wj = 0,

so that all the λi and µj vanish since B1 and B2 are linearly independent. We conclude that B1 ∪ B2 is linearly independent and so a basis.

Conversely, if B1 ∪ B2 is a basis and v ∈ V1 ∩ V2, we can write v in two ways: v = Σ_{i=1}^k λi vi, for some λ1, ..., λk ∈ F, since v ∈ V1 and, similarly, v = Σ_{j=1}^l µj wj. We therefore have a linear relation Σ_{i=1}^k λi vi − Σ_{j=1}^l µj wj = 0 and so, by linear independence of B1 ∪ B2, all λi, µj vanish, so that v = 0. Thus V1 ∩ V2 = {0} and V1 + V2 is direct by Proposition 2.2. □
Again, this along with Lemma 2.6 and induction on k yields the many-summand version:

Corollary 2.9. Let V1, ..., Vk ≤ V be finite-dimensional subspaces with Bi a basis of Vi, 1 ≤ i ≤ k. Then V1 + ... + Vk is direct if and only if the concatenation B1 ∪ ... ∪ Bk is a basis for V1 + ... + Vk.

² The concatenation of two lists is simply the list obtained by adjoining all entries in the second list to the first.
2.2.4 Complements

For finite-dimensional vector spaces, any subspace has a complement:

Proposition 2.10 (Complements exist). Let U ≤ V, a finite-dimensional vector space. Then there is a complement to U.

Proof. Let B1 : v1, ..., vk be a basis for U and so a linearly independent list of vectors in V. By Proposition 1.3, we can extend the list to get a basis B : v1, ..., vn of V. Set W = span{v_{k+1}, ..., vn} ≤ V: this is a complement to U. Indeed, B2 : v_{k+1}, ..., vn is a basis for W and B = B1 ∪ B2, so that V = U ⊕ W by Proposition 2.8. □

In fact, as Figure 2.3 illustrates, there are many complements to a given subspace.

Figure 2.3: Each dashed line is a complement to the undashed subspace.

Here is an application:

Proposition 2.11 (Extension of linear maps). Let V, W be vector spaces with V finite-dimensional. Let U ≤ V be a subspace and φ : U → W a linear map. Then there is a linear map Φ : V → W such that the restriction³ of Φ to U is φ: Φ|_U = φ. Otherwise said: Φ(u) = φ(u), for all u ∈ U.

Proof. By Proposition 2.10, U has a complement and so, by Proposition 2.4, there is a projection π : V → V with image U. Set Φ = φ ∘ π : V → W. This is a linear map and Φ|_U = φ ∘ π|_U = φ since, for u = π(v) ∈ im π = U, π(u) = π(π(v)) = π(v) = u. □

2.3 Quotients

Let U ≤ V. We construct a new vector space from U and V which is an abstract complement to U. The elements of this vector space are equivalence classes for the following equivalence relation:

Definition. Let U ≤ V. Say that v, w ∈ V are congruent modulo U if v − w ∈ U. In this case, we write v ≡ w mod U.

Warning. This is emphatically not the relation of congruence modulo an integer n that you studied in Algebra 1A: here the relation is between vectors in a vector space. However, both notions of congruence are examples of a general construction in group theory.

³ Recall that if f : X → Y is a map of sets and A ⊆ X then the restriction of f to A is the map f|_A : A → Y given by f|_A(a) = f(a), for all a ∈ A.
Lemma 2.12. Congruence modulo U is an equivalence relation.

Proof. Exercise⁴! □

Thus each v ∈ V lies in exactly one equivalence class [v] ⊆ V. What do these equivalence classes look like? Note that w ≡ v mod U if and only if w − v ∈ U or, equivalently, w = v + u, for some u ∈ U.

Definition. For v ∈ V, U ≤ V, the set v + U := {v + u | u ∈ U} ⊆ V is called a coset of U and v is called a coset representative of v + U.

We conclude that the equivalence class of v modulo U is the coset v + U.

Figure 2.4: A subspace U ≤ R² and a coset v + U.

Remark. In geometry, cosets of vector subspaces are called affine subspaces. Examples include lines in R² and lines and planes in R³, irrespective of whether they contain zero (as vector subspaces must).

Example. Fibres of a linear map: let φ : V → W be a linear map and let w ∈ im φ. Then the fibre of φ over w is defined by:

φ⁻¹{w} := {v ∈ V | φ(v) = w}.

Unless w = 0, this is not a linear subspace, but notice that v, v′ are in the same fibre if and only if φ(v) = φ(v′), or, equivalently, φ(v − v′) = 0, or v − v′ ∈ ker φ. We conclude that the fibres of φ are exactly the cosets of ker φ:

φ⁻¹{w} = v + ker φ,

for any v ∈ φ⁻¹{w}. We shall see below that any coset arises this way for a suitable φ.

Definition. Let U ≤ V. The quotient space V/U of V by U is the set V/U, pronounced "V mod U", of cosets of U:

V/U := {v + U | v ∈ V}.

This is a subset of the power set⁵ P(V) of V. The quotient map q : V → V/U is defined by q(v) = v + U.

⁴ This is question 1 on exercise sheet 3.
⁵ Recall from Algebra 1A that the power set of a set A is the set of all subsets of A.
The quotient map q will be important to us. It has two key properties:
1. q is surjective.
2. q(v) = q(v′) if and only if v ≡ v′ mod U, that is, v − v′ ∈ U.

We can add and scalar multiply cosets to make V/U into a vector space and q into a linear map:

Theorem 2.13. Let U ≤ V. Then, for v, w ∈ V, λ ∈ F,

(v + U) + (w + U) := (v + w) + U
λ(v + U) := (λv) + U

give well-defined operations of addition and scalar multiplication on V/U, with respect to which V/U is a vector space and q : V → V/U is a linear map. Moreover, ker q = U and im q = V/U.

Proof. We phrase everything in terms of q to keep the notation under control. Since q surjects, we lose nothing by doing this: any element of V/U is of the form q(v) for some v ∈ V. With this understood, the proposed addition and scalar multiplication in V/U read

q(v) + q(w) := q(v + w)
λq(v) := q(λv),

so that q is certainly linear so long as these operations make sense. Here the issue is that if q(v) = q(v′) and q(w) = q(w′), we must show that

q(v + w) = q(v′ + w′), q(λv) = q(λv′).   (2.1)

However, in this case, we have v − v′ ∈ U and w − w′ ∈ U so that

(v + w) − (v′ + w′) = (v − v′) + (w − w′) ∈ U
λv − λv′ = λ(v − v′) ∈ U,

since U is a subspace, and this establishes (2.1).

As for the vector space axioms, these follow from those of V. For example:

q(v) + q(w) = q(v + w) = q(w + v) = q(w) + q(v).

Here the first and third equalities are the definition of addition in V/U and the middle one comes from commutativity of addition in V. The zero element is q(0) = 0 + U = U, while the additive inverse of q(v) is q(−v).

The linearity of q comes straight from how we defined our addition and scalar multiplication, while v ∈ ker q if and only if q(v) = q(0) if and only if v = v − 0 ∈ U, so that ker q = U. □

Figure 2.5: The quotient map q : V → V/U.
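Cosets can be enumerated directly in a tiny toy model (my own illustration, not part of the notes): take V = F₂², the plane over the two-element field, and U = span{(1, 1)}. The cosets v + U partition V into 2 classes, matching the dimension formula dim V/U = dim V − dim U since 2 = 2^(2−1).

```python
# V = F_2^2 and U = span{(1,1)}; each coset v + U is a frozenset of vectors.
V = [(a, b) for a in (0, 1) for b in (0, 1)]
U = [(0, 0), (1, 1)]

def coset(v):
    return frozenset(((v[0] + u[0]) % 2, (v[1] + u[1]) % 2) for u in U)

quotient = {coset(v) for v in V}   # the set V/U of distinct cosets
print(len(quotient))  # 2
```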
Corollary 2.14. Let U ≤ V. If V is finite-dimensional then so is V/U and

dim V/U = dim V − dim U.

Proof. Apply rank-nullity to q, using ker q = U and im q = V/U. □

Remark. Theorem 2.13 shows that:
1. Any U ≤ V is the kernel of a linear map.
2. Any coset v + U is the fibre of a linear map: indeed v + U = q⁻¹{q(v)}.

Commentary. Many people find the quotient space V/U difficult to think about: its elements are (special) subsets of V and this can be confusing. An alternative, perhaps better, way to proceed is to concentrate instead on the properties of V/U, in much the same way that, in Analysis, we deal with real numbers via the axioms of a complete ordered field without worrying too much about what a real number actually is! From this point of view, the quotient V/U of V by U is a vector space along with a linear map q : V → V/U such that
- q surjects;
- ker q = U,
and this is really all you need to know! The content of Theorem 2.13, from this perspective, is simply that quotients exist!

Theorem 2.15 (First Isomorphism Theorem). Let φ : V → W be a linear map of vector spaces. Define φ̄ : V/ker φ → im φ by φ̄(q(v)) = φ(v), where q : V → V/ker φ is the quotient map. Then φ̄ is a well-defined linear isomorphism. In particular, V/ker φ ≅ im φ.

Proof. First we show that φ̄ is well-defined: q(v) = q(v′) if and only if v − v′ ∈ ker φ if and only if φ(v − v′) = 0, or, equivalently, φ(v) = φ(v′). We also get a bit more: φ̄ injects, since if φ̄(q(v)) = φ̄(q(v′)) then φ(v) = φ(v′), which implies that q(v) = q(v′).

To see that φ̄ is linear, we compute using the linearity of q and φ:

φ̄(q(v1) + λq(v2)) = φ̄(q(v1 + λv2)) = φ(v1 + λv2) = φ(v1) + λφ(v2) = φ̄(q(v1)) + λφ̄(q(v2)),

for v1, v2 ∈ V, λ ∈ F.

It remains to show that φ̄ is surjective: but if w ∈ im φ, then w = φ(v) = φ̄(q(v)), for some v ∈ V, and we are done. □

Remarks.
1. Let q : V → V/ker φ be the quotient map and i : im φ → W the inclusion.
Then the First Isomorphism Theorem shows that we may write φ as the composition i ∘ φ̄ ∘ q of a quotient map, an isomorphism and an inclusion.
2. This whole story of cosets, quotients and the First Isomorphism Theorem has versions in many other contexts such as group theory (see MA30237) and ring theory (MA20217).
Chapter 3: Inner product spaces

In this chapter, we equip real or complex vector spaces with extra structure that generalises the familiar dot product.

Convention. In this chapter, we take the field F of scalars to be either R or C.

3.1 Inner products

3.1.1 Definition and examples

Recall the dot (or scalar) product on R^n: for x = (x1, ..., xn), y = (y1, ..., yn) ∈ R^n,

x · y := x1 y1 + ... + xn yn = x^T y.

Using this we define:
- the length of x: ∥x∥ := √(x · x);
- the angle θ between x and y: x · y = ∥x∥ ∥y∥ cos θ.

There is also a dot product on C^n: for x, y ∈ C^n,

x · y = x̄1 y1 + ... + x̄n yn = x† y,

where x† (pronounced "x-dagger") is the conjugate transpose x̄^T of x. We then have that x · x = Σ x̄i xi = Σ |xi|² is real, non-negative and vanishes exactly when x = 0.

We abstract the key properties of the dot product into the following:

Definition. Let V be a vector space over F (which is R or C). An inner product on V is a map V × V → F : (v, w) ↦ ⟨v, w⟩ which is:

(1) (conjugate) symmetric: ⟨w, v⟩ is the complex conjugate of ⟨v, w⟩, for all v, w ∈ V. In particular ⟨v, v⟩ equals its own conjugate and so is real.

(2) linear in the second slot:

⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
⟨u, λv⟩ = λ⟨u, v⟩,

for all u, v, w ∈ V and λ ∈ F.
(3) positive definite: for all v ∈ V, ⟨v, v⟩ ≥ 0, with equality if and only if v = 0.

A vector space with an inner product is called an inner product space.

Remark. Any subspace U of an inner product space V is also an inner product space: just restrict ⟨·,·⟩ to U × U.

Let us spell out the implications of this definition in the real and complex cases.

Suppose first that F = R. Then the conjugate symmetry is just symmetry: ⟨v, w⟩ = ⟨w, v⟩, and it follows that we also have linearity in the first slot:

⟨v + w, u⟩ = ⟨v, u⟩ + ⟨w, u⟩
⟨λv, u⟩ = λ⟨v, u⟩.

We summarise the situation by saying that a real inner product is a positive definite, symmetric, bilinear form. We shall have more to say about bilinear forms later in chapter 6.

Now let us turn to the case F = C. Now it is not the case that an inner product is linear in the first slot.

Definition. A map φ : V → W of complex vector spaces is conjugate linear (or anti-linear) if

φ(v + w) = φ(v) + φ(w)
φ(λv) = λ̄φ(v),

for all v, w ∈ V and λ ∈ F.

We see from properties (1) and (2) that a complex inner product has

⟨v + w, u⟩ = ⟨v, u⟩ + ⟨w, u⟩
⟨λv, u⟩ = λ̄⟨v, u⟩

and so is conjugate linear in the first slot and linear in the second. Such a function is said to be sesquilinear (from the Latin sesqui, which means one-and-a-half). Thus an inner product on a complex vector space is a positive definite, conjugate symmetric, sesquilinear form.

Definition. Let V be an inner product space.
1. The norm of v ∈ V is ∥v∥ := √⟨v, v⟩.
2. Say v, w ∈ V are orthogonal or perpendicular if ⟨v, w⟩ = 0. In this case, we write v ⊥ w.

Remarks.
1. The norm allows us to define the distance between v and w by ∥v − w∥. We can now do analysis on V: this is one of the Big Ideas in MA.
2. Warning: there is another convention for complex inner products which is prevalent in Analysis: there they ask that ⟨·,·⟩ be linear in the first slot and conjugate linear in the second. There are good reasons for either choice.
3. Physicists often write ⟨v|w⟩ for ⟨v, w⟩.
Inner product spaces (especially infinite-dimensional ones) are the setting for quantum mechanics.

Examples.

1. The dot product on R^n or C^n is an inner product.

2. Let [a, b] ⊆ R be a closed, bounded interval. Define a real inner product on C⁰[a, b] by

⟨f, g⟩ = ∫_a^b fg.

This is clearly symmetric, bilinear and non-negative. To see that it is definite, one must show that if ∫_a^b f² = 0 then f = 0. This is an exercise in Analysis using the inertia property of continuous functions (see MA20218).
3. The set of square-summable sequences ℓ² ≤ R^N is given by

ℓ² := {(a_n)_{n∈N} : Σ_{n∈N} a_n² < ∞}.

Exercises.¹

(a) ℓ² ≤ R^N.

(b) If a, b ∈ ℓ² then Σ_{n∈N} a_n b_n is absolutely convergent and then

⟨a, b⟩ := Σ_{n∈N} a_n b_n

defines an inner product on ℓ².

Hint: for x, y ∈ R, rearrange 0 ≤ (|x| − |y|)² to get

2|x||y| ≤ x² + y²    (3.1a)

and then deduce

(x + y)² ≤ 2(x² + y²).    (3.1b)

Judicious use of equations (3.1) and the comparison theorem from MA10207 will bake the cake.

Remark. Perhaps surprisingly, ℓ² and C⁰[a, b] are closely related: this is what Fourier series are about (see MA20223).

3.1.2 Cauchy–Schwarz inequality

Here is one of the most important and ubiquitous inequalities in all of mathematics:

Theorem 3.1 (Cauchy–Schwarz inequality). Let V be an inner product space. For v, w ∈ V,

|⟨v, w⟩| ≤ ‖v‖ ‖w‖    (3.2)

with equality if and only if v, w are linearly dependent, that is, either v = 0 or w = λv, for some λ ∈ F.

Proof. The idea of the proof is to write w = λv + u where u ⊥ v (see Figure 3.1) and then use the fact that ‖u‖² ≥ 0.

In detail, first note that if v = 0 then both sides of the inequality vanish and there is nothing to prove. Otherwise, let us seek λ ∈ F so that u := w − λv ⊥ v. We therefore need

0 = ⟨v, w − λv⟩ = ⟨v, w⟩ − λ ⟨v, v⟩

so that

λ = ⟨v, w⟩ / ‖v‖².

The situation is shown in Figure 3.1. With λ and then u so defined we have

0 ≤ ‖u‖² = ⟨w − λv, u⟩ = ⟨w, u⟩ = ⟨w, w − λv⟩ = ⟨w, w⟩ − λ ⟨w, v⟩ = ‖w‖² − |⟨v, w⟩|² / ‖v‖²,

where we used the sesquilinearity of the inner product to reach the second equality and the conjugate symmetry to reach the last. Rearranging this yields |⟨v, w⟩|² ≤ ‖v‖² ‖w‖² and taking a square root gives us the Cauchy–Schwarz inequality.

Finally, we have equality if and only if ‖u‖ = 0 or, equivalently, u = 0, that is, w = λv.

¹Question 7 on sheet 4.
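The axioms above and the inequality (3.2) are easy to check numerically. Below is a minimal sketch, using Python's built-in complex numbers and the complex dot product on C^n; `inner` and `norm` are hypothetical helper names introduced only for this illustration.

```python
import math

def inner(v, w):
    # complex dot product <v, w> = sum conj(v_i) * w_i: linear in the second slot
    return sum(vi.conjugate() * wi for vi, wi in zip(v, w))

def norm(v):
    # <v, v> is real and non-negative, so we may take its real part
    return math.sqrt(inner(v, v).real)

v = [1 + 2j, 3 - 1j, 0.5j]
w = [2 - 1j, 1j, 4 + 0j]
lam = 2 - 3j

# conjugate symmetry: <w, v> is the conjugate of <v, w>
assert inner(w, v) == inner(v, w).conjugate()
# sesquilinearity: linear in the second slot (up to floating-point rounding)
assert abs(inner(v, [lam * wi for wi in w]) - lam * inner(v, w)) < 1e-9

# Cauchy-Schwarz: |<v, w>| <= ||v|| ||w||
assert abs(inner(v, w)) <= norm(v) * norm(w)

# equality in the linearly dependent case w = lam * v
w2 = [lam * vi for vi in v]
assert math.isclose(abs(inner(v, w2)), norm(v) * norm(w2))
```

Of course, a finite check is no substitute for the proof; it only illustrates the statement on sample vectors.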
Figure 3.1: Construction of u

Examples.

1. Let (Ω, P) be a finite probability space. Then the space R^Ω of real random variables is an inner product space with

⟨f, g⟩ = E(fg) = Σ_{x∈Ω} f(x) g(x) P(x),

so long as P(x) > 0 for each x ∈ Ω (we need this for positive-definiteness). Now the (square of the) Cauchy–Schwarz inequality reads

E(fg)² ≤ E(f²) E(g²).

2. For a, b ∈ ℓ², the Cauchy–Schwarz inequality reads:

|Σ_{n∈N} a_n b_n| ≤ (Σ_{n∈N} a_n²)^{1/2} (Σ_{n∈N} b_n²)^{1/2}.

The Cauchy–Schwarz inequality is an essentially 2-dimensional result about the inner product space span{v, w}. Here are some more that are almost as fundamental:

Proposition 3.2. Let V be an inner product space and v, w ∈ V.

1. Pythagoras' Theorem: if v ⊥ w then

‖v + w‖² = ‖v‖² + ‖w‖².    (3.3)

2. Triangle inequality: ‖v + w‖ ≤ ‖v‖ + ‖w‖ with equality if and only if v = 0 or w = λv with λ ≥ 0.

3. Parallelogram identity: ‖v + w‖² + ‖v − w‖² = 2(‖v‖² + ‖w‖²).

Figure 3.2: The identities of Proposition 3.2: (a) Pythagoras' Theorem; (b) Parallelogram identity

Proof.
1. Exercise²: expand out ‖v + w‖² = ⟨v + w, v + w⟩.

2. We prove ‖v + w‖² ≤ (‖v‖ + ‖w‖)². We have

‖v + w‖² = ‖v‖² + 2 Re⟨v, w⟩ + ‖w‖².

Now, by Cauchy–Schwarz,

Re⟨v, w⟩ ≤ |⟨v, w⟩| ≤ ‖v‖ ‖w‖

so that

‖v + w‖² ≤ ‖v‖² + 2 ‖v‖ ‖w‖ + ‖w‖² = (‖v‖ + ‖w‖)²

with equality if and only if Re⟨v, w⟩ = |⟨v, w⟩| = ‖v‖ ‖w‖, in which case we first get w = λv, for some λ ∈ F, and then that Re λ = |λ| so that λ ≥ 0.

3. Exercise³!

3.2 Orthogonality

3.2.1 Orthonormal bases

Definition. A list of vectors u_1, ..., u_k in an inner product space V is orthonormal if, for all 1 ≤ i, j ≤ k,

⟨u_i, u_j⟩ = δ_ij := 1 if i = j; 0 if i ≠ j.

If u_1, ..., u_k is also a basis, we call it an orthonormal basis.

Example. The standard basis e_1, ..., e_n of F^n is orthonormal for the dot product.

Orthonormal bases are very cool. Here is why: if u_1, ..., u_k is orthonormal and v ∈ span{u_1, ..., u_k} then we can write

v = λ_1 u_1 + ... + λ_k u_k.

How can we compute the coordinates λ_i? In general, this amounts to solving a system of linear equations and so involves something tedious and lengthy like Gaussian elimination. However, in our case, things are much easier. Observe:

⟨u_i, v⟩ = ⟨u_i, Σ_j λ_j u_j⟩ = Σ_j λ_j ⟨u_i, u_j⟩ = Σ_j λ_j δ_ij = λ_i.

Thus

λ_i = ⟨u_i, v⟩    (3.4)

which is very easy to compute.

Let us enshrine this analysis into the following lemma:

Lemma 3.3. Let V be an inner product space with orthonormal basis u_1, ..., u_n and let v ∈ V. Then

v = Σ_i ⟨u_i, v⟩ u_i.

As an immediate consequence of (3.4):

²Question 2(a) on sheet 4.
³Question 2(c) on sheet 4.
Lemma 3.4. Any orthonormal list of vectors u_1, ..., u_k is linearly independent.

Proof. If λ_1 u_1 + ... + λ_k u_k = 0 then (3.4) gives λ_i = ⟨u_i, 0⟩ = 0.

What is more, these coordinates λ_i are all you need to compute inner products.

Proposition 3.5. Let u_1, ..., u_n be an orthonormal basis of an inner product space V. Let v = x_1 u_1 + ... + x_n u_n and w = y_1 u_1 + ... + y_n u_n. Then

⟨v, w⟩ = Σ_i x̄_i y_i = x · y.

Thus the inner product of two vectors is the dot product of their coordinates with respect to an orthonormal basis.

Proof. We simply expand out ⟨v, w⟩ by sesquilinearity:

⟨v, w⟩ = ⟨Σ_i x_i u_i, Σ_j y_j u_j⟩ = Σ_{i,j} x̄_i y_j ⟨u_i, u_j⟩ = Σ_{i,j} x̄_i y_j δ_ij = Σ_i x̄_i y_i = x · y.

To put it another way:

Proposition 3.6. Let u_1, ..., u_n be an orthonormal basis of an inner product space V and v, w ∈ V. Then:

(1) Parseval's identity: ⟨v, w⟩ = Σ_{i=1}^n ⟨v, u_i⟩ ⟨u_i, w⟩.

(2) Bessel's equality: ‖v‖² = Σ_{i=1}^n |⟨v, u_i⟩|².

Proof. (1) This comes straight from Proposition 3.5, using conjugate symmetry of the inner product to get x̄_i = ⟨v, u_i⟩ (since x_i = ⟨u_i, v⟩).

(2) Put v = w in (1).

All of this should make us eager to get our hands on orthonormal bases and so we would like to know if they always exist. To see that they do, we need the following construction:

Theorem 3.7 (Gram–Schmidt orthogonalisation). Let v_1, ..., v_m be linearly independent vectors in an inner product space V. Then there is an orthonormal list u_1, ..., u_m such that, for all 1 ≤ k ≤ m,

span{u_1, ..., u_k} = span{v_1, ..., v_k},

defined inductively by u_k := w_k / ‖w_k‖ where

w_1 := v_1

and, for k > 1,

w_k := v_k − Σ_{j=1}^{k−1} ⟨u_j, v_k⟩ u_j = v_k − Σ_{j=1}^{k−1} (⟨w_j, v_k⟩ / ‖w_j‖²) w_j.
Proof. We induct with inductive hypothesis at k: that u_1, ..., u_k is orthonormal and that, for 1 ≤ l ≤ k, span{u_1, ..., u_l} = span{v_1, ..., v_l}.

At k = 1, this reads ‖u_1‖ = 1 and span{u_1} = span{v_1}, which is certainly true.

Now assume the hypothesis is true at k − 1, so that u_1, ..., u_{k−1} is orthonormal and span{u_1, ..., u_{k−1}} = span{v_1, ..., v_{k−1}}. Then

span{u_1, ..., u_k} = span{v_1, ..., v_{k−1}, u_k} = span{v_1, ..., v_{k−1}, w_k} = span{v_1, ..., v_k},

where the first equality comes from the induction hypothesis and the last from the definition of w_k. Moreover, for any i < k,

⟨u_i, w_k⟩ = ⟨u_i, v_k⟩ − Σ_{j<k} ⟨u_j, v_k⟩ ⟨u_i, u_j⟩ = ⟨u_i, v_k⟩ − Σ_{j<k} ⟨u_j, v_k⟩ δ_ij = ⟨u_i, v_k⟩ − ⟨u_i, v_k⟩ = 0.

Thus w_k ⊥ u_1, ..., u_{k−1}, so that u_k is also, whence u_1, ..., u_k is orthonormal. Thus the inductive hypothesis is true at k and so at m by induction.

Corollary 3.8. Any finite-dimensional inner product space V has an orthonormal basis.

Proof. Let v_1, ..., v_n be any basis of V and apply Theorem 3.7 to get an orthonormal (and so linearly independent, by Lemma 3.4) list u_1, ..., u_n with span{u_1, ..., u_n} = span{v_1, ..., v_n} = V. Thus the u_1, ..., u_n span also and so are an orthonormal basis.

Remark. For practical purposes such as computations, it is easiest to phrase the Gram–Schmidt algorithm in terms of the w_k: we set

w_1 := v_1, w_k := v_k − Σ_{j=1}^{k−1} (⟨w_j, v_k⟩ / ‖w_j‖²) w_j

(which can be computed without introducing square roots) and then finally set u_k = w_k / ‖w_k‖.

Example. Let U ≤ R³ be given by {x ∈ R³ : x_1 + x_2 + x_3 = 0}. Let us find an orthonormal basis for U. First we need a basis of U to start with: dim U = 2 (why?) with basis v_1 = (1, 0, −1), v_2 = (0, 1, −1) (of course, there are many other bases).

First, ‖w_1‖² = ‖v_1‖² = 2 while ⟨w_1, v_2⟩ = ⟨v_1, v_2⟩ = 1, so that

w_2 = v_2 − (⟨w_1, v_2⟩ / ⟨w_1, w_1⟩) w_1 = (0, 1, −1) − ½ (1, 0, −1) = (−½, 1, −½).

This means that ‖w_2‖² = 1/4 + 1 + 1/4 = 3/2, so that

u_1 = w_1 / √2 = (1/√2, 0, −1/√2)
u_2 = √(2/3) w_2 = √(2/3) (−½, 1, −½) = (−1/√6, 2/√6, −1/√6).
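The remark's version of the algorithm (build the w_k first, normalise at the end) can be sketched in a few lines of Python; `gram_schmidt` and `inner` are hypothetical helper names for this illustration, and the code reproduces the worked example above.

```python
import math

def inner(v, w):
    # real dot product on R^n
    return sum(a * b for a, b in zip(v, w))

def gram_schmidt(vs):
    """Gram-Schmidt as in Theorem 3.7: subtract components along the earlier
    w_j, then normalise each w_k at the end."""
    ws = []
    for v in vs:
        w = list(v)
        for wj in ws:
            c = inner(wj, v) / inner(wj, wj)
            w = [wi - c * wji for wi, wji in zip(w, wj)]
        ws.append(w)
    return [[wi / math.sqrt(inner(w, w)) for wi in w] for w in ws]

# the worked example: a basis of U = {x in R^3 : x_1 + x_2 + x_3 = 0}
u1, u2 = gram_schmidt([[1.0, 0.0, -1.0], [0.0, 1.0, -1.0]])
s2, s6 = math.sqrt(2), math.sqrt(6)
assert all(math.isclose(a, b) for a, b in zip(u1, [1/s2, 0.0, -1/s2]))
assert all(math.isclose(a, b) for a, b in zip(u2, [-1/s6, 2/s6, -1/s6]))
```

Note that the intermediate w_k are kept unnormalised, exactly as the remark suggests, so no square roots appear until the final step.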
Let us conclude our discussion of orthonormal bases with an application of Gram–Schmidt which has uses in Statistics (see MA20227) and elsewhere. First, a definition:

Definition. A matrix Q ∈ M_{n×n}(R) is orthogonal if Q^T Q = I_n or, equivalently, Q has orthonormal columns with respect to the dot product. Here I_n is the n × n identity matrix.
Remark. The two conditions in this definition are indeed equivalent: if q_i is the i-th column of Q then (Q^T Q)_ij = q_i^T q_j.

Theorem 3.9 (QR decomposition). Let A ∈ M_{n×n}(R) be an invertible matrix. Then we can write A = QR, where Q is orthogonal and R is upper triangular (R_ij = 0 if i > j) with positive entries on the diagonal.

Proof. We apply Theorem 3.7 to the columns of A to get the columns of Q. So let v_1, ..., v_n be the columns of A. Since A is invertible, these are a basis so we can apply Theorem 3.7 to get an orthonormal basis u_1, ..., u_n. Let Q be the orthogonal matrix whose columns are the u_i. Unravelling the formulae of the Gram–Schmidt procedure, we have

v_1 = ‖v_1‖ u_1
v_2 = ‖w_2‖ u_2 + ⟨u_1, v_2⟩ u_1

and, more generally,

v_k = ‖w_k‖ u_k + Σ_{j<k} ⟨u_j, v_k⟩ u_j.

Otherwise said, A = QR where R_kk = ‖w_k‖, R_jk = ⟨u_j, v_k⟩, for j < k, and R_ij = 0 if i > j.

To compute Q and R in practice, first do Gram–Schmidt orthogonalisation on the columns of A to get Q and then note that Q^T A = Q^T Q R = I_n R = R, so that R = Q^T A, which is probably easier to compute than keeping track of intermediate coefficients in the orthogonalisation!

Remarks.

1. In pure mathematics, the QR decomposition is a special case of the Iwasawa decomposition.

2. We shall have more to say about orthogonal matrices in the next chapter.

3.2.2 Orthogonal complements and orthogonal projection

Definition. Let V be an inner product space and U ≤ V. The orthogonal complement U^⊥ of U (in V) is given by

U^⊥ := {v ∈ V : ⟨u, v⟩ = 0, for all u ∈ U}.

Proposition 3.10. Let V be an inner product space and U ≤ V. Then

(1) U^⊥ ≤ V;
(2) U ∩ U^⊥ = {0};
(3) U ⊆ (U^⊥)^⊥.

Proof. (1) This is a straightforward exercise using the second-slot linearity of the inner product.

(2) If u ∈ U ∩ U^⊥, ⟨u, u⟩ = 0, so that u = 0 by positive-definiteness of the inner product.

(3) If u ∈ U and w ∈ U^⊥ then ⟨w, u⟩ = 0, being the conjugate of ⟨u, w⟩ = 0, so that u ∈ (U^⊥)^⊥.
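The QR construction of Theorem 3.9 can be sketched directly from the proof: Gram–Schmidt on the columns of A gives Q, and R is recovered as Q^T A as suggested above. The helper names `qr` and `inner` below are hypothetical, introduced only for this sketch.

```python
import math

def inner(v, w):
    return sum(a * b for a, b in zip(v, w))

def qr(A):
    """QR by Gram-Schmidt on the columns of an invertible square A
    (Theorem 3.9). Returns Q with orthonormal columns and R = Q^T A."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    qs = []
    for v in cols:
        w = list(v)
        for q in qs:                      # subtract components along earlier q_j
            c = inner(q, v)
            w = [wi - c * qi for wi, qi in zip(w, q)]
        nrm = math.sqrt(inner(w, w))
        qs.append([wi / nrm for wi in w])
    Q = [[qs[j][i] for j in range(n)] for i in range(n)]               # u_j as columns
    R = [[inner(qs[i], cols[j]) for j in range(n)] for i in range(n)]  # R = Q^T A
    return Q, R

A = [[1.0, 1.0], [1.0, 0.0]]
Q, R = qr(A)
# R is upper triangular with positive diagonal, and A = QR
assert abs(R[1][0]) < 1e-12 and R[0][0] > 0 and R[1][1] > 0
for i in range(2):
    for j in range(2):
        assert math.isclose(sum(Q[i][k] * R[k][j] for k in range(2)), A[i][j], abs_tol=1e-12)
```

In practice one would use a library routine with better numerical behaviour; this sketch follows the proof's construction literally.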
Figure 3.3: Orthogonal complements in R²

If U is finite-dimensional then U^⊥ is a complement to U in the sense of 2.2 (even if V is infinite-dimensional!):

Theorem 3.11. Let U be a finite-dimensional subspace of an inner product space V. Then V is an internal direct sum: V = U ⊕ U^⊥.

Proof. By Proposition 2.2 and Proposition 3.10(2), we just need to prove that V = U + U^⊥. For this, let u_1, ..., u_k be an orthonormal basis of U and let v ∈ V. We write

v = Σ_{i=1}^k ⟨u_i, v⟩ u_i + (v − Σ_{i=1}^k ⟨u_i, v⟩ u_i) =: v_1 + v_2.

Now v_1 ∈ U, being in the span of the u_i, while, for 1 ≤ j ≤ k,

⟨u_j, v_2⟩ = ⟨u_j, v⟩ − Σ_{i=1}^k ⟨u_i, v⟩ ⟨u_j, u_i⟩ = ⟨u_j, v⟩ − ⟨u_j, v⟩ = 0.

Thus, for u = λ_1 u_1 + ... + λ_k u_k ∈ U,

⟨u, v_2⟩ = Σ_{j=1}^k λ̄_j ⟨u_j, v_2⟩ = 0,

so that v_2 ∈ U^⊥.

Corollary 3.12. Let V be a finite-dimensional inner product space and U ≤ V. Then

(1) dim U^⊥ = dim V − dim U.
(2) U = (U^⊥)^⊥.

Proof. (1) This is immediate from Proposition 2.5.

(2) We give two proofs of this. Two applications of (1) give

dim (U^⊥)^⊥ = dim V − dim U^⊥ = dim U,

while Proposition 3.10(3) gives U ⊆ (U^⊥)^⊥. Since the dimensions agree, we conclude that we have equality.
Alternatively, let v ∈ (U^⊥)^⊥ and write v = v_1 + v_2 with v_1 ∈ U and v_2 ∈ U^⊥. Then

0 = ⟨v_2, v⟩ = ⟨v_2, v_1⟩ + ⟨v_2, v_2⟩ = ‖v_2‖²,

so that v_2 = 0, giving v = v_1 ∈ U. Thus (U^⊥)^⊥ ⊆ U which, with Proposition 3.10(3), yields U = (U^⊥)^⊥. This argument works when V, and even U, are infinite-dimensional, so long as V = U ⊕ U^⊥.

Remark. Both Theorem 3.11 and Corollary 3.12(2) can fail when U is infinite-dimensional.

When V = U ⊕ U^⊥, we can apply the results on direct sums and projections to get projections onto U and U^⊥.

Definition. Let V be an inner product space and U ≤ V such that V = U ⊕ U^⊥. The projection π_U : V → V with image U and kernel U^⊥ is called the orthogonal projection onto U.

Remark. π_{U^⊥} = id_V − π_U.

The situation is illustrated in Figure 3.4.

Figure 3.4: Orthogonal projections

Lemma 3.13. Let V be an inner product space and U ≤ V a finite-dimensional subspace with orthonormal basis u_1, ..., u_k. Then, for all v ∈ V,

π_U(v) = Σ_{i=1}^k ⟨u_i, v⟩ u_i.

Proof. This is the proof of Theorem 3.11.

Let us conclude this chapter with an application to a minimisation problem that, among other things, underlies much of Fourier analysis (see MA20223).

Theorem 3.14. Let V be an inner product space and U ≤ V such that V = U ⊕ U^⊥. For v ∈ V, π_U(v) is the nearest point of U to v: for all u ∈ U,

‖v − π_U(v)‖ ≤ ‖v − u‖.

Proof. As we see in Figure 3.5, this is just the Pythagoras theorem (Proposition 3.2). Indeed, for u ∈ U, note that π_U(v) − u ∈ U while v − π_U(v) = π_{U^⊥}(v) ∈ U^⊥. Thus,

‖v − u‖² = ‖(v − π_U(v)) + (π_U(v) − u)‖² = ‖v − π_U(v)‖² + ‖π_U(v) − u‖² ≥ ‖v − π_U(v)‖².

Now take square roots!

Exercise. Read Example 6.58 of Axler's Linear Algebra Done Right to see a beautiful application of this result. He takes V = C⁰[−π, π] and U to be the space of polynomials of degree at most 5 to get an astonishingly accurate polynomial approximation to sin.
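Lemma 3.13 and Theorem 3.14 can be illustrated numerically, reusing the orthonormal basis of U = {x ∈ R³ : x_1 + x_2 + x_3 = 0} computed in the Gram–Schmidt example; `proj_U` and `inner` are hypothetical helper names for this sketch.

```python
import math, random

def inner(v, w):
    return sum(a * b for a, b in zip(v, w))

def norm(v):
    return math.sqrt(inner(v, v))

# orthonormal basis of U from the earlier worked example
s2, s6 = math.sqrt(2), math.sqrt(6)
u1 = [1/s2, 0.0, -1/s2]
u2 = [-1/s6, 2/s6, -1/s6]

def proj_U(v):
    # Lemma 3.13: pi_U(v) = sum_i <u_i, v> u_i
    c1, c2 = inner(u1, v), inner(u2, v)
    return [c1 * a + c2 * b for a, b in zip(u1, u2)]

v = [1.0, 2.0, 4.0]
p = proj_U(v)

# v - pi_U(v) lies in U-perp: it is orthogonal to both basis vectors of U
r = [vi - pi for vi, pi in zip(v, p)]
assert abs(inner(r, u1)) < 1e-12 and abs(inner(r, u2)) < 1e-12

# Theorem 3.14: pi_U(v) beats every other point of U (sampled at random)
random.seed(1)
for _ in range(100):
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    u = [a * x + b * y for x, y in zip(u1, u2)]
    assert norm([vi - ui for vi, ui in zip(v, u)]) >= norm(r) - 1e-12
```

The random sampling only spot-checks the minimisation claim, of course; the proof above is what establishes it.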
Figure 3.5: The orthogonal projection minimises distance to U
Chapter 4

Linear operators on inner product spaces

Convention. In this chapter, we once again take the field F of scalars to be either R or C.

4.1 Linear operators and their adjoints

4.1.1 Linear operators and matrices

Definition. Let V be a vector space over F. A linear operator on V is a linear map φ : V → V. The vector space of linear operators on V is denoted L(V) (instead of L(V, V)).

A special case of our earlier analysis of linear maps and matrices tells us that linear operators, in the presence of a basis, are closely related to square matrices: if V is a finite-dimensional vector space over F with basis B = v_1, ..., v_n and φ ∈ L(V), then the matrix of φ with respect to B is the square matrix A = (A_ij) ∈ M_{n×n}(F) with

φ(v_j) = Σ_{i=1}^n A_ij v_i,    (4.1)

for 1 ≤ j ≤ n. Equivalently, φ(x_1 v_1 + ... + x_n v_n) = y_1 v_1 + ... + y_n v_n where

y = Ax.

4.1.2 Adjoints

First a preliminary lemma:

Lemma 4.1 (Nondegeneracy Lemma). Let V be an inner product space and v ∈ V. Then ⟨v, w⟩ = 0, for all w ∈ V, if and only if v = 0.

Proof. For the forward implication, take w = v to get ⟨v, v⟩ = 0 and so v = 0 by positive-definiteness of the inner product. Conversely, if v = 0, ⟨v, w⟩ = 0, for any w ∈ V, since the inner product is anti-linear in the first slot¹.

¹To spell it out: ⟨0, w⟩ = ⟨0 + 0, w⟩ = ⟨0, w⟩ + ⟨0, w⟩.
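Equation (4.1) says that column j of A holds the coordinates of φ(v_j) in the basis B. A small sketch, where `phi` is a hypothetical example operator (coordinate swap on R², i.e. reflection in the line y = x) and B is the standard basis so that coordinates can be read off directly:

```python
def phi(x):
    # example operator on R^2: swap the two coordinates
    return [x[1], x[0]]

basis = [[1, 0], [0, 1]]      # the standard basis, so phi(v_j) is its own coordinate vector
n = len(basis)

# (4.1): column j of A is phi(v_j) expressed in the basis
A = [[phi(basis[j])[i] for j in range(n)] for i in range(n)]
assert A == [[0, 1], [1, 0]]

# equivalently, phi acts on coordinate vectors as y = A x
x = [3, 5]
y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
assert y == phi(x)
```

For a non-standard basis one would instead have to solve for the coordinates of each φ(v_j); the standard basis is chosen here purely to keep the sketch short.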
More informationCHAPTER 3. Hilbert spaces
CHAPTER 3 Hilbert spaces There are really three types of Hilbert spaces (over C). The finite dimensional ones, essentially just C n, for different integer values of n, with which you are pretty familiar,
More informationThe value of a problem is not so much coming up with the answer as in the ideas and attempted ideas it forces on the would be solver I.N.
Math 410 Homework Problems In the following pages you will find all of the homework problems for the semester. Homework should be written out neatly and stapled and turned in at the beginning of class
More information( 9x + 3y. y 3y = (λ 9)x 3x + y = λy 9x + 3y = 3λy 9x + (λ 9)x = λ(λ 9)x. (λ 2 10λ)x = 0
Math 46 (Lesieutre Practice final ( minutes December 9, 8 Problem Consider the matrix M ( 9 a Prove that there is a basis for R consisting of orthonormal eigenvectors for M This is just the spectral theorem:
More informationSpectral Theorem for Self-adjoint Linear Operators
Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;
More informationBASIC ALGORITHMS IN LINEAR ALGEBRA. Matrices and Applications of Gaussian Elimination. A 2 x. A T m x. A 1 x A T 1. A m x
BASIC ALGORITHMS IN LINEAR ALGEBRA STEVEN DALE CUTKOSKY Matrices and Applications of Gaussian Elimination Systems of Equations Suppose that A is an n n matrix with coefficents in a field F, and x = (x,,
More informationx i e i ) + Q(x n e n ) + ( i<n c ij x i x j
Math 210A. Quadratic spaces over R 1. Algebraic preliminaries Let V be a finite free module over a nonzero commutative ring F. Recall that a quadratic form on V is a map Q : V F such that Q(cv) = c 2 Q(v)
More information1 Functional Analysis
1 Functional Analysis 1 1.1 Banach spaces Remark 1.1. In classical mechanics, the state of some physical system is characterized as a point x in phase space (generalized position and momentum coordinates).
More informationFunctional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...
Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................
More informationMTH 2032 SemesterII
MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents
More informationMIDTERM I LINEAR ALGEBRA. Friday February 16, Name PRACTICE EXAM SOLUTIONS
MIDTERM I LIEAR ALGEBRA MATH 215 Friday February 16, 2018. ame PRACTICE EXAM SOLUTIOS Please answer the all of the questions, and show your work. You must explain your answers to get credit. You will be
More informationMath 396. An application of Gram-Schmidt to prove connectedness
Math 396. An application of Gram-Schmidt to prove connectedness 1. Motivation and background Let V be an n-dimensional vector space over R, and define GL(V ) to be the set of invertible linear maps V V
More informationLecture notes - Math 110 Lec 002, Summer The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition.
Lecture notes - Math 110 Lec 002, Summer 2016 BW The reference [LADR] stands for Axler s Linear Algebra Done Right, 3rd edition. 1 Contents 1 Sets and fields - 6/20 5 1.1 Set notation.................................
More informationChapter 5. Basics of Euclidean Geometry
Chapter 5 Basics of Euclidean Geometry 5.1 Inner Products, Euclidean Spaces In Affine geometry, it is possible to deal with ratios of vectors and barycenters of points, but there is no way to express the
More informationSUMMARY OF MATH 1600
SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More informationMath 115A: Linear Algebra
Math 115A: Linear Algebra Michael Andrews UCLA Mathematics Department February 9, 218 Contents 1 January 8: a little about sets 4 2 January 9 (discussion) 5 2.1 Some definitions: union, intersection, set
More information12x + 18y = 30? ax + by = m
Math 2201, Further Linear Algebra: a practical summary. February, 2009 There are just a few themes that were covered in the course. I. Algebra of integers and polynomials. II. Structure theory of one endomorphism.
More informationLinear Algebra Massoud Malek
CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product
More informationarxiv: v1 [math.gr] 8 Nov 2008
SUBSPACES OF 7 7 SKEW-SYMMETRIC MATRICES RELATED TO THE GROUP G 2 arxiv:0811.1298v1 [math.gr] 8 Nov 2008 ROD GOW Abstract. Let K be a field of characteristic different from 2 and let C be an octonion algebra
More informationMath 4A Notes. Written by Victoria Kala Last updated June 11, 2017
Math 4A Notes Written by Victoria Kala vtkala@math.ucsb.edu Last updated June 11, 2017 Systems of Linear Equations A linear equation is an equation that can be written in the form a 1 x 1 + a 2 x 2 +...
More informationTHE MINIMAL POLYNOMIAL AND SOME APPLICATIONS
THE MINIMAL POLYNOMIAL AND SOME APPLICATIONS KEITH CONRAD. Introduction The easiest matrices to compute with are the diagonal ones. The sum and product of diagonal matrices can be computed componentwise
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationCHAPTER VIII HILBERT SPACES
CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)
More informationNONCOMMUTATIVE POLYNOMIAL EQUATIONS. Edward S. Letzter. Introduction
NONCOMMUTATIVE POLYNOMIAL EQUATIONS Edward S Letzter Introduction My aim in these notes is twofold: First, to briefly review some linear algebra Second, to provide you with some new tools and techniques
More informationLinear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) 1.1 The Formal Denition of a Vector Space
Linear Algebra (part 1) : Vector Spaces (by Evan Dummit, 2017, v. 1.07) Contents 1 Vector Spaces 1 1.1 The Formal Denition of a Vector Space.................................. 1 1.2 Subspaces...................................................
More informationMATH 115A: SAMPLE FINAL SOLUTIONS
MATH A: SAMPLE FINAL SOLUTIONS JOE HUGHES. Let V be the set of all functions f : R R such that f( x) = f(x) for all x R. Show that V is a vector space over R under the usual addition and scalar multiplication
More informationSpectral Theory, with an Introduction to Operator Means. William L. Green
Spectral Theory, with an Introduction to Operator Means William L. Green January 30, 2008 Contents Introduction............................... 1 Hilbert Space.............................. 4 Linear Maps
More informationIMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET
IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each
More information