MA20216: Algebra 2A. Notes by Fran Burstall
Corrections by: Callum Kemp, Carlos Galeano Rios, Kate Powell, Tobias Beith, Krunoslav Lehman Pavasovic, Dan Corbie, Phaidra Anastasiadou, Louise Hannon, Vlad Brebeanu, Lauren Godfrey, Elizabeth Crowley, James Green, Reuben Russell, Ross Trigoll, Emerald Dilworth, George Milton, Caitlin Ray.
Contents

1 Linear algebra: concepts and examples
  1.1 Vector spaces
  1.2 Subspaces
  1.3 Bases
    1.3.1 Standard bases
  1.4 Linear maps
    1.4.1 Vector spaces of linear maps
    1.4.2 Extension by linearity
    1.4.3 The rank-nullity theorem
2 Sums, products and quotients
  2.1 Sums and products
    2.1.1 Sums of subspaces
    2.1.2 Internal direct sum (two summands)
    2.1.3 Internal direct sums (many summands)
    2.1.4 External direct sums and products
  2.2 Quotients
3 Inner product spaces
  3.1 Inner products
    3.1.1 Definition and examples
    3.1.2 Cauchy–Schwarz inequality
  3.2 Orthogonality
    3.2.1 Orthonormal bases
    3.2.2 Orthogonal complements and orthogonal projection
4 Linear operators on inner product spaces
  4.1 Linear operators and their adjoints
    4.1.1 Linear operators and matrices
    4.1.2 Adjoints
    4.1.3 Linear isometries
  4.2 The spectral theorem
    4.2.1 Eigenvalues and eigenvectors
    4.2.2 Invariant subspaces and adjoints
    4.2.3 The spectral theorem for normal operators
    4.2.4 The spectral theorem for real self-adjoint operators
    4.2.5 The spectral theorem for symmetric and Hermitian matrices
    4.2.6 Singular value decomposition
5 Duality
  5.1 Dual spaces
  5.2 Solution sets and annihilators
  5.3 Transposes
6 Bilinearity
  6.1 Bilinear maps
  6.2 Bilinear forms and quadratic forms
    6.2.1 Bilinear forms and matrices
    6.2.2 Symmetric bilinear forms
    6.2.3 Quadratic forms
  6.3 Classification of symmetric bilinear and quadratic forms
A Further results
  A.1 More on sums and products
    A.1.1 Sums, products and linear maps
    A.1.2 Infinite sums and products
Chapter 1: Linear algebra: concepts and examples

Let us warm up by revising some of the key ideas from Algebra 1B. Along the way, we will see some new examples and prove a couple of new results.

1.1 Vector spaces

Recall from Algebra 1B, 2.1:

Definition. A vector space V over a field F is a set V with two operations:
- addition V × V → V : (v, w) ↦ v + w, with respect to which V is an abelian group:
  - v + w = w + v, for all v, w ∈ V;
  - u + (v + w) = (u + v) + w, for all u, v, w ∈ V;
  - there is a zero element 0 ∈ V for which v + 0 = v = 0 + v, for all v ∈ V;
  - each element v ∈ V has an additive inverse −v ∈ V for which v + (−v) = 0 = (−v) + v.
- scalar multiplication F × V → V : (λ, v) ↦ λv such that
  - (λ + µ)v = λv + µv, for all v ∈ V, λ, µ ∈ F;
  - λ(v + w) = λv + λw, for all v, w ∈ V, λ ∈ F;
  - (λµ)v = λ(µv), for all v ∈ V, λ, µ ∈ F;
  - 1v = v, for all v ∈ V.

We call the elements of F scalars and those of V vectors.

Examples.
1. Take V = F, the field itself, with addition and scalar multiplication the field addition and multiplication.
2. Fⁿ, the n-fold Cartesian product of F with itself, with component-wise addition and scalar multiplication:
   (λ₁, …, λₙ) + (µ₁, …, µₙ) := (λ₁ + µ₁, …, λₙ + µₙ)
   λ(λ₁, …, λₙ) := (λλ₁, …, λλₙ).
3. Let M_{m×n}(F) denote the set of m-by-n matrices (thus m rows and n columns) with entries in F. This is a vector space under entry-wise addition and scalar multiplication. Special cases are the vector spaces of column vectors M_{n×1}(F) and row vectors M_{1×n}(F). In computations, we often identify Fⁿ with M_{n×1}(F) by associating x = (x₁, …, xₙ) ∈ Fⁿ with the column vector with entries x₁, …, xₙ.
4. Here is a very general example: let I be any set and V a vector space. Recall that V^I denotes the set {f : I → V} of all maps from I to V. I claim that V^I is a vector space under pointwise addition and scalar multiplication. That is, for f, g : I → V and λ ∈ F, we define, for all i ∈ I,
   (f + g)(i) := f(i) + g(i)
   (λf)(i) := λ(f(i)).
The zero element is just the constant zero function 0(i) := 0, and the additive inverses are defined pointwise also: (−f)(i) := −(f(i)).

Exercise.¹ Prove the claim! That is, show that V^I is a vector space under pointwise addition and scalar multiplication.

Remark. For suitable I, this last example captures many familiar vector spaces. For example:
- We identify Fⁿ with F^{1,…,n} by associating (x₁, …, xₙ) ∈ Fⁿ with the map (i ↦ x_i).
- Similarly, we identify M_{m×n}(F) with F^{{1,…,m}×{1,…,n}} by associating the matrix A with the map ((i, j) ↦ A_{ij}).
- Rᴺ is the set of real sequences {(aₙ)_{n∈N} : aₙ ∈ R} that played such a starring role in Analysis 1.

1.2 Subspaces

Definition. A vector (or linear) subspace of a vector space V over F is a non-empty subset U ⊆ V which is closed under addition and scalar multiplication: whenever u, u₁, u₂ ∈ U and λ ∈ F, then u₁ + u₂ ∈ U and λu ∈ U. In this case, we write U ≤ V. Say that U is trivial if U = {0} and proper if U ≠ V.

Of course, U is now a vector space in its own right using the addition and scalar multiplication of V.

Examples. A good way to see that something is a vector space is to see that it is a subspace of some V^I. That way, there is no need to verify all the tedious axioms (associativity, distributivity and so on).
1. The set c := {real convergent sequences} ≤ Rᴺ and so is a vector space. This is part of the content of the Algebra of Limits Theorem in Analysis 1.

¹ Question 4 on sheet 1.
2. Let [a, b] ⊆ R be an interval and set
   C⁰[a, b] := {f : [a, b] → R | f is continuous},
the set of continuous functions. Then C⁰[a, b] ≤ R^[a,b]. This is most of the Algebra of Continuous Functions Theorem from Analysis 1.

1.3 Bases

Definitions. Let v₁, …, vₙ be a list of vectors in a vector space V.
1. The span of v₁, …, vₙ is span{v₁, …, vₙ} := {λ₁v₁ + ⋯ + λₙvₙ | λ_i ∈ F, 1 ≤ i ≤ n} ≤ V.
2. v₁, …, vₙ span V (or are a spanning list for V) if span{v₁, …, vₙ} = V.
3. v₁, …, vₙ are linearly independent if, whenever λ₁v₁ + ⋯ + λₙvₙ = 0, then each λ_i = 0, 1 ≤ i ≤ n, and linearly dependent otherwise.
4. v₁, …, vₙ is a basis for V if they are linearly independent and span V.

Definition. A vector space is finite-dimensional if it admits a finite list of vectors as basis and infinite-dimensional otherwise. If V is finite-dimensional, the dimension of V, dim V, is the number of vectors in a (any) basis of V.

Here is a slightly different take on bases which can be helpful in practice:

Proposition 1.1. v₁, …, vₙ is a basis for V if and only if any v ∈ V can be written in the form
   v = λ₁v₁ + ⋯ + λₙvₙ  (1.1)
for unique λ₁, …, λₙ ∈ F.

Proof. First suppose that v₁, …, vₙ is a basis and so spans. Then, for v ∈ V, we can find some λ₁, …, λₙ ∈ F for which (1.1) holds. For uniqueness, suppose that v = Σᵢ λᵢvᵢ = Σᵢ µᵢvᵢ. Then
   0 = v − v = (λ₁ − µ₁)v₁ + ⋯ + (λₙ − µₙ)vₙ
and the linear independence of the vᵢ now forces each λᵢ = µᵢ.

Conversely, if the vᵢ have the unique-linear-combination property, they clearly span. As for linear independence, suppose that λ₁v₁ + ⋯ + λₙvₙ = 0. Since we also have 0 = 0v₁ + ⋯ + 0vₙ, the uniqueness tells us that each λᵢ = 0.

A very useful fact about bases that we shall use many times was proved in Algebra 1B:

Proposition 1.2 (Algebra 1B, Chapter 3, Theorem 7(b)). Any linearly independent list of vectors in a finite-dimensional vector space can be extended to a basis. In particular, any basis of a subspace U ≤ V extends to a basis of V.

Consequently:

Lemma 1.3. Let V be a finite-dimensional vector space and U ≤ V. Then
   dim U ≤ dim V,
with equality if and only if U = V.

Proof. A basis of U contains no more vectors than one of V, hence the inequality. In the case of equality, a basis of U is already a maximal linearly independent list of vectors in V and so must be a basis of V. Thus U = V.
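Proposition 1.1 says that a basis assigns every vector unique coordinates. As a concrete illustration (the code and names are mine, not part of the notes), take the basis v₁ = (1, 1), v₂ = (1, −1) of R²: solving λ₁ + λ₂ = a, λ₁ − λ₂ = b shows each v = (a, b) has the unique coordinates λ₁ = (a + b)/2, λ₂ = (a − b)/2.

```python
from fractions import Fraction as Q

def coords(v):
    # unique coordinates of v = (a, b) in the basis v1 = (1, 1), v2 = (1, -1):
    # solving lam1 + lam2 = a, lam1 - lam2 = b gives the formulas below
    a, b = v
    return (Q(a + b, 2), Q(a - b, 2))

lam1, lam2 = coords((3, 1))
assert (lam1, lam2) == (Q(2), Q(1))
# reconstruct v = lam1*v1 + lam2*v2 and check it
v = (lam1 * 1 + lam2 * 1, lam1 * 1 + lam2 * (-1))
assert v == (Q(3), Q(1))
```

Uniqueness is exactly what makes `coords` a well-defined function: a different pair (λ₁, λ₂) would give a different vector.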
1.3.1 Standard bases

In general, finite-dimensional vector spaces have many bases and there is no good reason to prefer any particular one. However, some lucky vector spaces come equipped with a natural basis.

Proposition 1.4. For I a set and i ∈ I, define e_i ∈ F^I by
   e_i(j) = 1 if i = j, and e_i(j) = 0 if i ≠ j,
for all j ∈ I. If I is finite then (e_i)_{i∈I} is a basis, called the standard basis, of F^I. In particular, dim F^I = |I|.

Proof. The key observation is that there is a unique way to write v ∈ F^I as a linear combination of the e_i:
   v = Σ_{i∈I} v(i)e_i.
Indeed, for j ∈ I,
   (Σ_{i∈I} v(i)e_i)(j) = Σ_{i∈I} v(i)e_i(j) = Σ_{i≠j} v(i)·0 + v(j)·1 = v(j).
Thus Proposition 1.1 applies to show that the (e_i)_{i∈I} are a basis.

Examples. Identify Fⁿ with F^{1,…,n} and then e_i = (0, …, 1, …, 0) with a single 1 in the i-th place. Similarly, the vector space of column vectors has a standard basis where e_i is the column vector with a single 1 in the i-th row. Finally, identifying M_{m×n}(F) with F^{{1,…,m}×{1,…,n}} yields the standard basis (e_{(i,j)})_{i,j} of M_{m×n}(F), where e_{(i,j)} differs from the zero matrix by a single 1 in the i-th row and j-th column.

1.4 Linear maps

Definitions. A map φ : V → W of vector spaces over F is a linear map (or, in older books, linear transformation) if
   φ(v + w) = φ(v) + φ(w)
   φ(λv) = λφ(v),
for all v, w ∈ V, λ ∈ F.

The kernel of φ is ker φ := {v ∈ V | φ(v) = 0} ≤ V. The image of φ is im φ := {φ(v) | v ∈ V} ≤ W.

Remark. φ is linear if and only if φ(v + λw) = φ(v) + λφ(w), for all v, w ∈ V, λ ∈ F, which has the virtue of being only one thing to prove.
Examples.
1. A ∈ M_{m×n}(F) determines a linear map φ_A : Fⁿ → Fᵐ by φ_A(x) = y where, for 1 ≤ i ≤ m,
   y_i = Σ_{j=1}^{n} A_{ij}x_j.
Otherwise said, y is given by matrix multiplication: y = Ax.
2. For any vector space V, the identity map id_V : V → V is linear.
3. If φ : V → W and ψ : W → U are linear then so is ψ ∘ φ : V → U.
4. Recall that c is the vector space of convergent sequences. The map lim : c → R, (aₙ)_{n∈N} ↦ lim_{n→∞} aₙ, is linear thanks to the Algebra of Limits Theorem in Analysis 1.
5. The map f ↦ ∫_a^b f : C⁰[a, b] → R is also linear.

Definition. A linear map φ : V → W is a (linear) isomorphism if there is a linear map ψ : W → V such that ψ ∘ φ = id_V and φ ∘ ψ = id_W. If there is an isomorphism V → W, say that V and W are isomorphic and write V ≅ W.

In Algebra 1B, we saw:

Lemma 1.5. φ : V → W is an isomorphism if and only if φ is a linear bijection (and then ψ = φ⁻¹).

1.4.1 Vector spaces of linear maps

Notation. For vector spaces V, W over F, denote by L_F(V, W) (or simply L(V, W)) the set {φ : V → W | φ is linear} of linear maps from V to W.

Theorem 1.6 (Linearity is a linear condition). L(V, W) is a vector space under pointwise addition and scalar multiplication. Otherwise said, L(V, W) ≤ W^V.

Proof. It is enough to show that L(V, W) is a vector subspace of W^V, that is, is non-empty and closed under addition and scalar multiplication.

First observe that the zero map 0 : v ↦ 0 ∈ W is linear:
   0(v + λw) = 0 = 0 + λ0 = 0(v) + λ0(w).
In particular, L(V, W) is non-empty.

Now let φ, ψ ∈ L(V, W) and show that φ + ψ is linear:
   (φ + ψ)(v + λw) = φ(v + λw) + ψ(v + λw)
                   = φ(v) + λφ(w) + ψ(v) + λψ(w)
                   = (φ(v) + ψ(v)) + λ(φ(w) + ψ(w))
                   = (φ + ψ)(v) + λ(φ + ψ)(w),
for all v, w ∈ V, λ ∈ F. Here the first and last equalities are just the definition of pointwise addition while the middle equalities come from the linearity of φ, ψ and the vector space axioms of W.

Similarly, it is a simple exercise to see that if µ ∈ F and φ ∈ L(V, W) then µφ is also linear.
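The formula y_i = Σ_j A_{ij}x_j from Example 1 is easy to compute with directly. An illustrative sketch (plain Python; the function name `matvec` is mine) implementing φ_A and spot-checking its linearity:

```python
def matvec(A, x):
    # y_i = sum_j A[i][j] * x[j]  -- the linear map phi_A
    return [sum(a * b for a, b in zip(row, x)) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]          # a 2x3 matrix: phi_A maps R^3 -> R^2
x = [1, 0, -1]
assert matvec(A, x) == [-2, -2]

# linearity: phi_A(x + 2w) == phi_A(x) + 2 phi_A(w)
w = [2, 1, 0]
lhs = matvec(A, [a + 2 * b for a, b in zip(x, w)])
rhs = [a + 2 * b for a, b in zip(matvec(A, x), matvec(A, w))]
assert lhs == rhs
```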
1.4.2 Extension by linearity

A linear map of a finite-dimensional vector space is completely determined by its action on a basis. More precisely:

Proposition 1.7 (Extension by linearity). Let V, W be vector spaces over F. Let v₁, …, vₙ be a basis of V and w₁, …, wₙ any vectors in W. Then there is a unique φ ∈ L(V, W) such that
   φ(v_i) = w_i, 1 ≤ i ≤ n.  (1.2)

Proof. We need to prove that such a φ exists and that there is only one. We prove existence first. Let v ∈ V. By Proposition 1.1, we know there are unique λ₁, …, λₙ ∈ F for which v = λ₁v₁ + ⋯ + λₙvₙ and so we define φ(v) to be the only thing it could be:
   φ(v) := λ₁w₁ + ⋯ + λₙwₙ.
Let us show that this φ does the job. First, with λ_i = 1 and λ_j = 0, for j ≠ i, we see that φ(v_i) = Σ_{j≠i} 0w_j + 1w_i = w_i so that (1.2) holds. Now let us see that φ is linear: let v, w ∈ V with
   v = λ₁v₁ + ⋯ + λₙvₙ,  w = µ₁v₁ + ⋯ + µₙvₙ.
Then, for λ ∈ F,
   v + λw = (λ₁ + λµ₁)v₁ + ⋯ + (λₙ + λµₙ)vₙ,
whence
   φ(v + λw) = (λ₁ + λµ₁)w₁ + ⋯ + (λₙ + λµₙ)wₙ
             = (λ₁w₁ + ⋯ + λₙwₙ) + λ(µ₁w₁ + ⋯ + µₙwₙ)
             = φ(v) + λφ(w).
For uniqueness, suppose that φ, φ′ ∈ L(V, W) both satisfy (1.2). Let v ∈ V and write v = λ₁v₁ + ⋯ + λₙvₙ. Then
   φ(v) = λ₁φ(v₁) + ⋯ + λₙφ(vₙ) = λ₁w₁ + ⋯ + λₙwₙ = λ₁φ′(v₁) + ⋯ + λₙφ′(vₙ) = φ′(v),
where the first and last equalities come from the linearity of φ, φ′ and the middle two from (1.2) for first φ and then φ′. We conclude that φ = φ′ and we are done.

Remark. In the context of Proposition 1.7, φ is an isomorphism if and only if w₁, …, wₙ is a basis for W (exercise²!).

Here is an application which gives us another way to think about bases: we can view them as linear isomorphisms from Fⁿ. Let B : v₁, …, vₙ be a basis for V. Then Proposition 1.7 gives us a linear isomorphism φ_B : Fⁿ → V such that
   φ_B(e_i) = v_i, 1 ≤ i ≤ n,  (1.3)
that is, φ_B(x) = Σ_i x_i v_i. Conversely, any linear isomorphism φ : Fⁿ → V defines a unique basis via (1.3).

² This is question 2 on exercise sheet 2.
1.4.3 The rank-nullity theorem

Easily the most important result in Algebra 1B is the famous rank-nullity theorem:

Theorem 1.8 (Rank-nullity). Let φ : V → W be linear with V finite-dimensional. Then
   dim im φ + dim ker φ = dim V.

Using this, together with the observation that φ is injective if and only if ker φ = {0}, we saw in Algebra 1B:

Proposition 1.9. Let φ : V → W be linear with V, W finite-dimensional vector spaces of the same dimension: dim V = dim W. Then the following are equivalent:
1. φ is injective.
2. φ is surjective.
3. φ is an isomorphism.

Remark. Proposition 1.9 is flat-out false for infinite-dimensional V, W. For example, let S : Rᴺ → Rᴺ be the shift operator:
   S((a₀, a₁, …)) := (a₁, a₂, …).
We readily check that S is linear; S surjects; and S is not injective: for example, S((1, 0, 0, …)) = 0.
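The shift operator can be modelled concretely by treating a sequence as a function N → R. The sketch below (illustrative Python, my own modelling choice) checks on finitely many terms that S kills (1, 0, 0, …) and that every sequence b is hit by S, exactly as the remark claims:

```python
# The shift operator S on sequences, modelled as functions N -> R.
def shift(a):
    # S((a_0, a_1, ...)) := (a_1, a_2, ...)
    return lambda n: a(n + 1)

# the sequence (1, 0, 0, ...) is sent to the zero sequence:
e0 = lambda n: 1 if n == 0 else 0
assert all(shift(e0)(n) == 0 for n in range(10))

# S surjects: given any b, the sequence a = (0, b_0, b_1, ...) has S(a) = b
b = lambda n: n * n
a = lambda n: 0 if n == 0 else b(n - 1)
assert all(shift(a)(n) == b(n) for n in range(10))
```

Of course a finite check is not a proof, but it makes the failure of injectivity vivid: two different sequences (e0 and the zero sequence) have the same image under S.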
Chapter 2: Sums, products and quotients

We will discuss various ways of building new vector spaces out of old ones.

Convention. In this chapter, all vector spaces are over the same field F unless we say otherwise.

2.1 Sums and products

2.1.1 Sums of subspaces

Definition. Let V₁, …, V_k ≤ V. The sum V₁ + ⋯ + V_k is the set
   V₁ + ⋯ + V_k := {v₁ + ⋯ + v_k | v_i ∈ V_i, 1 ≤ i ≤ k}.

V₁ + ⋯ + V_k is the smallest subspace of V that contains each V_i. More precisely:

Proposition 2.1. Let V₁, …, V_k ≤ V. Then
(1) V₁ + ⋯ + V_k ≤ V.
(2) If W ≤ V and V₁, …, V_k ≤ W then V₁ + ⋯ + V_k ≤ W.

Proof. It suffices to prove (2) since (1) then follows by taking W = V. For (2), first note that V₁ + ⋯ + V_k is a subset of W: if v_i ∈ V_i then v_i ∈ W, so that v₁ + ⋯ + v_k ∈ W since W is closed under addition. Now observe that each V_i ⊆ V₁ + ⋯ + V_k since we can write any v_i ∈ V_i as 0 + ⋯ + v_i + ⋯ + 0 ∈ V₁ + ⋯ + V_k. In particular, V₁ + ⋯ + V_k is non-empty. Finally, we show that V₁ + ⋯ + V_k is closed under addition and scalar multiplication. If v₁ + ⋯ + v_k, w₁ + ⋯ + w_k ∈ V₁ + ⋯ + V_k, with v_i, w_i ∈ V_i for all i, then
   (v₁ + ⋯ + v_k) + (w₁ + ⋯ + w_k) = (v₁ + w₁) + ⋯ + (v_k + w_k) ∈ V₁ + ⋯ + V_k
since each v_i + w_i ∈ V_i. Again, for λ ∈ F,
   λ(v₁ + ⋯ + v_k) = λv₁ + ⋯ + λv_k ∈ V₁ + ⋯ + V_k,
since each λv_i ∈ V_i.

2.1.2 Internal direct sum (two summands)

Here is an important special case of the sum construction.
Definition. Let V₁, V₂ ≤ V. V is the (internal) direct sum of V₁ and V₂ if
(a) V = V₁ + V₂;
(b) V₁ ∩ V₂ = {0}.
In this case, write V = V₁ ⊕ V₂ and say that V₂ is a complement of V₁ (and V₁ is a complement of V₂!).

[Figure 2.1: R³ as a direct sum of a line and a plane.]

An alternative take:

Proposition 2.2. For V₁, V₂ ≤ V, the following are equivalent:
(1) V = V₁ + V₂ and V₁ ∩ V₂ = {0}.
(2) Each v ∈ V can be written
   v = v₁ + v₂,
for unique v_i ∈ V_i, i = 1, 2.

Proof. We show (1) implies (2) first. Let v ∈ V. Since V = V₁ + V₂, there are v_i ∈ V_i, i = 1, 2, with v = v₁ + v₂. For the uniqueness, if v = v₁′ + v₂′ also with v_i′ ∈ V_i then 0 = v − v = (v₁ − v₁′) + (v₂ − v₂′) yields v₁ − v₁′ = −(v₂ − v₂′) ∈ V₁ ∩ V₂ = {0}. Thus v_i = v_i′, i = 1, 2.

Now suppose (2) holds and prove (1): clearly we have V = V₁ + V₂. If v ∈ V₁ ∩ V₂ then we can write v = v + 0 = 0 + v, two decompositions with summands in V₁ and V₂ respectively. The uniqueness part of (2) now forces these to coincide, that is, v = 0.

The situation is illustrated in Figure 2.2.

Dimensions add in direct sums:

Proposition 2.3. Let V = V₁ ⊕ V₂ with V finite-dimensional. Then dim V = dim V₁ + dim V₂.

Proof. Let v₁, …, v_k be a basis for V₁ and w₁, …, w_m be a basis for V₂. It suffices to show that v₁, …, v_k, w₁, …, w_m is a basis for V.
13 V 2 v 2 v V 1 0 v 1 Figure 2.2: R 2 = V 1 V 2 For this, let v V. By Proposition 2.2, we have unique v V 1, v V 2 for which v = v + v while, by Proposition 1.1, there are unique scalars λ i, µ j F such that We conclude that v = λ 1 v λ k v k, v = µ 1 w µ m w m. v = λ 1 v λ k v k + µ 1 w µ m w m, for unique λ i, µ j F so that, by Proposition 1.1, v 1,..., v k, w 1,..., w m is a basis as required. For finite-dimensional vector spaces, any subspace has a complement: Proposition 2.4 (Complements exist). Let U V, a finite-dimensional vector space. Then there is a complement to U. Proof. Let v 1,..., v k be a basis for U and so a linearly independent list of vectors in V. By Proposition 1.2, we can extend the list to get a basis v 1,..., v n of V. Set W = span{v k+1,..., v n }: this is a complement to U. Indeed, V = U + W since any v V can be written v = λ 1 v λ n v n = (λ 1 v λ k v k ) + (λ k+1 v k λ n v n ) U + W. Further, if v U W we can write v = λ 1 v λ k v k + 0v k v n = 0v v k + λ k+1 v k λ n v n and uniqueness in Proposition 1.1 tells us that each λ i = 0 so that v = 0. In fact, as Figure 2.3 illustrates, there are many complements to a given subspace. Figure 2.3: Each dashed line is a complement to the undashed subspace. 11
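The unique splitting of Proposition 2.2 can be computed explicitly in a small example. In R² with V₁ = span{(1, 0)} and V₂ = span{(1, 1)} (my own choice of subspaces, for illustration), every v = (a, b) splits uniquely as v₁ + v₂ with v₂ = b(1, 1) and v₁ = (a − b)(1, 0):

```python
from fractions import Fraction as Q

# V1 = span{(1, 0)}, V2 = span{(1, 1)} in R^2: every v = (a, b) splits
# uniquely as v = v1 + v2 with v2 = b*(1, 1) and v1 = (a - b)*(1, 0).
def split(v):
    a, b = v
    return ((a - b, Q(0)), (b, b))

v1, v2 = split((Q(5), Q(2)))
assert v1 == (Q(3), Q(0)) and v2 == (Q(2), Q(2))
assert (v1[0] + v2[0], v1[1] + v2[1]) == (Q(5), Q(2))
```

The formulas exist precisely because V₁ + V₂ = R² and V₁ ∩ V₂ = {0}; for a pair of subspaces with non-trivial intersection no such well-defined `split` function is possible.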
2.1.3 Internal direct sums (many summands)

We can have more than two summands in the direct sum construction. This is how the conditions of Proposition 2.2 generalise:

Proposition 2.5. Let V₁, …, V_k ≤ V, k ≥ 2. Then the following are equivalent:
(1) V = V₁ + ⋯ + V_k and, for each 1 ≤ j ≤ k, V_j ∩ (Σ_{i≠j} V_i) = {0}.
(2) Any v ∈ V can be written v = v₁ + ⋯ + v_k for unique v_i ∈ V_i, 1 ≤ i ≤ k.

Proof. This is an exercise in imitating the proof of Proposition 2.2.

Definition. Let V₁, …, V_k ≤ V. Say that V is the (internal) direct sum of the V_i if either condition of Proposition 2.5 holds. In this case, write V = V₁ ⊕ ⋯ ⊕ V_k.

Remark. The condition on intersections in Proposition 2.5(1) is much more stringent than simply asking that each V_i ∩ V_j = {0}: the latter is simply not enough when k > 2.

2.1.4 External direct sums and products

There is a similar and very closely related construction where the V_i are arbitrary vector spaces rather than subspaces of a fixed vector space V. For this, recall the Cartesian product of sets X₁, …, X_k: this is
   X₁ × ⋯ × X_k := {(x₁, …, x_k) | x_i ∈ X_i, 1 ≤ i ≤ k}.

The Cartesian product of vector spaces is a vector space under component-wise addition and scalar multiplication:

Theorem 2.6. Let V₁, …, V_k be vector spaces over a field F. Then the Cartesian product V₁ × ⋯ × V_k is a vector space over F under component-wise addition and scalar multiplication:
   (v₁, …, v_k) + (w₁, …, w_k) = (v₁ + w₁, …, v_k + w_k)
   λ(v₁, …, v_k) = (λv₁, …, λv_k).
The zero element is (0, …, 0), where the zero in the i-th slot is the zero element of V_i. Similarly, −(v₁, …, v_k) = (−v₁, …, −v_k).

Proof. This is a straightforward exercise: the vector space axioms for the product come by applying those of the factors V_i to the components.

Definition. Let V₁, …, V_k be vector spaces over a field F. The direct product or external direct sum of the V_i is the Cartesian product of the V_i equipped with the vector space structure of component-wise addition and scalar multiplication.
This space is denoted V₁ × ⋯ × V_k or V₁ ⊕ ⋯ ⊕ V_k.

Remark. The latter notation is a bit confusing since we are already using it for the internal direct sum of subspaces. However, we are about to see that internal and external direct sums are essentially the same.

Dimensions add in direct products too:

Proposition 2.7. Let V₁, …, V_k be finite-dimensional vector spaces. Then V₁ × ⋯ × V_k is also finite-dimensional and
   dim V₁ × ⋯ × V_k = dim V₁ + ⋯ + dim V_k.
Proof. We induct on k. For k = 1, there is nothing to prove. For the induction step, suppose that the formula holds for products with k − 1 factors. Now consider the map p : V₁ × ⋯ × V_k → V₁ given by p(v₁, …, v_k) = v₁. This is plainly linear with im p = V₁ and
   ker p = {0} × V₂ × ⋯ × V_k ≅ V₂ × ⋯ × V_k.
Thus, by the induction hypothesis,
   dim ker p = dim V₂ + ⋯ + dim V_k,
which, together with the rank-nullity theorem, yields dim V₁ × ⋯ × V_k = dim V₁ + ⋯ + dim V_k.

Remark. Another, more tedious, approach is to build a basis for the product out of bases for the V_i: if v⁽ⁱ⁾₁, …, v⁽ⁱ⁾_{n(i)} is a basis for V_i, we define n(i) elements of V₁ × ⋯ × V_k by setting
   v̂⁽ⁱ⁾_j := (0, …, v⁽ⁱ⁾_j, …, 0),
where all components but the i-th are zero. Then the collection of all v̂⁽ⁱ⁾_j, 1 ≤ j ≤ n(i), 1 ≤ i ≤ k, can be shown to be a basis of V₁ × ⋯ × V_k.

We can now see the relation between internal and external direct sums: they are isomorphic in a natural way.

Theorem 2.8. Let V₁, …, V_k ≤ V. Then V = V₁ ⊕ ⋯ ⊕ V_k (internal direct sum) if and only if the linear map Γ : V₁ × ⋯ × V_k → V given by
   Γ(v₁, …, v_k) = v₁ + ⋯ + v_k
is an isomorphism.

Proof. Clearly Γ surjects exactly when V = V₁ + ⋯ + V_k. Moreover, Γ is injective if and only if, whenever v₁ + ⋯ + v_k = w₁ + ⋯ + w_k, with each v_i, w_i ∈ V_i, then v_i = w_i, for all 1 ≤ i ≤ k. Otherwise said, Γ is bijective if and only if the condition of Proposition 2.5(2) holds, that is, when V = V₁ ⊕ ⋯ ⊕ V_k.

Corollary 2.9. If V = V₁ ⊕ ⋯ ⊕ V_k is an internal direct sum of finite-dimensional subspaces then dim V = dim V₁ + ⋯ + dim V_k.

Proof. By Theorem 2.8, we know that V ≅ V₁ × ⋯ × V_k and so we can apply Proposition 2.7.

Remark. On the other hand, we may view any direct product V₁ × ⋯ × V_k as an internal direct sum: define a subspace V̂_i of V₁ × ⋯ × V_k by setting all components to zero except the i-th. Thus
   V̂_i = {(0, …, v_i, …, 0) | v_i ∈ V_i} ≤ V₁ × ⋯ × V_k.
Then each V_i ≅ V̂_i and V₁ × ⋯ × V_k = V̂₁ ⊕ ⋯ ⊕ V̂_k (internal direct sum).

Example. Let us compare R³ × R² with R⁵.
1. Is it true that R³ × R² = R⁵? No: elements of R⁵ are lists of five numbers (x₁, …, x₅) while elements of R³ × R² are pairs of lists of numbers ((x₁, x₂, x₃), (y₁, y₂)). However:
2. R³ × R² ≅ R⁵: we identify ((x₁, x₂, x₃), (y₁, y₂)) with (x₁, x₂, x₃, y₁, y₂).
3. Moreover, as in the previous remark, we can set
   R̂³ := {(x₁, x₂, x₃, 0, 0) | x_i ∈ R} ≅ R³
   R̂² := {(0, 0, 0, y₁, y₂) | y_i ∈ R} ≅ R²
and then R⁵ = R̂³ ⊕ R̂² (internal direct sum).
4. Many people see very little difference between R³ and R̂³ et cetera and simply write R⁵ = R³ × R² = R³ ⊕ R². While we may sympathise, we should remember in some far corner of our minds that these vector spaces are not quite identical.

There is more we could say about sums and products: in particular, one can define the direct sum and product of an infinite number of vector subspaces. However, in that case, the direct product is quite different to the direct sum. You can read about this in the very non-examinable Appendix A.

2.2 Quotients

Let U ≤ V. We construct a new vector space from U and V which is an abstract complement to U. The elements of this vector space are equivalence classes for the following equivalence relation:

Definition. Let U ≤ V. Say that v, w ∈ V are congruent modulo U if v − w ∈ U. In this case, we write v ≡ w mod U.

Lemma 2.10. Congruence modulo U is an equivalence relation.

Proof. Exercise¹!

Thus each v ∈ V lies in exactly one equivalence class [v] ⊆ V. What do these equivalence classes look like? Note that w ≡ v mod U if and only if w − v ∈ U or, equivalently, w = v + u, for some u ∈ U.

Definition. For v ∈ V, U ≤ V, the set v + U := {v + u | u ∈ U} ⊆ V is called a coset of U and v is called a coset representative of v + U.

We conclude that the equivalence class of v modulo U is the coset v + U.

[Figure 2.4: A subspace U ≤ R² and a coset v + U.]

Remark. In geometry, cosets of vector subspaces are called affine subspaces. Examples include lines in R² and lines and planes in R³, irrespective of whether they contain zero (as vector subspaces must).

¹ This is question 1 on exercise sheet 3.
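Congruence modulo U is easy to test computationally: v ≡ w mod U exactly when v − w lies in U. A small illustrative sketch (plain Python, subspace and names my own) for U = span{(1, 2)} in Q²:

```python
from fractions import Fraction as Q

# U = span{(1, 2)} in Q^2; v and w represent the same coset v + U
# exactly when the difference v - w is a multiple of (1, 2).
def same_coset(v, w):
    d = (v[0] - w[0], v[1] - w[1])
    return d[1] * 1 == d[0] * 2        # d parallel to (1, 2)

assert same_coset((Q(3), Q(5)), (Q(4), Q(7)))      # difference (-1, -2) is in U
assert not same_coset((Q(3), Q(5)), (Q(3), Q(6)))  # difference (0, -1) is not
```

Reflexivity, symmetry and transitivity of `same_coset` are inherited from closure of U under subtraction, which is exactly the content of Lemma 2.10.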
Examples.
1. Fibres of a linear map: let φ : V → W be a linear map and let w = φ(v) ∈ im φ. Then v′ ∈ φ⁻¹{w} if and only if φ(v′) = φ(v) or, equivalently, φ(v − v′) = 0, that is, v − v′ ∈ ker φ. Thus
   φ⁻¹{w} = v + ker φ.
We shall see below that any coset arises this way for a suitable φ.
2. General solutions of inhomogeneous equations: here is a concrete version of the previous example. Consider a matrix B ∈ M_{2×3}(R) and the corresponding linear map φ_B : R³ → R². Let us seek the general solution to the inhomogeneous linear equation
   φ_B(x) = (0, 4), equivalently, Bx = (0, 4)ᵀ.  (2.1)
One solution is (−1, 1, −1) while the general solution is the fibre of φ_B over (0, 4), which is (−1, 1, −1) + ker φ_B. Finding the kernel amounts to solving the homogeneous linear system Bx = 0, which we readily achieve to get that ker φ_B = span{(5, 3, 7)}, so that the general solution to (2.1) is
   (−1, 1, −1) + span{(5, 3, 7)} = {(5λ − 1, 3λ + 1, 7λ − 1) | λ ∈ R}.

Definition. Let U ≤ V. The quotient space V/U of V by U, pronounced "V mod U", is the set of cosets of U:
   V/U := {v + U | v ∈ V}.
The quotient map q : V → V/U is defined by q(v) = v + U.

This is a vector space and q is a linear map:

Theorem 2.11. Let U ≤ V. Then, for v, w ∈ V, λ ∈ F,
   (v + U) + (w + U) := (v + w) + U
   λ(v + U) := (λv) + U
give well-defined operations of addition and scalar multiplication on V/U, with respect to which V/U is a vector space and q : V → V/U is a linear map. Moreover, ker q = U and im q = V/U (so q surjects).

Proof. For readability, we use the equivalence class notation [v] = v + U = q(v). So our addition and scalar multiplication are given by
   [v] + [w] := [v + w]
   λ[v] := [λv]
and a key issue is to see that these are well-defined, that is, we get the same answers if we use different representatives of the cosets. More precisely, if [v] = [v′] and [w] = [w′], we must show that
   [v + w] = [v′ + w′], [λv] = [λv′].  (2.2)
18 However, in this case, we have v v = u 1 and w w = u 2, for some u 1, u 2 U and then since U is a subspace, and this establishes (2.2). (v + w) (v + w ) = u 1 + u 2 U λv λv = λu 1 U, As for the vector space axioms, these follow from those of V. For example: [v] + [w] = [v + w] = [w + v] = [w] + [v]. The zero element is [0] = 0 + U = U while the additive inverse of [v] is [ v] = ( v) + U. The linearity of q comes straight from how we defined our addition and scalar multiplication: q(v + λw) = [v + λw] = [v] + λ[w] = q(v) + λq(w). Finally v ker q if and only if [v] = [0] if and only if v U while, for any v + U V/U, v + U = q(v) so that q surjects. v + U v v + U U 0 q 0 + U V V/U Figure 2.5: The quotient map q. Corollary Let U V. If V is finite-dimensional then so is V/U and dim V/U = dim V dim U. Proof. Apply rank-nullity to q using ker q = U and im q = V/U. Remark. Theorem 2.11 shows that: 1. Any U V is the kernel of a linear map. 2. Any coset v + U is the fibre of a linear map: indeed v + U = q 1 {v + U}, where we read the v + U on the right as an element of V/U and that on the left as a subset of V! Theorem 2.13 (First Isomorphism Theorem). Let φ : V W be a linear map of vector spaces. Define φ : V/ ker φ im φ by φ(v + ker φ) = φ(v). Then φ is a well-defined linear isomorphism. In particular, V/ ker φ = im φ. Proof. Once again, we use equivalence class notation and write [v] for the coset v + ker φ. Thus φ is defined by φ([v]) = φ(v). First we show that φ is well-defined: [v] = [v ] if and only if v v ker φ if and only if φ(v v ) = 0, or, equivalently, φ(v) = φ(v ). 16
19 To see that φ is linear, we compute: for v 1, v 2 V, λ F. φ([v 1 ] + λ[v 2 ]) = φ([v 1 + λv 2 ]) = φ(v 1 + λv 2 ) = φ(v 1 ) + λφ(v 2 ) = φ([v 1 ]) + λ φ([v 2 ]), Finally we show that φ is an isomorphism: first [v] ker φ if and only if v ker φ if and only if [v] = ker φ, the zero element of V/ ker φ. Thus φ injects. Further, if w im φ, then w = φ(v) = φ([v]), for some v V, so that φ surjects. Remarks. 1. Let q : V V/ ker φ be the quotient map and i : im φ W the inclusion. Then the First Isomorphism Theorem shows that we may write φ as the composition i φ q of a quotient map, an isomorphism and an inclusion. 2. This whole story of cosets, quotients and the First Isomorphism Theorem has versions in many other contexts such as group theory (see MA30237) and ring theory (MA20217). Examples. (1) Let φ L(V, W ). For w = φ(v) im φ, we identified the fibre over w with a coset of ker φ: φ 1 {w} = v + ker φ. From this point of view, the isomorphism φ : V/ ker φ im φ simply reads φ 1 {w} w. (2) More practically, consider once again the matrix ( ) B = and the corresponding linear map φ B : R 3 R 2. Now B has rank 2 (the rows are not proportional) so φ B is onto. Thus the elements of R 3 / ker φ B are the solution sets {x Bx = y}, for each y R 2, and the isomorphism φ B is {x Bx = y} y. This helps us to understand the vector space operations of R 3 / ker φ B : {x Bx = y 1 } + λ{x Bx = y 2 } = {x Bx = y 1 + λy 2 }. 17
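The solution-set example above can be checked numerically. The entries of B were lost in transcription, so the matrix below is a hypothetical stand-in chosen to match the stated data (kernel spanned by (5, 3, 7), with (−1, 1, −1) a solution of Bx = (0, 4)); the check that the whole coset solves the equation works the same for any such B:

```python
# Hypothetical stand-in for the matrix B of the example (its entries were
# lost in transcription), chosen so that ker(phi_B) = span{(5, 3, 7)} and
# (-1, 1, -1) is one solution of Bx = (0, 4).
B = [[5, 1, -4],
     [11, 5, -10]]

def matvec(A, x):
    return tuple(sum(a * b for a, b in zip(row, x)) for row in A)

assert matvec(B, (-1, 1, -1)) == (0, 4)   # a particular solution
assert matvec(B, (5, 3, 7)) == (0, 0)     # the kernel direction

# the fibre over (0, 4) is the whole coset (-1, 1, -1) + ker(phi_B):
for lam in range(-3, 4):
    x = (5 * lam - 1, 3 * lam + 1, 7 * lam - 1)
    assert matvec(B, x) == (0, 4)
```

By linearity, adding any kernel element to a particular solution gives another solution, which is exactly why the fibre is a coset of ker φ_B.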
Chapter 3: Inner product spaces

In this chapter, we equip real or complex vector spaces with extra structure that generalises the familiar dot product.

Convention. In this chapter, we take the field F of scalars to be either R or C.

3.1 Inner products

3.1.1 Definition and examples

Recall the dot (or scalar) product on Rⁿ: for x = (x₁, …, xₙ), y = (y₁, …, yₙ) ∈ Rⁿ,
   x · y := x₁y₁ + ⋯ + xₙyₙ = xᵀy.
Using this we define:
- the length of x: ‖x‖ := √(x · x);
- the angle θ between x and y: x · y = ‖x‖‖y‖ cos θ.

There is also a dot product on Cⁿ: for x, y ∈ Cⁿ,
   x · y = x̄₁y₁ + ⋯ + x̄ₙyₙ = x†y,
where x† (pronounced "x-dagger") is the conjugate transpose x̄ᵀ of x. We then have that x · x = Σ x̄ᵢxᵢ = Σ |xᵢ|² is real, non-negative and vanishes exactly when x = 0.

We abstract the key properties of the dot product into the following:

Definition. Let V be a vector space over F (which is R or C). An inner product on V is a map V × V → F : (v, w) ↦ ⟨v, w⟩ which is:
(1) (conjugate) symmetric: ⟨w, v⟩ is the complex conjugate of ⟨v, w⟩, for all v, w ∈ V. In particular ⟨v, v⟩ equals its own conjugate and so is real.
(2) linear in the second slot:
   ⟨u, v + w⟩ = ⟨u, v⟩ + ⟨u, w⟩
   ⟨u, λv⟩ = λ⟨u, v⟩,
for all u, v, w ∈ V and λ ∈ F.
(3) positive definite: for all v ∈ V, ⟨v, v⟩ ≥ 0 with equality if and only if v = 0.

A vector space with an inner product is called an inner product space.

Remark. Any subspace U of an inner product space V is also an inner product space: just restrict ⟨·,·⟩ to U × U.

Let us spell out the implications of this definition in the real and complex cases. Suppose first that F = R. Then the conjugate symmetry is just symmetry: ⟨v, w⟩ = ⟨w, v⟩, and it follows that we also have linearity in the first slot:
   ⟨v + w, u⟩ = ⟨v, u⟩ + ⟨w, u⟩
   ⟨λv, u⟩ = λ⟨v, u⟩.
We summarise the situation by saying that a real inner product is a positive definite, symmetric, bilinear form. We shall have more to say about bilinear forms later, in Chapter 6.

Now let us turn to the case F = C. Now it is not the case that an inner product is linear in the first slot.

Definition. A map φ : V → W of complex vector spaces is conjugate linear (or anti-linear) if
   φ(v + w) = φ(v) + φ(w)
   φ(λv) = λ̄φ(v),
for all v, w ∈ V and λ ∈ F.

We see from properties (1) and (2) that a complex inner product has
   ⟨v + w, u⟩ = ⟨v, u⟩ + ⟨w, u⟩
   ⟨λv, u⟩ = λ̄⟨v, u⟩
and so is conjugate linear in the first slot and linear in the second. Such a function is said to be sesquilinear (from the Latin sesqui, which means one-and-a-half). Thus an inner product on a complex vector space is a positive definite, conjugate symmetric, sesquilinear form.

Definition. Let V be an inner product space.
1. The norm of v ∈ V is ‖v‖ := √⟨v, v⟩.
2. Say v, w ∈ V are orthogonal or perpendicular if ⟨v, w⟩ = 0. In this case, we write v ⊥ w.

Remarks.
1. The norm allows us to define the distance between v and w by ‖v − w‖. We can now do analysis on V: this is one of the Big Ideas in MA.
2. Warning: there is another convention for complex inner products which is prevalent in Analysis: there they ask that ⟨·,·⟩ be linear in the first slot and conjugate linear in the second. There are good reasons for either choice.
3. Physicists often write ⟨v|w⟩ for ⟨v, w⟩.
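The complex dot product and its sesquilinearity are easy to experiment with using Python's built-in complex numbers (an illustrative sketch; the function name `cdot` is mine). Note the conjugate on the first slot, matching the convention of these notes:

```python
# The complex dot product <x, y> = sum(conj(x_i) * y_i): conjugate-linear
# in the first slot, linear in the second.
def cdot(x, y):
    return sum(a.conjugate() * b for a, b in zip(x, y))

x = [1 + 1j, 2 - 1j]
y = [3 + 0j, 1j]

# conjugate symmetry: <y, x> = conj(<x, y>)
assert cdot(y, x) == cdot(x, y).conjugate()

# <x, x> is real and non-negative (here |1+i|^2 + |2-i|^2 = 2 + 5 = 7)
assert cdot(x, x).imag == 0 and cdot(x, x).real >= 0

# conjugate linearity in the first slot: <lam*x, y> = conj(lam) * <x, y>
lam = 2 - 3j
assert cdot([lam * a for a in x], y) == lam.conjugate() * cdot(x, y)
```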
Inner product spaces (especially infinite-dimensional ones) are the setting for quantum mechanics.

Examples.
1. The dot product on Rⁿ or Cⁿ is an inner product.
2. Let [a, b] ⊆ R be a closed, bounded interval. Define a real inner product on C⁰[a, b] by
   ⟨f, g⟩ = ∫_a^b fg.
This is clearly symmetric, bilinear and non-negative. To see that it is definite, one must show that if ∫_a^b f² = 0 then f = 0. This is an exercise in Analysis using the inertia property of continuous functions (see MA20218).
3. The set of square-summable sequences ℓ² ≤ R^N is given by

ℓ² := {(a_n)_{n∈N} : Σ_{n∈N} a_n² < ∞}.

Exercises.¹
(a) ℓ² ≤ R^N.
(b) If a, b ∈ ℓ² then Σ_{n∈N} a_n b_n is absolutely convergent, and then

⟨a, b⟩ := Σ_{n∈N} a_n b_n

defines an inner product on ℓ².

Hint: for x, y ∈ R, rearrange 0 ≤ (|x| − |y|)² to get

2|x||y| ≤ x² + y²    (3.1a)

and then deduce

(x + y)² ≤ 2(x² + y²).    (3.1b)

Judicious use of equations (3.1) and the comparison theorem from MA10207 will bake the cake.

Remark. Perhaps surprisingly, ℓ² and C⁰[a, b] are closely related: this is what Fourier series are about: see MA20223.

3.1.2 Cauchy–Schwarz inequality

Here is one of the most important and ubiquitous inequalities in all of mathematics:

Theorem 3.1 (Cauchy–Schwarz inequality). Let V be an inner product space. For v, w ∈ V,

|⟨v, w⟩| ≤ ‖v‖‖w‖    (3.2)

with equality if and only if v, w are linearly dependent, that is, either v = 0 or w = λv, for some λ ∈ F.

Proof. The idea of the proof is to write w = λv + u where u ⊥ v (see Figure 3.1) and then use the fact that ‖u‖² ≥ 0.

In detail, first note that if v = 0 then both sides of the inequality vanish and there is nothing to prove. Otherwise, let us seek λ ∈ F so that u := w − λv ⊥ v. We therefore need

0 = ⟨v, w − λv⟩ = ⟨v, w⟩ − λ⟨v, v⟩

so that

λ = ⟨v, w⟩/‖v‖².

The situation is shown in Figure 3.1. With λ and then u so defined we have

0 ≤ ‖u‖² = ⟨w − λv, w − λv⟩
= ⟨w, w⟩ − λ̄⟨v, w⟩ − λ⟨w, v⟩ + λ̄λ⟨v, v⟩
= ‖w‖² − ⟨w, v⟩⟨v, w⟩/‖v‖² − ⟨v, w⟩⟨w, v⟩/‖v‖² + (⟨w, v⟩⟨v, w⟩/‖v‖⁴)‖v‖²
= ‖w‖² − |⟨v, w⟩|²/‖v‖²,

¹ Question 7 on sheet 4.
[Figure 3.1: Construction of u.]

where we used the sesquilinearity of the inner product to reach the second line and conjugate symmetry to reach the third. Rearranging this yields

|⟨v, w⟩|² ≤ ‖v‖²‖w‖²

and taking a square root gives us the Cauchy–Schwarz inequality.

Finally, we have equality if and only if ‖u‖ = 0 or, equivalently, u = 0, that is, w = λv.

Examples.
1. Let (Ω, P) be a finite probability space. Then the space R^Ω of real random variables is an inner product space with

⟨f, g⟩ = E(fg) = Σ_{x∈Ω} f(x)g(x)P(x),

so long as P(x) > 0 for each x ∈ Ω (we need this for positive-definiteness). Now the (square of the) Cauchy–Schwarz inequality reads

E(fg)² ≤ E(f²)E(g²).

2. For a, b ∈ ℓ², the Cauchy–Schwarz inequality reads:

|Σ_{n∈N} a_n b_n| ≤ (Σ_{n∈N} a_n²)^{1/2} (Σ_{n∈N} b_n²)^{1/2}.

The Cauchy–Schwarz inequality is an essentially 2-dimensional result about the inner product space span{v, w}. Here are some more that are almost as fundamental:

Proposition 3.2. Let V be an inner product space and v, w ∈ V.
1. Pythagoras' Theorem: if v ⊥ w then

‖v + w‖² = ‖v‖² + ‖w‖².    (3.3)

2. Triangle inequality: ‖v + w‖ ≤ ‖v‖ + ‖w‖, with equality if and only if v = 0 or w = λv with λ ≥ 0.
3. Parallelogram identity: ‖v + w‖² + ‖v − w‖² = 2(‖v‖² + ‖w‖²).

Proof.
1. Exercise²: expand out ‖v + w‖² = ⟨v + w, v + w⟩.

² Question 2 on sheet 4.
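The proof of Theorem 3.1 is constructive, so we can watch it work. Here is a small numerical sketch of our own (real dot product on R³; the vectors are sample data, not from the notes): we check the inequality, build the proof's orthogonal decomposition w = λv + u, and confirm that equality holds when w is a multiple of v.

```python
import math

def dot(x, y):
    """Real dot product on R^n."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

v = [1.0, 2.0, 2.0]
w = [3.0, 0.0, 4.0]

# Cauchy-Schwarz: |<v, w>| <= ||v|| ||w||
assert abs(dot(v, w)) <= norm(v) * norm(w)

# The proof's decomposition: w = lam*v + u with u orthogonal to v
lam = dot(v, w) / dot(v, v)          # lam = <v, w>/||v||^2
u = [wi - lam * vi for wi, vi in zip(w, v)]
assert abs(dot(u, v)) < 1e-12        # u is perpendicular to v

# Equality holds exactly when u = 0, i.e. w is a multiple of v
w2 = [2.0, 4.0, 4.0]                 # w2 = 2v
assert abs(abs(dot(v, w2)) - norm(v) * norm(w2)) < 1e-12
```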
[Figure 3.2: The identities of Proposition 3.2: (a) Pythagoras' Theorem; (b) the parallelogram identity.]

2. We prove ‖v + w‖² ≤ (‖v‖ + ‖w‖)². We have

‖v + w‖² = ‖v‖² + 2 Re⟨v, w⟩ + ‖w‖².

Now, by Cauchy–Schwarz,

Re⟨v, w⟩ ≤ |⟨v, w⟩| ≤ ‖v‖‖w‖

so that

‖v + w‖² ≤ ‖v‖² + 2‖v‖‖w‖ + ‖w‖² = (‖v‖ + ‖w‖)²

with equality if and only if Re⟨v, w⟩ = |⟨v, w⟩| = ‖v‖‖w‖, in which case we first get w = λv, for some λ ∈ F, and then that Re λ = |λ| so that λ ≥ 0.

3. Exercise³!

3.2 Orthogonality

3.2.1 Orthonormal bases

Definition. A list of vectors u₁, ..., u_k in an inner product space V is orthonormal if, for all 1 ≤ i, j ≤ k,

⟨u_i, u_j⟩ = δ_ij := 1 if i = j; 0 if i ≠ j.

If u₁, ..., u_k is also a basis, we call it an orthonormal basis.

Example. The standard basis e₁, ..., e_n of F^n is orthonormal for the dot product.

Orthonormal bases are very cool. Here is why: if u₁, ..., u_k is orthonormal and v ∈ span{u₁, ..., u_k} then we can write

v = λ₁u₁ + ··· + λ_k u_k.

How can we compute the coordinates λ_i? In general, this amounts to solving a system of linear equations and so involves something tedious and lengthy like Gaussian elimination. However, in our case, things are much easier. Observe:

⟨u_i, v⟩ = ⟨u_i, Σ_j λ_j u_j⟩ = Σ_j λ_j ⟨u_i, u_j⟩ = Σ_j λ_j δ_ij = λ_i.

Thus

λ_i = ⟨u_i, v⟩    (3.4)

which is very easy to compute. Let us enshrine this analysis into the following lemma:

³ Question 3 on sheet 4.
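Equation (3.4) is easy to test in code. A sketch of our own (the basis and vector are sample data): we take the orthonormal basis u₁ = (1/√2)(1, 1), u₂ = (1/√2)(1, −1) of R² and recover the coordinates of a vector by inner products alone, with no linear system to solve.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

s = 1 / math.sqrt(2)
u1, u2 = [s, s], [s, -s]          # an orthonormal basis of R^2
v = [3.0, 1.0]

# Coordinates via (3.4): lambda_i = <u_i, v>
lam1, lam2 = dot(u1, v), dot(u2, v)

# Reconstruct v = lam1*u1 + lam2*u2
recon = [lam1 * a + lam2 * b for a, b in zip(u1, u2)]
assert all(abs(r - x) < 1e-12 for r, x in zip(recon, v))
```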
Lemma 3.3. Let V be an inner product space with orthonormal basis u₁, ..., u_n and let v ∈ V. Then

v = Σ_{i=1}^n ⟨u_i, v⟩ u_i.

As an immediate consequence of (3.4):

Lemma 3.4. Any orthonormal list of vectors u₁, ..., u_k is linearly independent.

Proof. If λ₁u₁ + ··· + λ_k u_k = 0 then (3.4) gives λ_i = ⟨u_i, 0⟩ = 0.

What is more, these coordinates λ_i are all you need to compute inner products.

Proposition 3.5. Let u₁, ..., u_n be an orthonormal basis of an inner product space V. Let v = x₁u₁ + ··· + x_n u_n and w = y₁u₁ + ··· + y_n u_n. Then

⟨v, w⟩ = Σ_{i=1}^n x̄_i y_i = x · y.

Thus the inner product of two vectors is the dot product of their coordinates with respect to an orthonormal basis.

Proof. We simply expand out ⟨v, w⟩ by sesquilinearity:

⟨v, w⟩ = ⟨Σ_i x_i u_i, Σ_j y_j u_j⟩ = Σ_{i,j} x̄_i y_j ⟨u_i, u_j⟩ = Σ_{i,j} x̄_i y_j δ_ij = Σ_i x̄_i y_i = x · y.

To put it another way:

Proposition 3.6. Let u₁, ..., u_n be an orthonormal basis of an inner product space V and v, w ∈ V. Then:
(1) Parseval's identity: ⟨v, w⟩ = Σ_{i=1}^n ⟨v, u_i⟩⟨u_i, w⟩.
(2) Bessel's equality: ‖v‖² = Σ_{i=1}^n |⟨v, u_i⟩|².

Proof.
(1) This comes straight from Proposition 3.5, using conjugate symmetry of the inner product to get x̄_i = ⟨v, u_i⟩.
(2) Put v = w in (1).

All of this should make us eager to get our hands on orthonormal bases, and so we would like to know if they always exist. To see that they do, we need the following construction:

Theorem 3.7 (Gram–Schmidt orthogonalisation). Let v₁, ..., v_m be linearly independent vectors in an inner product space V. Then there is an orthonormal list u₁, ..., u_m such that, for all 1 ≤ k ≤ m,

span{u₁, ..., u_k} = span{v₁, ..., v_k},

defined inductively by:

u₁ := v₁/‖v₁‖
u_k := w_k/‖w_k‖
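Proposition 3.6 can also be checked directly. The sketch below is ours (the orthonormal basis of R³ and the vectors are sample data, and we stay in the real case, where the conjugates disappear): it verifies Parseval's identity and Bessel's equality against a non-standard orthonormal basis.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

# An orthonormal basis of R^3 (orthonormality is easily checked by hand)
u = [[1/math.sqrt(3), 1/math.sqrt(3), 1/math.sqrt(3)],
     [1/math.sqrt(2), -1/math.sqrt(2), 0.0],
     [1/math.sqrt(6), 1/math.sqrt(6), -2/math.sqrt(6)]]

v, w = [1.0, 2.0, 3.0], [-1.0, 0.0, 5.0]

# Parseval: <v, w> = sum_i <v, u_i><u_i, w>
parseval = sum(dot(v, ui) * dot(ui, w) for ui in u)
assert abs(parseval - dot(v, w)) < 1e-12

# Bessel: ||v||^2 = sum_i <v, u_i>^2
bessel = sum(dot(v, ui) ** 2 for ui in u)
assert abs(bessel - dot(v, v)) < 1e-12
```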
where, for k > 1,

w_k := v_k − Σ_{j=1}^{k−1} ⟨u_j, v_k⟩ u_j.

Proof. We induct, with inductive hypothesis at k that u₁, ..., u_k is orthonormal and that, for 1 ≤ l ≤ k,

span{u₁, ..., u_l} = span{v₁, ..., v_l}.

At k = 1, this reads ‖u₁‖ = 1 and span{u₁} = span{v₁}, which is certainly true.

Now assume the hypothesis is true at k − 1, so that u₁, ..., u_{k−1} is orthonormal and span{u₁, ..., u_{k−1}} = span{v₁, ..., v_{k−1}}. Then

span{u₁, ..., u_k} = span{u₁, ..., u_{k−1}, w_k} = span{v₁, ..., v_{k−1}, w_k} = span{v₁, ..., v_k}.

Moreover, for any i < k,

⟨u_i, w_k⟩ = ⟨u_i, v_k⟩ − Σ_{j<k} ⟨u_j, v_k⟩⟨u_i, u_j⟩ = ⟨u_i, v_k⟩ − Σ_{j<k} ⟨u_j, v_k⟩δ_ij = ⟨u_i, v_k⟩ − ⟨u_i, v_k⟩ = 0.

Thus w_k ⊥ u₁, ..., u_{k−1}, so that u_k is also, whence u₁, ..., u_k is orthonormal. Thus the inductive hypothesis is true at k and so at m by induction.

Remark. For practical purposes, we can get an easier-to-use formula (no square roots!) for w_k by setting w₁ = v₁ and then replacing u_j by w_j/‖w_j‖, for j < k, to get:

w_k = v_k − Σ_{j=1}^{k−1} (⟨w_j, v_k⟩/‖w_j‖²) w_j.

Corollary 3.8. Any finite-dimensional inner product space V has an orthonormal basis.

Proof. Let v₁, ..., v_n be any basis of V and apply Theorem 3.7 to get an orthonormal (and so linearly independent, by Lemma 3.4) list u₁, ..., u_n with span{u₁, ..., u_n} = span{v₁, ..., v_n} = V. Thus the u₁, ..., u_n span also and so are an orthonormal basis.

Example. Let U ≤ R³ be given by {x ∈ R³ : x₁ + x₂ + x₃ = 0}. Let us find an orthonormal basis for U. First we need a basis of U to start with: dim U = 2 (why?) with basis v₁ = (1, 0, −1), v₂ = (0, 1, −1). Then ‖v₁‖ = √2 so that u₁ = (1/√2, 0, −1/√2). Next ⟨w₁, v₂⟩ = ⟨v₁, v₂⟩ = 1 so that

w₂ = v₂ − (⟨w₁, v₂⟩/⟨w₁, w₁⟩) w₁ = (0, 1, −1) − ½(1, 0, −1) = (−½, 1, −½).

This means that ‖w₂‖ = √(1/4 + 1 + 1/4) = √(3/2), so that u₂ = √(2/3)(−½, 1, −½) = (−1/√6, 2/√6, −1/√6).

Let us conclude our discussion of orthonormal bases with an application of Gram–Schmidt which has uses in Statistics (see MA20227) and elsewhere. First, a definition:
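The Gram–Schmidt procedure of Theorem 3.7 translates directly into code. The following is our own Python sketch (real case only, with normalisation deferred to the end via the no-square-roots recursion of the Remark); it reproduces the orthonormal basis of the plane x₁ + x₂ + x₃ = 0 obtained in the example, starting from v₁ = (1, 0, −1), v₂ = (0, 1, −1).

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def gram_schmidt(vs):
    """Orthonormalise a linearly independent list of vectors in R^n."""
    ws = []
    for v in vs:
        # w_k = v_k - sum_{j<k} (<w_j, v_k>/||w_j||^2) w_j
        w = list(v)
        for wj in ws:
            c = dot(wj, v) / dot(wj, wj)
            w = [wi - c * wji for wi, wji in zip(w, wj)]
        ws.append(w)
    # Normalise at the end: u_k = w_k / ||w_k||
    return [[wi / math.sqrt(dot(w, w)) for wi in w] for w in ws]

u1, u2 = gram_schmidt([[1.0, 0.0, -1.0], [0.0, 1.0, -1.0]])
r2, r6 = math.sqrt(2), math.sqrt(6)
assert all(abs(a - b) < 1e-12 for a, b in zip(u1, [1/r2, 0.0, -1/r2]))
assert all(abs(a - b) < 1e-12 for a, b in zip(u2, [-1/r6, 2/r6, -1/r6]))
```

In floating point this textbook recursion can lose orthogonality for ill-conditioned inputs; numerical libraries prefer "modified" Gram–Schmidt or Householder reflections, but the mathematics is the same.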
Definition. A matrix Q ∈ M_{n×n}(R) is orthogonal if QᵀQ = I_n or, equivalently, Q has orthonormal columns with respect to the dot product. Here I_n is the n × n identity matrix.

Remark. The two conditions in this definition are indeed equivalent: if q_i is the i-th column of Q then (QᵀQ)_ij = q_iᵀ q_j.

Theorem 3.9 (QR decomposition). Let A ∈ M_{n×n}(R) be an invertible matrix. Then we can write A = QR, where Q is orthogonal and R is upper triangular (R_ij = 0 if i > j) with positive entries on the diagonal.

Proof. We apply Theorem 3.7 to the columns of A to get the columns of Q. So let v₁, ..., v_n be the columns of A. Since A is invertible, these are a basis, so we can apply Theorem 3.7 to get an orthonormal basis u₁, ..., u_n. Let Q be the orthogonal matrix whose columns are the u_i. Unravelling the formulae of the Gram–Schmidt procedure, we have

v₁ = ‖v₁‖ u₁
v₂ = ‖w₂‖ u₂ + ⟨u₁, v₂⟩ u₁

and, more generally,

v_k = ‖w_k‖ u_k + Σ_{j<k} ⟨u_j, v_k⟩ u_j.

Otherwise said, A = QR where R_kk = ‖w_k‖, R_jk = ⟨u_j, v_k⟩, for j < k, and R_ij = 0 if i > j.

To compute Q and R in practice, first do Gram–Schmidt orthogonalisation on the columns of A to get Q and then note that

QᵀA = QᵀQR = I_n R = R

so that R = QᵀA, which is probably easier to compute than keeping track of the intermediate coefficients in the orthogonalisation!

Remarks.
1. In pure mathematics, the QR decomposition is a special case of the Iwasawa decomposition.
2. We shall have more to say about orthogonal matrices in the next chapter; see section 4.1.3.

3.2.2 Orthogonal complements and orthogonal projection

Definition. Let V be an inner product space and U ≤ V. The orthogonal complement U⊥ of U (in V) is given by

U⊥ := {v ∈ V : ⟨u, v⟩ = 0, for all u ∈ U}.

Proposition 3.10. Let V be an inner product space and U ≤ V. Then
(1) U⊥ ≤ V;
(2) U ∩ U⊥ = {0};
(3) U ⊆ (U⊥)⊥.
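Theorem 3.9's recipe, Gram–Schmidt on the columns of A to get Q and then R = QᵀA, is a short computation. Here is a sketch of our own (plain Python lists of rows; the 2 × 2 matrix is sample data, not from the notes):

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def qr(A):
    """QR of an invertible real matrix via Gram-Schmidt on its columns."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]  # columns of A
    us = []
    for v in cols:
        w = list(v)
        for u in us:
            c = dot(u, v)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        nw = math.sqrt(dot(w, w))
        us.append([wi / nw for wi in w])
    Q = [[us[j][i] for j in range(n)] for i in range(n)]    # u_j as columns
    # R = Q^T A: upper triangular with positive diagonal
    R = [[dot(us[i], cols[j]) for j in range(n)] for i in range(n)]
    return Q, R

A = [[2.0, 1.0], [0.0, 3.0]]
Q, R = qr(A)
# Check A = QR and that R is upper triangular
for i in range(2):
    for j in range(2):
        assert abs(sum(Q[i][k] * R[k][j] for k in range(2)) - A[i][j]) < 1e-12
assert abs(R[1][0]) < 1e-12 and R[0][0] > 0 and R[1][1] > 0
```

Note how R is obtained as QᵀA, exactly as the paragraph above suggests, rather than by saving the intermediate Gram–Schmidt coefficients.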
[Figure 3.3: Orthogonal complements in R²]

Proof.
(1) This is a straightforward exercise using the second-slot linearity of the inner product.
(2) If u ∈ U ∩ U⊥, then ⟨u, u⟩ = 0, so that u = 0 by positive-definiteness of the inner product.
(3) If u ∈ U and w ∈ U⊥, then ⟨w, u⟩ = conj⟨u, w⟩ = 0, so that u ∈ (U⊥)⊥.

If U is finite-dimensional then U⊥ is a complement to U in the sense of section 2.1.2 (even if V is infinite-dimensional!):

Theorem 3.11. Let U be a finite-dimensional subspace of an inner product space V. Then V is an internal direct sum: V = U ⊕ U⊥.

Proof. By Proposition 3.10(2), we just need to prove that V = U + U⊥. For this, let u₁, ..., u_k be an orthonormal basis of U and let v ∈ V. We write

v = (Σ_{i=1}^k ⟨u_i, v⟩ u_i) + (v − Σ_{i=1}^k ⟨u_i, v⟩ u_i) =: v₁ + v₂.

Now v₁ ∈ U, being in the span of the u_i, while, for 1 ≤ j ≤ k,

⟨u_j, v₂⟩ = ⟨u_j, v⟩ − Σ_{i=1}^k ⟨u_i, v⟩⟨u_j, u_i⟩ = ⟨u_j, v⟩ − ⟨u_j, v⟩ = 0.

Thus, for u = λ₁u₁ + ··· + λ_k u_k ∈ U,

⟨u, v₂⟩ = Σ_{j=1}^k λ̄_j ⟨u_j, v₂⟩ = 0

so that v₂ ∈ U⊥.

Corollary 3.12. Let V be a finite-dimensional inner product space and U ≤ V. Then
(1) dim U⊥ = dim V − dim U.
(2) U = (U⊥)⊥.

Proof.
(1) This is immediate from Proposition 2.3.
(2) Two applications of (1) give dim(U⊥)⊥ = dim V − dim U⊥ = dim U, while Proposition 3.10(3) gives U ⊆ (U⊥)⊥. We conclude that we have equality by Lemma 1.3.

Definition. Let V be an inner product space and U ≤ V such that V = U ⊕ U⊥. We can write any v ∈ V as v = v₁ + v₂ for unique v₁ ∈ U, v₂ ∈ U⊥. Define π_U : V → V, the orthogonal projection onto U, by

π_U(v) = v₁.

Remark. π_{U⊥}(v) = v₂ = v − v₁, so that π_{U⊥} = id_V − π_U. The situation is illustrated in Figure 3.4.

[Figure 3.4: Orthogonal projections]

Proposition 3.13. Let V be an inner product space and U ≤ V such that V = U ⊕ U⊥. Then
(1) π_U is linear.
(2) ker π_U = U⊥.
(3) π_U restricted to U is id_U, so that im π_U = U.
(4) If U is finite-dimensional with orthonormal basis u₁, ..., u_k then, for all v ∈ V,

π_U(v) = Σ_{i=1}^k ⟨u_i, v⟩ u_i.

Proof. Items (1)–(3) (which make sense for any direct sum) are exercises⁴. Item (4) is what we proved to establish Theorem 3.11.

Let us conclude this chapter with an application to a minimisation problem that, among other things, underlies much of Fourier analysis (see MA20223).

Theorem 3.14. Let V be an inner product space and U ≤ V such that V = U ⊕ U⊥. For v ∈ V, π_U(v) is the nearest point of U to v: for all u ∈ U,

‖v − π_U(v)‖ ≤ ‖v − u‖.

⁴ Question 4 on sheet 2.
[Figure 3.5: The orthogonal projection minimises distance to U]

Proof. As we see in Figure 3.5, this is just the Pythagoras theorem: π_U(v) − u ∈ U while v − π_U(v) = π_{U⊥}(v) ∈ U⊥. Indeed, for u ∈ U, note that

‖v − π_U(v)‖² ≤ ‖v − π_U(v)‖² + ‖π_U(v) − u‖² = ‖(v − π_U(v)) + (π_U(v) − u)‖² = ‖v − u‖²,

where the first equality is Pythagoras' theorem (Proposition 3.2). Now take square roots!

Exercise. Read Example 6.58 of Axler's Linear Algebra Done Right to see a beautiful application of this result. He takes V = C⁰[−π, π] and U to be the space of polynomials of degree at most 5 to get an astonishingly accurate polynomial approximation to sin.
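Both Proposition 3.13(4) and Theorem 3.14 are easy to watch in action. In the sketch below (our own Python, real case; the plane U and the vector v are sample data), we project onto the plane x₁ + x₂ + x₃ = 0 in R³ using the orthonormal-basis formula, then confirm that v − π_U(v) is orthogonal to U and that π_U(v) beats a few other points of U for distance.

```python
import math

def dot(x, y):
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    return math.sqrt(dot(x, x))

# Orthonormal basis of U = {x in R^3 : x1 + x2 + x3 = 0}
r2, r6 = math.sqrt(2), math.sqrt(6)
u1 = [1/r2, 0.0, -1/r2]
u2 = [-1/r6, 2/r6, -1/r6]

def proj(v):
    """pi_U(v) = <u1, v> u1 + <u2, v> u2  (Proposition 3.13(4))."""
    c1, c2 = dot(u1, v), dot(u2, v)
    return [c1 * a + c2 * b for a, b in zip(u1, u2)]

v = [1.0, 2.0, 6.0]
p = proj(v)

# v - pi_U(v) lies in the orthogonal complement of U
d = [vi - pi for vi, pi in zip(v, p)]
assert abs(dot(d, u1)) < 1e-12 and abs(dot(d, u2)) < 1e-12

# Theorem 3.14: pi_U(v) is the nearest point of U to v
for s, t in [(1.0, 0.0), (0.0, 1.0), (-2.0, 3.0)]:
    u = [s * a + t * b for a, b in zip(u1, u2)]   # a point of U
    assert norm(d) <= norm([vi - ui for vi, ui in zip(v, u)]) + 1e-12
```

Here d = v − π_U(v) comes out proportional to (1, 1, 1), the normal to the plane, exactly as Figure 3.5 suggests.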
Chapter 4

Linear operators on inner product spaces

Convention. In this chapter, we once again take the field F of scalars to be either R or C.

4.1 Linear operators and their adjoints

4.1.1 Linear operators and matrices

Definition. Let V be a vector space over F. A linear operator on V is a linear map φ : V → V. The vector space of linear operators on V is denoted L(V) (instead of L(V, V)).

We saw in Algebra 1B that linear operators in the presence of a basis are closely related to square matrices:

Definition. Let V be a finite-dimensional vector space over F with basis B = v₁, ..., v_n and let φ ∈ L(V). The matrix of φ with respect to B is the matrix A = (A_ij) ∈ M_{n×n}(F) for which

φ(v_j) = Σ_{i=1}^n A_ij v_i,    (4.1)

for 1 ≤ j ≤ n.

In words: the j-th column of A holds the coefficients obtained by expanding out φ(v_j) in terms of the basis B. Equivalently,

φ(x₁v₁ + ··· + x_n v_n) = y₁v₁ + ··· + y_n v_n, where y = Ax.

Remarks.
1. The map φ ↦ A is a linear isomorphism L(V) → M_{n×n}(F) that sends composition of operators to multiplication of matrices: if ψ ∈ L(V) has matrix B with respect to B, then ψ ∘ φ has matrix BA.
2. This is a special case of the story from Algebra 1B where we use the same basis on the domain and codomain.
3. The fancy way to state the relation between φ and A is to use the isomorphism φ_B : F^n → V corresponding to B (see section 1.4.2). Then

φ = φ_B ∘ φ_A ∘ φ_B⁻¹,
or, equivalently, φ_B ∘ φ_A = φ ∘ φ_B, so that the following diagram commutes:

            φ
       V ------> V
       ^         ^
   φ_B |         | φ_B
       |   φ_A   |
      F^n -----> F^n

(The assertion that such a diagram commutes is simply that the two maps one builds by following the arrows in two different ways coincide. However, the diagram also helps us keep track of where the various maps go!)

4.1.2 Adjoints

First a preliminary lemma:

Lemma 4.1 (Nondegeneracy Lemma). Let V be an inner product space and v ∈ V. Then ⟨v, w⟩ = 0, for all w ∈ V, if and only if v = 0.

Proof. For the forward implication, take w = v to get ⟨v, v⟩ = 0, and so v = 0 by positive-definiteness of the inner product. Conversely, if v = 0, then ⟨v, w⟩ = 0, for any w ∈ V, since the inner product is anti-linear in the first slot¹.

Remark. To put this another way: V⊥ = {0}.

Definition. Let V be an inner product space and φ ∈ L(V). An adjoint to φ is a linear operator φ* ∈ L(V) such that, for all v, w ∈ V, we have

⟨φ*(v), w⟩ = ⟨v, φ(w)⟩

or, equivalently, by conjugate symmetry, ⟨w, φ*(v)⟩ = ⟨φ(w), v⟩.

Adjoints are well-behaved under most linear map constructions:

Proposition 4.2. Let V be an inner product space and suppose φ, ψ ∈ L(V) have adjoints. Then φ ∘ ψ; φ + λψ, λ ∈ F; φ*; and id_V all have adjoints, given by:
(1) (φ ∘ ψ)* = ψ* ∘ φ* (note the change of order here!).
(2) (φ + λψ)* = φ* + λ̄ψ*.
(3) (φ*)* = φ.
(4) (id_V)* = id_V.

Proof. These are all easy exercises².

When V is finite-dimensional, any φ ∈ L(V) has a unique adjoint:

Proposition 4.3. Let V be a finite-dimensional inner product space and φ ∈ L(V) a linear operator. Then
(1) φ has a unique adjoint φ*.
(2) Let u₁, ..., u_n be an orthonormal basis of V with respect to which φ has matrix A. Then φ* has matrix A* := conj(A)ᵀ (which is Aᵀ when F = R).

¹ To spell it out: ⟨0, w⟩ = ⟨0 + 0, w⟩ = ⟨0, w⟩ + ⟨0, w⟩.
² Question 1 on sheet 6.
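Proposition 4.3(2), that in an orthonormal basis the adjoint's matrix is the conjugate transpose, can be verified numerically. A sketch with Python's built-in complex numbers (the matrix and vectors are our own sample data), using the standard orthonormal basis of C² and this chapter's convention that the inner product is conjugate linear in the first slot:

```python
def inner(x, y):
    """<x, y> = sum conj(x_i) y_i: conjugate linear in the first slot."""
    return sum(a.conjugate() * b for a, b in zip(x, y))

def apply(A, x):
    """Matrix-vector product A x."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

A = [[1 + 2j, 3j], [4 + 0j, 5 - 1j]]
# A* = conjugate transpose of A
Astar = [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

v = [1 - 1j, 2 + 0j]
w = [0 + 3j, 1 + 1j]

# Defining property of the adjoint: <A* v, w> = <v, A w>
lhs = inner(apply(Astar, v), w)
rhs = inner(v, apply(A, w))
assert abs(lhs - rhs) < 1e-12
```

With the analysts' opposite convention (linear in the first slot), the same conjugate transpose still works; only the bookkeeping in the defining property swaps slots.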
More informationSUMMARY OF MATH 1600
SUMMARY OF MATH 1600 Note: The following list is intended as a study guide for the final exam. It is a continuation of the study guide for the midterm. It does not claim to be a comprehensive list. You
More informationReview problems for MA 54, Fall 2004.
Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on
More informationFunctional Analysis. Franck Sueur Metric spaces Definitions Completeness Compactness Separability...
Functional Analysis Franck Sueur 2018-2019 Contents 1 Metric spaces 1 1.1 Definitions........................................ 1 1.2 Completeness...................................... 3 1.3 Compactness......................................
More informationMath 25a Practice Final #1 Solutions
Math 25a Practice Final #1 Solutions Problem 1. Suppose U and W are subspaces of V such that V = U W. Suppose also that u 1,..., u m is a basis of U and w 1,..., w n is a basis of W. Prove that is a basis
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationExercise Sheet 1.
Exercise Sheet 1 You can download my lecture and exercise sheets at the address http://sami.hust.edu.vn/giang-vien/?name=huynt 1) Let A, B be sets. What does the statement "A is not a subset of B " mean?
More informationDuke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014
Duke University, Department of Electrical and Computer Engineering Optimization for Scientists and Engineers c Alex Bronstein, 2014 Linear Algebra A Brief Reminder Purpose. The purpose of this document
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationMATH 115A: SAMPLE FINAL SOLUTIONS
MATH A: SAMPLE FINAL SOLUTIONS JOE HUGHES. Let V be the set of all functions f : R R such that f( x) = f(x) for all x R. Show that V is a vector space over R under the usual addition and scalar multiplication
More informationBindel, Fall 2016 Matrix Computations (CS 6210) Notes for
1 Logistics Notes for 2016-08-29 General announcement: we are switching from weekly to bi-weekly homeworks (mostly because the course is much bigger than planned). If you want to do HW but are not formally
More informationLinear Algebra I. Ronald van Luijk, 2015
Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition
More informationMTH 2032 SemesterII
MTH 202 SemesterII 2010-11 Linear Algebra Worked Examples Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education December 28, 2011 ii Contents Table of Contents
More informationj=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.
LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a
More informationCHAPTER 3. Hilbert spaces
CHAPTER 3 Hilbert spaces There are really three types of Hilbert spaces (over C). The finite dimensional ones, essentially just C n, for different integer values of n, with which you are pretty familiar,
More informationCHAPTER VIII HILBERT SPACES
CHAPTER VIII HILBERT SPACES DEFINITION Let X and Y be two complex vector spaces. A map T : X Y is called a conjugate-linear transformation if it is a reallinear transformation from X into Y, and if T (λx)
More informationMath 121 Homework 5: Notes on Selected Problems
Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements
More information1 Invariant subspaces
MATH 2040 Linear Algebra II Lecture Notes by Martin Li Lecture 8 Eigenvalues, eigenvectors and invariant subspaces 1 In previous lectures we have studied linear maps T : V W from a vector space V to another
More informationLinear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016
Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:
More informationChapter 5. Basics of Euclidean Geometry
Chapter 5 Basics of Euclidean Geometry 5.1 Inner Products, Euclidean Spaces In Affine geometry, it is possible to deal with ratios of vectors and barycenters of points, but there is no way to express the
More informationx i e i ) + Q(x n e n ) + ( i<n c ij x i x j
Math 210A. Quadratic spaces over R 1. Algebraic preliminaries Let V be a finite free module over a nonzero commutative ring F. Recall that a quadratic form on V is a map Q : V F such that Q(cv) = c 2 Q(v)
More informationLinear and Bilinear Algebra (2WF04) Jan Draisma
Linear and Bilinear Algebra (2WF04) Jan Draisma CHAPTER 10 Symmetric bilinear forms We already know what a bilinear form β on a K-vector space V is: it is a function β : V V K that satisfies β(u + v,
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More informationWOMP 2001: LINEAR ALGEBRA. 1. Vector spaces
WOMP 2001: LINEAR ALGEBRA DAN GROSSMAN Reference Roman, S Advanced Linear Algebra, GTM #135 (Not very good) Let k be a field, eg, R, Q, C, F q, K(t), 1 Vector spaces Definition A vector space over k is
More informationLinear Algebra March 16, 2019
Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented
More informationA linear algebra proof of the fundamental theorem of algebra
A linear algebra proof of the fundamental theorem of algebra Andrés E. Caicedo May 18, 2010 Abstract We present a recent proof due to Harm Derksen, that any linear operator in a complex finite dimensional
More informationFinite-dimensional spaces. C n is the space of n-tuples x = (x 1,..., x n ) of complex numbers. It is a Hilbert space with the inner product
Chapter 4 Hilbert Spaces 4.1 Inner Product Spaces Inner Product Space. A complex vector space E is called an inner product space (or a pre-hilbert space, or a unitary space) if there is a mapping (, )
More informationTopics in linear algebra
Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over
More informationYORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #2 Solutions
YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 3. M Test # Solutions. (8 pts) For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For this
More informationBasics of Hermitian Geometry
Chapter 8 Basics of Hermitian Geometry 8.1 Sesquilinear Forms, Hermitian Forms, Hermitian Spaces, Pre-Hilbert Spaces In this chapter, we attempt to generalize the basic results of Euclidean geometry presented
More informationNOTES on LINEAR ALGEBRA 1
School of Economics, Management and Statistics University of Bologna Academic Year 207/8 NOTES on LINEAR ALGEBRA for the students of Stats and Maths This is a modified version of the notes by Prof Laura
More informationIntroduction to Linear Algebra, Second Edition, Serge Lange
Introduction to Linear Algebra, Second Edition, Serge Lange Chapter I: Vectors R n defined. Addition and scalar multiplication in R n. Two geometric interpretations for a vector: point and displacement.
More information5 Compact linear operators
5 Compact linear operators One of the most important results of Linear Algebra is that for every selfadjoint linear map A on a finite-dimensional space, there exists a basis consisting of eigenvectors.
More informationSYLLABUS. 1 Linear maps and matrices
Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V
More information