Math 396. Quotient spaces


1. Definitions

Definition. Let F be a field, V a vector space over F and W ⊆ V a subspace of V. For v, v′ ∈ V, we say that v ≡ v′ mod W if and only if v − v′ ∈ W. One can readily verify that with this definition congruence modulo W is an equivalence relation on V. If v ∈ V, then we denote by v̄ = v + W = {v + w : w ∈ W} the equivalence class of v. We define the quotient space of V by W as V/W = {v̄ : v ∈ V}, and we make it into a vector space over F with the operations v̄ + v̄′ = (v + v′) + W and λv̄ = (λv) + W for λ ∈ F. The proof that these operations are well defined is similar to what is needed to verify that addition and multiplication are well-posed operations on Z_n, the integers mod n, n ∈ N. Lastly, define the projection π : V → V/W by v ↦ v̄. Note that π is both linear and surjective.

One can, but in general should not, try to visualize the quotient space V/W as a subspace of the space V. With this in mind, in Figure 1 we have a diagram of how one might do this with V = R² and W = {(x, y) ∈ R² : y = 0} (the x-axis). Note that (x, y) ≡ (x̃, ỹ) mod W if and only if y = ỹ. Equivalence classes of vectors (x, y) are of the form (x, y) + W = (0, y) + W. A few of these are indicated in Figure 1 by dotted lines. Hence one could linearly identify R²/W with the y-axis. Similarly, if L is any line in R² distinct from W, then L ⊕ W = R², and so consequently every element of R² has a unique representative from L modulo W; hence the composite map L → R² → R²/W is an isomorphism. Of course, there are many choices of L, and none is better than any other from an algebraic point of view (though the y-axis may look like the nicest, since it is the orthogonal complement to the x-axis W, a viewpoint we will return to later).

Figure 1. The quotient R²/W

After looking at Figure 1, one should forget it as quickly as possible. The problems with this sort of identification are discussed later on.
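The well-definedness of the quotient operations can be made concrete in code. The following minimal Python sketch (an illustration with ad hoc names, not part of the handout) models V = Q² and W = the x-axis: a class v + W is determined by the y-coordinate alone, so class arithmetic gives the same answer no matter which representatives are chosen.

```python
# V = Q^2 (here: exact integer coordinates), W = the x-axis.
# Two vectors are congruent mod W exactly when their y-coordinates agree.
def congruent_mod_W(v, u):
    return v[1] == u[1]

# The class v + W is determined by the y-coordinate (compare the
# identification of R^2/W with the y-axis in Figure 1).
def cls(v):
    return v[1]

def add_classes(c1, c2):      # (v + W) + (v' + W) = (v + v') + W
    return c1 + c2

def scale_class(lam, c):      # lam * (v + W) = (lam * v) + W
    return lam * c

# Well-definedness: replacing a representative v by v + w with w in W
# (an arbitrary x-shift) does not change the resulting class.
v, vp, w = (1, 2), (5, -3), (7, 0)
shifted = (v[0] + w[0], v[1] + w[1])
assert congruent_mod_W(v, shifted)
assert add_classes(cls(v), cls(vp)) == add_classes(cls(shifted), cls(vp))
```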
Roughly speaking, there are many natural operations associated with the quotient space that are not compatible with the above non-canonical view of the quotient space as a subspace of the original vector space. The issues are similar to those of viewing {0, …, 9} ⊆ Z as a set of representatives of Z₁₀. For example, the map f : Z → Z, z ↦ 2z cannot be thought of in the same way as the induced map z̄ ↦ 2z̄ on Z₁₀, because f({0, …, 9}) ⊄ {0, …, 9}. Although one should not view Z₁₀ as the fixed set of elements {0, …, 9} lying in Z, just as one should not in general identify

V/W with a fixed set of representatives in V, this view can be useful for various computations. We will see that this is akin to the course of action that we will take in the context of quotient spaces in a later section.

2. Universal Property of the Quotient

Let F, V, W and π be as above. Then the quotient V/W has the following universal property: whenever W′ is a vector space over F and ψ : V → W′ is a linear map whose kernel contains W, there exists a unique linear map φ : V/W → W′ such that ψ = φ ∘ π. The universal property can be summarized by the following commutative diagram:

(1)
    V ───ψ──→ W′
    │          ↑
   π│          │φ
    ↓          │
   V/W ───────┘

Proof. The proof of this fact is rather elementary, but it is a useful exercise in developing a better understanding of the quotient space. Let W′ be a vector space over F and ψ : V → W′ a linear map with W ⊆ ker(ψ). Define φ : V/W → W′ to be the map v̄ ↦ ψ(v). Note that φ is well defined, because if v̄ ∈ V/W and v′, v′′ ∈ V are both representatives of v̄, then there exists w ∈ W such that v′ = v′′ + w, and hence ψ(v′) = ψ(v′′ + w) = ψ(v′′) + ψ(w) = ψ(v′′). The linearity of φ is an immediate consequence of the linearity of ψ. Furthermore, it is clear from the definition that ψ = φ ∘ π. It remains to show that φ is unique. Suppose that σ : V/W → W′ is such that σ ∘ π = ψ = φ ∘ π. Then σ(v̄) = φ(v̄) for all v ∈ V. Since π is surjective, σ = φ.

As an example, let V = C^∞(R) be the vector space of infinitely differentiable functions over the field R, W ⊆ V the space of constant functions, and ϕ : V → V the map f ↦ f′, the differentiation operator. Then clearly W ⊆ ker(ϕ), and so there exists a unique map φ : V/W → V such that ϕ = φ ∘ π, with π : V → V/W the projection. Two functions f, g ∈ V satisfy f̄ = ḡ if and only if f′ = g′. Note that the set of representatives of each f̄ ∈ V/W can be thought of as f + R. Concretely, the content of this discussion is that the derivative of f depends only on f up to an additive constant.

In a sense, all surjections are quotients. Let T : V → V′ be a surjective linear map with kernel W ⊆ V.
Then there is an induced linear map T̄ : V/W → V′ that is surjective (because T is) and injective (this follows from the definition of W). Thus to factor a linear map ψ : V → W′ through a surjective map T is the same as to factor ψ through the quotient V/W. One can use the universal property of the quotient to prove another useful factorization. Let V and V′ be vector spaces over F and W ⊆ V and W′ ⊆ V′ subspaces. Denote by π : V → V/W and π′ : V′ → V′/W′ the projection maps. Suppose that T : V → V′ is a linear map with T(W) ⊆ W′, and let T̃ : V → V′/W′ be the map x ↦ π′(T(x)). Note that W ⊆ ker(T̃). Hence by the universal property of the quotient, there exists a unique linear map T̄ : V/W → V′/W′ such that T̄ ∘ π = T̃. The content of this statement can be summarized by saying that if everything is as above, then the following diagram commutes:

(2)
          T
    V ───────→ V′
    │           │
   π│           │π′
    ↓           ↓
   V/W ──────→ V′/W′
          T̄

In such a situation, we will refer to T̄ as the induced map. The point is that for v ∈ V, the class of T(v) modulo W′ depends only on the class of v modulo W, and that the resulting association of congruence classes given by T̄ is linear with respect to the linear structures on these quotient spaces. The induced map turns out to be rather useful as a tool for inductive proofs on the dimension. We will see an example of this at the end of this handout.

3. A Basis of the Quotient

Let V be a finite-dimensional F-vector space (for a field F), W a subspace, and {w₁, …, w_m}, {w₁, …, w_m, v₁, …, v_n} ordered bases of W and V respectively. We claim that the image vectors {v̄₁, …, v̄_n} in V/W are a basis of this quotient space. In particular, this shows dim(V/W) = dim V − dim W. The first step is to check that the v̄_j's span V/W, and the second step is to verify their linear independence. Choose an element x ∈ V/W, and pick a representative v ∈ V of x (i.e., the projection V → V/W sends v onto our chosen element, or in other words x = v̄). Since {w_i, v_j} spans V, we can write v = Σ a_i w_i + Σ b_j v_j. Applying the linear projection V → V/W to this, we get x = v̄ = Σ a_i w̄_i + Σ b_j v̄_j = Σ b_j v̄_j, since w̄_i ∈ V/W vanishes for all i. Thus the v̄_j's span V/W. Meanwhile, for linear independence, if Σ b_j v̄_j ∈ V/W vanishes for some b_j ∈ F, then we want to prove that all b_j vanish. Well, consider the element Σ b_j v_j ∈ V. This projects to 0 in V/W, so we conclude Σ b_j v_j ∈ W. But W is spanned by the w_i's, so we get Σ b_j v_j = Σ a_i w_i for some a_i ∈ F. This can be rewritten as Σ b_j v_j + Σ (−a_i) w_i = 0 in V. By the assumed linear independence of {w₁, …, w_m, v₁, …, v_n}, we conclude that all coefficients on the left side vanish in F. In particular, all b_j vanish.

Note that there is a converse result as well: if v₁, …, v_n ∈ V have the property that their images v̄_i ∈ V/W form a basis (so n = dim V − dim W), then for any basis {w_j} of W the collection of the v_i's and w_j's is a basis of V.
Indeed, since n = dim V − dim W, the collection of the v_i's and w_j's has size dim V, and hence it suffices to check that it spans V. For any v ∈ V, its image v̄ ∈ V/W is a linear combination Σ a_i v̄_i, so v − Σ a_i v_i has image 0 in V/W. That is, v − Σ a_i v_i ∈ W, and so v − Σ a_i v_i = Σ b_j w_j for some b_j's. This shows that v is in the span of the v_i's and w_j's, as desired.
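The basis statement above can be checked mechanically in a small case. In this Python sketch (illustrative; the vectors are chosen arbitrarily), V = Q³ and W = span{w₀} with w₀ = (1, 1, 0), extended by v₁, v₂ to a basis of V; the images of v₁, v₂ give each class of V/W two coordinates, matching dim(V/W) = 3 − 1 = 2.

```python
# V = Q^3, W = span{w0} with w0 = (1, 1, 0), extended to the basis
# {w0, v1, v2} of V by v1 = (0, 1, 0), v2 = (0, 0, 1).
w0, v1, v2 = (1, 1, 0), (0, 1, 0), (0, 0, 1)

# Coordinates of v in the basis {w0, v1, v2}: solving
# v = a*w0 + b1*v1 + b2*v2 gives a = v[0], b1 = v[1] - v[0], b2 = v[2].
def coords(v):
    a = v[0]
    return (a, v[1] - a, v[2])

# The class of v in V/W is the pair (b1, b2): the w0-coordinate dies.
def cls(v):
    _, b1, b2 = coords(v)
    return (b1, b2)

# The images of v1, v2 are the standard basis of the quotient coordinates:
assert cls(v1) == (1, 0) and cls(v2) == (0, 1)
# Shifting v by a multiple of w0 does not change its class:
assert cls((4, 7, 9)) == cls((4 + 3, 7 + 3, 9))
```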

4. Some calculations

We now apply the above principles to compute matrices for an induced map on quotients using several different choices of bases. Let A : R⁴ → R³ be the linear map described by the matrix

        (  2    0   −23    4 )
    A = (  1    1     0   −1 )
        (  3   −5     7   26 )

and let L ⊆ R⁴ and L′ ⊆ R³ be the lines

    L = span{(1, −1, 2, 0)},    L′ = span{(−4, 0, 2)}.

We wish to do two things:

(i) In the specific setup at the start, check that A sends L into L′. Letting Ā : R⁴/L → R³/L′ denote the induced linear map, if {e_i} and {e′_j} denote the respective standard bases of R⁴ and R³, then we wish to compute the matrix of Ā with respect to the induced bases {ē₂, ē₃, ē₄} of R⁴/L and {ē′₁, ē′₂} of R³/L′. The considerations given in the previous section ensure that the ē's form bases of their respective quotient spaces. Indeed, the ē's along with the vector spanning the corresponding line do in fact form bases of their respective ambient vector spaces.

(ii) We want to compute the matrix for Ā if we switch to the basis {ē₁, ē₂, ē₄} of R⁴/L, and check that this agrees with the result obtained from using our answer in (i) and a 3 by 3 change-of-basis matrix to relate the ordered bases {ē₂, ē₃, ē₄} and {ē₁, ē₂, ē₄} of R⁴/L. We also want to show that {ē₁, ē₂, ē₃} is not a basis of the 3-dimensional R⁴/L by explicitly exhibiting a vector x ∈ R⁴/L which is not in the span of this triple (and proving that this x really is not in the span).

Solution. (i) It is trivial to check that A sends the indicated spanning vector of L to 11 times the indicated spanning vector of L′, so by linearity it sends the line L over into the line L′. Thus, we get a well-defined linear map Ā : R⁴/L → R³/L′ characterized by Ā(v mod L) = A(v) mod L′. Letting w denote the indicated spanning vector of L, the 4-tuple {e₂, e₃, e₄, w} is a linearly independent 4-tuple (since w has non-zero first coordinate!) in a 4-dimensional space, and hence it is a basis of R⁴. Thus, by the basis criterion for quotients, {ē₂, ē₃, ē₄} is a basis of R⁴/L.
By a similar argument, since the indicated spanning vector of L′ has non-zero third coordinate, it follows that {ē′₁, ē′₂} is a basis of R³/L′. To compute the matrix {ē′₁,ē′₂}[Ā]{ē₂,ē₃,ē₄}, we have to find the (necessarily unique) elements a_ij ∈ R such that Ā(ē_j) = a₁ⱼē′₁ + a₂ⱼē′₂ in R³/L′ (or equivalently, A(e_j) ≡ a₁ⱼe′₁ + a₂ⱼe′₂ mod L′), and then the sought-after matrix will be

    ( a₁₂   a₁₃   a₁₄ )
    ( a₂₂   a₂₃   a₂₄ )

Note that the labelling of the a_ij's here is chosen to match that of the ē_j's and the ē′_i's, and of course has nothing to do with labelling positions within a matrix. We won't use such notation when actually doing calculations with this matrix, for at such time we will have actual numbers in the various slots and hence won't have any need for generic matrix notation.

Now for the calculation. We must express each Ā(ē_j) as a linear combination of ē′₁ and ē′₂. Well, by definition we are given A(e_j) as a linear combination of e′₁, e′₂, e′₃. Working modulo L′ we want to express such things in terms of ē′₁ and ē′₂, so the issue is to express e′₃ mod L′ as a linear combination of e′₁ mod L′ and e′₂ mod L′. That is, we seek a, b ∈ R such that e′₃ ≡ ae′₁ + be′₂ mod L′, and then we'll substitute this into our formulas for each A(e_j). To find a and b, we let

    w′ = (−4, 0, 2)

be the indicated basis vector of L′, and since {e′₁, e′₂, w′} is a basis of R³ we know that there is a unique expression e′₃ = ae′₁ + be′₂ + cw′, and we merely have to make this explicit (for then e′₃ ≡ ae′₁ + be′₂ mod L′). Writing this out in terms of elements of R³, it says

    (0, 0, 1) = (a − 4c, b, 2c),

so we easily solve: c = 1/2, b = 0, a = 4c = 4/2 = 2. That is, e′₃ ≡ 2e′₁ mod L′, or in other words ē′₃ = 2ē′₁ in R³/L′. Thus, going back to the definition of A, we compute

    A(e₂) = e′₂ − 5e′₃ ≡ −10e′₁ + e′₂ mod L′,
    A(e₃) = −23e′₁ + 7e′₃ ≡ −9e′₁ mod L′,
    A(e₄) = 4e′₁ − e′₂ + 26e′₃ ≡ 56e′₁ − e′₂ mod L′,

so

    {ē′₁,ē′₂}[Ā]{ē₂,ē₃,ē₄} = ( −10   −9   56 )
                             (   1    0   −1 )

(ii) Since the third coordinate of the indicated spanning vector w of L is non-zero, we see that {e₁, e₂, e₄, w} is a basis of R⁴, so {ē₁, ē₂, ē₄} is a basis of R⁴/L. The hard part of the calculation was already done in (i), where we computed e′₃ as a linear combination of e′₁ and e′₂:

    e′₃ ≡ 2e′₁ mod L′.

Using this, we compute

    A(e₁) = 2e′₁ + e′₂ + 3e′₃ ≡ 8e′₁ + e′₂ mod L′,

so in conjunction with the computations of A(e₂) and A(e₄) modulo L′ which we did in (i), we get

    {ē′₁,ē′₂}[Ā]{ē₁,ē₂,ē₄} = ( 8   −10   56 )
                             ( 1     1   −1 )
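The computation in (i) can be verified mechanically. The following Python sketch (illustrative) applies A to the standard basis vectors and reduces modulo L′ by subtracting the multiple of w′ that kills the third coordinate, recovering the columns of the matrix of Ā.

```python
from fractions import Fraction as F

# The data of this example: A : R^4 -> R^3, L = span{w}, L' = span{w'}.
A  = [[2,  0, -23,  4],
      [1,  1,   0, -1],
      [3, -5,   7, 26]]
w  = [1, -1, 2, 0]
wp = [-4, 0, 2]

def apply(M, v):
    return [sum(F(row[i]) * v[i] for i in range(len(v))) for row in M]

# Reduce a vector of R^3 modulo L': subtract (v[2] / w'[2]) * w' to kill
# the third coordinate; what is left are the e1'-, e2'-coordinates of the class.
def cls_mod_Lp(v):
    c = F(v[2], 2)
    return (v[0] - c * wp[0], v[1] - c * wp[1])

# A sends L into L': A(w) = 11 * w' lies on the line L'.
assert apply(A, w) == [-44, 0, 22]

def std(j):                      # j-th standard basis vector of R^4 (0-indexed)
    return [int(i == j) for i in range(4)]

# Columns of [A-bar] with respect to {e2-bar, e3-bar, e4-bar}:
cols = [cls_mod_Lp(apply(A, std(j))) for j in (1, 2, 3)]
assert cols == [(-10, 1), (-9, 0), (56, -1)]
```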

The relevant change-of-basis matrices for passing between our two coordinate systems are

    M := {ē₂,ē₃,ē₄}[id]{ē₁,ē₂,ē₄},    N := {ē₁,ē₂,ē₄}[id]{ē₂,ē₃,ē₄},

which are inverse to each other. The choice of which matrix to use depends on which way you wish to change coordinates: we have

    {ē′₁,ē′₂}[Ā]{ē₁,ē₂,ē₄} = {ē′₁,ē′₂}[Ā]{ē₂,ē₃,ē₄} · M,
    {ē′₁,ē′₂}[Ā]{ē₂,ē₃,ē₄} = {ē′₁,ē′₂}[Ā]{ē₁,ē₂,ē₄} · N.

We'll compute M, and then we'll invert it to find N (or we could just as well directly compute N by the same method, and check that our results are inverse to each other as a safety check). Then we'll see that our computed matrices do work as the theory ensures they must, relative to our calculations of matrices for Ā with respect to two different choices of basis in R⁴/L. By staring at the two bases of R⁴/L which are involved, it is clear that

        ( a   1   0 )
    M = ( b   0   0 )
        ( c   0   1 ),

where ē₁ = aē₂ + bē₃ + cē₄ in R⁴/L, or equivalently

    e₁ = ae₂ + be₃ + ce₄ + dw

with w the indicated basis vector of L and d ∈ R. We need to explicate the fact that {e₂, e₃, e₄, w} is a basis of R⁴ by solving this system of equations; in coordinates it says

    (1, 0, 0, 0) = (d, a − d, b + 2d, c),

so d = 1, a = d = 1, b = −2d = −2, c = 0. Thus,

        (  1   1   0 )
    M = ( −2   0   0 ),
        (  0   0   1 )

and likewise (either by inverting, or by doing a similar direct calculation, or perhaps even both!) we find

        ( 0   −1/2   0 )
    N = ( 1    1/2   0 ).
        ( 0     0    1 )

Now the calculation comes down to checking one of the following identities:

    ( 8   −10   56 )  =?  ( −10   −9   56 ) (  1   1   0 )
    ( 1     1   −1 )      (   1    0   −1 ) ( −2   0   0 )
                                            (  0   0   1 )

    ( −10   −9   56 )  =?  ( 8   −10   56 ) ( 0   −1/2   0 )
    (   1    0   −1 )      ( 1     1   −1 ) ( 1    1/2   0 )
                                            ( 0     0    1 )

both of which are straightforward to verify.
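These identities, and the fact that N = M⁻¹, are quick to confirm with exact rational arithmetic; the sketch below is illustrative.

```python
from fractions import Fraction as F

# [A-bar] in the two bases of R^4/L, and the change-of-basis matrices.
old = [[-10, -9, 56],         # with respect to {e2-bar, e3-bar, e4-bar}
       [  1,  0, -1]]
new = [[  8, -10, 56],        # with respect to {e1-bar, e2-bar, e4-bar}
       [  1,   1, -1]]
M = [[ 1, 1, 0],
     [-2, 0, 0],
     [ 0, 0, 1]]
N = [[F(0), F(-1, 2), F(0)],
     [F(1), F( 1, 2), F(0)],
     [F(0), F( 0),    F(1)]]

def mul(X, Y):
    return [[sum(F(X[i][k]) * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

assert mul(old, M) == new                                # first identity
assert mul(new, N) == old                                # second identity
assert mul(M, N) == [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # N = M^{-1}
```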

Finally, to show explicitly that ē₁, ē₂, ē₃ are not a basis of R⁴/L, we consider the congruence class x = ē₄. This vector cannot lie in the R-span of ē₁, ē₂, ē₃, for otherwise there would exist a, b, c ∈ R with

    e₄ − ae₁ − be₂ − ce₃ ∈ span{(1, −1, 2, 0)},

but this would express e₄ in the span of four vectors which all have last coordinate zero. This is impossible.

5. The Interaction of the Quotient and an Inner Product

Let V be a vector space over R with dim V < ∞, and W ⊆ V a subspace. Further suppose that V is equipped with an inner product ⟨·, ·⟩. Then we can identify V/W with W⊥, since W ⊕ W⊥ = V. In this case, the isomorphism is particularly easy to construct. Indeed, every element v ∈ V can be uniquely written as u + w, where u ∈ W⊥ and w ∈ W, and we identify v̄ with u. In other words, the direct sum decomposition implies that restricting the projection π : V → V/W to W⊥ induces an isomorphism of W⊥ onto V/W, and by composing with the inverse of this isomorphism we get a composite map V → V/W → W⊥ that is exactly the orthogonal projection onto W⊥ (in view of how the decomposition of V into a direct sum of W and W⊥ is made). In this sense, when given an inner product on our finite-dimensional V over R, we may use the inner product structure to set up a natural identification of the quotient space V/W with a subspace of the given vector space V, and we see that this orthogonal picture is exactly what was suggested in the example in Section 1 (see Figure 1). However, since this mechanism depends crucially on the inner product structure, we stress a key point: if we change the inner product, then the notion of orthogonal projection (and hence the physical subspace W⊥ in general) will change, so this method of linearly putting V/W back into V depends on more than just the linear structure.
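The identification V/W ≅ W⊥ is easiest to see in the Figure 1 example. In this Python sketch (illustrative), W is the x-axis of R², W⊥ the y-axis, and the composite V → V/W → W⊥ is literally orthogonal projection onto the y-axis.

```python
# Figure 1 made literal: V = R^2 with the standard inner product,
# W = the x-axis, W-perp = the y-axis.
def cls(v):                      # pi : V -> V/W; the class is the y-coordinate
    return v[1]

def rep_in_W_perp(c):            # inverse of pi restricted to W-perp
    return (0, c)

def orth_proj(v):                # composite V -> V/W -> W-perp
    return rep_in_W_perp(cls(v))

# The composite is orthogonal projection onto the y-axis:
assert orth_proj((3, 4)) == (0, 4)
# pi restricted to W-perp is a bijection onto V/W:
assert cls(rep_in_W_perp(4)) == 4
```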
Consequently, if we work with auxiliary structures that do not respect the inner product (such as non-orthogonal self-maps of V), then we cannot expect such data to interact well with this construction, even when such data may give something well-posed on the level of abstract vector spaces. Let us consider an example to illustrate what can go wrong if we are too sloppy in this orthogonal-projection method of putting V/W back into V. Suppose that V′ is another finite-dimensional vector space over R with inner product ⟨·, ·⟩′, W′ ⊆ V′ a subspace of V′, and T : V → V′ a linear map with T(W) ⊆ W′. Then it is not true, in general, that T(W⊥) ⊆ (W′)⊥. As an example, consider V = R², W = {(x, y) ∈ R² : y = 0}, V′ = R³ and W′ = {(x, y, z) ∈ R³ : z = 0}, and let T : R² → R³ be the map (x, y) ↦ (x, y, y). Assume that V and V′ are both equipped with the standard Euclidean inner product. Clearly, T(W) ⊆ W′. On the other hand, (0, 1) ∈ W⊥ but T((0, 1)) = (0, 1, 1) ∉ (W′)⊥, and hence T(W⊥) ⊄ (W′)⊥. So the restriction of T to W⊥ does not land in (W′)⊥, and hence there is no orthogonal-projection model inside of V and V′ for the induced map T̄ between the quotients V/W and V′/W′. (Note that T does not preserve the inner products in this example.) Hence, although the induced quotient map T̄ makes sense here, as T carries W into W′, if we try to visualize the quotients back inside of the original spaces via orthogonal projection then we have a hard time seeing T̄.

6. The Quotient and the Dual

Let V be a vector space over F with dim V < ∞, and W ⊆ V a subspace. Then (V/W)*, the dual space of V/W, can naturally be thought of as a linear subspace of V*. More precisely, there exists

a natural isomorphism ϕ : U → (V/W)*, where U = {f ∈ V* : f(W) = 0}. The isomorphism can be easily constructed by appealing to the universal property of the quotient. Indeed, let f ∈ U. Then f : V → F is linear with W ⊆ ker(f), and so there exists a unique map g : V/W → F, in other words g ∈ (V/W)*, such that f = g ∘ π. Define ϕ(f) = g, with f and g as before. The linearity of ϕ is clear from the construction, and the injectivity is also clear, since if ϕ(f₁) = g = ϕ(f₂), then f₁ = g ∘ π = f₂. To see that ϕ is surjective, pick g ∈ (V/W)* arbitrary and let f = g ∘ π ∈ U. By uniqueness, ϕ(f) = g follows from the definition of ϕ.

Conversely, we can show that every subspace of V* has the form (V/W)* for a unique quotient V/W of V. In other words, the subspaces of V* are in bijective correspondence with the quotients of V via duality. Indeed, to give a quotient is the same as to give a surjective linear map V → V̄, and to give a subspace of V* is the same as to give an injective linear map U → V*, and we know that a linear map between finite-dimensional vector spaces is surjective if and only if its dual is injective (and vice versa). By means of double duality, these two procedures are the reverse of each other: if V → V̄ is a quotient and we form the subspace V̄* ⊆ V*, then upon dualizing again we recover (by double duality) the original surjection, which is to say that V̄ is the quotient of V by exactly the subspace of vectors in V killed by the functionals coming from V̄*. That is, the concrete converse procedure to go from subspaces of V* to quotients of V is the following: given a subspace of V*, we form the quotient of V modulo the subspace of vectors killed by all functionals in the given subspace of V*.
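The isomorphism U ≅ (V/W)* can be sketched concretely. In the Python illustration below (ad hoc names), functionals on Q³ are row vectors, W = span{e₁}, and a functional killing W descends to a well-defined functional on the quotient coordinates (v₂, v₃).

```python
# Functionals on Q^3 as row vectors; W = span{e1}.  A functional f
# kills W exactly when f[0] == 0, and then f(v) depends only on the
# class of v modulo W, i.e. on (v2, v3).
def ev(f, v):
    return sum(f[i] * v[i] for i in range(3))

def kills_W(f):
    return f[0] == 0

# phi(f) = the induced functional g on V/W (quotient coordinates (v2, v3)).
def phi(f):
    assert kills_W(f)
    return (f[1], f[2])

def ev_quotient(g, c):
    return g[0] * c[0] + g[1] * c[1]

f = (0, 3, -2)                       # an element of U
v1, v2 = (9, 1, 4), (-6, 1, 4)       # two representatives of the same class mod W
assert ev(f, v1) == ev(f, v2) == ev_quotient(phi(f), (1, 4))
```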
In the context of inner product spaces, where we use the inner product to identify quotients with subspaces of the original space, this procedure translates into the passage between subspaces of V and their orthogonal complements (and the fact that the above two procedures are reverse to each other corresponds to the fact that the orthogonal complement of the orthogonal complement is the initial subspace).

7. More on the Quotient and Matrices

Let V and V′ be vector spaces over F of dimension n and n′, respectively, W ⊆ V and W′ ⊆ V′ subspaces of dimension m and m′, respectively, and T : V → V′ a linear map such that T(W) ⊆ W′. Let B and B′ be bases of V and V′, respectively, that are extensions of bases of W and W′, respectively. Let A be the matrix representation of T with respect to B and B′. Then the lower-right (n′ − m′) × (n − m) block of A corresponds exactly to the matrix representation of the induced map T̄ : V/W → V′/W′ with respect to the bases of V/W and V′/W′ coming from the non-zero vectors in the sets π(B) and π′(B′), with π and π′ the usual projection maps.

Proof. Let m̄ = n′ − m′ and let v̄′₁, …, v̄′_m̄ ∈ V′/W′ be the non-zero vectors in π′(B′). Note that these vectors form a basis of V′/W′. Further, let v ∈ B with v ∉ W. Then T̄(v̄) = (T(v))‾ = α₁v̄′₁ + ⋯ + α_m̄ v̄′_m̄. Hence there exists w′ ∈ W′ such that T(v) = α₁v′₁ + ⋯ + α_m̄ v′_m̄ + w′. The claim follows, since the matrix representation of a linear map with respect to choices of ordered bases arises exactly as the coefficients of the images of the vectors making up the ordered basis of the source vector space, written with respect to the ordered basis of the target vector space.

Now let V = V′ and W = W′, and let T : V → V be a linear map with T(W) ⊆ W. Let T|_W : W → W be the self-map induced by T, and let T̄ : V/W → V/W be the map induced by T on V/W. We shall now apply the above result to show that det(T) = det(T|_W) · det(T̄). To this end,

one only needs to note that the upper-left block M₁₁ of the matrix representation

        ( M₁₁   M₁₂ )
    A = ( M₂₁   M₂₂ )

of T with respect to a basis B of V extended from a basis B_W of W will correspond to the matrix representation of T|_W with respect to the basis B_W, and also that the entries of the lower-left block M₂₁ of A will be 0. Both of these facts are immediate, since the restriction T|_W maps W into W. Furthermore, the previous result implies that M₂₂ corresponds exactly to the matrix representation of T̄ with respect to the basis of V/W induced by B. Since A is therefore block upper triangular, det(T) = det(M₁₁) · det(M₂₂) = det(T|_W) · det(T̄).

As another application, consider the following. Let V be a vector space of finite dimension over C and T : V → V a linear map. We will use the idea of the quotient space to show that there exists a basis B of V such that the map T written as a matrix with respect to the basis B is upper triangular.

Proof. The proof is by induction on the dimension n ∈ N₀. The case n = 0, i.e. V = 0, is somewhat meaningless, and so we ignore it. The case n = 1 is also trivial. Now suppose, for some fixed n ∈ N, that for every vector space V over C of dimension n, every linear map T : V → V can be expressed as an upper triangular matrix with respect to some ordered basis B of V. Let W be an (n + 1)-dimensional vector space over C and S : W → W a linear map. Recall that by the fundamental theorem of algebra, C is algebraically closed, i.e. every non-constant polynomial over C has a root. In particular, it is immediate from this fact (applied to the characteristic polynomial of S) that there exists an eigenvector w ∈ W of S. Let L = span{w}. Note that S(L) ⊆ L, since w is an eigenvector of S, and further that dim W/L = n. By the induction hypothesis, there exists an ordered basis B̄ = {w̄₁, …, w̄_n} of W/L such that the induced map S̄ : W/L → W/L expressed as a matrix with respect to B̄ is upper triangular. Let B = {w, w₁, …, w_n}. Then B is an ordered basis of W (recall Section 3). Furthermore, S written as a matrix A with respect to B is upper triangular:

        ( M₁₁   M₁₂ )
    A = ( M₂₁   M₂₂ )

Above we express A in block form.
Here M₁₁ is a 1 × 1 matrix, M₁₂ a 1 × n matrix, M₂₁ an n × 1 matrix and M₂₂ an n × n matrix. Since w is an eigenvector, it is clear that M₁₁ corresponds to the eigenvalue associated with w and that M₂₁ consists only of zeroes. M₂₂ exactly agrees with the matrix expression of S̄ with respect to B̄ (recall the construction of the induced map), and so is upper triangular. For our purposes, the form of M₁₂ is irrelevant, and thus A is upper triangular.
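The determinant identity det(T) = det(T|_W) · det(T̄) from this section can be sanity-checked on a small block-triangular example (the matrix below is an arbitrary illustration):

```python
# T : Q^3 -> Q^3 preserving W = span{e1, e2}: the matrix is block
# upper triangular with respect to the standard basis.
T = [[1, 2, 5],
     [3, 4, 6],
     [0, 0, 7]]

def det3(M):
    a, b, c = M[0]; d, e, f = M[1]; g, h, i = M[2]
    return a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)

det_T    = det3(T)
det_TW   = T[0][0]*T[1][1] - T[0][1]*T[1][0]   # upper-left 2x2 block: T restricted to W
det_Tbar = T[2][2]                             # lower-right 1x1 block: induced map on V/W

assert det_T == det_TW * det_Tbar
```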


BILINEAR FORMS KEITH CONRAD The geometry of R n is controlled algebraically by the dot product. We will abstract the dot product on R n to a bilinear form on a vector space and study algebraic and geometric

1 Fields and vector spaces

1 Fields and vector spaces In this section we revise some algebraic preliminaries and establish notation. 1.1 Division rings and fields A division ring, or skew field, is a structure F with two binary

1.8 Dual Spaces (non-examinable)

2 Theorem 1715 is just a restatement in terms of linear morphisms of a fact that you might have come across before: every m n matrix can be row-reduced to reduced echelon form using row operations Moreover,

Math 676. A compactness theorem for the idele group. and by the product formula it lies in the kernel (A K )1 of the continuous idelic norm

Math 676. A compactness theorem for the idele group 1. Introduction Let K be a global field, so K is naturally a discrete subgroup of the idele group A K and by the product formula it lies in the kernel

Infinite-Dimensional Triangularization

Infinite-Dimensional Triangularization Zachary Mesyan March 11, 2018 Abstract The goal of this paper is to generalize the theory of triangularizing matrices to linear transformations of an arbitrary vector

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS

ELEMENTARY SUBALGEBRAS OF RESTRICTED LIE ALGEBRAS J. WARNER SUMMARY OF A PAPER BY J. CARLSON, E. FRIEDLANDER, AND J. PEVTSOVA, AND FURTHER OBSERVATIONS 1. The Nullcone and Restricted Nullcone We will need

NOTES ON FINITE FIELDS

NOTES ON FINITE FIELDS AARON LANDESMAN CONTENTS 1. Introduction to finite fields 2 2. Definition and constructions of fields 3 2.1. The definition of a field 3 2.2. Constructing field extensions by adjoining

Math 291-2: Lecture Notes Northwestern University, Winter 2016

Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

Math 113 Final Exam: Solutions

Math 113 Final Exam: Solutions Thursday, June 11, 2013, 3.30-6.30pm. 1. (25 points total) Let P 2 (R) denote the real vector space of polynomials of degree 2. Consider the following inner product on P

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

Contents. Preface for the Instructor. Preface for the Student. xvii. Acknowledgments. 1 Vector Spaces 1 1.A R n and C n 2

Contents Preface for the Instructor xi Preface for the Student xv Acknowledgments xvii 1 Vector Spaces 1 1.A R n and C n 2 Complex Numbers 2 Lists 5 F n 6 Digression on Fields 10 Exercises 1.A 11 1.B Definition

Then x 1,..., x n is a basis as desired. Indeed, it suffices to verify that it spans V, since n = dim(v ). We may write any v V as r

Practice final solutions. I did not include definitions which you can find in Axler or in the course notes. These solutions are on the terse side, but would be acceptable in the final. However, if you

where Σ is a finite discrete Gal(K sep /K)-set unramified along U and F s is a finite Gal(k(s) sep /k(s))-subset

Classification of quasi-finite étale separated schemes As we saw in lecture, Zariski s Main Theorem provides a very visual picture of quasi-finite étale separated schemes X over a henselian local ring

EXAMPLES AND EXERCISES IN BASIC CATEGORY THEORY

EXAMPLES AND EXERCISES IN BASIC CATEGORY THEORY 1. Categories 1.1. Generalities. I ve tried to be as consistent as possible. In particular, throughout the text below, categories will be denoted by capital

Chapter 6: Orthogonality

Chapter 6: Orthogonality (Last Updated: November 7, 7) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). A few theorems have been moved around.. Inner products

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES

MATRICES ARE SIMILAR TO TRIANGULAR MATRICES 1 Complex matrices Recall that the complex numbers are given by a + ib where a and b are real and i is the imaginary unity, ie, i 2 = 1 In what we describe below,

Homework set 4 - Solutions

Homework set 4 - Solutions Math 407 Renato Feres 1. Exercise 4.1, page 49 of notes. Let W := T0 m V and denote by GLW the general linear group of W, defined as the group of all linear isomorphisms of W

Linear Algebra I. Ronald van Luijk, 2015

Linear Algebra I Ronald van Luijk, 2015 With many parts from Linear Algebra I by Michael Stoll, 2007 Contents Dependencies among sections 3 Chapter 1. Euclidean space: lines and hyperplanes 5 1.1. Definition

Representations of quivers

Representations of quivers Gwyn Bellamy October 13, 215 1 Quivers Let k be a field. Recall that a k-algebra is a k-vector space A with a bilinear map A A A making A into a unital, associative ring. Notice

Chapter 2 The Group U(1) and its Representations

Chapter 2 The Group U(1) and its Representations The simplest example of a Lie group is the group of rotations of the plane, with elements parametrized by a single number, the angle of rotation θ. It is

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

1 Dirac Notation for Vector Spaces

Theoretical Physics Notes 2: Dirac Notation This installment of the notes covers Dirac notation, which proves to be very useful in many ways. For example, it gives a convenient way of expressing amplitudes

Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003

Handout V for the course GROUP THEORY IN PHYSICS Mic ael Flohr Representation theory of semi-simple Lie algebras: Example su(3) 6. and 20. June 2003 GENERALIZING THE HIGHEST WEIGHT PROCEDURE FROM su(2)

Math 113 Midterm Exam Solutions

Math 113 Midterm Exam Solutions Held Thursday, May 7, 2013, 7-9 pm. 1. (10 points) Let V be a vector space over F and T : V V be a linear operator. Suppose that there is a non-zero vector v V such that

Some notes on linear algebra

Some notes on linear algebra Throughout these notes, k denotes a field (often called the scalars in this context). Recall that this means that there are two binary operations on k, denoted + and, that

Problems in Linear Algebra and Representation Theory

Problems in Linear Algebra and Representation Theory (Most of these were provided by Victor Ginzburg) The problems appearing below have varying level of difficulty. They are not listed in any specific

MATH 115A: SAMPLE FINAL SOLUTIONS

MATH A: SAMPLE FINAL SOLUTIONS JOE HUGHES. Let V be the set of all functions f : R R such that f( x) = f(x) for all x R. Show that V is a vector space over R under the usual addition and scalar multiplication

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

EILENBERG-ZILBER VIA ACYCLIC MODELS, AND PRODUCTS IN HOMOLOGY AND COHOMOLOGY

EILENBERG-ZILBER VIA ACYCLIC MODELS, AND PRODUCTS IN HOMOLOGY AND COHOMOLOGY CHRIS KOTTKE 1. The Eilenberg-Zilber Theorem 1.1. Tensor products of chain complexes. Let C and D be chain complexes. We define

Math 210C. A non-closed commutator subgroup

Math 210C. A non-closed commutator subgroup 1. Introduction In Exercise 3(i) of HW7 we saw that every element of SU(2) is a commutator (i.e., has the form xyx 1 y 1 for x, y SU(2)), so the same holds for

2.2. Show that U 0 is a vector space. For each α 0 in F, show by example that U α does not satisfy closure.

Hints for Exercises 1.3. This diagram says that f α = β g. I will prove f injective g injective. You should show g injective f injective. Assume f is injective. Now suppose g(x) = g(y) for some x, y A.

Lecture 7: Semidefinite programming

CS 766/QIC 820 Theory of Quantum Information (Fall 2011) Lecture 7: Semidefinite programming This lecture is on semidefinite programming, which is a powerful technique from both an analytic and computational

4.2. ORTHOGONALITY 161

4.2. ORTHOGONALITY 161 Definition 4.2.9 An affine space (E, E ) is a Euclidean affine space iff its underlying vector space E is a Euclidean vector space. Given any two points a, b E, we define the distance

MAT 445/ INTRODUCTION TO REPRESENTATION THEORY

MAT 445/1196 - INTRODUCTION TO REPRESENTATION THEORY CHAPTER 1 Representation Theory of Groups - Algebraic Foundations 1.1 Basic definitions, Schur s Lemma 1.2 Tensor products 1.3 Unitary representations

Math Linear Algebra

Math 220 - Linear Algebra (Summer 208) Solutions to Homework #7 Exercise 6..20 (a) TRUE. u v v u = 0 is equivalent to u v = v u. The latter identity is true due to the commutative property of the inner

Spectral Theorem for Self-adjoint Linear Operators

Notes for the undergraduate lecture by David Adams. (These are the notes I would write if I was teaching a course on this topic. I have included more material than I will cover in the 45 minute lecture;

LECTURE 16: TENSOR PRODUCTS

LECTURE 16: TENSOR PRODUCTS We address an aesthetic concern raised by bilinear forms: the source of a bilinear function is not a vector space. When doing linear algebra, we want to be able to bring all

The Spinor Representation

The Spinor Representation Math G4344, Spring 2012 As we have seen, the groups Spin(n) have a representation on R n given by identifying v R n as an element of the Clifford algebra C(n) and having g Spin(n)

Lecture 4.1: Homomorphisms and isomorphisms

Lecture 4.: Homomorphisms and isomorphisms Matthew Macauley Department of Mathematical Sciences Clemson University http://www.math.clemson.edu/~macaule/ Math 4, Modern Algebra M. Macauley (Clemson) Lecture

WOMP 2001: LINEAR ALGEBRA. 1. Vector spaces

WOMP 2001: LINEAR ALGEBRA DAN GROSSMAN Reference Roman, S Advanced Linear Algebra, GTM #135 (Not very good) Let k be a field, eg, R, Q, C, F q, K(t), 1 Vector spaces Definition A vector space over k is

Math 249B. Nilpotence of connected solvable groups

Math 249B. Nilpotence of connected solvable groups 1. Motivation and examples In abstract group theory, the descending central series {C i (G)} of a group G is defined recursively by C 0 (G) = G and C

ISOMORPHISMS KEITH CONRAD 1. Introduction Groups that are not literally the same may be structurally the same. An example of this idea from high school math is the relation between multiplication and addition

Vector Spaces and Linear Transformations

Vector Spaces and Linear Transformations Wei Shi, Jinan University 2017.11.1 1 / 18 Definition (Field) A field F = {F, +, } is an algebraic structure formed by a set F, and closed under binary operations

LINEAR ALGEBRA REVIEW

LINEAR ALGEBRA REVIEW JC Stuff you should know for the exam. 1. Basics on vector spaces (1) F n is the set of all n-tuples (a 1,... a n ) with a i F. It forms a VS with the operations of + and scalar multiplication

The following definition is fundamental.

1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

Real representations

Real representations 1 Definition of a real representation Definition 1.1. Let V R be a finite dimensional real vector space. A real representation of a group G is a homomorphism ρ VR : G Aut V R, where

ALGEBRA 8: Linear algebra: characteristic polynomial

ALGEBRA 8: Linear algebra: characteristic polynomial Characteristic polynomial Definition 8.1. Consider a linear operator A End V over a vector space V. Consider a vector v V such that A(v) = λv. This

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

Tune-Up Lecture Notes Linear Algebra I

Tune-Up Lecture Notes Linear Algebra I One usually first encounters a vector depicted as a directed line segment in Euclidean space, or what amounts to the same thing, as an ordered n-tuple of numbers

OHSx XM511 Linear Algebra: Solutions to Online True/False Exercises

This document gives the solutions to all of the online exercises for OHSx XM511. The section ( ) numbers refer to the textbook. TYPE I are True/False. Answers are in square brackets [. Lecture 02 ( 1.1)

Math 210B. The bar resolution

Math 210B. The bar resolution 1. Motivation Let G be a group. In class we saw that the functorial identification of M G with Hom Z[G] (Z, M) for G-modules M (where Z is viewed as a G-module with trivial

Supplementary Notes on Linear Algebra

Supplementary Notes on Linear Algebra Mariusz Wodzicki May 3, 2015 1 Vector spaces 1.1 Coordinatization of a vector space 1.1.1 Given a basis B = {b 1,..., b n } in a vector space V, any vector v V can

. = V c = V [x]v (5.1) c 1. c k

Chapter 5 Linear Algebra It can be argued that all of linear algebra can be understood using the four fundamental subspaces associated with a matrix Because they form the foundation on which we later work,