EULER FACTORIZATIONS IN CLASSICAL GROUPS. A.M. DuPre. Rutgers University. May 14, 1992


Abstract. Several different factorizations of elements of the rotation, unitary, indefinite orthogonal, symplectic and unitary symplectic groups into elements in these groups which act only on two coordinates, leaving the others fixed, are constructively derived, as well as a similar factorization of the matrices which conjugate one of these elements into a maximal torus. These various factorizations are then used to derive a rational parametrization of the respective groups.

Introduction

Suppose $G$ is a matrix with elements in the real or complex field. The set of nonsingular $n \times n$ matrices $G$ with elements in $\mathbb{R}$ for which $G^tG = I$ forms a group called the $n$-dimensional real orthogonal group and is denoted by $O(n)$. It follows immediately from the definition that $\det G = \pm 1$. The subset of matrices in $O(n)$ with $\det = 1$ forms a normal subgroup of index two called the $n$-dimensional rotation or special orthogonal group and is denoted by $SO(n)$. The set of matrices $U$ having complex entries and satisfying $U^*U = I$ is a group called the $n$-dimensional unitary group and denoted by $U(n)$. Clearly $|\det U| = 1$. The subgroup of matrices $U$ with $\det U = 1$ is called the $n$-dimensional unitary unimodular or special unitary group and is denoted by $SU(n)$. The set of unitary matrices satisfying $S^tJS = J$, where
$$J = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix},$$
is called the unitary symplectic group and denoted by $USp(n)$. Also the generalized Lorentz groups, also called the orthogonal groups corresponding to indefinite quadratic forms, are studied. The purpose of this paper is to give a reasonably clear introduction to the process of factorizing the members of the various groups of matrices mentioned above into products of planar ones of the same sort. An error in [Mu] is corrected, the same result being obtained in another, quite similar, way.
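The defining identities above ($G^tG = I$ with $\det G = \pm 1$, and $U^*U = I$ with $|\det U| = 1$) are easy to confirm numerically. A minimal sketch, assuming NumPy; the QR-based constructions below are merely a convenient way to manufacture sample elements of $O(n)$ and $U(n)$ and are not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_orthogonal(n):
    # QR of a random real matrix gives an orthogonal Q; rescaling columns by
    # the signs of R's diagonal keeps it orthogonal (illustrative helper name)
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))

def random_unitary(n):
    # same idea over the complex field: column phases are unimodular
    m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    q, r = np.linalg.qr(m)
    d = np.diag(r)
    return q * (d / np.abs(d))

G = random_orthogonal(4)
U = random_unitary(4)

assert np.allclose(G.T @ G, np.eye(4))         # G^t G = I
assert np.isclose(abs(np.linalg.det(G)), 1.0)  # det G = +/- 1
assert np.allclose(U.conj().T @ U, np.eye(4))  # U* U = I
assert np.isclose(abs(np.linalg.det(U)), 1.0)  # |det U| = 1
```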
Also the Lorentz groups and their generalizations, the orthogonal groups belonging to indefinite quadratic forms, are introduced, and a general factorization theorem is obtained. I wish to thank Professor F.E. Johnston for his unusually astute criticism while I was doing the research for this paper.

1991 Mathematics Subject Classification. Primary 5A, 9-4. Key words and phrases: Euler factorization, planar matrices, unirational variety. Typeset by AMS-TeX.

1. Rotation Matrices

Lemma 1.1. Let $R \in SO(2)$. Then $R$ can be written in the form
$$R = \begin{pmatrix} \cos\varphi & \sin\varphi \\ -\sin\varphi & \cos\varphi \end{pmatrix}, \qquad -\pi \le \varphi < \pi.$$

Proof. Let $R = \begin{pmatrix} r_{11} & r_{12} \\ r_{21} & r_{22} \end{pmatrix}$. Now $r_{11}^2 + r_{12}^2 = 1$ follows from the orthogonality of $R$. Since a rotation matrix may be constructed having an arbitrary unit vector in its first row, the point $(r_{11}, r_{12})$ may lie anywhere on the unit circle in the $12$-plane. But this point may be parametrically represented in the form $(\cos\varphi, \sin\varphi)$, where $-\pi \le \varphi < \pi$. Since $r_{21}^2 + r_{22}^2 = 1$, we see that $r_{21}$ is $\pm\sin\varphi$. If it were $+\sin\varphi$, then using the orthogonality of columns one and two, we would obtain $\cos\varphi\sin\varphi + \sin\varphi\,r_{22} = 0$, which forces $r_{22} = -\cos\varphi$, contradicting the fact that $\det R = 1$. So $r_{21} = -\sin\varphi$ and $r_{22} = \cos\varphi$, proving the lemma.

We may apply the method in the above proof to show that if $R \in O(2)$ and $\det R = -1$, then $R$ can be written as
$$R = \begin{pmatrix} \cos\varphi & \sin\varphi \\ \sin\varphi & -\cos\varphi \end{pmatrix}, \qquad -\pi \le \varphi < \pi.$$

Lemma 1.2. Let $R \in SO(3)$ with $r_{31} = r_{32} = 0$ and $r_{33} > 0$. Then $r_{13} = r_{23} = 0$ and $r_{33} = 1$, $R$ assuming the form shown below:
$$R = \begin{pmatrix} \cos\varphi & \sin\varphi & 0 \\ -\sin\varphi & \cos\varphi & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad -\pi \le \varphi < \pi.$$

Proof. Since rows one and three and rows two and three are orthogonal, we obtain $r_{13}r_{33} = r_{23}r_{33} = 0$, which forces $r_{13} = r_{23} = 0$, because $r_{33} \ne 0$. Also we have $r_{33}^2 = 1$, so that $r_{33} = \pm 1$; but $r_{33} > 0$, and hence $r_{33} = 1$. Now $\det R = \det A \cdot r_{33} = \det A$, where $A$ is the matrix in the first two rows and columns. Notice that $A \in SO(2)$, since $A^tA = I$ and $\det A = \det R = 1$. Applying Lemma 1.1 to $A$, we conclude the proof.

Theorem 1.1. Every $R \in SO(3)$ can be written in the form
$$R = \begin{pmatrix} \cos\varphi_2 & \sin\varphi_2 & 0 \\ -\sin\varphi_2 & \cos\varphi_2 & 0 \\ 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix} \begin{pmatrix} \cos\varphi_1 & \sin\varphi_1 & 0 \\ -\sin\varphi_1 & \cos\varphi_1 & 0 \\ 0 & 0 & 1 \end{pmatrix},$$

these three matrices being written $R_{12}(\varphi_2)$, $R_{13}(\theta)$, $R_{12}(\varphi_1)$, with $-\pi \le \varphi_1, \varphi_2 < \pi$ and $-\pi/2 \le \theta \le \pi/2$.

Proof. We may determine a number $\varphi_1$ such that $r'_{32} = 0$, where $R' = R\,R^t_{12}(\varphi_1)$. To see that this is possible, write
$$r'_{31} = r_{31}\cos\varphi_1 + r_{32}\sin\varphi_1, \qquad r'_{32} = -r_{31}\sin\varphi_1 + r_{32}\cos\varphi_1.$$
This may be interpreted geometrically as a rotation in the $12$-plane, taking the point $(r_{31}, r_{32})$ into the point $(r'_{31}, r'_{32})$. Now unless both $r_{31}$ and $r_{32}$ are zero, we may take $\varphi_1$ to be the angle in radians measured from the vector $(r_{31}, r_{32})$ to the positive $1$-axis. That is, we rotate the vector $(r_{31}, r_{32})$ through an angle $\varphi_1$, thus making $r'_{32} = 0$ and $r'_{31} \ge 0$. If both $r_{31}$ and $r_{32}$ are zero, just choose $\varphi_1 = 0$. In order to obtain the possible range on $\varphi_1$, we observe that the only restriction on $(r_{31}, r_{32})$ is that $r_{31}^2 + r_{32}^2 \le 1$, which follows from the fact that the last row of $R$ is a unit vector. Consequently, it may lie anywhere on the unit disc, which shows that any angle $-\pi \le \varphi_1 < \pi$ may be taken on by $\varphi_1$.

We next determine a number $\theta$ so that $r''_{31} = 0$, where now $R'' = R'\,R^t_{13}(\theta)$. As before we write
$$r''_{31} = r'_{31}\cos\theta + r'_{33}\sin\theta, \qquad r''_{33} = -r'_{31}\sin\theta + r'_{33}\cos\theta.$$
First notice that $r''_{32} = r'_{32} = 0$. Since we made $r'_{31} \ge 0$, the point $(r'_{31}, r'_{33})$ lies in the upper half of the $13$-plane. Now $r'_{31}$ and $r'_{33}$ cannot both be zero, since then $R''$ would have a last row of zeros, which is impossible, because $R'' = R\,R^t_{12}(\varphi_1)\,R^t_{13}(\theta)$, and $R$, $R^t_{12}$, $R^t_{13}$ are each nonsingular. Thus the range of $\theta$ is $-\pi/2 \le \theta \le \pi/2$. Having determined $\theta$ thus, $R''$ assumes the form in the hypothesis of Lemma 1.2, an application of which proves the theorem.

The above factorization is called the Brauer factorization. The fact that $R$ so factorizes may be given a geometrical interpretation by choosing an orthonormal basis in $\mathbb{R}^3$ and associating each rotation matrix to a linear transformation in the usual manner. Under this interpretation, the linear transformations which are thus represented are rotations about an axis through the origin. A $90^\circ$ rotation about the $x$-axis which takes the positive $z$-axis into the positive $y$-axis is termed a clockwise rotation about the $x$-axis through an

angle $\pi/2$, similar definitions being understood for rotations about the $y$- and $z$-axes respectively. Then the above factorization says that every rotation in three dimensions about an axis through the origin can be accomplished by performing three consecutive rotations: first a counterclockwise rotation about the $x$-axis through an angle $\varphi_1$, a second clockwise rotation about the $y$-axis through an angle $\theta$, and third, a rotation through an angle $\varphi_2$ counterclockwise about the $z$-axis. There is, besides the Brauer, another useful factorization for three-dimensional rotation matrices, called the Euler factorization, which we now proceed to develop.

Theorem 1.2. If $R \in SO(3)$ then it may be factored in the form
$$R = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_2 & \sin\varphi_2 \\ 0 & -\sin\varphi_2 & \cos\varphi_2 \end{pmatrix} \begin{pmatrix} \cos\theta & 0 & \sin\theta \\ 0 & 1 & 0 \\ -\sin\theta & 0 & \cos\theta \end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_1 & \sin\varphi_1 \\ 0 & -\sin\varphi_1 & \cos\varphi_1 \end{pmatrix},$$
as before calling the matrices $R_{23}(\varphi_2)$, $R_{13}(\theta)$, $R_{23}(\varphi_1)$; the range of the parameters is $-\pi \le \varphi_1, \varphi_2 < \pi$ and $0 \le \theta \le \pi$.

Proof. Determine $\varphi_1$ so that $r'_{12} = 0$ and $r'_{13} \ge 0$, where $R' = R\,R^t_{23}(\varphi_1)$. The range of $\varphi_1$ is seen to be $-\pi \le \varphi_1 < \pi$. Next, determine $\theta$ so that $r''_{13} = 0$, where now $R'' = R'\,R^t_{13}(\theta)$. Since the point $(r'_{11}, r'_{13})$ lies in the upper half of the $13$-plane, the range of $\theta$ is $0 \le \theta \le \pi$; notice that $\theta$ is now measured from the vector $(r'_{11}, r'_{13})$ to the positive $1$-axis. Reasoning as before in Lemma 1.2, applied now to the first row and column, we conclude that $r''_{11} = 1$, and $R''$ assumes the form below, which proves the theorem:
$$R'' = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\varphi_2 & \sin\varphi_2 \\ 0 & -\sin\varphi_2 & \cos\varphi_2 \end{pmatrix}.$$

Geometrically, this says that any three-dimensional rotation may be accomplished by rotating counterclockwise through an angle $\varphi_1$ about the $x$-axis, clockwise through an angle $\theta$ about the $y$-axis, and finally rotating again about the $x$-axis, this time counterclockwise through an angle $\varphi_2$. The two angles $\varphi_1, \varphi_2$ are called longitude angles and $\theta$ the latitude angle of the rotation. This economy of axes is obviously mechanically desirable. We shall next concern ourselves with four-dimensional rotations and their Brauer and Euler factorizations.
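The Brauer-style reduction just described is easy to carry out numerically. The sketch below assumes NumPy; `planar` and `zeroing_rotation` are illustrative helper names, and the sign conventions are one concrete choice rather than necessarily the paper's. Two planar rotations clear the last row, and the residual factor is itself planar:

```python
import numpy as np

def planar(n, i, j, c, s):
    # n x n identity with the 2x2 rotation [[c, s], [-s, c]] in rows/cols i, j
    m = np.eye(n)
    m[i, i], m[i, j], m[j, i], m[j, j] = c, s, -s, c
    return m

def zeroing_rotation(R, row, i, j):
    # planar rotation G in coordinates (i, j) such that (R @ G.T)[row, i] = 0
    # and (R @ G.T)[row, j] >= 0
    x, y = R[row, j], R[row, i]
    h = np.hypot(x, y)
    c, s = ((x / h, -y / h) if h > 0 else (1.0, 0.0))
    return planar(R.shape[0], i, j, c, s)

def brauer_so3(R):
    G1 = zeroing_rotation(R, 2, 1, 0)   # clear r_32, rotating in the 12-plane
    R1 = R @ G1.T
    G2 = zeroing_rotation(R1, 2, 0, 2)  # clear r_31, rotating in the 13-plane
    R2 = R1 @ G2.T                      # last row is now (0, 0, 1): R2 is planar
    return R2, G2, G1                   # R == R2 @ G2 @ G1

# sample rotation: QR of a random matrix, sign-fixed into SO(3)
rng = np.random.default_rng(1)
q, r = np.linalg.qr(rng.standard_normal((3, 3)))
R = q * np.sign(np.diag(r))
if np.linalg.det(R) < 0:
    R[:, 0] = -R[:, 0]

F, G2, G1 = brauer_so3(R)
```

The final assertion of Lemma 1.2 shows up as the last row and column of `F` being $(0, 0, 1)$.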
As a simple extension of Lemma 1.2 to matrices in $SO(4)$, we remark without proof that if $R \in SO(4)$ is such that $r_{41} = r_{42} = r_{43} = 0$ and $r_{44} > 0$, then $r_{44} = 1$ and $R$ is actually of the form $\begin{pmatrix} R' & 0 \\ 0 & 1 \end{pmatrix}$, where $R'$ is a member of $SO(3)$.

Theorem 1.3. Let $R \in SO(4)$. Then
$$R = R_{12}(\varphi_1)\,R_{13}(\theta_1)\,R_{12}(\varphi_2)\,R_{14}(\theta_2)\,R_{24}(\theta_3)\,R_{34}(\varphi_3).$$

The product of these last three matrices is shown below:
$$R_{14}(\theta_2)\,R_{24}(\theta_3)\,R_{34}(\varphi_3) = \begin{pmatrix} \cos\theta_2 & -\sin\theta_2\sin\theta_3 & -\sin\theta_2\cos\theta_3\sin\varphi_3 & \sin\theta_2\cos\theta_3\cos\varphi_3 \\ 0 & \cos\theta_3 & -\sin\theta_3\sin\varphi_3 & \sin\theta_3\cos\varphi_3 \\ 0 & 0 & \cos\varphi_3 & \sin\varphi_3 \\ -\sin\theta_2 & -\cos\theta_2\sin\theta_3 & -\cos\theta_2\cos\theta_3\sin\varphi_3 & \cos\theta_2\cos\theta_3\cos\varphi_3 \end{pmatrix},$$
where $-\pi \le \varphi_i < \pi$ and $-\pi/2 \le \theta_i \le \pi/2$.

Proof. First determine $\varphi_3$ so that $r'_{43} = 0$ and $r'_{44} \ge 0$, where $R' = R\,R^t_{34}(\varphi_3)$ and $-\pi \le \varphi_3 < \pi$. If both $r_{43}$ and $r_{44}$ are zero, then take $\varphi_3 = 0$; we shall not repeat this last step in what follows, and it should be understood as always taking place. Next determine $\theta_3$ so that $r''_{42} = 0$ and $r''_{44} \ge 0$, where $R'' = R'\,R^t_{24}(\theta_3)$ and $-\pi/2 \le \theta_3 \le \pi/2$. Notice that $r''_{43} = r'_{43} = 0$, which fact shall henceforth not be mentioned when it is obvious. Lastly, choose the number $\theta_2$ which makes $r'''_{41} = 0$ and $r'''_{44} \ge 0$, where $R''' = R''\,R^t_{14}(\theta_2)$ and $-\pi/2 \le \theta_2 \le \pi/2$. By the remark preceding the theorem, $r'''_{41} = r'''_{42} = r'''_{43} = 0$ and $r'''_{44} = 1$, and by Theorem 1.1 it readily follows that $R''' = R_{12}(\varphi_1)\,R_{13}(\theta_1)\,R_{12}(\varphi_2)$; thus the complete Brauer factorization of $R$ is as stated in the theorem.

As before, the $\varphi_i$ are called the longitude and the $\theta_i$ the latitude angles. In order to obtain a geometrical interpretation for this factorization, we select an orthonormal basis $\{e_1, e_2, e_3, e_4\}$ in $\mathbb{R}^4$ and then observe that the matrix $R_{23}(\theta)$ fixes the two-dimensional space spanned by the vectors $e_1, e_4$. We say then that $R_{23}(\theta)$ is a rotation about the $e_1e_4$-plane through an angle $\theta$. Notice that although a general rotation in three dimensions must fix a one-dimensional subspace, a general four-dimensional rotation need not fix any subspace except the zero-dimensional subspace, the origin, which is always fixed by any linear transformation. For example, $R_{12}(\theta)\,R_{34}(\theta')$ represents such a rotation. The above theorem tells us that any four-dimensional rotation may be accomplished by rotating about each of the six planes determined by the six possible pairs of basis vectors. We need not use all six planes, as is shown by the next theorem.

Theorem 1.4. Any $R \in SO(4)$ may be factored as
$$R = R_{12}(\varphi_1)\,R_{13}(\theta_1)\,R_{12}(\varphi_2)\,R_{14}(\theta_2)\,R_{13}(\theta_3)\,R_{12}(\varphi_3),$$
with $-\pi \le \varphi_i < \pi$ and $0 \le \theta_i \le \pi$.

Proof. Determine $\varphi_3$ making $r'_{42} = 0$, where $R' = R\,R^t_{12}(\varphi_3)$ and $-\pi \le \varphi_3 < \pi$. Next determine $\theta_3$ so that $r''_{43} = 0$, where $R'' = R'\,R^t_{13}(\theta_3)$.
Since the point $(r'_{41}, r'_{43})$ lies in the upper half of the $13$-plane, and because $\theta_3$ is the angle between the vector $(r'_{41}, r'_{43})$ and the positive $1$-axis, the range of $\theta_3$ is $0 \le \theta_3 \le \pi$. Now choose $\theta_2$ so that $r'''_{41} = 0$ and $r'''_{44} \ge 0$, where $R''' = R''\,R^t_{14}(\theta_2)$, and again $0 \le \theta_2 \le \pi$. $R'''$ has a one in the $44$-place and zeros elsewhere in the last row and last column. The matrix formed from the first three rows and columns is now a member of $SO(3)$. Using a variant of the Euler factorization, we have $R''' = R_{12}(\varphi_1)\,R_{13}(\theta_1)\,R_{12}(\varphi_2)$, which concludes the proof.

In his book [Mu], Murnaghan has a mistake on pp. 8–9. He proposes to reduce a four-dimensional rotation matrix $R$ to the form
$$R = D(\theta)\,R^t_{12}(\varphi_4)\,R^t_{13}(\varphi_3)\,R^t_{14}(\varphi_2)\,R^t_{24}(\varphi_1),$$

where $D(\theta)$ is a matrix of the form $R_{23}(\theta_1)\,R_{14}(\theta_2)$, or of the form $I_a\,R^t_{23}(\theta_1)\,R^t_{14}(\theta_2)$, where $I_a$ is the matrix $((-1)^{i+j}\delta_{ij})$, $\delta_{ij}$ being the Kronecker delta. If one attempts to apply the procedure described in his book, one soon finds out that something is out of sequence. In other words, as straightforward as the previous algorithms appear to be, it is possible to work oneself into a corner from which there is no escape. For example, the procedure in the book fails for a certain rotation matrix all of whose entries are $\pm\frac{1}{2}$. It is possible to show that the factorization attempted in [Mu] can actually be achieved.

Theorem 1.5. If $R \in SO(4)$, then $R$ can be factored in the following form:
$$R = R_{23}(\theta_1)\,R_{14}(\theta_2)\,R_{12}(\varphi_1)\,R_{13}(\varphi_2)\,R_{14}(\varphi_3)\,R_{24}(\varphi_4) = D(\theta)\,R_{12}(\varphi_1)\cdots R_{24}(\varphi_4).$$

Proof. We shall factor $R^t$ as $R^t = R^t_{24}(\varphi_4)\,R^t_{14}(\varphi_3)\,R^t_{13}(\varphi_2)\,R^t_{12}(\varphi_1)\,D^t(\theta)$, which proves the theorem. First determine $\theta_1$, with $-\pi \le \theta_1 < \pi$, and then $\theta_2$ and the $\varphi_i$ in turn, so that the successive products annihilate the appropriate entries of the last column; we may then apply a Brauer factorization to the matrix in the last three rows and columns of the result, which yields the displayed factorization of $R^t$ and proves the theorem.

We now pass to the general factorization theorem for $n$-dimensional rotation matrices.

Theorem 1.6. If $R \in SO(n)$, then there is a Brauer factorization
$$R = R_{12}(\varphi_1)\,R_{13}(\theta_1)\,R_{12}(\varphi_2)\cdots R_{(n-1)n}(\varphi_m),$$
where the $\varphi_i$ are longitude angles, $-\pi \le \varphi_i < \pi$, and the $\theta_i$ are latitude angles, $-\pi/2 \le \theta_i \le \pi/2$.

Proof. We proceed by induction on the order of $R$. Our theorem is clearly true for $R \in SO(2)$. Let it be true for all members of $SO(n-1)$ and suppose that $R \in SO(n)$. Determine angles so that, in turn, the entries $r_{n,n-1}, r_{n,n-2}, \dots, r_{n,1}$ of the successively transformed matrices are annihilated:
$$R^{(1)} = R\,R^t_{(n-1)n}(\varphi_m), \quad R^{(2)} = R^{(1)}\,R^t_{(n-2)n}(\theta), \quad \dots, \quad R^{(n-1)} = R^{(n-2)}\,R^t_{1n}(\theta'),$$
each step also keeping the entry $r_{nn} \ge 0$.

As we successively determine these angles, we see that the first angle is unrestricted, and in each following case the point we must rotate always lies in the upper half plane, which restricts the angles $\theta_i$ to the range $-\pi/2 \le \theta_i \le \pi/2$. Notice too that once an entry is made zero, it is left zero by the planar matrices following which operate upon it. The resulting matrix $R^{(n-1)}$ has zeros in its last row except for $r^{(n-1)}_{nn} = 1$. Applying an obvious generalization of Lemma 1.2 gives us that
$$R^{(n-1)} = \begin{pmatrix} R_1 & 0 \\ 0 & 1 \end{pmatrix},$$
where $R_1 \in SO(n-1)$, and an application of the induction hypothesis proves the theorem.

Theorem 1.7. If $R \in SO(n)$ then there is a generalized Euler factorization of $R$ in the form
$$R = R_{12}(\varphi_1)\,R_{13}(\theta_1)\,R_{12}(\varphi_2)\cdots R_{12}(\varphi_m),$$
where the ranges on the longitude angles $\varphi_i$ are $-\pi \le \varphi_i < \pi$, and the ranges for the latitude angles $\theta_i$ and $\theta_j$ — for $i = 1, \dots, (n-1)(n-2)/2$ with $i$ not of the form $(k-1)(k-2)/2$ for $k = 4, \dots, n$, and for $j$ of the form $(k-1)(k-2)/2$ for some $k = 4, \dots, n$ — are $-\pi/2 \le \theta_i \le \pi/2$ and $0 \le \theta_j \le \pi$.

Proof. The proof will again be by induction on the order of $R$. There is only one possible factorization in case $R \in SO(2)$, and this clearly satisfies the conditions of the theorem, the only angle appearing being a longitude angle. Assume that the theorem is true for members of $SO(n-1)$ and let $R \in SO(n)$. Choose angles so that, in turn, the entries of the last row of the successively transformed matrices are annihilated. The first angle is unrestricted, and in each successive determination except the last, the point to be rotated lies in the right half of the plane, so the range of the first angle is $-\pi \le \varphi < \pi$, and the ranges of the other angles except the last are $-\pi/2 \le \theta_i \le \pi/2$. In determining the range of the last angle, we see that the point still lies in the right half plane, but it must now be rotated to the vertical axis, and so the angle of the last determination is bounded by $0 \le \theta_j \le \pi$. $R^{(n-1)}$ then assumes the form of an element of $SO(n-1$
), which we factor according to the induction hypothesis. This completes the proof of the theorem and shows that a general $n$-dimensional rotation may be performed by rotating about only $n-1$ different $(n-2)$-dimensional hyperspaces.
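The induction in the last two theorems translates directly into an algorithm: repeatedly clear the last row with planar rotations and recurse on the leading block. A sketch under the same conventions as before (NumPy assumed; helper names are illustrative, and the factor ordering is one concrete choice):

```python
import numpy as np

def planar(n, i, j, c, s):
    # identity with the 2x2 rotation [[c, s], [-s, c]] in rows/cols i, j
    m = np.eye(n)
    m[i, i], m[i, j], m[j, i], m[j, j] = c, s, -s, c
    return m

def planar_factor(R):
    # factor R in SO(n) into planar rotations, mirroring the induction:
    # for k = n-1, ..., 1 clear row k into its diagonal entry
    n = R.shape[0]
    A = R.copy()
    gs = []
    for k in range(n - 1, 0, -1):
        for i in range(k):              # zero A[k, i] against the pivot A[k, k]
            x, y = A[k, k], A[k, i]
            h = np.hypot(x, y)
            if h == 0:
                continue                # degenerate case: identity factor, skipped
            G = planar(n, i, k, x / h, -y / h)
            A = A @ G.T
            gs.append(G)
    return gs[::-1]                     # R equals the product of these, in order

rng = np.random.default_rng(2)
n = 5
q, r = np.linalg.qr(rng.standard_normal((n, n)))
R = q * np.sign(np.diag(r))
if np.linalg.det(R) < 0:
    R[:, 0] = -R[:, 0]

factors = planar_factor(R)
P = np.eye(n)
for G in factors:
    P = P @ G
```

For a generic element no step degenerates, so the factor count is exactly the $n(n-1)/2$ of Theorem 1.9 below.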

In these theorems it is clear that the order in which we have chosen the planar matrices of the factorization may be altered to obtain a class of factorizations. The reason for choosing these particular factorizations is that they are amenable to an induction proof and are in a certain way natural generalizations of the three-dimensional case. We may actually give the elements of the planar rotation matrices as algebraic functions of the elements of the original matrix; in fact, the algebraic functions involve radicals of degree at most two. We show how this is done in the three-dimensional case and omit the obvious generalization to the $n$-dimensional case.

Theorem 1.8. Let $R \in SO(3)$. Then $R$ may be factored as in Theorem 1.1, with the first factor determined being
$$R_{12}(\varphi_1) = \begin{pmatrix} r_{31}/\rho & -r_{32}/\rho & 0 \\ r_{32}/\rho & r_{31}/\rho & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \rho = \sqrt{(r_{31})^2 + (r_{32})^2},$$
and with similar expressions for the remaining factors — unless, of course, $R$ has a pair of zeros in the last row, in which case we must replace an appropriate one of the matrices by the identity matrix.

Proof. If we wish to rotate the point $(x, y)$ onto an axis, then the angle required is simply $\tan^{-1}(y/x)$ in case the image is to lie on the positive $x$-axis, and is $\cot^{-1}(y/x)$ in case it is to lie on the positive $y$-axis. But
$$\sin(\tan^{-1}(y/x)) = \frac{y}{\sqrt{x^2 + y^2}}, \qquad \cos(\tan^{-1}(y/x)) = \frac{x}{\sqrt{x^2 + y^2}},$$
and for the $\cot^{-1}$ we have similar formulas. An application of these formulas to the determination of the angles in Theorem 1.1 proves Theorem 1.8.

These factorizations are useful in determining the volume element of the rotation group, its homology ring and its homotopy groups. One immediate application is the calculation of the dimension of $SO(n)$ as a Lie group. In fact, we have the

Theorem 1.9. The total number of planar matrices in an Euler or a Brauer factorization of $R \in SO(n)$ is $n(n-1)/2$.

Proof. If we are factoring such an $R$, we shell off $n-1$ planar matrices in order to reduce it to a form to which we can apply the induction hypothesis. Adding up all of these gives us the number in the statement of the theorem.
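Theorem 1.8's point — that the entries of the planar factors are radicals of degree at most two in the entries of $R$ — shows up concretely in code: the reductions never need an inverse trigonometric function. A tiny sketch (illustrative helper name):

```python
import numpy as np

def cos_sin(x, y):
    # cosine and sine of the zeroing angle for the point (x, y), written
    # directly as degree-two radicals in the inputs: cos = x/rho, sin = y/rho
    rho = np.sqrt(x * x + y * y)
    if rho == 0:
        return 1.0, 0.0   # a pair of zeros: use the identity factor instead
    return x / rho, y / rho

c, s = cos_sin(3.0, 4.0)
```

For $(3, 4)$ this gives $(0.6, 0.8)$, agreeing with $\cos(\tan^{-1}(4/3))$ and $\sin(\tan^{-1}(4/3))$.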
The angles in the various planar matrices into which we were factoring are often called parameters, and the Euler and Brauer factorizations are referred to as parametrizations. If the range of the parameter is unrestricted, the planar matrices represent what are called one-parameter subgroups, which are then isomorphic to the circle group, denoted by $T$, and also called the $1$-torus. If the range of the parameter is less than full, we are really dealing with a quotient of the circle group by a two-element subgroup, which gives an intuitive reason for expecting that this two-element group will show up in homology and homotopy. It is an important fact in algebraic geometry that the varieties of linear algebraic groups are rational, i.e., it is possible to parametrize them with parameters so that the functions

appearing in this parametrization are rational functions of the parameters. Now the circle group can be parametrized rationally, as follows:
$$t \longmapsto \left(\frac{1 - t^2}{1 + t^2},\ \frac{2t}{1 + t^2}\right).$$
We can use this to get an explicit rational parametrization of $SO(n)$, using the results of this chapter. Because of the limited range of variability of some of the angles, some of the parameters in which it is rational will be correspondingly restricted. Since there is also a rational parametrization of $SO(n)$ called the Cayley parametrization, it would be interesting to examine the connection between the two. We will not do this here, but leave it for another time.

2. Unitary Matrices

Lemma 2.1. Let $U \in U(2)$. Then $U$ can be written
$$U = e^{i\varphi/2}\begin{pmatrix} \cos\theta\,e^{i\alpha} & \sin\theta\,e^{i\beta} \\ -\sin\theta\,e^{-i\beta} & \cos\theta\,e^{-i\alpha} \end{pmatrix},$$
where $0 \le \theta \le \pi/2$ and $-\pi \le \varphi, \alpha, \beta < \pi$.

Proof. Let $\det U = e^{i\varphi}$. Then $U = e^{i\varphi/2}\,U_1$, and $\det U_1 = 1$. If $U_1 = \begin{pmatrix} a & b \\ c & d \end{pmatrix}$, then $U_1^{-1} = \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}$, and also $U_1^{-1} = U_1^*$. Thus, if we equate entries, we obtain $d = \bar a$, $c = -\bar b$, and $a\bar a + b\bar b = 1$, so we may let $|a| = \cos\theta$, $|b| = \sin\theta$; since both $\sin\theta$ and $\cos\theta$ are nonnegative, we get $0 \le \theta \le \pi/2$. Let $\alpha, \beta$ be the arguments of $a, b$ respectively. Finally, we have shown that $a = \cos\theta\,e^{i\alpha}$ and $b = \sin\theta\,e^{i\beta}$, which proves the lemma.

Lemma 2.2. Let $U \in SU(2)$. Then $U$ can be written in the form
$$U = \begin{pmatrix} e^{i\alpha} & 0 \\ 0 & e^{-i\alpha} \end{pmatrix}\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}\begin{pmatrix} e^{i\beta} & 0 \\ 0 & e^{-i\beta} \end{pmatrix},$$
where $0 \le \theta \le \pi/2$ and $-\pi \le \alpha, \beta < \pi$.

Proof. Multiply to obtain
$$U = \begin{pmatrix} \cos\theta\,e^{i(\alpha+\beta)} & -\sin\theta\,e^{i(\alpha-\beta)} \\ \sin\theta\,e^{-i(\alpha-\beta)} & \cos\theta\,e^{-i(\alpha+\beta)} \end{pmatrix},$$
which is the most general form for a member of $SU(2)$.

Lemma 2.3. If $U \in SU(2)$, then $U$ can be factored as
$$U = \begin{pmatrix} e^{i\gamma} & 0 \\ 0 & e^{-i\gamma} \end{pmatrix}\begin{pmatrix} \cos\theta & -\sin\theta\,e^{-i\delta} \\ \sin\theta\,e^{i\delta} & \cos\theta \end{pmatrix},$$
where $0 \le \theta \le \pi/2$ and $-\pi \le \gamma, \delta < \pi$.

Proof. Let $\gamma = \alpha + \beta$ as in Lemma 2.2 and multiply.
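The rational parametrization of the circle group quoted above can be checked in exact arithmetic; each rational $t$ gives a rational point on the circle (with $t = \tan(\theta/2)$, the usual half-angle substitution). A sketch using Python's `fractions`:

```python
from fractions import Fraction

def circle_point(t):
    # x = (1 - t^2)/(1 + t^2), y = 2t/(1 + t^2); exact arithmetic shows
    # x^2 + y^2 = 1 identically in t
    t = Fraction(t)
    d = 1 + t * t
    return (1 - t * t) / d, 2 * t / d

pts = [circle_point(t) for t in (0, 1, Fraction(1, 2), Fraction(-7, 3))]
for x, y in pts:
    assert x * x + y * y == 1
```

For $t = 1/2$ this produces the familiar rational point $(3/5, 4/5)$.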

Lemma 2.4. If $U \in SU(3)$ and $u_{31} = u_{32} = 0$, then $u_{13} = u_{23} = 0$ and $|u_{33}| = 1$.

Proof. Similar to Lemma 1.2.

Notation. $D(\alpha_1, \alpha_2, \alpha_3)$ is
$$\begin{pmatrix} e^{i\alpha_1} & 0 & 0 \\ 0 & e^{i\alpha_2} & 0 \\ 0 & 0 & e^{i\alpha_3} \end{pmatrix}$$
and $U_{23}(\theta, \delta)$ is
$$\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta\,e^{-i\delta} \\ 0 & \sin\theta\,e^{i\delta} & \cos\theta \end{pmatrix}.$$
From the above, the meaning of $U_{12}(\theta, \delta)$ and $U_{13}(\theta, \delta)$ should be clear.

Theorem 2.1. If $U \in SU(3)$, then we may write $U$ as shown below:
$$U = D(\alpha_1, \alpha_2, \alpha_3)\,U_{12}(\theta_1, \delta_1)\,U_{23}(\theta_2, \delta_2)\,U_{12}(\theta_3, \delta_3),$$
where the ranges on the variables are $-\pi \le \delta_i, \alpha_i < \pi$, $0 \le \theta_i \le \pi/2$, and $\alpha_1 + \alpha_2 + \alpha_3 \equiv 0 \pmod{2\pi}$.

Proof. Let $U' = U\,U_{12}(\theta_1, \delta_1)$. Then $u'_{31} = u_{31}\cos\theta_1 - u_{32}\sin\theta_1 e^{i\delta_1}$. Set $u'_{31} = 0$. Then $u_{31}/u_{32} = \tan\theta_1\,e^{i\delta_1}$, so if we let $\theta_1 = \tan^{-1}(|u_{31}|/|u_{32}|)$ and $\delta_1 = \arg u_{31} - \arg u_{32}$, where $0 \le \theta_1 \le \pi/2$, then we have $u'_{31} = 0$. Of course, if $u_{32}$ is zero, then just let $\theta_1 = \pi/2$ and pick for $\delta_1$ any number $-\pi \le \delta_1 < \pi$. We shall assume, in what follows, that this exceptional case has been disposed of. Next we determine, in the same way, $\theta_2$ and $\delta_2$ so that $u''_{32} = 0$, where $U'' = U'\,U_{23}(\theta_2, \delta_2)$. Then clearly $u''_{31} = 0$, and we may apply Lemma 2.4 to obtain
$$U'' = \begin{pmatrix} U_1 & 0 \\ 0 & z \end{pmatrix},$$
where $U_1 \in U(2)$ and $|z| = 1$. We can write $U_1 = e^{i\alpha}U_2$, where $U_2$ is a unimodular unitary matrix. Now factor $U_2$ according to Lemma 2.3, and we obtain the result stated in the theorem. The restriction $\alpha_1 + \alpha_2 + \alpha_3 \equiv 0 \pmod{2\pi}$ comes from the fact that $U$ is supposed to be unimodular. As a result of this theorem, we see that every three-dimensional unitary matrix can be written in the form asserted in Theorem 2.1, with the exception of the removal of the restriction $\alpha_1 + \alpha_2 + \alpha_3 \equiv 0 \pmod{2\pi}$.

We have a factorization corresponding to the Euler, for unitary matrices.

Theorem 2.2. If $U$ is a unitary matrix of dimension three, then it can be factored into the form below:
$$U = D(\alpha_1, \alpha_2, \alpha_3)\,U_{23}(\theta_1, \delta_1)\,U_{12}(\theta_2, \delta_2)\,U_{23}(\theta_3, \delta_3),$$
and the ranges are $0 \le \theta_i \le \pi/2$ and $-\pi \le \delta_i, \alpha_i < \pi$.

Proof. First determine $\theta_1$ and $\delta_1$ so that $u'_{13} = 0$, where $U' = U\,U_{23}(\theta_1, \delta_1)$. The argument from here is similar to that of the previous theorem.

Theorem 2.3. If $U \in U(n)$ then it can be factored as
$$U = D(\alpha_1, \dots, \alpha_n)\,U_{12}(\theta_1, \delta_1)\,U_{13}(\theta_2, \delta_2)\cdots U_{(n-1)n}(\theta_m, \delta_m),$$
where the ranges are $-\pi \le \delta_i, \alpha_i < \pi$ and $0 \le \theta_i \le \pi/2$.

Proof. As earlier, the proof will be by induction on the order of $U$. So let the theorem be true for all members of $U(n-1)$ and let $U \in U(n)$. First choose numbers $\theta_j, \delta_j$ so that the entries $u_{1n}, u_{1,n-1}, \dots, u_{12}$ of the successively transformed matrices vanish, where $U^{(j+1)} = U^{(j)}\,U_{n-j\,n-j+1}(\theta_j, \delta_j)$. As these angles are chosen, it is clear that the ranges are $-\pi \le \delta_k < \pi$ and $0 \le \theta_k \le \pi/2$. Once an entry is made zero, the matrices operating on it after its determination leave it zero. The resulting matrix has zeros in every entry in the first row except the first, which has absolute value one. It follows from the fact that this matrix is unitary that the entries in the first column are also zero except the first. The matrix in the last $n-1$ rows and columns is a unitary matrix, to which we may apply the induction hypothesis and factor. This concludes the proof of the theorem.

There is a factorization of unitary matrices corresponding to the Euler factorization for rotation matrices: only $n-1$ different "planar" positions $U_{ij}(\theta, \delta)$ need be used. This is shown in the theorem below.

Theorem 2.4. If $U \in U(n)$, then it can be factored as
$$U = D(\alpha_1, \dots, \alpha_n)\,U_{12}(\theta_1, \delta_1)\,U_{13}(\theta_2, \delta_2)\,U_{12}(\theta_3, \delta_3)\cdots U_{12}(\theta_m, \delta_m),$$
where the ranges are $-\pi \le \delta_i < \pi$ and $0 \le \theta_i \le \pi/2$.

Proof. We prove the theorem by induction on the order of $U$. Assume the theorem true for members of $U(n-1)$ and let $U \in U(n)$. Let numbers $\theta_j, \delta_j$ be chosen which make $u_{n1} = u_{n2} = \cdots = 0$ in turn, where $U^{(j+1)} = U^{(j)}\,U_{j\,j+1}(\theta_{n-j}, \delta_{n-j})$. The ranges are easily seen to be $-\pi \le \delta_i < \pi$ and $0 \le \theta_i \le \pi/2$. The remainder of the proof is similar to that of Theorem 2.3 after we apply the induction hypothesis to $U^{(n-1)}$. Further factorization of a general unitary matrix is possible if we factor each $U_{ij}$ according to Lemma 2.2. The order of the various matrices in the product may clearly be changed, and a whole class of factorizations obtained.
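The unitary reduction can be sketched the same way. The code below uses general complex planar (Givens-type) unitaries rather than the paper's restricted $U_{ij}(\theta, \delta)$ normal form — the extra phases then collect into a single diagonal factor — so it illustrates the planar-factor count rather than reproducing the exact parametrization (NumPy assumed; helper names illustrative):

```python
import numpy as np

def complex_planar(n, i, k, x, y):
    # unitary planar matrix G: for a row carrying y at column i and x at
    # column k, right-multiplication sends (y, x) to (0, h), h real >= 0
    h = np.sqrt(abs(x) ** 2 + abs(y) ** 2)
    G = np.eye(n, dtype=complex)
    G[i, i], G[i, k] = x / h, np.conj(y) / h
    G[k, i], G[k, k] = -y / h, np.conj(x) / h
    return G

def unitary_planar_factor(U):
    n = U.shape[0]
    A = U.astype(complex)
    gs = []
    for k in range(n - 1, 0, -1):          # clear row k left of the diagonal
        for i in range(k):
            if A[k, i] == 0:
                continue
            G = complex_planar(n, i, k, A[k, k], A[k, i])
            A = A @ G
            gs.append(G)
    # A is now diagonal with unimodular entries: the D(alpha_1, ..., alpha_n)
    return A, [G.conj().T for G in reversed(gs)]

rng = np.random.default_rng(3)
n = 4
m = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
U, _ = np.linalg.qr(m)

D, factors = unitary_planar_factor(U)
P = D.copy()
for G in factors:
    P = P @ G
```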
We merely remark that a geometrical interpretation can be given to this factorization, using complex geometry.

3. Symplectic Matrices

Consider the bilinear form $x_1y_2 - x_2y_1$ and all nonsingular transformations on two variables which leave this form invariant. This set of transformations forms a group, called the two-dimensional symplectic group. The word symplectic means "twisted together," which clearly describes the above bilinear form. Formulated matrix-theoretically, we are considering all nonsingular $2n$-dimensional matrices $S$ with real or complex entries such that $S^tJS = J$, where
$$J = \begin{pmatrix} 0 & -I \\ I & 0 \end{pmatrix}$$
and $I$ is the $n \times n$ identity matrix. If we now let

$$S = \begin{pmatrix} A & B \\ C & D \end{pmatrix},$$
where $A, B, C, D$ are $n \times n$ matrices, then the condition that $S$ is symplectic may be written
$$S^tJS = \begin{pmatrix} C^tA - A^tC & C^tB - A^tD \\ D^tA - B^tC & D^tB - B^tD \end{pmatrix} = J,$$
which gives the following set of equations to be satisfied by $A, B, C, D$:
$$C^tA - A^tC = 0, \qquad D^tB - B^tD = 0, \qquad D^tA - B^tC = I, \qquad C^tB - A^tD = -I.$$
The first two equations tell us that $C^tA$ and $D^tB$ are symmetric, and the last two equations are dependent, being transposes of one another. Using these equations, we prove the remarkable

Theorem 3.1. Every symplectic matrix is unimodular.

Proof. Assume that $S$ is symplectic and partitioned as above, so that the above equations are satisfied. If $A$ is nonsingular, then we may write
$$\det A^t\,\det S = \det\left[\begin{pmatrix} I & 0 \\ 0 & A^t \end{pmatrix}\begin{pmatrix} A & B \\ C & D \end{pmatrix}\right] = \det\begin{pmatrix} A & B \\ A^tC & A^tD \end{pmatrix}.$$
Now we may multiply $[\,A\ \ B\,]$ on the left by $C^t$ and subtract from $[\,A^tC\ \ A^tD\,]$ without changing the value of the last determinant:
$$\det A^t\,\det S = \det\begin{pmatrix} A & B \\ A^tC - C^tA & A^tD - C^tB \end{pmatrix} = \det\begin{pmatrix} A & B \\ 0 & I \end{pmatrix} = \det A,$$
so that $\det S = 1$. If $A$ is singular, then we shall construct a unimodular symplectic matrix $S_1$ such that $S_2 = S_1S$ has a nonsingular upper-left block, which will prove the theorem. Now let
$$S_1 = \begin{pmatrix} I & B_1 \\ 0 & I \end{pmatrix},$$
where $B_1$ is a symmetric matrix to be determined so that the upper-left block of
$$S_2 = S_1S = \begin{pmatrix} A + B_1C & B + B_1D \\ C & D \end{pmatrix}$$
is nonsingular. $S_1$ is clearly symplectic and unimodular. $A + B_1C$ cannot be singular for every choice of symmetric $B_1$: for if this were so, then choosing for $B_1$ the matrix with zeros everywhere except in one diagonal entry, say the $k$th, would imply that the $k$th row of $C$ is linearly dependent on the rows of $A$ for every $k$, contradicting the fact that $S$ is nonsingular. We may then apply the first part of this proof to $S_2$, whence $\det S = \det S_1^{-1}\det S_2 = 1$. This completes the proof of the theorem.

We now consider symplectic matrices which are at the same time unitary, and call this group the unitary symplectic group, denoting it by $USp(n)$; the symplectic group itself will be denoted by $Sp(n)$. The general unitary symplectic matrix $S$ can be written as
$$S = \begin{pmatrix} A & B \\ -\bar B & \bar A \end{pmatrix}, \qquad \text{where } A^*A + B^*B = I.$$
This may easily be seen by noticing that the inverse of $S$ must be
$$S^{-1} = S^* = \begin{pmatrix} A^* & C^* \\ B^* & D^* \end{pmatrix},$$
and also
$$S^{-1} = J^{-1}S^tJ = \begin{pmatrix} D^t & -B^t \\ -C^t & A^t \end{pmatrix},$$
which follows from $S^tJS = J$. Equating these two expressions for the inverse, we obtain $D = \bar A$ and $C = -\bar B$, which gives the assertion.
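Theorem 3.1 can be exercised numerically. The block below manufactures a symplectic matrix from symmetric shears and a block-diagonal factor — a standard generating recipe, assumed here for illustration and not taken from the paper — and checks both $S^tJS = J$ and $\det S = 1$:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 3
I, O = np.eye(n), np.zeros((n, n))
J = np.block([[O, -I], [I, O]])                 # the form with S^t J S = J

B = rng.standard_normal((n, n)); B = B + B.T    # symmetric shear blocks
C = rng.standard_normal((n, n)); C = C + C.T
A = rng.standard_normal((n, n)) + 3.0 * I       # generically invertible

S = (np.block([[I, B], [O, I]])
     @ np.block([[A, O], [O, np.linalg.inv(A).T]])
     @ np.block([[I, O], [C, I]]))

assert np.allclose(S.T @ J @ S, J)              # S is symplectic
assert np.isclose(np.linalg.det(S), 1.0)        # and unimodular (Theorem 3.1)
```

Each of the three factors is itself symplectic (the shears precisely because $B$ and $C$ are symmetric), so the product is too.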

Let $U_{ij}(\theta, \delta)$ be a matrix of the form introduced in the last section. We denote by $S_{ij}(\theta, \delta)$ the symplectic matrix
$$S_{ij}(\theta, \delta) = \begin{pmatrix} U_{ij}(\theta, \delta) & 0 \\ 0 & \overline{U_{ij}(\theta, \delta)} \end{pmatrix}.$$
We shall state and prove the two main theorems of this section. As in the case of unitary matrices, there are two ways of factoring a unitary symplectic matrix into planar factors, corresponding to the Euler and Brauer factorizations.

Theorem 3.2. Every unitary symplectic matrix of order $2n$ may be factored in the form
$$S = D_1(\alpha_1, \dots, \alpha_n)\,S_{12}(\theta_1, \delta_1)\,S_{13}(\theta_2, \delta_2)\cdots S_{(n-1)n}(\theta_m, \delta_m),$$
where $D_1$ is the diagonal matrix with entries $e^{i\alpha_1}, \dots, e^{i\alpha_n}, e^{-i\alpha_1}, \dots, e^{-i\alpha_n}$, and the ranges on the variables are $-\pi \le \delta_i, \alpha_i < \pi$ and $0 \le \theta_i \le \pi/2$.

Proof. The proof is by induction on half the order of $S$. Let $S$ be of order $2n$ and determine numbers $\theta_j, \delta_j$ which make the first row of $S$ zero except for the first entry, which becomes $e^{i\alpha_1}$. It can then be easily shown that the $(n+1)$st row and column are also zero except for a single common nonzero entry, which must be $e^{-i\alpha_1}$. Now apply the induction hypothesis to the matrix obtained by deleting from $S$ the first and $(n+1)$st rows and columns. This concludes the proof.

Theorem 3.3. Every $S \in USp(n)$ can be written as
$$S = D_1(\alpha_1, \dots, \alpha_n)\,S_{12}(\theta_1, \delta_1)\,S_{13}(\theta_2, \delta_2)\cdots S_{12}(\theta_m, \delta_m),$$
where the ranges are as in Theorem 3.2.

Proof. By induction on half the order of $S$, we show that we need use only a limited number of different planar unitary symplectic matrices. We omit the obvious required determinations.

By counting parameters, we see that the $n$-dimensional unitary group has $n^2$ parameters and the $n$-dimensional unitary symplectic group $n(n+1)/2$. These parameters are useful in defining an explicit invariant integral on these groups, as very clearly presented in [Ch], [We], [Po].

4. Lorentz Matrices

Let $I_{nr}$ be the $n$-dimensional diagonal matrix whose first $r$ diagonal entries are $+1$ and whose last $n-r$ entries are $-1$. Then consider the set of all nonsingular square matrices $L_r$ with real entries such that $L^t_rI_{nr}L_r = I_{nr}$.
This set forms a group called the $n$-dimensional Lorentz group of type $r$ and is denoted by $L_r(n)$. $L_1(4)$ was introduced by H.A. Lorentz in connection with an electrodynamical transformation. It is clear that if $L_r \in L_r(n)$ then $L^t_r \in L_r(n)$. We now turn our attention to $L_1(2)$. From now on, we denote the members of any particular Lorentz group under discussion by $L$ rather than $L_r$, if no confusion is possible. So let $L \in L_1(2)$,
$$L = \begin{pmatrix} a & b \\ c & d \end{pmatrix}.$$
Then we may write
$$L^tI_{21}L = \begin{pmatrix} a^2 - c^2 & ab - cd \\ ab - cd & b^2 - d^2 \end{pmatrix} = I_{21}.$$

So we obtain the equations
$$a^2 - c^2 = 1, \qquad ab - cd = 0, \qquad d^2 - b^2 = 1.$$
We may now assume that
$$a = \pm\cosh\xi, \qquad b = \sinh\xi, \qquad c = k\sinh\xi, \qquad d = \pm k\cosh\xi, \qquad k = \pm 1.$$
Clearly $\det L = \pm 1$. If $k = 1$, there are two possibilities: if $a = \cosh\xi$, then $d = +\cosh\xi$, and if $a = -\cosh\xi$, then $d = -\cosh\xi$, both of which follow readily from $ab = cd$. If $k = -1$, there are also two possibilities: if $a = \cosh\xi$ then $d = -\cosh\xi$, and if $a = -\cosh\xi$, then $d = \cosh\xi$. The four possible forms for an element $L$ of $L_1(2)$ are shown below:
$$\begin{pmatrix} \cosh\xi & \sinh\xi \\ \sinh\xi & \cosh\xi \end{pmatrix}, \quad \begin{pmatrix} -\cosh\xi & \sinh\xi \\ \sinh\xi & -\cosh\xi \end{pmatrix}, \quad \begin{pmatrix} \cosh\xi & \sinh\xi \\ -\sinh\xi & -\cosh\xi \end{pmatrix}, \quad \begin{pmatrix} -\cosh\xi & \sinh\xi \\ -\sinh\xi & \cosh\xi \end{pmatrix}.$$
If we consider the matrices of $L_1(2)$ as points in a four-dimensional space, we see that $L_1(2)$ falls into four pieces. This follows from the fact that the determinant is a continuous function and that it can never happen that $\cosh\xi = -\cosh\xi'$, because $\cosh\xi \ge 1$. The subgroup of $L_1(2)$ which consists of the first two of the forms above (those of determinant one) is called the unimodular Lorentz group and is denoted by $SL_1(2)$.

Lemma 4.1. Let $L \in L_r(3)$, $r = 1, 2$. If $\ell_{31} = \ell_{32} = 0$, it follows that $\ell_{13} = \ell_{23} = 0$ and $\ell_{33} = \pm 1$.

Proof. Since $L^t \in L_r(3)$ as well, we have $L\,I_{3r}\,L^t = I_{3r}$. With $\ell_{31} = \ell_{32} = 0$, comparing the $33$-entries gives $-\ell_{33}^2 = -1$, so $\ell_{33} = \pm 1$ and in particular $\ell_{33} \ne 0$; comparing the $13$- and $23$-entries then gives $\ell_{13}\ell_{33} = \ell_{23}\ell_{33} = 0$, so $\ell_{13} = \ell_{23} = 0$. The argument is the same for either value of $r$.

Lemma 4.2. If $L \in L_r(3)$, $r = 1, 2$, and $\ell_{13} = \ell_{23} = 0$, then $\ell_{31} = \ell_{32} = 0$ and $\ell_{33} = \pm 1$.

Proof. Apply Lemma 4.1 to $L^t$, which is in $L_r(3)$ by a previous remark.

Theorem 4.1. Every $L \in L_2(3)$ can be factored as
$$L = L''\begin{pmatrix} \cosh\xi & 0 & \sinh\xi \\ 0 & 1 & 0 \\ \sinh\xi & 0 & \cosh\xi \end{pmatrix}\begin{pmatrix} \cos\theta & \sin\theta & 0 \\ -\sin\theta & \cos\theta & 0 \\ 0 & 0 & 1 \end{pmatrix},$$
where $-\infty < \xi < +\infty$, $-\pi \le \theta < \pi$, the center matrix is the planar Lorentz matrix $L_{13}(\xi)$, and $L''$ is zero outside the $11$-entry, which is $\pm 1$, and the block in the last two rows and columns.

Proof. Determine $\theta$ so that $\ell'_{12} = 0$, where $L' = L\,R^t_{12}(\theta)$. Next let $L'' = L'\,L^{-1}_{13}(\xi)$ and observe that
$$\ell''_{11} = \ell'_{11}\cosh\xi - \ell'_{13}\sinh\xi, \qquad \ell''_{13} = -\ell'_{11}\sinh\xi + \ell'_{13}\cosh\xi.$$

It follows from $\ell'_{12} = 0$ and the relation $\ell'^2_{11} + \ell'^2_{12} - \ell'^2_{13} = 1$ that $|\ell'_{11}| > |\ell'_{13}|$. This allows us to choose $\xi = \tanh^{-1}(\ell'_{13}/\ell'_{11})$, which gives $\ell''_{13} = 0$ and $\ell''_{11} = \pm 1$. Applying Lemma 4.2 in its first-row form, we conclude that the matrix in the last two rows and columns of $L''$ is a Lorentz matrix of the group $L_1(2)$, which may take any one of the four forms listed previously. The proof is completed.

Theorem 4.2. Let $L \in L_1(3)$. Then $L$ can be factored in the following form:
$$L = L''\begin{pmatrix} \cosh\xi & \sinh\xi & 0 \\ \sinh\xi & \cosh\xi & 0 \\ 0 & 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix},$$
where $-\pi \le \theta < \pi$, $-\infty < \xi < +\infty$, and $L''$ is zero outside the $11$-entry, which is $\pm 1$, and the block in the last two rows and columns.

Proof. First determine $\theta$ so that $\ell'_{13} = 0$, where, as usual, $L' = L\,R^t_{23}(\theta)$; then $-\pi \le \theta < \pi$. Since, as in the previous theorem, $|\ell'_{12}| < |\ell'_{11}|$, we may choose $\xi$ so that $L'' = L'\,L^{-1}_{12}(\xi)$ satisfies $\ell''_{12} = 0$; then $\ell''_{11} = \pm 1$. We now apply Lemma 4.2 in case $\ell''_{11}$ is positive, and if it is negative, we see that it must be $-1$. In either case, the matrix in the last two rows and columns of $L''$ is an orthogonal matrix.

Notice that the groups $L_1(3)$ and $L_2(3)$ are isomorphic, since the permutation matrix which interchanges the first and third rows and columns conjugates $I_{31}$ into $-I_{32}$, and the group preserving $-I_{32}$ is just $L_2(3)$. Thus we expect that the two previous theorems should apply equally well to both these groups.

Theorem 4.3. Let $L \in L_1(3)$. Then it can be factored into the product of three planar Lorentz matrices,
$$L = \begin{pmatrix} \Lambda & 0 \\ 0 & \pm 1 \end{pmatrix}\begin{pmatrix} \cosh\xi & 0 & \sinh\xi \\ 0 & 1 & 0 \\ \sinh\xi & 0 & \cosh\xi \end{pmatrix}\begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\theta & \sin\theta \\ 0 & -\sin\theta & \cos\theta \end{pmatrix},$$
where $-\infty < \xi < +\infty$, $-\pi \le \theta < \pi$, and $\Lambda$ is a two-dimensional block in the first two rows and columns.

Proof. Choose $\theta$ so that $\ell'_{32} = 0$, where $L' = L\,R^t_{23}(\theta)$, and choose $\xi$ so that $\ell''_{31} = 0$, where $L'' = L'\,L^{-1}_{13}(\xi)$. We can do this because $|\ell'_{33}| > |\ell'_{31}|$ follows from $\ell'_{32} = 0$ and the relation $\ell'^2_{31} - \ell'^2_{32} - \ell'^2_{33} = -1$. Applying Lemma 4.1 to $L''$, we see that the matrix in the first two columns and rows of $L''$ is a member of $L_1(2)$, and it may be any of the four types described earlier.
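The four forms of $L_1(2)$ used throughout this section, and the fact that they give four separated pieces (distinguished by $\det L$ and the sign of the upper-left entry), can be checked directly (NumPy assumed):

```python
import numpy as np

eta = np.diag([1.0, -1.0])                  # the form I_{21}
c, s = np.cosh(0.7), np.sinh(0.7)

forms = [np.array([[ c, s], [ s,  c]]),     # det +1, upper-left +  (in SL_1(2))
         np.array([[-c, s], [ s, -c]]),     # det +1, upper-left -
         np.array([[ c, s], [-s, -c]]),     # det -1, upper-left +
         np.array([[-c, s], [-s,  c]])]     # det -1, upper-left -

for L in forms:
    assert np.allclose(L.T @ eta @ L, eta)  # L^t I_{21} L = I_{21}

# the (det, sign of a) pairs separate the four components
labels = {(round(np.linalg.det(L)), np.sign(L[0, 0])) for L in forms}
assert len(labels) == 4
```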
In order to obtain generalized orthogonality relations for the rows and columns of Lorentz matrices, similar to those for orthogonal matrices, and in order to gain an explicit expression for the inverse of a general Lorentz matrix, notice that $L^{-1} = I_{nr}L^tI_{nr}$ for $L \in L_r(n)$.
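A quick numerical check of this inverse formula, on a sample element of $L_1(3)$ built from a planar boost and a planar rotation (NumPy assumed):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0])   # I_{31}: one plus, two minuses

c, s = np.cosh(0.5), np.sinh(0.5)
B = np.array([[c, s, 0], [s, c, 0], [0, 0, 1.0]])       # planar boost, coords 1,2
co, si = np.cos(0.3), np.sin(0.3)
R = np.array([[1.0, 0, 0], [0, co, si], [0, -si, co]])  # planar rotation, coords 2,3
L = B @ R

assert np.allclose(L.T @ eta @ L, eta)   # L lies in L_1(3)
Linv = eta @ L.T @ eta                   # the claimed inverse I_{nr} L^t I_{nr}
assert np.allclose(Linv @ L, np.eye(3))
assert np.allclose(L @ Linv, np.eye(3))
```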

Then the relations may be immediately read off from $L^{-1}L = LL^{-1} = I$. Also, as a remark, the reader may convince herself that the groups $L_r(n)$ and $L_{n-r}(n)$ are isomorphic, with an obvious change of variables. It is just as easy to see that there are exactly $1 + [n/2]$ mutually non-isomorphic groups $L_r(n)$ for fixed $n$. The general Lorentz groups consist of four separated pieces, as was pointed out for the special case of $L_1(2)$. An excellent proof of this last fact is given in [Bo]. We shall not give a proof of it here, but the following theorem should at least make it plausible.

Theorem 4.4. Let $L \in L_r(n)$, $r \le [n/2]$. Then $L = L_{r\,r+1}(\xi)\,X$, where the first matrix is a planar Lorentz matrix of one of the four different forms listed earlier, and $X$ is a product of $L_{ij}$'s and $R_{ij}$'s.

Proof. The matrices which we shall determine first, hence those which will appear last in the factorization, are the following:
$$R_{r+1\,r+2},\ R_{r+2\,r+3},\ \dots,\ R_{n-1\,n},\ R_{r-1\,r},\ R_{r-2\,r-1},\ \dots\,.$$
We choose the parameters in these matrices in such a way that the entries of the last row of the successively transformed matrices are annihilated one after another, leaving only the entries in the $r$th and $n$th places. Suppose also that these numbers have been chosen so that the two surviving entries are both positive. This allows us to fix the parameter of the planar Lorentz matrix $L_{rn}$ so that the entry in the $r$th place vanishes and so that the entry in the $n$th place is strictly positive, and in fact equal to one. We do not appeal to a proof by induction, as will become clear. Now one of two things is true: $n = 2r$, or $n = 2r + k$ with $k > 0$. In the event that the second case holds, we apply the process just described to the matrix in the first $n-1$ columns and rows of $L^{(n-1)}$. If $n = 2r$, then the matrix in the first $n-1$ rows and columns is not a matrix of the sort we are considering, because its $r$ is too large. We can, though, apply a similar process to that described above to successively make the entries in its last column zero but for the first and last, which may again be assumed to be positive.
The parameter of the matrix L_{1n} is then determined so that the top entry in the last column is zero and so that the bottom entry is positive. We then look at the matrix formed from the first n-1 columns and rows and consider its r, there being again the two possibilities above. Eventually we arrive at the situation of having a three-dimensional submatrix of a four-dimensional one before us. When it is required that the three-dimensional matrix be factored into a rotation matrix and two Lorentz matrices, it is seen that the two-dimensional Lorentz matrix left in the first two rows and columns, or in the last two rows and columns, can be in any one of the four forms.

5. A Special Theorem

Lemma 5.1. Let Σ ∈ SO(3) be of the following form

    [ a  b  x ]
    [ c  d  y ]
    [ x  y  z ]

and b > c. Then

    Σ = [  cos θ  sin θ  0 ]
        [ -sin θ  cos θ  0 ]
        [  0      0      1 ]

and 0 < θ < π.

Proof. We may write down the following set of equations:

    cx + dy + yz = 0    (rows 2 and 3 are orthogonal),
    bx + dy + yz = 0    (columns 2 and 3 are orthogonal),
    (b - c)x = 0        (subtracting the equations),
    x = 0               (because b > c),
    ax + cy + xz = 0    (columns 1 and 3 are orthogonal),
    ax + by + xz = 0    (rows 1 and 3 are orthogonal),
    (b - c)y = 0        (subtracting the equations),
    y = 0               (because b > c).

Two possibilities now arise: either ad - bc = -1 and z = -1, or ad - bc = +1 and z = +1. We have seen that every member of O(2) of determinant -1 may be written as

    [ cos θ   sin θ ]
    [ sin θ  -cos θ ]

where -π ≤ θ < π; such a matrix is symmetric, which immediately rules out the first possibility above, because we assumed that b > c. Thus the second possibility holds, and since then b = sin θ and c = -sin θ, we see that sin θ > 0. An application of an earlier lemma, on the form of the elements of SO(2), proves the lemma.

Theorem 5.2. Let Σ ∈ SO(3) and Σ² ≠ I. Then Σ may be factored as

    Σ = R^t(θ_1) Ω^t(θ_2) Δ(θ_3) Ω(θ_2) R(θ_1),

where the ranges on the parameters are -π ≤ θ_1 < π, -π/2 ≤ θ_2 ≤ π/2, 0 < θ_3 < π.

Proof. Determine θ_1 so that (Σ_1)_{32} = (Σ_1)_{23}, where we have let Σ_1 = R(θ_1) Σ R^t(θ_1) with

    R(θ_1) = [  cos θ_1  sin θ_1  0 ]
             [ -sin θ_1  cos θ_1  0 ]
             [  0        0        1 ].

Multiplying this out, we obtain for the entries in question expressions of the form

    σ_{31} cos θ_1 + σ_{32} sin θ_1,   -σ_{31} sin θ_1 + σ_{32} cos θ_1,
    σ_{13} cos θ_1 + σ_{23} sin θ_1,   -σ_{13} sin θ_1 + σ_{23} cos θ_1.

Now write

    ( σ_{31} - σ_{13}, σ_{32} - σ_{23} ) = ρ_1 ( cos θ_1, sin θ_1 ),   ρ_1 ≥ 0,

18 8 A.M. DUPE which shows that we may obtain as stated above. If, of course, both = and =, then let =. The range of is unrestricted, so? <. We next choose so that =, where we have let = ( ) t ( ). Again, in order to see that this is possible, we multiply and obtain for the following matrix: cos + sin 4 cos + sin? sin + cos? sin + cos 5 yielding? cos sin =?? sin cos?? : Since?, the point (?? ) lies in the upper half plane, and thus the range of is?= =. Notice that we also have =, since?? = cos? sin + cos + sin? sin + cos? sin cos + sin + cos cos + sin =? cos + sin?? sin + cos =? = Now not both = and =, since then t =, from which it would follow that ( ) = I, which would also force = I, because symmetry is preserved under conjugation by a rotation matrix. Thus we are denitely assured that >, and an application of lemma 5: proves the theorem. Theorem 5.. If SO() and = I then it can be factored as in theorem 5:. Proof. Since symmetry of a matrix is preserved under conjugation by rotation matrices and since is symmetric, we may choose ' so that =, where (') t (') =. Then we may assume that takes the form 4 a x b z 5 : x z We have xz =. If x = then = (), and if z = then = (), proving the theorem. It is, of course, possible to generalize this theorem to n dimensions, where it gives an explicit way of showing the well-known theorem of semisimple groups, that every element in such a group is conjugate to an element in a maximal torus, by exhibiting an element which performs the conjugation. The only group we have not covered in this paper is the symplectic group. This will be the subject of a forthcoming paper.

References

[Ch] C. Chevalley, Theory of Lie Groups, Princeton, Princeton, 1946.
[Kl] A. Kleppner and H.V. McIntosh, The Theory of the Three-dimensional Rotation Group, Part I: Description and Parametrization, RIAS, Baltimore, 1958.
[Li] D.E. Littlewood, The Theory of Group Characters and Matrix Representations of Groups, Oxford, Oxford, 1950.
[Mu] F.D. Murnaghan, The Orthogonal and Symplectic Groups, Baltimore, 1958.
[Mu] F.D. Murnaghan, The Theory of Group Representations, Dublin, 1938.
[Po] L. Pontrjagin, Topological Groups, Princeton, Princeton, 1946.
[We] H. Weyl, The Classical Groups: Their Invariants and Representations, Princeton, Princeton, 1939.

Newark, N.J.


More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

(x, y) = d(x, y) = x y.

(x, y) = d(x, y) = x y. 1 Euclidean geometry 1.1 Euclidean space Our story begins with a geometry which will be familiar to all readers, namely the geometry of Euclidean space. In this first chapter we study the Euclidean distance

More information

Lattices and Hermite normal form

Lattices and Hermite normal form Integer Points in Polyhedra Lattices and Hermite normal form Gennady Shmonin February 17, 2009 1 Lattices Let B = { } b 1,b 2,...,b k be a set of linearly independent vectors in n-dimensional Euclidean

More information

Some Remarks on the Discrete Uncertainty Principle

Some Remarks on the Discrete Uncertainty Principle Highly Composite: Papers in Number Theory, RMS-Lecture Notes Series No. 23, 2016, pp. 77 85. Some Remarks on the Discrete Uncertainty Principle M. Ram Murty Department of Mathematics, Queen s University,

More information

1 Matrices and Systems of Linear Equations

1 Matrices and Systems of Linear Equations March 3, 203 6-6. Systems of Linear Equations Matrices and Systems of Linear Equations An m n matrix is an array A = a ij of the form a a n a 2 a 2n... a m a mn where each a ij is a real or complex number.

More information

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

22m:033 Notes: 3.1 Introduction to Determinants

22m:033 Notes: 3.1 Introduction to Determinants 22m:033 Notes: 3. Introduction to Determinants Dennis Roseman University of Iowa Iowa City, IA http://www.math.uiowa.edu/ roseman October 27, 2009 When does a 2 2 matrix have an inverse? ( ) a a If A =

More information

Chapter 4. Solving Systems of Equations. Chapter 4

Chapter 4. Solving Systems of Equations. Chapter 4 Solving Systems of Equations 3 Scenarios for Solutions There are three general situations we may find ourselves in when attempting to solve systems of equations: 1 The system could have one unique solution.

More information

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES

ENGINEERING MATH 1 Fall 2009 VECTOR SPACES ENGINEERING MATH 1 Fall 2009 VECTOR SPACES A vector space, more specifically, a real vector space (as opposed to a complex one or some even stranger ones) is any set that is closed under an operation of

More information

Linear Algebra. Min Yan

Linear Algebra. Min Yan Linear Algebra Min Yan January 2, 2018 2 Contents 1 Vector Space 7 1.1 Definition................................. 7 1.1.1 Axioms of Vector Space..................... 7 1.1.2 Consequence of Axiom......................

More information