TRANSVECTION AND DIFFERENTIAL INVARIANTS OF PARAMETRIZED CURVES


G. MARÍ BEFFA AND JAN A. SANDERS

Abstract. In this paper we describe an $\mathfrak{sl}_2$ representation in the space of differential invariants of parametrized curves in homogeneous spaces. The representation is described by three operators, one of them being the total derivative $D$. We use this representation to find a basis for the space of differential invariants of curves in a complement of the image of $D$, and so generated by transvection. These are natural representatives of first cohomology classes in the invariant bicomplex. We describe algorithms to find these bases and study most well-known geometries.

1. Introduction

In classical invariant theory, that is, in the study of the invariants of the action of $SL_2(\mathbb{R})$ on binary forms, the basic computational tool is that of transvection. Given any two covariants (or finite dimensional irreducible $\mathfrak{sl}_2$-representations in modern language) one can construct a number of (possibly) new covariants by computing their transvectants. The simplest example is given by two linear forms, $a_0X + a_1Y$ and $b_0X + b_1Y$. Their first transvectant is the determinant $a_0b_1 - a_1b_0$ of the coefficients. Another example is the discriminant $a_0a_2 - a_1^2$ of a quadratic form $a_0X^2 + 2a_1XY + a_2Y^2$, which is the second transvectant of the quadratic form with itself. The process of transvection generates the kernel of a certain operator $F$. This operator is obtained as follows. Differentiation can be thought of as a map of $u, u_1, u_2, \dots$ onto itself using the rule $D : u_k \mapsto u_{k+1}$, combined with linearity. (This map can be extended to (tensor) products by prescribing the usual product rule, which makes $D$ a derivation.) If one restricts to a finite number of derivatives, say $n$, then this leads to a nilpotent matrix which can be imbedded into an $\mathfrak{sl}_2$ by the Jacobson-Morozov lemma (or by hand). If one now lets $n$ go to infinity, one

1991 Mathematics Subject Classification. Primary: 37K; Secondary: 53A55.
Key words and phrases. Differential invariants of curves, classical invariant theory, representations, transvectants, Clebsch-Gordan.

recovers in the limit a Heisenberg algebra, with operators
$$D = \sum_{k=0}^{\infty} u_{k+1}\frac{\partial}{\partial u_k},\qquad E = \sum_{k=1}^{\infty} u_k\frac{\partial}{\partial u_k},\qquad F = \sum_{k=1}^{\infty} k\,u_{k-1}\frac{\partial}{\partial u_k}.$$
This has been described in [15, 12, 14]. Motivated by modular function theory, John McKay asked the second author of the present paper in December 1999 whether the Schwarzian derivative was in $\ker F$. The answer was: let $g_i = u_{i+1}/(i+1)!$. Then the Schwarzian derivative $S(u) = \frac{u_3}{u_1} - \frac{3}{2}\big(\frac{u_2}{u_1}\big)^2$ maps to $6(g_0g_2 - g_1^2)/g_0^2$, which lies in $\ker F$. In the $u_i$-coordinates the operator algebra $\{F, E, D\}$ gives a representation of $\mathfrak{sl}_2$ defined as
$$(1.1)\qquad F = \sum_{k=1}^{\infty} k(k-1)u_{k-1}\frac{\partial}{\partial u_k},\qquad E = \sum_{k=0}^{\infty} 2k\,u_k\frac{\partial}{\partial u_k},\qquad D = \sum_{k=0}^{\infty} u_{k+1}\frac{\partial}{\partial u_k}.$$
As we will see later, the kernel of $F$ is generated by recurrent transvection of $u_1$, first with itself, then with the produced transvectants, as is done in classical invariant theory of $SL_2(\mathbb{R})$. The fact that the Schwarzian derivative is in the kernel of $F$ hints at a possible connection between transvection and the differential invariants of parametrized curves. Indeed, the Schwarzian derivative is the generator of projective differential invariants of $u : \mathbb{R} \to \mathbb{RP}^1$. That is, any other differential invariant can be written as a function of the Schwarzian and its derivatives. The Schwarzian lies in the kernel of the operator $F$ and can be generated by transvection of $u_1$. After a natural generalization to several dependent variables, a very simple calculation shows that
$$\frac{(u_2\cdot u_2)(u_1\cdot u_1) - (u_1\cdot u_2)^2}{(u_1\cdot u_1)^3}$$
is in the kernel of $F$. The expression is the square of the Euclidean curvature of $u : \mathbb{R} \to \mathbb{R}^n$, expressed in terms of an arbitrary parametrization. It can be generated by transvection of $u_1$ (in fact, of its components $u^\alpha_1$) with itself and its transvectants. The Euclidean curvature is also one of the generating differential invariants for Euclidean curves.
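The two claims above can be checked mechanically. The following is a minimal sympy sketch of our own (not from the paper): the operators of (1.1) act as derivations on a jet truncated at order 8, and both the Schwarzian derivative and the squared Euclidean curvature of a plane curve are annihilated by $F$.

```python
# Illustrative check (our own, assuming the operators of (1.1)): F, E, D as
# derivations on expressions in the jet coordinates u_k, truncated at order N.
import sympy as sp

N = 8
u = list(sp.symbols('u0:9'))      # u[k] stands for u_k = d^k u / dx^k

def D(p):   # total derivative: sends u_k to u_{k+1}
    return sum(sp.diff(p, u[k]) * u[k + 1] for k in range(N))

def F(p):   # F = sum_{k>=1} k(k-1) u_{k-1} d/du_k   (lambda = 0)
    return sum(k * (k - 1) * u[k - 1] * sp.diff(p, u[k]) for k in range(1, N + 1))

def E(p):   # E = sum_{k>=0} 2k u_k d/du_k           (lambda = 0)
    return sum(2 * k * u[k] * sp.diff(p, u[k]) for k in range(N + 1))

# The Schwarzian derivative lies in ker F and is an E-eigenvector of weight 4:
S = u[3] / u[1] - sp.Rational(3, 2) * (u[2] / u[1]) ** 2
assert sp.simplify(F(S)) == 0
assert sp.simplify(E(S) - 4 * S) == 0

# Squared Euclidean curvature of a curve in R^2 (two dependent variables):
a = list(sp.symbols('a0:9'))      # components u^1_k
b = list(sp.symbols('b0:9'))      # components u^2_k

def Fv(p):  # F summed over both components
    return sum(k * (k - 1) * (a[k - 1] * sp.diff(p, a[k]) + b[k - 1] * sp.diff(p, b[k]))
               for k in range(1, N + 1))

dot = lambda i, j: a[i] * a[j] + b[i] * b[j]          # u_i . u_j
kappa2 = (dot(2, 2) * dot(1, 1) - dot(1, 2) ** 2) / dot(1, 1) ** 3
assert sp.simplify(Fv(kappa2)) == 0
```

The truncation order N is an implementation detail: it only needs to exceed the highest jet coordinate occurring in the expressions being tested.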
A slightly longer calculation shows that the torsion of $u$, when written in terms of an arbitrary parametrization, also lies in the kernel of the operator $F$, and it is also a function of transvectants of $u_1$ with itself. We found other examples of this situation for curves in the Möbius sphere, the local model for flat conformal manifolds. These examples strongly suggest a connection between transvection and differential invariants of parametrized curves.

This paper is based on a very basic and simple property. The operators $F$, $D$ and $E$ all commute with the prolongation of vector fields, insofar as the action of the group does not affect the parameter. They define a representation of $\mathfrak{sl}_2$ in the space of differential invariants of curves. In this paper we show how such a representation can be used to show that transvection can take the role of differentiation in the process of finding differential invariants of parametrized curves. By doing so we are able to find bases of differential invariants for most well-known geometries which are always in the kernel of $F$. If we study polynomials in the $u_i$, or other simple cases, the kernel of $F$ is a complement of the image of $D$, so in that sense we are producing generators that do not include derivatives of lower order invariants. In many cases one can follow a process that ensures the result is also invariant under reparametrizations. As shown in the examples, one can also consider simple fractional expressions in the $u_i$, rather than polynomials. It is not clear whether the method works in general for fractional expressions, or how far one can go in weakening this assumption. In section 2 we introduce basic notions and describe the simple theorems that allow us to use transvection to generate differential invariants in general. We show how, under certain assumptions, one can always find a generating set of differential invariants produced by transvection. In section 3 we study the case of affine geometries. We describe an algorithm to find a system of independent relative invariants in the kernel of $F$ using iterative transvection of $u_1$ with an appropriately chosen initial differential invariant. Group invariant contractions of these relative invariants produce a basis of differential invariants. We also show how transvection produces results which are invariant under reparametrization.
Section 4 is perhaps the more surprising one. There we show how to apply this same algorithm to some non-affine geometries. In the case of differential invariants of projective curves, Wilczynski proved in [17] that one could lift a curve in $\mathbb{RP}^n$ to a curve in $\mathbb{R}^{n+1}$ in the standard way, multiply the lift by a factor and define a different vector $\mu \in \mathbb{R}^{n+1}$. He then proceeded to recurrently differentiate this vector and to produce a basis of differential invariants for projective curves by taking determinants of the derivatives (this was not his original idea, but that is what the process boils down to). It just happens that the vector $\mu$ is a relative invariant of the prolonged projective action with constant weight, and Wilczynski's process mirrors the transvection process we describe in section 3. In section 5 we show that transvection can substitute for differentiation in Wilczynski's original method. Furthermore, we show that in many other geometric manifolds $G/H$ the method works identically for both differentiation and transvection. That is, one can lift the curve to a vector in a higher dimensional $\mathbb{R}^m$ where the group acts linearly. We can modify the vector to produce a relative differential invariant, and we can then apply recurrent transvection of it with a properly chosen differential invariant. By doing so we can produce enough relative invariants and combine them to generate a basis for the space of differential invariants of curves in $G/H$, all of them in the kernel of the operator $F$. We show explicitly the projective, conformal and Grassmann Lagrangian cases. As a conclusion, it is not at all surprising that the Schwarzian and the Euclidean curvature lie in the kernel of the operator $F$ and can be written as transvectants of the derivative $u_1$. In fact, being differential invariants of lowest order within their

corresponding geometrical background (projective and Euclidean respectively), they were very likely to be so.

This research was funded by a grant from the Nederlandse Organisatie voor Wetenschappelijk Onderzoek (NWO). The first author would like to thank the Vrije Universiteit in Amsterdam for their support during a very pleasant visit.

2. Basic definitions and results

Let $M$ be a manifold of dimension $n$ and let $G$ be a group acting on $M$. Let $J^{(k)}(\mathbb{R}, M)$ be the $k$th order jet bundle, or set of equivalence classes of curves under the equivalence relation of $k$th order contact. If we introduce coordinates $u = (u^\alpha)$ on $M$, $x$ is the parameter and we denote $u^\alpha_r = \frac{d^r u^\alpha}{dx^r}$, we can introduce coordinates in $J^{(k)}(\mathbb{R}, M)$ given by
$$(x, u^{(k)}) = (x, (u^\alpha), (u^\alpha_1), (u^\alpha_2), \dots, (u^\alpha_k)) = (x, u, u_1, \dots, u_k).$$
Very often one works in the infinite jet $J^{(\infty)}(\mathbb{R}, M)$ as a way to avoid specifying at each step the highest order involved. See [11] for more details.

Definition 1. If a transformation group $G$ acts on $M$, the action preserves the order of contact between curves and so there is an induced action in $J^{(k)}(\mathbb{R}, M)$ called the prolonged action or $k$th prolongation. If the parameter $x$ is preserved, it is locally given by
$$g \cdot (x, u^{(k)}) = g \cdot (x, u, u_1, \dots, u_k) = (x, g \cdot u, (g \cdot u)_1, \dots, (g \cdot u)_k),$$
where by $(g \cdot u)_k$ we denote the formula in the $u_r$ obtained after differentiating $k$ times. A ($k$th order) differential invariant is a local scalar function $I : J^{(k)}(\mathbb{R}, M) \to \mathbb{R}$ invariant under the prolonged action of $G$, that is, such that $I(g \cdot (x, u^{(k)})) = I(x, u^{(k)})$ for all $g \in G$. A $k$th order relative differential invariant with Jacobian weight is a vector function $V : J^{(k)}(\mathbb{R}, M) \to \mathbb{R}^n$ such that
$$V\big(g \cdot (x, u^{(k)})\big) = J_g(u)\,V(x, u^{(k)}),$$
where $J_g$ is the Jacobian matrix of the map $\phi_g : M \to M$ given by $\phi_g(u) = g \cdot u$ (see [11] for more details on the Jacobian multiplier representation).
Classical frames (that is, an invariant curve in the frame bundle) and relative differential invariants with Jacobian weight are the same concept; the first emphasizes geometric properties while the second emphasizes the invariance under the prolonged action (see [11]). Given a group acting on a manifold, let
$$\mathbf{v} = \xi\frac{\partial}{\partial x} + \sum_{\alpha=1}^{n} \phi^\alpha\frac{\partial}{\partial u^\alpha}$$
be an infinitesimal generator of the action, where $\xi = \xi(x, u)$ and $\phi^\alpha = \phi^\alpha(x, u)$. The infinitesimal generators of the prolonged action are well known. They are given by the prolongation formula
$$(2.1)\qquad \mathbf{v}^{(n)} = \xi\frac{\partial}{\partial x} + \sum_{\alpha}\sum_{k=0}^{\infty}\big(D^k(\phi^\alpha - \xi u^\alpha_1) + \xi u^\alpha_{k+1}\big)\frac{\partial}{\partial u^\alpha_k},$$
where $D$ is the total derivative with respect to the parameter $x$. Infinitesimally, a differential invariant is a local scalar function in the kernel of the infinitesimal generators of the prolonged action.

Next we will define operators on the infinite jet space forming an $\mathfrak{sl}_2$ representation.

Definition 2. Consider the following operators defined on $J^{(\infty)}(\mathbb{R}, M)$:
$$(2.2)\qquad F_\lambda = \sum_{\alpha=1}^{n}\sum_{k=1}^{\infty} k(\lambda + k - 1)\,u^\alpha_{k-1}\frac{\partial}{\partial u^\alpha_k},\ \lambda \in \mathbb{C},\qquad E_\lambda = \sum_{\alpha=1}^{n}\sum_{k=0}^{\infty} (\lambda + 2k)\,u^\alpha_k\frac{\partial}{\partial u^\alpha_k},\qquad D = \sum_{\alpha=1}^{n}\sum_{k=0}^{\infty} u^\alpha_{k+1}\frac{\partial}{\partial u^\alpha_k}.$$
In the geometric context, $\lambda = 0$ will be the natural value when the operators are acting on the coordinates; by going to tensor products, other integer values of $\lambda$ will become relevant. It is very simple to check that these operators have the following commutation relations:
$$(2.3)\qquad [F_\lambda, D] = E_\lambda,\qquad [E_\lambda, F_\lambda] = -2F_\lambda,\qquad [E_\lambda, D] = 2D.$$
The infinite dimensional representation theory of $\mathfrak{sl}_2$ is described in [4], to which we refer at the appropriate places. The value $\lambda = 0$ is particularly interesting. Later on, in Theorem 3, we will see that if $\xi_1 = 0$ one has
$$(2.4)\qquad [F_0, \mathbf{v}^{(n)}] = [D, \mathbf{v}^{(n)}] = [E_0, \mathbf{v}^{(n)}] = 0,$$
so that these three operators will form an $\mathfrak{sl}_2$ representation in the space of differential invariants of parametrized curves.

Definition 3. We define the (abstract) space $V_\lambda = \langle v_0, v_1, \dots\rangle$ ($\langle\;\rangle$ represents the linear span) as an irreducible $\mathfrak{sl}_2$-module on which $\mathfrak{sl}_2$ acts as follows:
$$(2.5)\qquad F_\lambda v_k = k(\lambda + k - 1)v_{k-1},\qquad E_\lambda v_k = (\lambda + 2k)v_k,\qquad Dv_k = v_{k+1},$$
and where $v_0$ is a given element in the kernel of $F_\lambda$ with $E_\lambda v_0 = \lambda v_0$. The $\mathfrak{sl}_2$-module $V_\lambda$ is called a Lowest Weight Module.

Example 1. The space of polynomials (or tensors) in the $u^\alpha_k$ is an $\mathfrak{sl}_2$-module. We now give two constructions, the second based on the first, of Lowest Weight Modules, starting with the coordinate functions $u^i_j$. Let $n = 2$ and $v_0 = \begin{pmatrix} u^1_1 \\ u^2_1\end{pmatrix}$, with $u^i_1 \in \ker F_0$ for $i = 1, 2$. Then
$$\tilde{F}v_0 = \begin{pmatrix} F_0 u^1_1 \\ F_0 u^2_1\end{pmatrix} = 0,$$
where we denote by $\tilde{F}$ the induced operator on the space of vectors in $\mathbb{R}^2$ obtained when applying $F$ to each entry. Furthermore
$$\tilde{E}v_0 = \begin{pmatrix} E_0 u^1_1 \\ E_0 u^2_1\end{pmatrix} = \begin{pmatrix} 2u^1_1 \\ 2u^2_1\end{pmatrix} = 2v_0.$$
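The commutation relations (2.3) are easy to test symbolically. The sketch below is our own (one dependent variable, symbolic $\lambda$, jet truncated at order 8); the relations are operator identities, so checking them on a sample polynomial exercises every term.

```python
# Our own check of (2.3): the operators of Definition 2 with n = 1.
import sympy as sp

N = 8
u = list(sp.symbols('u0:9'))
lam = sp.symbols('lam')

def D(p):
    return sum(sp.diff(p, u[k]) * u[k + 1] for k in range(N))

def F(p):   # F_lambda
    return sum(k * (lam + k - 1) * u[k - 1] * sp.diff(p, u[k]) for k in range(1, N + 1))

def E(p):   # E_lambda
    return sum((lam + 2 * k) * u[k] * sp.diff(p, u[k]) for k in range(N + 1))

p = u[0] ** 2 * u[3] + u[1] * u[2] ** 2   # an arbitrary test polynomial
assert sp.expand(F(D(p)) - D(F(p)) - E(p)) == 0        # [F_lam, D] = E_lam
assert sp.expand(E(F(p)) - F(E(p)) + 2 * F(p)) == 0    # [E_lam, F_lam] = -2 F_lam
assert sp.expand(E(D(p)) - D(E(p)) - 2 * D(p)) == 0    # [E_lam, D] = 2 D
```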

So we can see that $v_0$ generates $V_2$, if we define
$$v_i = \begin{pmatrix} u^1_{i+1} \\ u^2_{i+1}\end{pmatrix}.$$
Let us denote this $\mathfrak{sl}_2$-module by $U_2$. Remark that the operator $\tilde{F}$ can be written as $F_2$ abstractly, that is, on $V_2$. Next we can consider $U_2 \otimes U_2$. We see that $v_0 \otimes v_0$ is the lowest weight vector of a Lowest Weight Module $V_4$, since $\hat{F}(v_0 \otimes v_0) = \tilde{F}v_0 \otimes v_0 + v_0 \otimes \tilde{F}v_0 = 0$ and $\hat{E}(v_0 \otimes v_0) = \tilde{E}v_0 \otimes v_0 + v_0 \otimes \tilde{E}v_0 = 4\,v_0 \otimes v_0$. However, this $V_4$ is not the whole of $U_2 \otimes U_2$, as we shall see later. One should realize that the $\mathfrak{sl}_2$-module $V_4$ on the one hand abstractly is spanned by $v_0, v_1, \dots$, but concretely is given by tensor products like $v_0 \otimes v_0$ and $v_1 \otimes v_0 + v_0 \otimes v_1$, with $v_i \in U_2$.

The following is a list of properties that hold for any representation of the Lie algebra $\mathfrak{sl}_2$. We will use these properties in the next section.

Lemma 1. The following relations hold true:
$$[E_\lambda, D^n] = 2nD^n,\qquad [E_\lambda, F_\lambda^n] = -2nF_\lambda^n;$$
$$[F_\lambda, D^n] = nD^{n-1}(E_\lambda + n - 1);\qquad [F_\lambda^n, D] = nF_\lambda^{n-1}(E_\lambda - n + 1).$$

We denote $F_0$ and $E_0$ by $F$ and $E$, respectively. If $Ep = \lambda p$, we denote $\lambda$ by $\omega_p$ and we call it the weight or eigenvalue of $p$.

Definition 4. Let $V$ be an $\mathfrak{sl}_2$-module. For $p \in V$ we define its depth $d(p)$ as the lowest $m \in \mathbb{N}$ such that $F^m p \neq 0$ but $F^{m+1}p = 0$.

Definition 5. Let $V$ be an $\mathfrak{sl}_2$-module such that every element in $V$ can be written as $\sum v_\iota$, where the $v_\iota$ are $E$-eigenvectors with eigenvalue $\omega_{v_\iota} \geq 0$ and the sum is finite. Then we say that $V$ is a nonnegative $\mathfrak{sl}_2$-module. Notice that the tensor product of two nonnegative $\mathfrak{sl}_2$-modules is again a nonnegative $\mathfrak{sl}_2$-module, but a nonnegative $\mathfrak{sl}_2$-module can be a tensor product of modules that are not necessarily nonnegative.

Theorem 1. Let $V$ be a nonnegative $\mathfrak{sl}_2$-module. Then $p \in V$ can be written uniquely as $p = v_0 + \sum_{j=1}^{d(p)} D^j v_j$, with $v_j \in \ker F$; that is, $p - v_0 \in \operatorname{im} D$. If, moreover, $\omega_{v_j} > 2$, then $v_j \notin \operatorname{im} D$.

Proof. First we remark that $d(p) < \infty$, since otherwise the eigenvalues would become negative. We first prove the last statement.
If v j im D kerf, then v j would be part of a V λ with λ < 0. Indeed, for v j V λ to be in ker F, one needs to have v j = D j v 0 and F v j = jd j 1 (E + j 1)v 0 = j(λ + j 1)D j 1 v 0 so that j = 0 or j = 1 λ. If j = 0, it cannot be in im D, so j = 1 λ 1 if v j im D. This implies that λ < 0. This is in contradiction with the fact that V is an sl 2 -module spanned by eigenvectors with nonnegative eigenvalues. Since p can be written as a finite sum of E-eigenvectors, we write p = k i=1 pi. Let q i = F d(pi) p i for i fixed. Its E-eigenvalues are ω q i = ω p i 2d(p i ). By the assumptions on V one has ω q i 0. We consider first the case ω q i > 0, then we assume ω q i = 0. We let v i = (ω q i 1)!F m p i m!(ω q i +m 1)!, where m = d(pi ). We now show that

Lemma 2. One has
$$F^m(p^i - D^m v^i) = 0,$$
and hence $d(p^i - D^m v^i) < d(p^i)$.

Once we have shown this, we can replace $p^i$ by $p^i - D^m v^i$ and start again.

Proof. Writing $\omega = \omega_{q^i}$, we compute, using Lemma 1 and $F^{m+1}p^i = 0$,
$$F^m(p^i - D^m v^i) = F^m\Big(1 - D^m\frac{(\omega - 1)!\,F^m}{m!\,(\omega + m - 1)!}\Big)p^i = F^{m-1}\Big(F - mD^{m-1}(E + m - 1)\frac{(\omega - 1)!\,F^m}{m!\,(\omega + m - 1)!}\Big)p^i$$
$$= F^{m-1}\Big(F - mD^{m-1}\frac{(\omega - 1)!\,(\omega + m - 1)\,F^m}{m!\,(\omega + m - 1)!}\Big)p^i = F^{m-1}\Big(1 - D^{m-1}\frac{(\omega - 1)!\,F^{m-1}}{(m-1)!\,(\omega + m - 2)!}\Big)Fp^i$$
$$= \dots = F\Big(1 - \frac{1}{\omega}DF\Big)F^{m-1}p^i = F^m p^i - \frac{1}{\omega}\big(DF^{m+1} + EF^m\big)p^i = 0,$$
and this proves the lemma.

We now consider the case $\omega_{q^i} = 0$. It follows that $DF^m p^i = 0$, that is to say, $F^m p^i$ is a constant. The simplest example is $\frac{1}{2}\frac{u_2}{u_1}$, which goes to $1$ under $F$. Clearly there is no way back, since we obtain zero when we apply $D$ to a constant. In this case, if the constant $F^m p^i$ happens to be the last element before zero, one can conclude that the element before that is (a multiple of) $\tau_1/\tau$, with $\tau \in \ker F$, which is $D\log(\tau) \in \operatorname{im} D$, and proceed as before by defining
$$v^i = \frac{\log(\tau)}{2\,m!\,(m-1)!}F^m p^i.$$
The $\mathfrak{sl}_2$-module is of the type $U(0, 0)$ as defined in [4, Chapter II, section 1.2]. Here $\log(\tau)$ is defined to be $D^{-1}\frac{\tau_1}{\tau}$; we can extend the differential module to include $D^{-1}$ in the standard way. See [?]. The following lemma justifies the correction factor.

Lemma 3. $F^m D^m \log(\tau) = 2\,m!\,(m-1)!$.

Proof. We show this by induction on $m$. For $m = 1$ the statement is easy to verify. Then, using Lemma 1 we see that
$$F^{m+1}D^{m+1}\log(\tau) = (m+1)F^m(E - m)D^m\log(\tau) = m(m+1)F^m D^m\log(\tau) = 2\,(m+1)!\,m!,$$
and this proves the statement.

We only need to prove uniqueness to conclude the proof of Theorem 1. But uniqueness is quite simple: if the depth is zero, it is obvious; one can then apply

induction, concentrating on the highest depth term. The difference would lie in the kernel of $D$, which is zero.

Corollary 1. If $r = F^m p^i \in \mathbb{R}$, then
$$F^m\Big(p^i - \frac{r}{2\,m!\,(m-1)!}D^m\log(\tau)\Big) = 0,$$
and so one has $d(p^i - D^m v^i) < d(p^i)$.

Comment 1. For the benefit of those readers who worry about the sudden appearance of logarithms in their computations, we add that if one reads the theorem as "Then $p \in V$ can be written uniquely as $p = v_0 + w$, with $v_0 \in \ker F$ and $w \in \operatorname{im} D$", then there are no logarithmic terms in either $v_0$ or $w$.

We now give two examples, one where Theorem 1 applies, and one where it does not. After that we formulate Lemma 4 as a corollary to Theorem 1, covering both examples.

Example 2. Let $p = \frac{u_4}{u_1}$. Then
$$Fp = 12\frac{u_3}{u_1},\qquad F^2p = 72\frac{u_2}{u_1},\qquad F^3p = 144.$$
So $d(p) = 3$ and $\omega_{F^3p} = 0$. Then $v_3 = 6\log(u_1)$ and
$$Dv_3 = 6\frac{u_2}{u_1},\qquad D^2v_3 = 6\frac{u_3}{u_1} - 6\frac{u_2^2}{u_1^2},\qquad D^3v_3 = 6\frac{u_4}{u_1} - 18\frac{u_2u_3}{u_1^2} + 12\frac{u_2^3}{u_1^3}.$$
Let
$$q = p - D^3v_3 = -5\frac{u_4}{u_1} + 18\frac{u_2u_3}{u_1^2} - 12\frac{u_2^3}{u_1^3}.$$
Then
$$Fq = -24\Big(\frac{u_3}{u_1} - \frac{3}{2}\frac{u_2^2}{u_1^2}\Big),\qquad F^2q = 0,$$
so $d(q) = 1$ and $\omega_{Fq} = 4$, $v_1 = -6\big(\frac{u_3}{u_1} - \frac{3}{2}\frac{u_2^2}{u_1^2}\big)$, and $Dv_1 = -6\frac{u_4}{u_1} + 24\frac{u_2u_3}{u_1^2} - 18\frac{u_2^3}{u_1^3}$. Then
$$v_0 = q - Dv_1 = \frac{u_4}{u_1} - 6\frac{u_2u_3}{u_1^2} + 6\frac{u_2^3}{u_1^3} \in \ker F.$$
So
$$p = \frac{u_4}{u_1} - 6\frac{u_2u_3}{u_1^2} + 6\frac{u_2^3}{u_1^3} + D\Big(-6\frac{u_3}{u_1} + 9\frac{u_2^2}{u_1^2}\Big) + 6D^3\log(u_1).$$

Example 3. Take $p = \frac{1}{u_1} \in \ker F$. Then
$$Dp = -\frac{u_2}{u_1^2},\qquad D^2p = -\frac{u_3}{u_1^2} + 2\frac{u_2^2}{u_1^3},\qquad D^3p = -\frac{u_4}{u_1^2} + 6\frac{u_2u_3}{u_1^3} - 6\frac{u_2^3}{u_1^4} \in \ker F.$$
Thus if one starts with $\frac{u_5}{u_1^2}$, the algorithm in the proof of Theorem 1 would give a division by zero.

Lemma 4. Consider $p = P(u_1, \dots, u_k)/u_1^m$, where $P$ is a polynomial. Clearly, $d(p) < \infty$. If the $u$-degree of the polynomial $P$ is $\geq m$, Theorem 1 applies, as in Example 2; if it is $< m$, it does not, as in Example 3.
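The computations in Lemma 3 and in Examples 2 and 3 can be reproduced mechanically. The following sympy sketch is our own; it verifies $F^mD^m\log(\tau) = 2\,m!(m-1)!$ for $\tau = u_1$ and small $m$, the full decomposition of $u_4/u_1$, and the fact that $D^3(1/u_1)$ lies in $\ker F$.

```python
# Our own verification of Lemma 3 and Examples 2 and 3.
import sympy as sp

N = 10
u = list(sp.symbols('u0:11'))

def D(p):
    return sum(sp.diff(p, u[k]) * u[k + 1] for k in range(N))

def F(p):
    return sum(k * (k - 1) * u[k - 1] * sp.diff(p, u[k]) for k in range(1, N + 1))

# Lemma 3 with tau = u_1 (an element of ker F), for m = 1, 2, 3:
for m in (1, 2, 3):
    expr = sp.log(u[1])
    for _ in range(m):
        expr = D(expr)             # D^m log(tau)
    for _ in range(m):
        expr = F(expr)             # F^m D^m log(tau)
    assert sp.simplify(expr - 2 * sp.factorial(m) * sp.factorial(m - 1)) == 0

# Example 2: p = u_4/u_1 has depth 3 and F^3 p = 144
p = u[4] / u[1]
assert sp.simplify(F(F(F(p))) - 144) == 0

v3 = 6 * sp.log(u[1])
v1 = -6 * u[3] / u[1] + 9 * u[2] ** 2 / u[1] ** 2
v0 = u[4] / u[1] - 6 * u[2] * u[3] / u[1] ** 2 + 6 * u[2] ** 3 / u[1] ** 3
assert sp.simplify(F(v0)) == 0                              # v0 in ker F
assert sp.simplify(p - v0 - D(v1) - D(D(D(v3)))) == 0       # p = v0 + D v1 + D^3 v3

# Example 3: 1/u_1 generates V_{-2}; D^3(1/u_1) lies in ker F and in im D
q = 1 / u[1]
assert sp.simplify(F(q)) == 0
assert sp.simplify(F(D(D(D(q))))) == 0
```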

Comment 2. If we require that the $u$-degree of the polynomial $P$ is $> m$, it follows that after applying $F$ enough times on $p$, we end up with a polynomial. Once we have a polynomial, applying $F$ will not change that fact, and so the final eigenvalue $\omega_{F^{d(p)}p}$ will be strictly positive. If the $u$-degree of the polynomial $P$ is $= m$, then we are in the situation of Lemma 3. For the algorithm to work one needs a condition that guarantees $\omega_{F^{d(p)}p} > 0$, a consequence of the condition we give here.

Definition 6. Let $V^1$ and $V^2$ be $\mathfrak{sl}_2$-modules. We define the $m$-transvectant $\tau^{(m)} : V^1 \otimes V^2 \to V^1 \otimes V^2$ as follows. Let $f \in V^1$ and $g \in V^2$ be eigenvectors of $E$. We denote $f_i = D^i f$, $g_j = D^j g$. Then
$$(2.6)\qquad \tau^{(m)}f \otimes g = (f, g)^{(m)} = \sum_{i+j=m} (-1)^i\binom{\omega_f + m - 1}{j}\binom{\omega_g + m - 1}{i}f_i \otimes g_j.$$
The definition is extended to the span of the eigenvectors by bilinearity. (See Example 4.)

Lemma 5. If $f$ and $g$ are both in $\ker F$, then $\tau^{(m)}f \otimes g \in \ker F$, where $F(f \otimes g) = Ff \otimes g + f \otimes Fg$, as usual. Furthermore, for general $f$ and $g$, $\omega_{(f,g)^{(m)}} = \omega_f + \omega_g + 2m$.

Proof. Using $f_i = D^i f$ and Lemma 1,
$$F\tau^{(m)}f \otimes g = \sum_{i+j=m} (-1)^i\binom{\omega_f + m - 1}{j}\binom{\omega_g + m - 1}{i}\Big(\big(D^iF + iD^{i-1}(E + i - 1)\big)f \otimes g_j + f_i \otimes \big(D^jF + jD^{j-1}(E + j - 1)\big)g\Big)$$
$$= \sum_{i+j=m} (-1)^i\binom{\omega_f + m - 1}{j}\binom{\omega_g + m - 1}{i}\Big(i(\omega_f + i - 1)\,f_{i-1} \otimes g_j + j(\omega_g + j - 1)\,f_i \otimes g_{j-1}\Big),$$
since $Ff = Fg = 0$. Because $i + j = m$, we have $\omega_f + i - 1 = (\omega_f + m - 1) - j$ and $\omega_g + j - 1 = (\omega_g + m - 1) - i$; the identity $\binom{a}{j}(a - j) = (j+1)\binom{a}{j+1}$, together with the index shift $i \to i + 1$ in the first sum and $j \to j + 1$ in the second, turns both sums into
$$\pm\sum_{i+j=m-1} (-1)^i(i+1)(j+1)\binom{\omega_f + m - 1}{j+1}\binom{\omega_g + m - 1}{i+1}f_i \otimes g_j,$$
with opposite signs, so they cancel and $F\tau^{(m)}f \otimes g = 0$.

The eigenvalue computation is similar but easier:
$$E\tau^{(m)}f \otimes g = \sum_{i+j=m} (-1)^i\binom{\omega_f + m - 1}{j}\binom{\omega_g + m - 1}{i}\Big(\big(D^iE + 2iD^i\big)f \otimes g_j + f_i \otimes \big(D^jE + 2jD^j\big)g\Big)$$
$$= \sum_{i+j=m} (-1)^i\binom{\omega_f + m - 1}{j}\binom{\omega_g + m - 1}{i}(\omega_f + 2i + \omega_g + 2j)\,f_i \otimes g_j = (\omega_f + \omega_g + 2m)\,\tau^{(m)}f \otimes g.$$
Notice that for the last computation $f$ and $g$ need not be in $\ker F$; see Theorem 2.

Comment 3. If $\omega_g + m = 1$, then $\tau^{(m)}f \otimes g = \binom{\omega_f + m - 1}{m}f_0 \otimes g_m \in \ker F$.

Theorem 2. Let $V_\lambda$ and $V_\mu$ be lowest weight $\mathfrak{sl}_2$-modules and assume $\lambda + \mu \notin \mathbb{Z}_{\leq 0}$. Then
$$V_\lambda \otimes V_\mu = \bigoplus_{i=0}^{\infty} V_{\lambda+\mu+2i},$$
where the projections are $\tau_i : V_\lambda \otimes V_\mu \to V_{\lambda+\mu+2i}$. If $f$ is the lowest weight vector of $V_\lambda$, and $g$ of $V_\mu$, then $(f, g)^{(i)}$ is the lowest weight vector of $V_{\lambda+\mu+2i}$.

Proof. See [4].

Corollary 2. Let $p \in \ker F$ be an $E$-eigenvector in $V_\lambda \otimes V_\mu$ with $E$-eigenvalue $\nu$ and with $\lambda + \mu \in \mathbb{N}$. Then $p$ is a $\frac{1}{2}(\nu - \lambda - \mu)$-transvectant of elements in $\ker F$.

Proof. It follows from Theorem 2 that $p$ is the generator of one of the $V_{\lambda+\mu+2i}$. Therefore $\nu = \lambda + \mu + 2i$. The result follows.

Corollary 3. Let $p(u_1, \dots, u_k)$ be a polynomial of degree $m$ in $\ker F$. Then $p$ can be written as a sum of repeated transvectants of $u_1$.

Proof. Consider $u_1$ as the generator of the space $V_2$. Then the polynomial is an element in the symmetrization of $V_2^{\otimes m}$. Applying Theorem 2 repeatedly, we can express $p$ as a sum of repeated transvectants of $u_1$.

Proposition 1. Consider $p = P(u_1, \dots, u_k)/u_1^m \in \ker F$, where $P$ is a polynomial. Clearly, $d(p) < \infty$. If the $u$-degree $\ell$ of the polynomial $P$ is $> m$, $p$ can be written as a polynomial in transvectants of $u_1$ and $\frac{1}{u_1}$ (the generator of $V_{-2}$).
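The transvectant formula (2.6) and Lemma 5 can be tested concretely once the tensor product is contracted to an ordinary product (since $F$, $E$ and $D$ are derivations, the contracted statement follows from the tensor one). A sympy sketch of our own, using $u_1$ as the weight-2 generator:

```python
# Our own check of (2.6) and Lemma 5 on contracted representatives.
import sympy as sp

N = 8
u = list(sp.symbols('u0:9'))

def D(p):
    return sum(sp.diff(p, u[k]) * u[k + 1] for k in range(N))

def F(p):
    return sum(k * (k - 1) * u[k - 1] * sp.diff(p, u[k]) for k in range(1, N + 1))

def E(p):
    return sum(2 * k * u[k] * sp.diff(p, u[k]) for k in range(N + 1))

def Dn(p, n):
    for _ in range(n):
        p = D(p)
    return p

def transvectant(f, g, wf, wg, m):
    # (f, g)^(m) as in (2.6), with f_i (x) g_j taken as ordinary products
    return sum((-1) ** i
               * sp.binomial(wf + m - 1, m - i)    # binom(w_f + m - 1, j), j = m - i
               * sp.binomial(wg + m - 1, i)
               * Dn(f, i) * Dn(g, m - i)
               for i in range(m + 1))

# u_1 has weight 2; its 2-transvectant with itself:
t2 = transvectant(u[1], u[1], 2, 2, 2)
assert sp.expand(t2 - (6 * u[1] * u[3] - 9 * u[2] ** 2)) == 0
assert sp.expand(F(t2)) == 0              # Lemma 5: the transvectant stays in ker F
assert sp.expand(E(t2) - 8 * t2) == 0     # weight w_f + w_g + 2m = 8
```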

Proof. We consider $p$ as an element of the symmetrization of
$$V_{-2}^{\otimes m} \otimes V_2^{\otimes \ell} = \bigoplus_i V_{2\ell - 2m + 2i}.$$
Since $\ell > m$, one can always apply Theorem 2 such that the sum of the eigenvalues is strictly positive.

Corollary 4. The boundary case $\ell = m$ can also be treated: one has
$$V_{-2}^{\otimes m} \otimes V_2^{\otimes m} = V_{-2} \otimes \bigoplus_{i=0}^{\infty} V_{2i+2} = \bigoplus_{i=0}^{\infty}\big(V_{-2} \otimes V_{2i+2}\big).$$
This implies that if the eigenvalue of $p$ is strictly positive, it has to be a repeated transvectant. The only difficult case is when the eigenvalue is zero, which will correspond to the case $p \in \mathbb{R}$.

Comment 4. When $\omega_f + \omega_g = 0$, one has $(f, g)^{(0)} = f \otimes g$ and
$$(f, g)^{(1)} = \omega_f(f_0 \otimes g_1 + f_1 \otimes g_0) = \omega_f D(f, g)^{(0)}.$$
This contradicts the pretty picture $V_{\omega_f} \otimes V_{-\omega_f} = \bigoplus_{i=0}^{\infty} V_{2i}$, since the term generating $V_2$ is already generated in $V_0$. Therefore one cannot have the direct sum decomposition generalizing the Clebsch-Gordan formula to the infinite dimensional case as given by Theorem 2. This problem arises in general whenever one has $V_\lambda \otimes V_\mu$ with $\lambda + \mu$ an integer $\leq 0$. Compare [4, Chapter 2, Exercise 6]. Let $\omega_f = 1$ and $\omega_g = -1$. The Casimir operator $C = E^2 - 2(DF + FD)$ has on the basis $f_0 \otimes g_1$, $f_1 \otimes g_0$ the matrix
$$\begin{bmatrix} 4 & -4 \\ 4 & -4 \end{bmatrix}.$$
This has Jordan normal form
$$\begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}.$$
In fact, $C(f_0 \otimes g_1 - f_1 \otimes g_0) = 8(f_0 \otimes g_1 + f_1 \otimes g_0)$ and $C(f_0 \otimes g_1 + f_1 \otimes g_0) = 0$. This means that the representation of $\mathfrak{sl}_2$ on $V_1 \otimes V_{-1}$ is not quasisimple.

3. A generating set of differential invariants created by transvection

In this section we will first describe how, given a finite dimensional manifold $M$ and a group action of $G$, the operators above always take differential invariants of curves to differential invariants of curves, as far as the action of the group does not affect the parameter. This result is a corollary of the following theorem and of standard differential invariance theory. Recall that the space of differential invariants for curves on a manifold $M$ of dimension $n$ is generated by a set of $n$ functionally independent differential invariants of minimal order.
We call such a set a minimal basis. It is simple to see that $[D, \mathbf{v}^{(n)}] = \xi_1 D$. So we assume $\xi_1$ to be zero in the sequel.
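The non-quasisimple behaviour noted in Comment 4 takes only a few lines of bookkeeping to verify. Below is our own sketch: elements of $V_1 \otimes V_{-1}$ are stored as dictionaries mapping $(i, j)$ to the coefficient of $f_i \otimes g_j$, the operators act through (2.5), and the Casimir is checked on the weight-2 subspace.

```python
# Our own check of Comment 4: the Casimir C = E^2 - 2(DF + FD) on the
# weight-2 subspace of V_1 (x) V_{-1}, spanned by f0(x)g1 and f1(x)g0.
WF, WG = 1, -1   # lowest weights of the two modules

def Dv(s):
    out = {}
    for (i, j), c in s.items():
        out[(i + 1, j)] = out.get((i + 1, j), 0) + c
        out[(i, j + 1)] = out.get((i, j + 1), 0) + c
    return out

def Fv(s):
    out = {}
    for (i, j), c in s.items():
        if i:
            out[(i - 1, j)] = out.get((i - 1, j), 0) + c * i * (WF + i - 1)
        if j:
            out[(i, j - 1)] = out.get((i, j - 1), 0) + c * j * (WG + j - 1)
    return out

def Ev(s):
    return {k: c * (WF + 2 * k[0] + WG + 2 * k[1]) for k, c in s.items()}

def add(s, t, a=1):
    out = dict(s)
    for k, c in t.items():
        out[k] = out.get(k, 0) + a * c
    return {k: c for k, c in out.items() if c != 0}

def C(s):
    return add(Ev(Ev(s)), add(Dv(Fv(s)), Fv(Dv(s))), -2)

sym = {(0, 1): 1, (1, 0): 1}     # f0 g1 + f1 g0
alt = {(0, 1): 1, (1, 0): -1}    # f0 g1 - f1 g0
assert C(sym) == {}                          # C(f0 g1 + f1 g0) = 0
assert C(alt) == {(0, 1): 8, (1, 0): 8}      # C(f0 g1 - f1 g0) = 8(f0 g1 + f1 g0)
```

So $C$ is nilpotent but nonzero on this two-dimensional space, which is exactly the Jordan block of Comment 4.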

Theorem 3. Assume $\mathbf{v}$ is a vector field on $M$ given by
$$(3.1)\qquad \mathbf{v} = \xi\frac{\partial}{\partial x} + \sum_{\alpha=1}^{n}\phi^\alpha\frac{\partial}{\partial u^\alpha}.$$
Then, if $\xi_1 = 0$, the commutator of $F_\lambda$ and $\mathbf{v}^{(n)}$ is given by
$$[F_\lambda, \mathbf{v}^{(n)}] = \lambda\sum_{\alpha=1}^{n}\sum_{k=1}^{\infty} k\,D^{k-1}\big(u \cdot \phi^\alpha_u - \phi^\alpha\big)\frac{\partial}{\partial u^\alpha_k},$$
where $u \cdot \phi^\alpha_u = \sum_\beta u^\beta\,\partial\phi^\alpha/\partial u^\beta$. In particular, $F = F_0$ and $\mathbf{v}^{(n)}$ commute. This singles out $F$ as the geometric choice to complement $D$.

Proof. The proof of this theorem is a simple calculation. Indeed, straightforward calculations give us
$$[F_\lambda, \mathbf{v}^{(n)}] = \sum_{\alpha=1}^{n}\sum_{k=0}^{\infty} F_\lambda\big(D^k(\phi^\alpha - \xi u^\alpha_1) + \xi u^\alpha_{k+1}\big)\frac{\partial}{\partial u^\alpha_k} - \sum_{\alpha=1}^{n}\sum_{k=0}^{\infty} (\lambda + k)(k+1)\big(D^k(\phi^\alpha - \xi u^\alpha_1) + \xi u^\alpha_{k+1}\big)\frac{\partial}{\partial u^\alpha_{k+1}},$$
where we use the fact that
$$\Big[F_\lambda, \frac{\partial}{\partial u^\alpha_s}\Big] = -(s+1)(\lambda + s)\frac{\partial}{\partial u^\alpha_{s+1}}.$$
Since $\xi_1 = 0$, the coefficients reduce to $D^k\phi^\alpha$, and by Lemma 1
$$F_\lambda(D^k\phi^\alpha) = kD^{k-1}(E_\lambda + k - 1)\phi^\alpha = kD^{k-1}\big(\lambda\,u \cdot \phi^\alpha_u + (k-1)\phi^\alpha\big).$$
From here one easily gets the expression in the theorem.

These two formulas result in the following corollary.

Corollary 5. Assume that the action of $G$ leaves the parameter $x$ invariant. Then the operators $F$, $D$ and $E$ take differential invariants to differential invariants.

Notice that the fact that $D$ and $\mathbf{v}^{(n)}$ commute if $x$ is left invariant by the action follows from invariant theory, since $D$ is the invariant differentiation. From now on we will assume the parameter $x$ is invariant. First of all, we have the following simple lemma.

Lemma 6. Let $u : I \to G/H$ be generic, and assume a minimal basis of differential invariants can be found consisting of rational functions of the $u^\alpha_k$. Then there exists a basis of differential invariants of $u$ formed by invariants which are eigenvectors of the operator $E$.

Proof. Notice that, if the action of $G$ is rational, a minimal basis of differential invariants can be found as rational functions of the $u_k$, using the algebraic formulation of the moving frame method found in [5]. Notice also that such a basis can always be decomposed in terms that are eigenvectors of $E$.

Assume $k$ is any differential invariant for $u$, and assume $k = \nu + \mu$ where $\nu$ and $\mu$ are eigenvectors of $E$, that is, $E(\nu) = \alpha\nu$ and $E(\mu) = \beta\mu$, $\alpha \neq \beta$. Since $E$ and $\mathbf{v}^{(r)}$ commute for any $r$, we conclude that $\mathbf{v}^{(r)}(\nu)$ and $\mathbf{v}^{(r)}(\mu)$ will also be eigenvectors with the same eigenvalues as $\nu$ and $\mu$. Hence, if $\mathbf{v}^{(r)}(\nu + \mu) = 0$, then $\mathbf{v}^{(r)}(\nu) = -\mathbf{v}^{(r)}(\mu)$, and so
$$E(\mathbf{v}^{(r)}(\nu)) = \alpha\,\mathbf{v}^{(r)}(\nu) = -E(\mathbf{v}^{(r)}(\mu)) = -\beta\,\mathbf{v}^{(r)}(\mu) = \beta\,\mathbf{v}^{(r)}(\nu).$$
Since $\alpha \neq \beta$, we have $\mathbf{v}^{(r)}(\nu) = \mathbf{v}^{(r)}(\mu) = 0$, so both $\nu$ and $\mu$ are differential invariants. This is also clearly the case if instead of two terms we have any number of them.

Let $\{k_1, \dots, k_n\}$ be a minimal basis of differential invariants. Splitting these into eigenvectors, we can then produce a system of generators which are eigenvectors, $\{\nu_1, \dots, \nu_m\}$. Since the dimension of the space of differential invariants is $n$ and, being minimal, the number of generators at each order $r$ is determined by the rank of the $r$th prolonged action, following a standard procedure we can extract a subset of $n$ elements from $\{\nu_1, \dots, \nu_m\}$ formed by independent and generating invariants. Notice that after splitting $k_1, \dots, k_n$ we will always have enough differential invariants at each order $r$ to cover the needed number of generators; it suffices to choose the terms with the highest order, for example. The lemma follows.

Using Theorem 1, Corollary 2 and Lemma 6 we get the following result.

Theorem 4. Let $M = G/H$ be a homogeneous space. Given a curve $u : I \to M$, assume that we can find a generating set of differential invariants forming a nonnegative $\mathfrak{sl}_2$-module, and assume that each member of this generating set can also be found to be an element of some $V_\lambda \otimes V_\mu$ with $\lambda + \mu \in \mathbb{N}$. Then there exists a generating set for differential invariants of curves that lies in the kernel of $F$. This set is generated by transvection of elements in the kernel of $F$.

Proof.
The proof of this theorem is the direct application of Theorem 1 and its consequences. If we start with a basis $\{k_i\}$ of differential invariants, we can choose a basis whose elements are also $E$-eigenvectors, following Lemma 6. Therefore, without loss of generality we can assume they are eigenvectors with nonnegative eigenvalues. Applying Theorem 1, we can then find a set $\{v^i_j\}$, all of them in the kernel of $F$, such that
$$k_i = \sum_{j=0}^{d(k_i)} D^j v^i_j.$$
Since the $k_i$ are all differential invariants, the algorithm shown in the proof of Theorem 1 to produce the $v^i_j$ ensures that they are also differential invariants. Indeed, they are linear combinations of elements obtained after applying $F$ and $D$ repeatedly to $k_i$, and these operators take differential invariants to differential invariants. Since the $\{k_i\}$ generate all differential invariants, the $\{v^i_j\}$ also do. Observe that since the $v^i_j$ are computed using the elements of $\mathfrak{sl}_2$, they are part of the nonnegative $\mathfrak{sl}_2$-module, and therefore have nonnegative eigenvalues. This implies that if they lie in a tensor product $V_\lambda \otimes V_\mu$, one must have $\lambda + \mu \geq 0$. Finally, also due to the process followed to generate the $v^i_j$, they are guaranteed to belong to tensor products of $V_\alpha$ factors. Applying Corollary 2 we obtain that the $v^i_j$ are generated by transvection of elements in the kernel of $F$.
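Returning to Theorem 3 and Corollary 5, the commutation with prolonged vector fields can be illustrated for the projective action on $\mathbb{RP}^1$, whose infinitesimal generators have $\xi = 0$ and $\phi = 1, u, u^2$. The sketch below is our own: it builds the prolonged vector field of (2.1) with $\xi = 0$ and checks both $[F, \mathbf{v}^{(n)}] = 0$ (the $\lambda = 0$ case of Theorem 3) and the invariance of the Schwarzian.

```python
# Our own illustration of Theorem 3 (lambda = 0) for the projective action.
import sympy as sp

N = 8
u = list(sp.symbols('u0:9'))

def D(p):
    return sum(sp.diff(p, u[k]) * u[k + 1] for k in range(N))

def F(p):
    return sum(k * (k - 1) * u[k - 1] * sp.diff(p, u[k]) for k in range(1, N + 1))

def prolong(phi, p):
    # v^(n) with xi = 0: sum_k D^k(phi) d/du_k, applied to p
    out, Dk = 0, sp.sympify(phi)
    for k in range(N + 1):
        out += Dk * sp.diff(p, u[k])
        if k < N:
            Dk = D(Dk)
    return out

# [F, v^(n)] = 0, tested on a sample function:
phi = u[0] ** 2
p = u[1] * u[2]
assert sp.simplify(F(prolong(phi, p)) - prolong(phi, F(p))) == 0

# The Schwarzian is annihilated by all three prolonged sl2-generators:
S = u[3] / u[1] - sp.Rational(3, 2) * (u[2] / u[1]) ** 2
for phi in (1, u[0], u[0] ** 2):
    assert sp.simplify(prolong(phi, S)) == 0
```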

Notice that, in general, one cannot guarantee that a minimal basis of differential invariants can be extracted from $\{v^i_k\}$. Indeed, if $p = \sum_{k=0}^{d(p)} D^k v_k$, the differential order of $v_k$ could be higher than that of $p$. For example, if $p = u_2^3$, the process in Theorem 1 results in $p = v_0 + Dv_1 + D^3v_3$, where
$$v_0 = \frac{3}{5}\big(u_2^3 - u_1u_2u_3\big) + \frac{1}{10}u_1^2u_4,\qquad v_1 = \frac{3}{35}\big(3u_1u_2^2 - 2u_1^2u_3\big),\qquad v_3 = \frac{1}{42}u_1^3.$$
One can find non-minimal sets of differential invariants that generate all other invariants, are functionally independent and have more elements than a minimal basis. Still, no linear combination of them will produce a minimal set, only functional combinations. This fact gives the impression that generating a basis using transvection is not possible, only generating large sets of invariants. But that is far from the real situation in specific cases, and in fact this situation does not happen in any of the known geometric cases, where lower order differential invariants are always in the kernel of $F$.

Example 4. Assume that $M = (O(2) \ltimes \mathbb{R}^2)/O(2)$ is the Euclidean plane. A system of generators for the differential invariants of a planar Euclidean curve is given by $|u_1|^2 = u_1 \cdot u_1$ and the curvature $k = p_{22} - p_{12}^2$, where $p_{ij} = \frac{u_i \cdot u_j}{u_1 \cdot u_1}$. It is trivial to see that both $|u_1|^2$ and $k$ are in the kernel of $F$.

Assume that $M = PSL(2)/H = \mathbb{RP}^1$. It is known that any differential invariant for curves in $\mathbb{RP}^1$ can be written as a function of the Schwarzian derivative of $u$,
$$S(u) = \frac{u_3}{u_1} - \frac{3}{2}\Big(\frac{u_2}{u_1}\Big)^2,$$
and its derivatives with respect to $x$. It is also trivial to check that $S(u)$ is in the kernel of $F$. We compute, following the procedure in Corollary 4,
$$\tau^{(2)}u_1 \otimes \frac{1}{u_1} = \sum_{i+j=2} (-1)^i\binom{3}{j}\binom{-1}{i}\,u_{i+1} \otimes D^j\frac{1}{u_1},$$
and this contracts by symmetrization to
$$3u_1\Big(2\frac{u_2^2}{u_1^3} - \frac{u_3}{u_1^2}\Big) - 3u_2\frac{u_2}{u_1^2} + \frac{u_3}{u_1} = 3\frac{u_2^2}{u_1^2} - 2\frac{u_3}{u_1} = -2S(u).$$
Alternatively,
we can write
$$\tau^{(2)}u_1 \otimes u_1 = 6u_3 \otimes u_1 - 9u_2 \otimes u_2,\qquad \tau^{(0)}u_1 \otimes u_1 = u_1 \otimes u_1,$$
and
$$S(u) = \frac{1}{6}\,\frac{C\tau^{(2)}u_1 \otimes u_1}{C\tau^{(0)}u_1 \otimes u_1},$$
where $C(p \otimes q) = pq$ is an example of a contraction operator (and thus $C\tau^{(0)}u_1 \otimes u_1 = u_1^2$). It follows that there is no unique way to see differential invariants as transvectants, and that one can try to make the choice optimal with respect to the type of general result one wants to prove.
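Both routes of Example 4 check out symbolically. The sketch below is our own; the contraction $C$ is applied implicitly by computing with ordinary products instead of tensors.

```python
# Our own verification of Example 4: the Schwarzian from transvectants of u_1.
import sympy as sp

N = 8
u = list(sp.symbols('u0:9'))

def D(p):
    return sum(sp.diff(p, u[k]) * u[k + 1] for k in range(N))

def Dn(p, n):
    for _ in range(n):
        p = D(p)
    return p

def tau(f, g, wf, wg, m):
    # contracted m-transvectant, as in (2.6) and Example 4
    return sum((-1) ** i * sp.binomial(wf + m - 1, m - i)
               * sp.binomial(wg + m - 1, i) * Dn(f, i) * Dn(g, m - i)
               for i in range(m + 1))

S = u[3] / u[1] - sp.Rational(3, 2) * (u[2] / u[1]) ** 2

# First route: u_1 in V_2 transvected with 1/u_1 in V_{-2} gives -2 S(u)
assert sp.simplify(tau(u[1], 1 / u[1], 2, -2, 2) + 2 * S) == 0

# Second route: S(u) = (1/6) C tau^(2) u_1 (x) u_1 / C tau^(0) u_1 (x) u_1
t2 = tau(u[1], u[1], 2, 2, 2)
t0 = tau(u[1], u[1], 2, 2, 0)
assert sp.simplify(t2 / (6 * t0) - S) == 0
```

Note that sympy's `binomial` handles the negative upper argument $\binom{-1}{i} = (-1)^i$ in the first route.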

Finally, the restriction of having a system of generating differential invariants that form a nonnegative module AND are elements of tensor products might seem very restrictive. In reality it is not. Traditional ways of generating differential invariants for curves in many geometric manifolds include the use of bilinear forms or group invariant operators (like the determinant in the case of SL(n)) and their application to combinations of derivatives of the curve. This method naturally produces systems of differential invariants which are elements of tensor products and form nonnegative modules. Even for non-affine geometries, for which the use of these group invariant forms is not so apparent, it is still possible to effectively apply the method, generating a basis of differential invariants directly by transvection of u_1 and basic invariants. In fact, it would be interesting to find a situation in which this method is not applicable. In the next sections we will show how one can effectively use this representation to generate differential invariants by recurrent transvection of u_1, starting with a lowest weight invariant which is guaranteed to be generated by transvection. We will first treat the simplest case, that of affine geometries. We will also show how the process can be minimally adapted to ensure the result is also invariant under reparametrization.

4. Differential invariants of curves in (G ⋉ R^n)/G

Let M_T be the R-module of elements of the form \sum_i P_i u_i, where the P_i ∈ R are polynomials in the u_i^α, i = 1, 2, ..., α = 1, ..., n, divided by elements in the kernel of F. Notice that the elements of M_T are vector functions. Define F_T on M_T by applying F as in (2.2) to each entry of this vector to produce another vector. Likewise we define E_T and D_T by applying E and D to each entry. We can also extend these operators to tensor products of M_T using the standard product rule.
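The product-rule (Leibniz) extension of a derivation to tensor products can be sketched concretely. The representation of a tensor as a list of tuples of factors below is an illustrative choice of ours, not notation from the paper:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)

def D_T(tensors):
    """Extend the total derivative D to sums of decomposable tensors.

    A tensor is a list of terms; each term is a tuple of factors, and
    D_T(f (x) g) = Df (x) g + f (x) Dg by the product rule.
    """
    out = []
    for term in tensors:
        for i in range(len(term)):
            out.append(tuple(sp.diff(f, x) if j == i else f
                             for j, f in enumerate(term)))
    return out

u1, u2 = sp.diff(u, x), sp.diff(u, x, 2)
# D_T(u_1 tensor u_1) = u_2 tensor u_1 + u_1 tensor u_2
assert D_T([(u1, u1)]) == [(u2, u1), (u1, u2)]
```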
We will denote the resulting operators by the same notation, F_T, E_T and D_T. Using Proposition 1 (and its obvious generalization to several dependent variables, where one divides by inner products or other combinations of the u_1^α) and its preceding comments, we know that under certain verifiable conditions the kernel of F_T is generated by transvectants of the form (2.6). Assume next that we have a bilinear map (or contraction) C : M_T ⊗ M_T → R and assume that

(4.1) F(C(f ⊗ g)) = C(F_T(f ⊗ g)).

Consider the special case of M = R^n = (G ⋉ R^n)/G with action given by g · u = Au + b for any g = (A, b) ∈ G ⋉ R^n. These Klein geometries include the Euclidean, symplectic, affine and equi-affine cases, for example. For this special type of geometric background we will see how the sl_2 representation produced above describes a method to find differential invariants by recursive transvection. The process can be arranged so that they are also invariant under reparametrization. We will use the first transvectant to generate all relative differential invariants of Jacobian weight (a classical moving frame) by recurrent transvection of one initial invariant with u_1. The following are very simple observations. The expression φ · f

represents the change of parameter x in f by a diffeomorphism φ. The proofs are straightforward.

Proposition 2. If φ · f = (φ_1^p f) ∘ φ^{-1}, then Ef = 2pf. Notice that the converse statement is not true: take f = u_2.

Proposition 3. Assume f and g are real functions defined on J^{(k)}(M) and assume they satisfy φ · f = (φ_1^p f) ∘ φ^{-1}, φ · g = (φ_1^r g) ∘ φ^{-1} for any change of x-variable φ : I ⊂ R → R. Then (f, g)^{(1)} satisfies φ · (f, g)^{(1)} = (φ_1^{r+p+1} (f, g)^{(1)}) ∘ φ^{-1}. Likewise, if E(f) = ω_f f, then ω_{(f,g)^{(1)}} = ω_f + ω_g + 2.

The basis for the generation process is the following simple observation.

Proposition 4. Let k be a differential invariant, and let R be a relative differential invariant with Jacobian weight. Then (k, R)^{(1)} is also a relative differential invariant with Jacobian weight.

Proof. The proof of this proposition is rather simple. If k is a differential invariant and R is a relative invariant of Jacobian weight, then v^{(n)}(k) = 0 and v^{(n)}(R) = J_v R, where by v^{(n)}(R) we mean the application of the prolonged vector field to each one of the entries of R. Since the action is the composition of a linear action and a translation, J_v will be a constant matrix for any v ∈ g (given in fact by the element of G generating v). Therefore, since D and the prolongation commute, v^{(n)}((k, R)^{(1)}) = J_v (k, R)^{(1)}.

Consider now a contraction map associated to the manifold. If the Lie group G is defined as the group preserving a bilinear form, G = {g ∈ GL(n) : g^T J g = J}, then we can define C to be C(v ⊗ w) = v^T J w ∈ R. The main example for which this does not exist is the equi-affine case G = SL(n); in that case we will need a slightly different approach, described separately. We start by finding a differential invariant for the action. The lowest order differential invariant, let us call it k, will be in the kernel of F, and hence it will be written in terms of transvectants.
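Because the action is affine, the prolonged action on jet coordinates is simply u_k ↦ A u_k for k ≥ 1 (the translation part b differentiates away), so C(u_i ⊗ u_j) = u_i^T J u_j is invariant whenever A^T J A = J. A minimal numerical sketch, with ad hoc example matrices for the Euclidean case (J = I) and the symplectic case (J antisymmetric):

```python
import numpy as np

rng = np.random.default_rng(0)
u1, u2 = rng.standard_normal(2), rng.standard_normal(2)   # sample jet coordinates

# Euclidean case: A^T A = I (rotation), so u_i . u_j is invariant
theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
assert np.isclose(u1 @ u1, (A @ u1) @ (A @ u1))
assert np.isclose(u1 @ u2, (A @ u1) @ (A @ u2))

# Symplectic case: B^T J B = J with J antisymmetric; for 2x2 this means det B = 1
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[1.3, 0.4], [2.0, 1.8/1.3]])
assert np.isclose(np.linalg.det(B), 1.0)
assert np.allclose(B.T @ J @ B, J)
assert abs(u1 @ J @ u1) < 1e-12     # the quadratic contraction vanishes identically
assert np.isclose(u1 @ J @ u2, (B @ u1) @ J @ (B @ u2))
```

The last two lines illustrate why the symplectic case needs a different starting invariant: u_1^T J u_1 is identically zero, while u_1^T J u_2 survives and is invariant.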
One can then analyze the lower order transvectants to search for a first invariant. The simplest one will be

k = [C (u_1, u_1)^{(0)}]^{1/2} = [u_1^T J u_1]^{1/2}.

This is indeed a nonzero differential invariant in most geometric cases, and E(k) = 2k. One case for which this vanishes is G = Sp(n). In that case we would choose

k = [C (u_1, u_1)^{(1)}]^{1/3} = [u_1^T J u_2]^{1/3}.

Both choices are in the kernel of F. In the case at hand there is a basic relative differential invariant of Jacobian weight, namely u_1, which trivially satisfies E_T(u_1) = 2u_1. We then define V_2 = (k^r, u_1)^{(1)}

and choose r so that E_T(V_2) = 2V_2, namely r = -1: indeed, by Proposition 3, ω_{V_2} = 2r + 2 + 2, which equals 2 precisely when r = -1. We repeat the process and define V_3 to be given by V_3 = (k^r, V_2)^{(1)}, where r = -1 is again chosen so that E_T(V_3) = 2V_3. And so on. Clearly this process generates independent relative invariants, assuming the curve is generic so that u_1, ..., u_n are all independent. If one wishes the system to be also invariant under reparametrizations, then we merely need to divide the result by k and apply Proposition 2.

Theorem 5. If G is defined by a bilinear form as above and u is nondegenerate, any differential invariant for curves in (G ⋉ R^n)/G can be written as a function of k, k_i = C (V_i, V_i)^{(0)}, i = 2, 3, ..., n, and their derivatives if G ≠ Sp(n), and of k, k_i = C (V_i, V_{i+1})^{(0)}, i = 2, ..., n + 1, and their derivatives if G = Sp(n). Furthermore, in either case k and the k_i lie in the kernel of the operator F, and k and k_i/k are additionally invariant under reparametrizations.

Proof. The nondegeneracy assumption is usually guaranteed by {u_1, ..., u_n} being independent, but one can also think of generic curves. We can use standard differential invariant theory to conclude that for the groups specified in the statement of the Theorem one has r independent differential invariants at each of the orders r = 1, 2, ..., n, except for the case G = Sp(n), for which we have r - 1 of order r for r = 2, ..., n - 1. That means we have one first (or second, for Sp(n)) order invariant, the derivative of this plus an extra invariant of order 2 (or 3 for Sp(n)), the derivatives of these two plus an extra invariant at the next order, and so on. Thus, to prove the Theorem it suffices to show that k and k_i, i = 2, ..., n, are all functionally independent. The other two properties (invariance under reparametrizations and membership in the kernel of F) are clearly true since they hold for each V_i. Indeed, if G ≠ Sp(n), it is simple to see that k and k_i, i = 2, ..., n, are independent.
Notice that, up to terms and factors depending on k and its derivatives, k_2 is equivalent to u_2^T J u_2. Up to terms and factors depending on k, k_2 and their derivatives, k_3 is equivalent to u_3^T J u_3 and, in general, up to the previous invariants and their derivatives, k_i is equivalent to u_i^T J u_i. Clearly, if the curve is nondegenerate, the u_i^T J u_i are all functionally independent and the proof in this case is concluded. In the case G = Sp(n) a similar argument is valid. Up to k and derivatives of k, k_2 is equivalent to u_2^T J u_3; up to k, k_2 and their derivatives, k_3 is equivalent to u_3^T J u_4. In general, up to derivatives of k and k_s, s < r, k_r is equivalent to u_r^T J u_{r+1}. These are all clearly independent in the generic case, and so are the original differential invariants.

In the case of G = SL(n, R), classical invariant theory tells us that there is one independent differential invariant of order n plus n - 1 independent ones of order n + 1 (other than the derivative of the order-n one). These can be obtained as above, with some changes. As before, the key is to make an initial choice of differential invariant and relative invariant, but the contraction, since it needs to be invariant under the group, is now different.

Theorem 6. If G = SL(n, R), then we can choose k = det(u_n, ..., u_1)^{1/r}, with r = \binom{n+1}{2}, and produce V_i as above, i = 1, ..., n + 1, with V_1 = u_1. Then a basis for the space of differential invariants of equi-affine curves is given by k and the differential invariants k_i = det(V_{n+1}, V_n, ..., V_{i+1}, V_{i-1}, ..., V_1), i = 1, 2, ..., n - 1.
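In the equi-affine case the invariant pairing is the determinant rather than a bilinear form: det(A v_1, ..., A v_n) = det(A) det(v_1, ..., v_n) = det(v_1, ..., v_n) for A ∈ SL(n). A quick numerical check for n = 2, with an ad hoc unimodular matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
u1, u2, u3 = (rng.standard_normal(2) for _ in range(3))
A = np.array([[2.0, 0.3], [1.0, 1.3/2.0]])   # det A = 1, so A is in SL(2)
assert np.isclose(np.linalg.det(A), 1.0)

det = lambda *cols: np.linalg.det(np.column_stack(cols))
# the SL(n)-invariant contraction C(v_1, ..., v_n) = det(v_1, ..., v_n)
assert np.isclose(det(u2, u1), det(A @ u2, A @ u1))
assert np.isclose(det(u3, u2), det(A @ u3, A @ u2))
```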

Note: notice that we are using here the contraction map C : V_0^{⊗ n} → R given by C(v_1 ⊗ ⋯ ⊗ v_n) = det(v_1, ..., v_n).

Proof. The proof is as simple as the previous ones. It is quite easy to see that, up to factors of k, k_{n-1} is equivalent to det(V_{n+1}, V_n, u_{n-2}, ..., u_1). Therefore, again up to functions of k and its derivatives, k_{n-1} will be determined by det(u_{n+1}, u_n, u_{n-2}, ..., u_1) and det(u_{n+1}, u_{n-1}, u_{n-2}, ..., u_1). This last one is the derivative of k^r, and so k_{n-1} is equivalent to ν_{n-1} = det(u_{n+1}, u_n, u_{n-2}, ..., u_1). Next, one looks at k_{n-2} and uses the same reasoning to conclude that, up to factors and terms depending on k, k_{n-1} and their derivatives, k_{n-2} is equivalent to ν_{n-2} = det(u_{n+1}, u_n, u_{n-1}, u_{n-3}, ..., u_1). A simple recursion shows that k_i will be equivalent to ν_i = det(u_{n+1}, ..., u_{i+1}, u_{i-1}, ..., u_1). It is well known that {k^r, ν_1, ..., ν_{n-1}} form a basis for the differential invariants of equi-affine curves, and hence the proof is concluded.

Comment 5. From these simple cases one can appreciate certain things. It is in fact differentiation that generates relative differential invariants (this was of course known). Still, the transvection process allows us to generate invariants in the kernel of F, the complement of the image of D. So, despite the fact that F has no geometrical meaning here, it is still very useful to project everything onto a complement of (the trivial) im D. In this sense, the basis generated by transvection gives natural representatives for first cohomology classes of the invariant variational bicomplex. See [6] for more information. We will next analyze some non-affine geometries for which, surprisingly, the same algorithm applies, including the role of differentiation and transvection in the generation of relative and absolute differential invariants.

5.
Differential invariants of curves in G/H, G ⊂ GL(n, R) semisimple

In this section we will analyze three manifolds: PSL(n, R)/H_1 ≅ RP^n, O(n + 1, 1)/H_2 ≅ M^n, the Möbius sphere or local model for flat conformal manifolds, and Sp(n)/H_3 ≅ L_n, the Lagrangian Grassmannian. Each H_i is the appropriate isotropy subgroup of a distinguished point. In all cases we will describe how to write a basis of differential invariants for curves in M as a combination of transvectants. Unlike the previous cases, the actions of the groups PSL(n, R), O(n + 1, 1) and Sp(n) on their respective manifolds are not linear, as this affects second order frames. Still, we can use a method analogous to the one used in the linear case, with proper choices of initial invariant and relative invariant. The main difference is that our relative invariants will no longer be vectors tangent to the manifold. Several other cases of G/H with G ⊂ GL(n, R) semisimple (G = O(2n, 2n) or O(p + 1, q + 1), for example) follow one of these three models, as explained in [10]. Their study would be identical to the ones presented here.

5.1. Differential invariants for parametrized projective curves. In this section we will show how one can find a generating system of differential invariants for curves in RP^n expressed in terms of transvection. Given a curve in RP^n, it is known that a complete set of generators of the differential invariants of the curve is given by the so-called Wilczynski invariants ([17]). These are given by the formulas

k_m = det(μ_{n+1}, μ_n, ..., μ_{m+1}, μ_{m-1}, ..., μ),

where μ = W^{-1/(n+1)} \binom{u}{1}, W = det(u_n, ..., u_1), and where m = 0, 1, ..., n - 1.

Proposition 5.

F(k_m) = (m - n)(m + 1) k_{m+1}, \qquad E(k_m) = (2(n + 1 - m) + n) k_m, \qquad F(W) = 0.

The proof of the proposition is straightforward using the formulas in Lemma 1. It shows how {k_m} and their derivatives generate V_λ for λ = 3n + 2 and v_0 = k_{n-1}. The functional space generated by V_λ would be the space of differential invariants of the curve. Wilczynski originally found the invariants k_m as coefficients of the unique scalar differential equation of order n + 1, with vanishing n-th order term, having each entry of μ as a solution. Still, one can look at them as contractions of iterated differentiations of μ. According to the previous Theorems, we can always find a combination of Wilczynski's invariants generated by transvection. Instead of using the algorithm, we will use a method similar to the one used for equi-affine geometry to generate this basis. Indeed, consider the vector μ above.

Proposition 6. The vector μ is a relative invariant for the projective action of PSL(n + 1, R). Indeed, if g ∈ PSL(n + 1, R) and we denote by g^{(r)} · u^{(r)} the prolonged projective action of that element on J^{(r)}(R, M), then it is not hard to check that g^{(r)} · μ = gμ, where g^{(r)} · μ indicates the application of the prolonged action to each entry of the vector μ and where gμ is the standard multiplication of matrices.
In fact, this property is an immediate consequence of μ being a column of a left-invariant moving frame for RP^n; see [7]. Once we have a relative invariant with constant weight, we can apply Proposition 4 to generate several independent ones. As before, we will need a differential invariant to start the process. We choose the Wilczynski invariant we know to be in the kernel of F already, namely the one with lowest weight, k = k_{n-1}. It is not hard to see that one cannot find, in general, a differential invariant that behaves as a one-form, that is, such that φ · k = (φ_1^{-1} k) ∘ φ^{-1}. Indeed, consider the case n = 2. In that case there is only one generating differential invariant for the action of PSL(2) on RP^1, namely the Schwarzian derivative of u : R → RP^1. It is well known that S(u ∘ φ) = (S(u) ∘ φ) φ_1^2 + S(φ), so that only fractional linear transformations will preserve S(u). If one could find an invariant in the kernel of F behaving like a one-form with respect to changes of parameter, then the process could be repeated as in the linear case to ensure that the resulting generators are also invariant under reparametrization, but that is not the case here.
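For n = 1 these formulas can be tested directly: with μ = W^{-1/2}(u, 1)^T and W = u_1, the single Wilczynski invariant k_0 = det(μ_2, μ_1) reduces to a multiple of the Schwarzian derivative. The sympy sketch below assumes this reading of the normalization, so the overall factor 1/2 depends on conventions:

```python
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')(x)
W = sp.diff(u, x)                                 # W = det(u_1) = u_1 for n = 1
mu = sp.Matrix([u, 1]) * W**sp.Rational(-1, 2)    # normalized lift of the curve

d = lambda v, k: v.diff(x, k)
# n = 1: the single Wilczynski invariant k_0 = det(mu_2, mu_1)
k0 = sp.Matrix.hstack(d(mu, 2), d(mu, 1)).det()

# Schwarzian derivative S(u) = u_3/u_1 - (3/2)(u_2/u_1)^2
S = sp.diff(u, x, 3)/W - sp.Rational(3, 2)*(sp.diff(u, x, 2)/W)**2
assert sp.simplify(k0 - S/2) == 0
```

The assert passes: under this normalization k_0 = S(u)/2, consistent with the claim that every projective differential invariant of curves in RP^1 is a function of the Schwarzian and its derivatives.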

Let us choose k = k_{n-1}. Indeed,

k_{n-1} = α (1/W, W)^{(2)} + (1/W, det(u_{n+1}, u_n, u_{n-2}, ..., u_1))^{(0)},

where α is an explicit constant depending only on n. Notice that k_{n-1} (and indeed all projective invariants in the kernel of F) can be written as transvections of equi-affine differential invariants, although not simply polynomial transvections. Define V_i = (k, V_{i-1})^{(1)} for i = 1, ..., n + 1, where V_0 = μ. The vectors V_i are all in the kernel of F, so we need to use the group-preserved contraction to generate invariants.

Theorem 7. Consider the expressions ν_i = det(V_{n+1}, V_n, ..., V_{i+1}, V_{i-1}, ..., V_0) for i = 0, 1, ..., n - 2. Then {k, ν_0, ..., ν_{n-2}} form a basis for the space of differential invariants of projective curves. Clearly, they all lie in the kernel of F.

Proof. The proof of this theorem is very simple. Indeed, V_0 = μ and hence, since V_1 is a combination of μ_1 = (V_0)_x and V_0 with coefficients depending on k and its derivatives, we can safely substitute V_1 by μ_1 in the definition of ν_i for the purposes of this proof, as long as V_0 is also present in the determinant. Likewise, for the purposes of this proof, V_2 can be substituted by a multiple of kμ_2, since the other terms appearing in V_2 are combinations of μ and μ_1 with coefficients depending on k and its derivatives. And so on. Therefore, ν_{n-2} is equivalent to det(V_{n+1}, V_n, V_{n-1}, μ_{n-3}, ..., μ) in the sense that it is equal to a multiple of it with (nonzero) coefficients depending on k. We can now write out this determinant to find it is a combination (again with coefficients depending on k and its derivatives) of k_{n+1} = 1, k_n = 0, k_{n-1} = k and k_{n-2}, the first two Wilczynski invariants. Very clearly the coefficient of k_{n-2} is a nonzero function of k, and hence we can conclude that {k, ν_{n-2}} generates the same subspace of differential invariants as {k_{n-1}, k_{n-2}}.
Furthermore, they are also functionally independent since {k_{n-1}, k_{n-2}} are. If we now look at ν_{n-3}, we see it is equivalent to analyzing the generating and independence properties of k, ν_{n-2} and det(V_{n+1}, V_n, V_{n-1}, V_{n-2}, μ_{n-4}, ..., μ). As before, this determinant can be written as a combination of k_{n+1} = 1, k_n = 0, k_{n-1}, k_{n-2} and k_{n-3}, with coefficients depending on k and its derivatives. The coefficient of k_{n-3} is given by a nonzero function of k. Hence, one can conclude that {k, ν_{n-2}, ν_{n-3}} generates the same subspace of differential invariants as {k_{n-1}, k_{n-2}, k_{n-3}}. They are also functionally independent since the Wilczynski invariants are. A simple iteration of the argument proves the theorem.


OPERATOR PENCIL PASSING THROUGH A GIVEN OPERATOR OPERATOR PENCIL PASSING THROUGH A GIVEN OPERATOR A. BIGGS AND H. M. KHUDAVERDIAN Abstract. Let be a linear differential operator acting on the space of densities of a given weight on a manifold M. One

More information

Representations. 1 Basic definitions

Representations. 1 Basic definitions Representations 1 Basic definitions If V is a k-vector space, we denote by Aut V the group of k-linear isomorphisms F : V V and by End V the k-vector space of k-linear maps F : V V. Thus, if V = k n, then

More information

Highest-weight Theory: Verma Modules

Highest-weight Theory: Verma Modules Highest-weight Theory: Verma Modules Math G4344, Spring 2012 We will now turn to the problem of classifying and constructing all finitedimensional representations of a complex semi-simple Lie algebra (or,

More information

Infinite-Dimensional Triangularization

Infinite-Dimensional Triangularization Infinite-Dimensional Triangularization Zachary Mesyan March 11, 2018 Abstract The goal of this paper is to generalize the theory of triangularizing matrices to linear transformations of an arbitrary vector

More information

Math 121 Homework 5: Notes on Selected Problems

Math 121 Homework 5: Notes on Selected Problems Math 121 Homework 5: Notes on Selected Problems 12.1.2. Let M be a module over the integral domain R. (a) Assume that M has rank n and that x 1,..., x n is any maximal set of linearly independent elements

More information

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u.

Theorem 5.3. Let E/F, E = F (u), be a simple field extension. Then u is algebraic if and only if E/F is finite. In this case, [E : F ] = deg f u. 5. Fields 5.1. Field extensions. Let F E be a subfield of the field E. We also describe this situation by saying that E is an extension field of F, and we write E/F to express this fact. If E/F is a field

More information

Representation theory and quantum mechanics tutorial Spin and the hydrogen atom

Representation theory and quantum mechanics tutorial Spin and the hydrogen atom Representation theory and quantum mechanics tutorial Spin and the hydrogen atom Justin Campbell August 3, 2017 1 Representations of SU 2 and SO 3 (R) 1.1 The following observation is long overdue. Proposition

More information

3.1. Derivations. Let A be a commutative k-algebra. Let M be a left A-module. A derivation of A in M is a linear map D : A M such that

3.1. Derivations. Let A be a commutative k-algebra. Let M be a left A-module. A derivation of A in M is a linear map D : A M such that ALGEBRAIC GROUPS 33 3. Lie algebras Now we introduce the Lie algebra of an algebraic group. First, we need to do some more algebraic geometry to understand the tangent space to an algebraic variety at

More information

CALCULUS ON MANIFOLDS. 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M =

CALCULUS ON MANIFOLDS. 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M = CALCULUS ON MANIFOLDS 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M = a M T am, called the tangent bundle, is itself a smooth manifold, dim T M = 2n. Example 1.

More information

Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35

Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35 Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35 1. Let R be a commutative ring with 1 0. (a) Prove that the nilradical of R is equal to the intersection of the prime

More information

ALMOST COMPLEX PROJECTIVE STRUCTURES AND THEIR MORPHISMS

ALMOST COMPLEX PROJECTIVE STRUCTURES AND THEIR MORPHISMS ARCHIVUM MATHEMATICUM BRNO Tomus 45 2009, 255 264 ALMOST COMPLEX PROJECTIVE STRUCTURES AND THEIR MORPHISMS Jaroslav Hrdina Abstract We discuss almost complex projective geometry and the relations to a

More information

Groups of Prime Power Order with Derived Subgroup of Prime Order

Groups of Prime Power Order with Derived Subgroup of Prime Order Journal of Algebra 219, 625 657 (1999) Article ID jabr.1998.7909, available online at http://www.idealibrary.com on Groups of Prime Power Order with Derived Subgroup of Prime Order Simon R. Blackburn*

More information

IDEAL CLASSES AND SL 2

IDEAL CLASSES AND SL 2 IDEAL CLASSES AND SL 2 KEITH CONRAD Introduction A standard group action in complex analysis is the action of GL 2 C on the Riemann sphere C { } by linear fractional transformations Möbius transformations:

More information

DETECTING REGULI AND PROJECTION THEORY

DETECTING REGULI AND PROJECTION THEORY DETECTING REGULI AND PROJECTION THEORY We have one more theorem about the incidence theory of lines in R 3. Theorem 0.1. If L is a set of L lines in R 3 with B lines in any plane or regulus, and if B L

More information

BACKGROUND IN SYMPLECTIC GEOMETRY

BACKGROUND IN SYMPLECTIC GEOMETRY BACKGROUND IN SYMPLECTIC GEOMETRY NILAY KUMAR Today I want to introduce some of the symplectic structure underlying classical mechanics. The key idea is actually quite old and in its various formulations

More information

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction

MAT2342 : Introduction to Applied Linear Algebra Mike Newman, fall Projections. introduction MAT4 : Introduction to Applied Linear Algebra Mike Newman fall 7 9. Projections introduction One reason to consider projections is to understand approximate solutions to linear systems. A common example

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

GALOIS THEORY BRIAN OSSERMAN

GALOIS THEORY BRIAN OSSERMAN GALOIS THEORY BRIAN OSSERMAN Galois theory relates the theory of field extensions to the theory of groups. It provides a powerful tool for studying field extensions, and consequently, solutions to polynomial

More information

Exercises on chapter 1

Exercises on chapter 1 Exercises on chapter 1 1. Let G be a group and H and K be subgroups. Let HK = {hk h H, k K}. (i) Prove that HK is a subgroup of G if and only if HK = KH. (ii) If either H or K is a normal subgroup of G

More information

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f))

fy (X(g)) Y (f)x(g) gy (X(f)) Y (g)x(f)) = fx(y (g)) + gx(y (f)) fy (X(g)) gy (X(f)) 1. Basic algebra of vector fields Let V be a finite dimensional vector space over R. Recall that V = {L : V R} is defined to be the set of all linear maps to R. V is isomorphic to V, but there is no canonical

More information

LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD. To Professor Wolfgang Schmidt on his 75th birthday

LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD. To Professor Wolfgang Schmidt on his 75th birthday LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD JAN-HENDRIK EVERTSE AND UMBERTO ZANNIER To Professor Wolfgang Schmidt on his 75th birthday 1. Introduction Let K be a field

More information

Vectors. January 13, 2013

Vectors. January 13, 2013 Vectors January 13, 2013 The simplest tensors are scalars, which are the measurable quantities of a theory, left invariant by symmetry transformations. By far the most common non-scalars are the vectors,

More information

9. Birational Maps and Blowing Up

9. Birational Maps and Blowing Up 72 Andreas Gathmann 9. Birational Maps and Blowing Up In the course of this class we have already seen many examples of varieties that are almost the same in the sense that they contain isomorphic dense

More information

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS

LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS LINEAR ALGEBRA BOOT CAMP WEEK 1: THE BASICS Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F has characteristic zero. The following are facts (in

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date May 9, 29 2 Contents 1 Motivation for the course 5 2 Euclidean n dimensional Space 7 2.1 Definition of n Dimensional Euclidean Space...........

More information

1 Introduction: connections and fiber bundles

1 Introduction: connections and fiber bundles [under construction] 1 Introduction: connections and fiber bundles Two main concepts of differential geometry are those of a covariant derivative and of a fiber bundle (in particular, a vector bundle).

More information

Lecture 10: A (Brief) Introduction to Group Theory (See Chapter 3.13 in Boas, 3rd Edition)

Lecture 10: A (Brief) Introduction to Group Theory (See Chapter 3.13 in Boas, 3rd Edition) Lecture 0: A (Brief) Introduction to Group heory (See Chapter 3.3 in Boas, 3rd Edition) Having gained some new experience with matrices, which provide us with representations of groups, and because symmetries

More information

Abstract Vector Spaces and Concrete Examples

Abstract Vector Spaces and Concrete Examples LECTURE 18 Abstract Vector Spaces and Concrete Examples Our discussion of linear algebra so far has been devoted to discussing the relations between systems of linear equations, matrices, and vectors.

More information

(VI.D) Generalized Eigenspaces

(VI.D) Generalized Eigenspaces (VI.D) Generalized Eigenspaces Let T : C n C n be a f ixed linear transformation. For this section and the next, all vector spaces are assumed to be over C ; in particular, we will often write V for C

More information

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra

Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra Course 311: Michaelmas Term 2005 Part III: Topics in Commutative Algebra D. R. Wilkins Contents 3 Topics in Commutative Algebra 2 3.1 Rings and Fields......................... 2 3.2 Ideals...............................

More information

COMPLEX ALGEBRAIC SURFACES CLASS 9

COMPLEX ALGEBRAIC SURFACES CLASS 9 COMPLEX ALGEBRAIC SURFACES CLASS 9 RAVI VAKIL CONTENTS 1. Construction of Castelnuovo s contraction map 1 2. Ruled surfaces 3 (At the end of last lecture I discussed the Weak Factorization Theorem, Resolution

More information

As always, the story begins with Riemann surfaces or just (real) surfaces. (As we have already noted, these are nearly the same thing).

As always, the story begins with Riemann surfaces or just (real) surfaces. (As we have already noted, these are nearly the same thing). An Interlude on Curvature and Hermitian Yang Mills As always, the story begins with Riemann surfaces or just (real) surfaces. (As we have already noted, these are nearly the same thing). Suppose we wanted

More information

Math 145. Codimension

Math 145. Codimension Math 145. Codimension 1. Main result and some interesting examples In class we have seen that the dimension theory of an affine variety (irreducible!) is linked to the structure of the function field in

More information

Chapter 2 Linear Transformations

Chapter 2 Linear Transformations Chapter 2 Linear Transformations Linear Transformations Loosely speaking, a linear transformation is a function from one vector space to another that preserves the vector space operations. Let us be more

More information

Chapter 4 & 5: Vector Spaces & Linear Transformations

Chapter 4 & 5: Vector Spaces & Linear Transformations Chapter 4 & 5: Vector Spaces & Linear Transformations Philip Gressman University of Pennsylvania Philip Gressman Math 240 002 2014C: Chapters 4 & 5 1 / 40 Objective The purpose of Chapter 4 is to think

More information

2: LINEAR TRANSFORMATIONS AND MATRICES

2: LINEAR TRANSFORMATIONS AND MATRICES 2: LINEAR TRANSFORMATIONS AND MATRICES STEVEN HEILMAN Contents 1. Review 1 2. Linear Transformations 1 3. Null spaces, range, coordinate bases 2 4. Linear Transformations and Bases 4 5. Matrix Representation,

More information

642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004

642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004 642:550, Summer 2004, Supplement 6 The Perron-Frobenius Theorem. Summer 2004 Introduction Square matrices whose entries are all nonnegative have special properties. This was mentioned briefly in Section

More information

Upper triangular matrices and Billiard Arrays

Upper triangular matrices and Billiard Arrays Linear Algebra and its Applications 493 (2016) 508 536 Contents lists available at ScienceDirect Linear Algebra and its Applications www.elsevier.com/locate/laa Upper triangular matrices and Billiard Arrays

More information

ESSENTIAL KILLING FIELDS OF PARABOLIC GEOMETRIES: PROJECTIVE AND CONFORMAL STRUCTURES. 1. Introduction

ESSENTIAL KILLING FIELDS OF PARABOLIC GEOMETRIES: PROJECTIVE AND CONFORMAL STRUCTURES. 1. Introduction ESSENTIAL KILLING FIELDS OF PARABOLIC GEOMETRIES: PROJECTIVE AND CONFORMAL STRUCTURES ANDREAS ČAP AND KARIN MELNICK Abstract. We use the general theory developed in our article [1] in the setting of parabolic

More information

Classification of semisimple Lie algebras

Classification of semisimple Lie algebras Chapter 6 Classification of semisimple Lie algebras When we studied sl 2 (C), we discovered that it is spanned by elements e, f and h fulfilling the relations: [e, h] = 2e, [ f, h] = 2 f and [e, f ] =

More information

GEOMETRIC CONSTRUCTIONS AND ALGEBRAIC FIELD EXTENSIONS

GEOMETRIC CONSTRUCTIONS AND ALGEBRAIC FIELD EXTENSIONS GEOMETRIC CONSTRUCTIONS AND ALGEBRAIC FIELD EXTENSIONS JENNY WANG Abstract. In this paper, we study field extensions obtained by polynomial rings and maximal ideals in order to determine whether solutions

More information

Chapter 2 The Group U(1) and its Representations

Chapter 2 The Group U(1) and its Representations Chapter 2 The Group U(1) and its Representations The simplest example of a Lie group is the group of rotations of the plane, with elements parametrized by a single number, the angle of rotation θ. It is

More information

SYMMETRIES OF SECTIONAL CURVATURE ON 3 MANIFOLDS. May 27, 1992

SYMMETRIES OF SECTIONAL CURVATURE ON 3 MANIFOLDS. May 27, 1992 SYMMETRIES OF SECTIONAL CURVATURE ON 3 MANIFOLDS Luis A. Cordero 1 Phillip E. Parker,3 Dept. Xeometría e Topoloxía Facultade de Matemáticas Universidade de Santiago 15706 Santiago de Compostela Spain cordero@zmat.usc.es

More information

0.2 Vector spaces. J.A.Beachy 1

0.2 Vector spaces. J.A.Beachy 1 J.A.Beachy 1 0.2 Vector spaces I m going to begin this section at a rather basic level, giving the definitions of a field and of a vector space in much that same detail as you would have met them in a

More information

Choice of Riemannian Metrics for Rigid Body Kinematics

Choice of Riemannian Metrics for Rigid Body Kinematics Choice of Riemannian Metrics for Rigid Body Kinematics Miloš Žefran1, Vijay Kumar 1 and Christopher Croke 2 1 General Robotics and Active Sensory Perception (GRASP) Laboratory 2 Department of Mathematics

More information

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) =

We simply compute: for v = x i e i, bilinearity of B implies that Q B (v) = B(v, v) is given by xi x j B(e i, e j ) = Math 395. Quadratic spaces over R 1. Algebraic preliminaries Let V be a vector space over a field F. Recall that a quadratic form on V is a map Q : V F such that Q(cv) = c 2 Q(v) for all v V and c F, and

More information

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA

ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA ALGEBRA QUALIFYING EXAM PROBLEMS LINEAR ALGEBRA Kent State University Department of Mathematical Sciences Compiled and Maintained by Donald L. White Version: August 29, 2017 CONTENTS LINEAR ALGEBRA AND

More information

Linear Algebra. Preliminary Lecture Notes

Linear Algebra. Preliminary Lecture Notes Linear Algebra Preliminary Lecture Notes Adolfo J. Rumbos c Draft date April 29, 23 2 Contents Motivation for the course 5 2 Euclidean n dimensional Space 7 2. Definition of n Dimensional Euclidean Space...........

More information

Introduction to Real Analysis Alternative Chapter 1

Introduction to Real Analysis Alternative Chapter 1 Christopher Heil Introduction to Real Analysis Alternative Chapter 1 A Primer on Norms and Banach Spaces Last Updated: March 10, 2018 c 2018 by Christopher Heil Chapter 1 A Primer on Norms and Banach Spaces

More information

MATH SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER Given vector spaces V and W, V W is the vector space given by

MATH SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER Given vector spaces V and W, V W is the vector space given by MATH 110 - SOLUTIONS TO PRACTICE MIDTERM LECTURE 1, SUMMER 2009 GSI: SANTIAGO CAÑEZ 1. Given vector spaces V and W, V W is the vector space given by V W = {(v, w) v V and w W }, with addition and scalar

More information

Introduction to Group Theory

Introduction to Group Theory Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)

More information

16.2. Definition. Let N be the set of all nilpotent elements in g. Define N

16.2. Definition. Let N be the set of all nilpotent elements in g. Define N 74 16. Lecture 16: Springer Representations 16.1. The flag manifold. Let G = SL n (C). It acts transitively on the set F of complete flags 0 F 1 F n 1 C n and the stabilizer of the standard flag is the

More information