A Geometric Approach to Linear Ordinary Differential Equations

A Geometric Approach to Linear Ordinary Differential Equations

R.C. Churchill

Hunter College and the Graduate Center of CUNY, and the University of Calgary

Address for Correspondence: Department of Mathematics, Hunter College, 695 Park Avenue, New York, NY 10021, USA

October 19, 2006

Prepared for the Kolchin Seminar on Differential Algebra, Graduate Center, City University of New York, 13 October 2006. Amended and Corrected, 19 October 2006.

Abstract

We offer a formulation of linear ordinary differential equations midway between what one encounters in a first undergraduate ODE course and what one encounters in a graduate Differential Geometry course (in the latter instance under the heading of "connections"). Analogies with elementary linear algebra are emphasized; no familiarity with Differential Geometry is assumed.

Contents

Introduction
1. Preliminaries on Derivations
2. Differential Modules
3. n-th Order Linear Differential Equations
4. Dimension Considerations
5. Fundamental Matrix Solutions
6. Dual Structures and Adjoint Equations
7. Cyclic Vectors
8. Extensions of Differential Structures
9. The Differential Galois Group
Bibliography

Introduction

Let V be a finite-dimensional vector space over a field K and let T : V → V be a K-linear operator. In a first course in linear algebra one learns that T can be studied via matrices as follows. Select an ordered basis e = (e_1, e_2, ..., e_n) of V and let B = (b_{ij}) be the e-matrix of T, i.e., the n × n matrix with entries b_{ij} ∈ K defined by

(i)   T e_j = Σ_{i=1}^n b_{ij} e_i,   j = 1, 2, ..., n.

Let col_n(K) ≃ K^n denote the K-space of n × 1 column vectors of elements of K, i.e., n × 1 matrices, and let β_e : V → col_n(K) be the isomorphism defined by

(ii)  v = Σ_j k_j e_j ∈ V  ↦  v_e := (k_1, k_2, ..., k_n)^τ ∈ col_n(K).

Let T_B : col_n(K) → col_n(K) denote left multiplication by B, i.e., the function x ∈ col_n(K) ↦ Bx ∈ col_n(K). Then T_B is K-linear and one has a commutative diagram

(iii)          T
          V ──────→ V
          │         │
       β_e│         │β_e
          ↓         ↓
     col_n(K) ────→ col_n(K).
               T_B

The commutativity of the diagram suggests that anything one wants to know about T should somehow be reflected in the K-linear mapping T_B or, better yet, in the matrix B alone. The hope is that information desired about T is more easily extracted from B.

Example: An ordered pair (λ, v) ∈ K × (V \ {0}) is an eigenpair for T if

(iv)  T v = λv;

one calls λ an eigenvalue of T, v an eigenvector, and the two entities are said to be associated.

We interpret (iv) geometrically: the effect of T on v is to scale this vector by a factor of λ. The idea is easy enough to comprehend, but suppose one actually needs to compute such pairs? In that case one switches from T to the matrix B: one learns that the eigenvalues of T are the roots of the characteristic polynomial char(B) := det(XI − B) of B, and once these have been determined one can begin searching for associated eigenvectors.

Of course the matrix B introduced above is not the unique matrix representation of T: a different choice of basis results in a different matrix, say A. One learns that A and B must be similar, i.e., that A = PBP^{-1} for some non-singular n × n matrix P, and that similar matrices have the same characteristic polynomial. This shows that the characteristic polynomial is something intrinsically associated with T, not simply with matrix representations of T. In particular, one can unambiguously define the characteristic polynomial char(T) of T by char(T) := char(B). Since the determinant (up to sign) and the negative of the trace of B appear as coefficients of this polynomial, these entities are also directly associated with T, not simply with matrix representations thereof. One can therefore define det(T) := det(B) and tr(T) := tr(B), again without ambiguity.

To summarize: there are entities intrinsically associated with T which can easily be computed from matrix representations of this operator.

In these notes we will view linear ordinary differential equations in an analogous manner. Specifically, we will explain how to think of a first-order system

(v)   x_1' + b_{11}x_1 + b_{12}x_2 + ··· + b_{1n}x_n = 0
      x_2' + b_{21}x_1 + b_{22}x_2 + ··· + b_{2n}x_n = 0
        ⋮
      x_n' + b_{n1}x_1 + b_{n2}x_2 + ··· + b_{nn}x_n = 0

as a basis representation of a geometric entity which we call a differential module. For our purposes the fundamental object intrinsically associated with such an entity is the differential Galois group, a group traditionally associated with basis representations as in (v).
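The similarity invariance just described is easy to test concretely. The following sketch (illustrative only; the matrices are arbitrary choices, not taken from the text) verifies char(A) = char(B) for A = PBP^{-1} using exact rational arithmetic, computing char in the 2 × 2 case as x² − tr(X)x + det(X).

```python
from fractions import Fraction as F

def mat_mul(X, Y):
    """Multiply two 2x2 matrices."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    """Inverse of a 2x2 matrix via the adjugate formula."""
    det = P[0][0] * P[1][1] - P[0][1] * P[1][0]
    return [[ P[1][1] / det, -P[0][1] / det],
            [-P[1][0] / det,  P[0][0] / det]]

def char_poly(X):
    """Coefficients (c0, c1) of char(X) = x^2 + c1*x + c0:
    c1 = -tr(X), c0 = det(X)."""
    tr = X[0][0] + X[1][1]
    det = X[0][0] * X[1][1] - X[0][1] * X[1][0]
    return (det, -tr)

B = [[F(1), F(2)], [F(3), F(4)]]
P = [[F(2), F(1)], [F(1), F(1)]]      # non-singular: det P = 1
A = mat_mul(mat_mul(P, B), inv2(P))   # A = P B P^{-1}

assert char_poly(A) == char_poly(B)   # similar matrices, same char poly
```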
We need some preliminaries on derivations.

1. Preliminaries on Derivations

In this section R is a (not necessarily commutative) ring with multiplicative identity 1, with 1 = 0 allowed (in which case R = 0). An additive group endomorphism δ : r ∈ R ↦ r' ∈ R is a derivation if the product or Leibniz rule

(1.1)  (rs)' = r's + rs'

holds for all r, s ∈ R. One also writes r' as r^{(1)} and defines r^{(n)} := (r^{(n-1)})' for n ≥ 2. The notation r^{(0)} := r proves convenient.

The usual differentiation operator d/dz on the polynomial ring C[z] and on the quotient field C(z) provides the basic examples of derivations. The second can be viewed as a derivation on the field M(P^1) of meromorphic functions on the Riemann sphere by means of the usual identification C(z) ≃ M(P^1). The same derivation on M(P^1) is described in terms of the usual coordinate t = 1/z at ∞ by −t² d/dt.

Another example of a derivation is provided by the zero mapping r ∈ R ↦ 0 ∈ R; this is the trivial derivation. For an example involving a non-commutative ring choose an integer n > 1, let R be the collection of n × n matrices with entries in a commutative ring A with a derivation a ↦ a', and for r = (a_{ij}) ∈ R define r' := (a_{ij}').

When r ↦ r' is a derivation on R one sees from (1.1) that 1' = (1 · 1)' = 1' · 1 + 1 · 1' = 1' + 1', and as a result that

(1.2)  1' = 0.

When r ∈ R is a unit it then follows from 1 = rr^{-1} and (1.1) that 0 = (rr^{-1})' = r(r^{-1})' + r'r^{-1}, whence

(1.3)  (r^{-1})' = −r^{-1} r' r^{-1}.

This formula is particularly useful in the matrix example given above. When R is commutative it assumes the more familiar form

(1.4)  (r^{-1})' = −r'/r².
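The basic example d/dz on C[z] can be modelled directly: a polynomial is a coefficient list and the derivation acts formally. The sketch below (a hypothetical illustration, not part of the text) checks the Leibniz rule (1.1) for two sample polynomials with rational coefficients.

```python
from fractions import Fraction as F

def d(p):
    """Formal derivative of a polynomial given as a coefficient list
    [a0, a1, a2, ...] representing a0 + a1*z + a2*z^2 + ..."""
    return [F(i) * p[i] for i in range(1, len(p))] or [F(0)]

def mul(p, q):
    """Product of two polynomials in coefficient-list form."""
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def add(p, q):
    """Sum of two polynomials (pad the shorter with zeros)."""
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p))
    q = q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

p = [F(1), F(0), F(3)]   # 1 + 3z^2
q = [F(2), F(5)]         # 2 + 5z
lhs = trim(d(mul(p, q)))                     # (pq)'
rhs = trim(add(mul(d(p), q), mul(p, d(q))))  # p'q + pq'
assert lhs == rhs
```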

The ring assumption suggests the generality of the concept of a derivation, but our main interest will be in derivations on fields. In this regard we note that any derivation on an integral domain extends uniquely, via the quotient rule, to the quotient field.

Henceforth K denotes a differential field (of characteristic 0), i.e., a field K equipped with a non-trivial derivation k ↦ k'. By a constant we mean an element k ∈ K satisfying k' = 0; e.g., we see from (1.2) that 1 ∈ K has this property. Indeed, the collection K_C ⊂ K of constants is easily seen to be a subfield containing Q; this is the field of constants (of K = (K, δ)). When K = C(z) with derivation d/dz we have K_C = C. The constants of the differential field M(P^1) defined above are the constant functions f : P^1 → C.

The determinant

(1.5)  W := W(k_1, ..., k_n) := det ( k_1          k_2          ···  k_n          )
                                    ( k_1'         k_2'         ···  k_n'         )
                                    ( k_1^{(2)}    k_2^{(2)}    ···  k_n^{(2)}    )
                                    ( ⋮                               ⋮           )
                                    ( k_1^{(n-1)}  k_2^{(n-1)}  ···  k_n^{(n-1)}  )

is the Wronskian of the elements k_1, ..., k_n ∈ K. This entity is useful for determining linear (in)dependence over K_C.

Proposition 1.6: Elements k_1, ..., k_n of a differential field K are linearly dependent over the field of constants K_C if and only if their Wronskian is 0.

Proof: For any c_1, ..., c_n ∈ K_C and any 0 ≤ m ≤ n−1 we have (Σ_j c_j k_j)^{(m)} = Σ_j c_j k_j^{(m)}. In particular, when Σ_j c_j k_j = 0 the same equality holds when k_j is replaced by the j-th column of the Wronskian and 0 is replaced by a column of zeros. The forward assertion follows.

The vanishing of the Wronskian implies a dependence relation (over K) among columns, and as a result there must be elements c_1, ..., c_n ∈ K, not all 0, such that

(i)   Σ_{j=1}^n c_j k_j^{(m)} = 0   for m = 0, ..., n−1.

What requires proof is that the c_j may be chosen in K_C, and this we establish by induction on n. As the case n = 1 is trivial we assume n > 1 and that the result holds for any subset of K with at most n−1 elements.
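Proposition 1.6 can be illustrated computationally for K = C(z) and n = 3; the following sketch (sample polynomials chosen purely for illustration) evaluates the Wronskian (1.5) at several points, once for a K_C-dependent triple and once for an independent one.

```python
from fractions import Fraction as F

def d(p):
    """Formal derivative of a coefficient list [a0, a1, ...]."""
    return [F(i) * p[i] for i in range(1, len(p))] or [F(0)]

def ev(p, z):
    return sum(a * z ** i for i, a in enumerate(p))

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def wronskian_at(ks, z):
    """W(k1, k2, k3) of (1.5) evaluated at z: rows are the 0th, 1st and
    2nd derivatives of the k_j."""
    rows, cur = [], ks
    for _ in range(3):
        rows.append([ev(p, z) for p in cur])
        cur = [d(p) for p in cur]
    return det3(rows)

# z, z^2 and 2z^2 - 3z are dependent over the constants: W vanishes identically
dep = [[F(0), F(1)], [F(0), F(0), F(1)], [F(0), F(-3), F(2)]]
assert all(wronskian_at(dep, F(z)) == 0 for z in range(1, 6))

# 1, z, z^2 are independent over C: here W = 2 everywhere
indep = [[F(1)], [F(0), F(1)], [F(0), F(0), F(1)]]
assert all(wronskian_at(indep, F(z)) == 2 for z in range(1, 6))
```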

If there is also a dependence relation (over K) among the columns of the Wronskian of k_2, ..., k_n, e.g., if c_1 = 0, then by the induction hypothesis the elements k_2, ..., k_n ∈ K must be linearly dependent over K_C. But the same then holds for k_1, ..., k_n, which is precisely what we want to prove. We therefore assume (w.l.o.g.) that c_1 = 1 and that the columns of the Wronskian of k_2, ..., k_n are linearly independent over K. From (i) we then have

      0 = (Σ_{j=1}^n c_j k_j^{(m)})' = Σ_{j=1}^n c_j k_j^{(m+1)} + Σ_{j=2}^n c_j' k_j^{(m)}
        = 0 + Σ_{j=2}^n c_j' k_j^{(m)} = Σ_{j=2}^n c_j' k_j^{(m)}

for m = 0, ..., n−2, thereby forcing c_2' = ··· = c_n' = 0. But this means c_j ∈ K_C for j = 1, ..., n, and the proof is complete.

2. Differential Modules

Throughout this section K denotes a differential field with derivation k ↦ k' and V is a K-space (i.e., a vector space over K). The collection of n × n matrices with entries in K is denoted gl(n, K); the group of invertible matrices in gl(n, K) is denoted GL(n, K).

A differential structure on V is an additive group homomorphism D : V → V satisfying

(2.1)  D(kv) = k'v + kDv,   k ∈ K, v ∈ V,

where Dv abbreviates D(v). The Leibniz rule terminology is also used with (2.1). Vectors v ∈ V satisfying Dv = 0 are said to be horizontal¹. The zero vector 0 ∈ V always has this property; other such vectors need not exist.

When D : V → V is a differential structure the pair (V, D) is called a differential K-module, or simply a differential module when K is clear from context. When dim_K(V) = n < ∞ the integer n is the dimension of the differential module.

For an example of a differential structure/module take V := col_n(K) and define D : col_n(K) → col_n(K) by

(2.2)  D (k_1, k_2, ..., k_n)^τ := (k_1', k_2', ..., k_n')^τ.

Further examples will be evident from Proposition 2.12.

Since K_C is a subfield of K we can regard V as a vector space over K_C by restricting scalar multiplication to K_C × V.

Proposition 2.3:

(a) Any differential structure D : V → V is K_C-linear.

(b) The collection of horizontal vectors of a differential structure D : V → V coincides with the kernel ker D of D when D is considered as a K_C-linear mapping.

¹ The terminology is borrowed from Differential Geometry.

(c) The horizontal vectors of a differential structure D : V → V constitute a K_C-subspace of (the K_C-space) V.

Proof:
(a) Immediate from (2.1).
(b) Obvious from the definition of horizontal.
(c) Immediate from (b).

To obtain a basis description of a differential structure D : V → V let e = (e_j)_{j=1}^n ∈ V^n be a(n ordered) basis of V and let B = (b_{ij}) ∈ gl(n, K) be defined by

(2.4)  D e_j := Σ_{i=1}^n b_{ij} e_i,   j = 1, ..., n.

(Example: For D as in (2.2) and² e_j = (0, ..., 0, 1, 0, ..., 0)^τ [1 in slot j] for j = 1, ..., n we have B = (0) [the zero matrix].) We refer to B as the defining (e-)matrix of D, or as the defining matrix of D relative to the basis e.

Note that, for any v = Σ_j v_j e_j ∈ V, additivity and the Leibniz rule (2.1) give

(2.5)  Dv = Σ_{i=1}^n ( v_i' + Σ_{j=1}^n b_{ij} v_j ) e_i.

This is better expressed in the matrix form

(2.6)  (Dv)_e = v_e' + B v_e,

wherein v_e is as in (ii) of the Introduction, i.e., the image of v under the isomorphism β_e : v = Σ_j v_j e_j ∈ V ↦ v_e := (v_1, v_2, ..., v_n)^τ ∈ col_n(K), and v_e' := (v_1', v_2', ..., v_n')^τ.

² The superscript τ ("tau") denotes transposition.

Where all this is leading should hardly be a surprise: the mapping D_B : x ∈ col_n(K) ↦ x' + Bx ∈ col_n(K) defines a differential structure on the K-space col_n(K), and one has a commutative diagram

(2.7)          D
          V ──────→ V
          │         │
       β_e│         │β_e
          ↓         ↓
     col_n(K) ────→ col_n(K).
               D_B

An immediate connection with linear ordinary differential equations is seen from (2.6): a vector v ∈ V is horizontal if and only if v_e ∈ col_n(K) is a solution of the first-order linear system

(2.8)  x' + Bx = 0.

This is the defining (e-)equation of D.

Linear systems of ordinary differential equations of the form

(2.9)  x' + Bx = 0

are called homogeneous. One can also ask for solutions of inhomogeneous systems, i.e., systems of the form

(2.10) x' + Bx = b,

wherein 0 ≠ b ∈ col_n(K) is given. For b = w_e this is equivalent to the search for a vector v ∈ V satisfying

(2.11) Dv = w.

Equation (2.9) is the homogeneous equation corresponding to (2.10).

Proposition 2.12: When dim_K V < ∞ and e is a basis the correspondence between differential structures D : V → V and n × n matrices B defined by (2.4) is bijective; the inverse assigns to a matrix B ∈ gl(n, K) the differential structure D : V → V defined by (2.6).

Proof: The proof is by routine verification.

Proposition 2.13:

(a) The solutions of (2.9) within col_n(K) form a vector space over K_C.

(b) When dim_K V = n < ∞ and e is a basis of V the K-linear isomorphism v ∈ V ↦ v_e ∈ col_n(K) restricts to a K_C-linear isomorphism between the K_C-subspace of V consisting of horizontal vectors and the K_C-subspace of col_n(K) consisting of solutions of (2.9).

Proof:
(a) When y_1, y_2 ∈ col_n(K) are solutions and c_1, c_2 ∈ K_C we have

      (c_1 y_1 + c_2 y_2)' = (c_1 y_1)' + (c_2 y_2)'
        = c_1' y_1 + c_1 y_1' + c_2' y_2 + c_2 y_2'
        = 0 · y_1 + c_1(−B y_1) + 0 · y_2 + c_2(−B y_2)
        = −B c_1 y_1 − B c_2 y_2
        = −B(c_1 y_1 + c_2 y_2).

(b) That the mapping restricts to a bijection between horizontal vectors and solutions was already noted immediately before (2.9), and since the correspondence v ↦ v_e is K-linear and K_C is a subfield of K any restriction to a K_C-subspace must be K_C-linear.

Suppose ê = (ê_j)_{j=1}^n ∈ V^n is a second basis and P = (p_{ij}) is the transition matrix, i.e., ê_j = Σ_{i=1}^n p_{ij} e_i. Then the defining e- and ê-matrices B and A of D are easily seen to be related by

(2.14) A := PBP^{-1} − P'P^{-1},   where P' := (p_{ij}').

The transition from B to A is viewed classically as a change of variables: substitute w = Px in (2.9); then note from w' = P'x + Px' = P'(P^{-1}w) + P(−Bx) = P'P^{-1}w − PBP^{-1}w that

      w' + (PBP^{-1} − P'P^{-1})w = 0.

The modern viewpoint is to regard (P, B) ↦ PBP^{-1} − P'P^{-1} as defining a left action of GL(n, K) on gl(n, K); this is the action by gauge transformations.
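The claim that (P, B) ↦ PBP^{-1} − P'P^{-1} is a left action can be checked on examples: acting by P and then by Q must agree with acting by QP. The sketch below (all matrices are illustrative choices; the unimodular P and Q are picked so that their inverses remain polynomial) verifies this with exact arithmetic over Q[t].

```python
from fractions import Fraction as F

# polynomials over Q as coefficient lists [a0, a1, ...]; matrices are 2x2 nested lists
def padd(p, q):
    n = max(len(p), len(q))
    p = p + [F(0)] * (n - len(p)); q = q + [F(0)] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def pmul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def pd(p):
    """Formal derivative."""
    return [F(i) * p[i] for i in range(1, len(p))] or [F(0)]

def trim(p):
    while len(p) > 1 and p[-1] == 0:
        p = p[:-1]
    return p

def madd(X, Y): return [[padd(X[i][j], Y[i][j]) for j in range(2)] for i in range(2)]
def mneg(X):    return [[[-c for c in X[i][j]] for j in range(2)] for i in range(2)]
def md(X):      return [[pd(X[i][j]) for j in range(2)] for i in range(2)]
def mtrim(X):   return [[trim(X[i][j]) for j in range(2)] for i in range(2)]
def mmul(X, Y):
    return [[padd(pmul(X[i][0], Y[0][j]), pmul(X[i][1], Y[1][j]))
             for j in range(2)] for i in range(2)]

def gauge(P, Pinv, B):
    """The gauge transformation of (2.14): (P, B) |-> P B P^{-1} - P' P^{-1}."""
    return mtrim(madd(mmul(mmul(P, B), Pinv), mneg(mmul(md(P), Pinv))))

t, one, zero = [F(0), F(1)], [F(1)], [F(0)]
B    = [[t, one], [zero, t]]                 # arbitrary matrix in gl(2, Q[t])
P    = [[one, t], [zero, one]]               # unimodular: P^{-1} is polynomial
Pinv = [[one, [F(0), F(-1)]], [zero, one]]
Q    = [[one, zero], [t, one]]
Qinv = [[one, zero], [[F(0), F(-1)], one]]

# left action: Q.(P.B) = (QP).B
lhs = gauge(Q, Qinv, gauge(P, Pinv, B))
QP, QPinv = mmul(Q, P), mmul(Pinv, Qinv)     # (QP)^{-1} = P^{-1} Q^{-1}
rhs = gauge(QP, QPinv, B)
assert lhs == rhs
```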

Example 2.15: Assume K = C(z) with derivation d/dz and consider the first-order system

(i)   x' + Bx = 0,   i.e.,   x_1' + b_{11}x_1 + b_{12}x_2 = 0,   x_2' + b_{21}x_1 + b_{22}x_2 = 0,

wherein ν is a complex parameter and B = (b_{ij}) ∈ gl(2, C(z)) has the entries

      b_{11} = (10z^4 − (2ν² − 1)z² − 2)/(z(2z^4 − 1)),
      b_{12} = −z²(z^4 + 3z² − ν²)/(2z^4 − 1),
      b_{21} = (4z^6 − 4ν²z^4 − 4z² + 1)/(z^4(2z^4 − 1)),
      b_{22} = ((2ν² − 1)z² + 1)/(z(2z^4 − 1)).

With the choice³

      P := ( z²  2z )
           ( z³  z² )

one sees that the transformed system is

(ii)  x' + Ax = 0,   where   A := PBP^{-1} − P'P^{-1} = ( 0           −1  )
                                                        ( 1 − ν²/z²   1/z ).

We regard (i)-(ii) as distinct basis descriptions of the same differential structure.

³ At this point readers should not be concerned with how this particular P was constructed.

3. n-th Order Linear Differential Equations

The concept of an n-th order linear homogeneous equation in the context of a differential field K is formulated in the obvious way: an element k ∈ K is a solution of

(3.1)  y^{(n)} + l_1 y^{(n-1)} + ··· + l_{n-1} y' + l_n y = 0,   where l_1, ..., l_n ∈ K,

if and only if

(3.2)  k^{(n)} + l_1 k^{(n-1)} + ··· + l_{n-1} k' + l_n k = 0,

where k^{(2)} := k'' := (k')' and k^{(j)} := (k^{(j-1)})' for j > 2. Using a Wronskian argument one can easily prove that (3.1) has at most n solutions (in K) linearly independent over K_C.

As in the classical case k ∈ K is a solution of (3.1) if and only if the column vector (k, k', ..., k^{(n-1)})^τ is a solution of

(3.3)  x' + Bx = 0,   B = ( 0    −1   0   ···  0       )
                          ( 0    0    −1  ···  0       )
                          ( ⋮                  ⋮       )
                          ( 0    0    0   ···  −1      )
                          ( l_n  l_{n-1}  ···  l_2  l_1 ).

Indeed, one has the following analogue of Proposition 2.13.

Proposition 3.4:

(a) The solutions of (3.1) within K form a vector space over K_C.

(b) The K_C-linear mapping (y, y', ..., y^{(n-1)})^τ ∈ col_n(K) ↦ y ∈ K restricts to a K_C-linear isomorphism between the K_C-subspace of col_n(K) consisting of solutions of (3.3) and the K_C-subspace of K described in (a).

Proof: The proof is a routine verification.
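The passage from (3.1) to (3.3) is mechanical; the following sketch (a hypothetical helper, not from the text) builds the companion matrix B of (3.3) and checks the correspondence on the classical constant-coefficient example y'' + 5y' + 6y = 0, whose solution e^{−2t} gives x = (y, y') = (1, −2) and x' = (−2, 4) at t = 0.

```python
from fractions import Fraction as F

def companion(ls):
    """Companion matrix B of (3.3) for y^(n) + l1 y^(n-1) + ... + ln y = 0,
    given ls = [l1, ..., ln]: -1 on the superdiagonal and bottom row
    (ln, ..., l1); then k solves (3.1) iff (k, k', ..., k^(n-1)) solves
    x' + Bx = 0."""
    n = len(ls)
    B = [[F(0)] * n for _ in range(n)]
    for i in range(n - 1):
        B[i][i + 1] = F(-1)
    B[n - 1] = list(reversed([F(l) for l in ls]))
    return B

# n = 2, l1 = 5, l2 = 6 (i.e. y'' + 5y' + 6y = 0):
B = companion([5, 6])
assert B == [[F(0), F(-1)], [F(6), F(5)]]

# with y = e^{-2t}: at t = 0, x = (1, -2) and x' = (-2, 4); x' + Bx must vanish
x, xp = [F(1), F(-2)], [F(-2), F(4)]
r = [xp[i] + sum(B[i][j] * x[j] for j in range(2)) for i in range(2)]
assert r == [F(0), F(0)]
```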

Example 3.5: Equation (ii) of Example 2.15, i.e.,

(i)   x' + ( 0           −1  ) x = 0,
           ( 1 − ν²/z²   1/z )

has the form seen in (3.3). This linear differential equation would commonly be written as

(ii)  y'' + (1/z) y' + (1 − ν²/z²) y = 0,

or as

(iii) z² y'' + z y' + (z² − ν²) y = 0.

Either form can be regarded as a basis description of the differential module of that example. Some readers will immediately recognize (iii): it is Bessel's equation.

Converting the n-th order equation (3.1) to the first-order system (3.3) is standard practice. Less well-known is the fact that any first-order system of n equations can be converted to the form (3.3), and as a consequence can be expressed in n-th order form. We will prove this in §7. For many purposes n-th order form has distinct advantages, e.g., explicit solutions are often easily constructed with series expansions; in particular, equation (iii) of Example 3.5 can be solved explicitly in terms of Bessel functions.

4. Dimension Considerations

In this section K denotes a differential field and (V, D) is a differential K-module of dimension n ≥ 1.

Proposition 4.1: When V is a K-space with differential structure D : V → V the following assertions hold.

(a) A collection of horizontal vectors within V is linearly independent over K if and only if it is linearly independent over K_C.

(b) The collection of horizontal vectors of V is a vector space over K_C of dimension at most n.

Proof:
(a) (⇒) Immediate from the inclusion K_C ⊂ K. (In this direction the horizontal assumption is unnecessary.)
(⇐) If the implication is false there is a collection of horizontal vectors in V which is K_C-(linearly) independent but K-dependent, and from this collection we can choose vectors v_1, ..., v_m which are K-dependent with m > 1 minimal w.r.t. this property. We can then write v_m = Σ_{j=1}^{m-1} k_j v_j, with k_j ∈ K, whereupon applying D and the hypotheses Dv_j = 0 results in the identity 0 = Σ_{j=1}^{m-1} k_j' v_j. By the minimality of m this forces k_j' = 0, j = 1, ..., m−1, i.e., k_j ∈ K_C, and this contradicts linear independence over K_C.
(b) This is immediate from (a) and the fact that any K-linearly independent subset of V can be extended to a basis.

Suppose dim_K V = n < ∞, e is a basis of V, and x' + Bx = 0 is the defining e-equation of D. Then assertion (b) of the preceding result has the following standard formulation.

Corollary 4.2: For any matrix B ∈ gl(n, K) a collection of solutions of

(i)   x' + Bx = 0

within col_n(K) is linearly independent over K if and only if the collection is linearly independent over K_C. In particular, the K_C-subspace of col_n(K) consisting of solutions of (i) has dimension at most n.

Proof: By Proposition 4.1.

Equation (i) of Corollary 4.2 is always satisfied by the column vector x = (0, 0, ..., 0)^τ; this is the trivial solution, and any other is non-trivial. Unfortunately, non-trivial solutions (with entries in K) need not exist. For example, the linear differential equation y' − y = 0 admits only the trivial solution in the field C(z): for non-trivial solutions one must recast the problem so as to include the extension field (C(z))(exp(z)).

Corollary 4.3: For any elements l_1, ..., l_n ∈ K a collection of solutions {y_j}_{j=1}^m ⊂ K of

(i)   y^{(n)} + l_1 y^{(n-1)} + ··· + l_{n-1} y' + l_n y = 0

is linearly independent over K_C if and only if the collection {(y_j, y_j', ..., y_j^{(n-1)})^τ}_{j=1}^m is linearly independent over K. In particular, the K_C-subspace of K consisting of solutions of (i) has dimension at most n.

Proof: Use Proposition 3.4(b) and Corollary 4.2.

5. Fundamental Matrix Solutions

Throughout the section K is a differential field and (V, D) is a differential K-module of positive dimension n. Choose any basis e of V and let

(5.1)  x' + Bx = 0

be the defining e-equation of D. A non-singular matrix M ∈ gl(n, K) is a fundamental matrix solution of (5.1) if M satisfies this equation, i.e., if and only if

(5.2)  M' + BM = 0.

Example: Take K = C(z) with the usual derivation δ = d/dz; then

      M := ( z  z  )
           ( 0  z³ )

is a fundamental matrix solution of the equation

      x' + ( −1/z   0    ) x = 0.
           ( 0      −3/z )

Proposition 5.3: A matrix M ∈ gl(n, K) is a fundamental matrix solution of (5.1) if and only if the columns y_1, ..., y_n of M constitute n solutions of that equation linearly independent over K_C. (Of course linear independence of the columns over K is equivalent to the non-vanishing of det M.)

Proof: First note that (5.2) holds if and only if the columns of M are solutions of (5.1). Next observe that M is non-singular if and only if these columns are linearly independent over K. Finally, note from Propositions 2.13(b) and 4.1(a) that this will be the case if and only if these columns are linearly independent over K_C.

Corollary 5.4: Equation (5.1) admits a fundamental matrix solution in GL(n, K) if and only if V admits a basis consisting of D-horizontal vectors.
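The defining property (5.2) is easy to test numerically. The sketch below (matrices chosen for illustration) checks M' + BM = 0 at sample points for M = ((z, z), (0, z³)) with B := −M'M^{-1}, which works out to diag(−1/z, −3/z); M' is approximated by central differences.

```python
def M(z):
    return [[z, z], [0.0, z ** 3]]

def B(z):
    # B = -M'(z) M(z)^{-1} for the M above works out to diag(-1/z, -3/z)
    return [[-1.0 / z, 0.0], [0.0, -3.0 / z]]

def mat_mul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def dM(z, h=1e-6):
    """Central-difference approximation to M'(z)."""
    Mp, Mm = M(z + h), M(z - h)
    return [[(Mp[i][j] - Mm[i][j]) / (2 * h) for j in range(2)] for i in range(2)]

# check M' + BM = 0 at a few sample points
for z in [0.5, 1.0, 2.0, 3.0]:
    R = mat_mul(B(z), M(z))
    Dm = dM(z)
    assert all(abs(Dm[i][j] + R[i][j]) < 1e-5 for i in range(2) for j in range(2))
```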

Proof: By Proposition 5.3 the existence of a fundamental matrix solution in GL(n, K) is equivalent to the existence of n D-horizontal vectors linearly independent over K_C. Now recall Proposition 4.1.

Proposition 5.5: Suppose M, N ∈ gl(n, K) and M is a fundamental matrix solution of (5.1). Then N is a fundamental matrix solution if and only if N = MC for some matrix C ∈ GL(n, K_C).

Proof: (⇒) By (1.3) we have

      (M^{-1}N)' = (M^{-1})'N + M^{-1}N'
                 = (−M^{-1}M'M^{-1})N + M^{-1}(−BN)
                 = (−M^{-1}(−BM)M^{-1})N − M^{-1}BN
                 = M^{-1}BN − M^{-1}BN
                 = 0,

whence C := M^{-1}N ∈ GL(n, K_C) and N = MC.

(⇐) We have N' = (MC)' = M'C = −BMC = −BN.

Fundamental matrix solutions have an interesting geometric characterization. To explain that we need to introduce the dual of a differential module.

6. Dual Structures and Adjoint Equations

Differential structures allow for a simple conceptual formulation of the adjoint equation of a linear ordinary differential equation.

In this section K is a differential field and (V, D) is a differential K-module of dimension n ≥ 1. Recall that the dual space V* of V is defined as the K-space of linear functionals v* : V → K, and the dual basis e* of a basis e = {e_α} of V is the basis {e*_α} of V* satisfying e*_β e_α = δ_{αβ} (wherein δ_{αβ} is the usual Kronecker delta, i.e., δ_{αβ} := 1 if and only if α = β; otherwise δ_{αβ} := 0).

There is a dual differential structure D* : V* → V* on the dual space V* naturally associated with D: the definition is

(6.1)  (D*u*)v = δ(u*v) − u*(Dv),   u* ∈ V*, v ∈ V.

The verification that this is a differential structure is straightforward, and is left to the reader. One often sees u*v written as ⟨v, u*⟩, and when this notation is used (6.1) becomes

(6.2)  δ⟨v, u*⟩ = ⟨Dv, u*⟩ + ⟨v, D*u*⟩.

This is the "Lagrange identity"; it implies that u*v ∈ K_C whenever v and u* are horizontal.

Proposition 6.3: Suppose e ∈ V^n is a basis of V and B = (b_{ij}) ∈ gl(n, K) is the defining e-matrix of D. Then the defining e*-matrix of D* is −B^τ.

The proof, as are most of the proofs in this section, is a simple application of the Lagrange identity.

Proof: First note from (6.2) that for any 1 ≤ i, j ≤ n we have

      0 = δ(δ_{ij}) = δ⟨e_i, e*_j⟩ = ⟨De_i, e*_j⟩ + ⟨e_i, D*e*_j⟩
        = ⟨Σ_k b_{ki} e_k, e*_j⟩ + ⟨e_i, D*e*_j⟩
        = Σ_k b_{ki} ⟨e_k, e*_j⟩ + ⟨e_i, D*e*_j⟩
        = b_{ji} + ⟨e_i, D*e*_j⟩,

from which we see that

(i)   ⟨e_i, D*e*_j⟩ = −b_{ji}.

However, for the defining e*-matrix C = (c_{ij}) of D* we have D*e*_j = Σ_k c_{kj} e*_k, and therefore

      ⟨e_i, D*e*_j⟩ = ⟨e_i, Σ_k c_{kj} e*_k⟩ = Σ_k c_{kj} ⟨e_i, e*_k⟩ = c_{ij}

for any 1 ≤ i, j ≤ n. The result now follows from (i).

As an immediate consequence of Proposition 6.3 we see that the defining e*-equation of D* is

(6.4)  y' − B^τ y = 0;

this is the adjoint equation of

(6.5)  x' + Bx = 0.

Note from the usual identification V ≃ V** and −(−B^τ)^τ = B that (6.5) can be viewed as the adjoint equation of (6.4). Intrinsically: the identification V ≃ V** induces the additional identification D** := (D*)* ≃ D.

Equations (6.5) and (6.4) are interchangeable in the sense that information about either one can always be obtained from information about the other. In particular, there is a fundamental relationship between solutions of a linear differential equation and the solutions of the adjoint equation. This is explained in the following result, which also contains the promised geometric characterization of a fundamental matrix solution.

Proposition 6.6: Suppose e is a basis of V and

(i)   x' + Bx = 0

is the defining e-equation of D. Then for any M ∈ gl(n, K) the following statements are equivalent:

(a) M is a fundamental matrix solution of (i);

(b) (M^τ)^{-1} is a fundamental matrix solution of the adjoint equation

(ii)  x' − B^τ x = 0

of (i); and

(c) M^τ is the transition matrix from the dual basis e* of e to a basis of V* consisting of D*-horizontal vectors.

Proof: First note that (M^τ)' = (M')^τ, hence that

(iii)  M' + BM = 0  ⟺  (M^τ)' + M^τ B^τ = 0.

(a) ⟺ (b): From (iii), (M^τ)^{-1} = (M^{-1})^τ and (1.3) we have

      M' + BM = 0
        ⟺ (M^τ)' + M^τ B^τ = 0
        ⟺ (M^τ)^{-1}(M^τ)'(M^τ)^{-1} + (M^τ)^{-1}M^τ B^τ (M^τ)^{-1} = 0
        ⟺ −((M^τ)^{-1})' + B^τ (M^τ)^{-1} = 0
        ⟺ ((M^τ)^{-1})' − B^τ (M^τ)^{-1} = 0.

(a) ⟺ (c): One has

      M' + BM = 0
        ⟺ (M^τ)' + M^τ B^τ = 0
        ⟺ (M^τ)'(M^τ)^{-1} + M^τ B^τ (M^τ)^{-1} = 0
        ⟺ M^τ(−B^τ)(M^τ)^{-1} − (M^τ)'(M^τ)^{-1} = 0.

This last line is precisely what one obtains when A, B and P in (2.14) are replaced by 0, −B^τ and M^τ respectively, i.e., it asserts that when M^τ is regarded as a transition matrix from e* to some other basis ê* of V*, the defining ê*-matrix of D* must vanish. Now simply observe from (2.4) that the defining matrix relative to a basis vanishes if and only if that basis consists of horizontal vectors.

Dual differential structures are useful for solving equations of the form Du = w, wherein w ∈ V is given. The fundamental result in this direction is the following.

Proposition 6.7: Suppose (v_j)_{j=1}^n is a basis of V consisting of horizontal vectors and (v*_j)_{j=1}^n is the dual basis of V*. Then the following statements hold.

(a) All v*_j are horizontal.

(b) Suppose w ∈ V and there are elements k_j ∈ K such that k_j' = ⟨w, v*_j⟩ for j = 1, ..., n. Then the vector

(i)   u := Σ_j k_j v_j ∈ V

satisfies

(ii)  Du = w.

The vector u introduced in (i) is not the unique solution of (ii): the sum of u and any horizontal vector will also satisfy that equation.

Proof:
(a) This can be seen as a corollary of Proposition 6.6, but a direct proof is quite easy: from ⟨v_i, v*_j⟩ ∈ {0, 1} ⊂ K_C, Lagrange's identity (6.2) and the hypothesis Dv_i = 0 we have

      0 = δ⟨v_i, v*_j⟩ = ⟨Dv_i, v*_j⟩ + ⟨v_i, D*v*_j⟩ = ⟨v_i, D*v*_j⟩,

and since (v_i) is a basis this forces D*v*_j = 0 for j = 1, ..., n.

(b) First note that for any v ∈ V we have

(iii) v = Σ_j ⟨v, v*_j⟩ v_j.

Indeed, we can always write v in the form v = Σ_i c_i v_i, where c_i ∈ K, and applying v*_j to this identity gives ⟨v, v*_j⟩ = Σ_i c_i ⟨v_i, v*_j⟩ = c_j.

From (a) and (iii) we then have

      Du = Σ_j D(k_j v_j) = Σ_j (k_j' v_j + k_j Dv_j) = Σ_j (⟨w, v*_j⟩ v_j + k_j · 0)
         = Σ_j ⟨w, v*_j⟩ v_j = w.

The following corollary explains the relevance of the adjoint equation for solving inhomogeneous systems. In the statement we adopt more classical notation: when k, l ∈ K satisfy l' = k we write l as ∫k, omit specific reference to l, and simply assert that ∫k ∈ K. Moreover, we use the usual inner product ⟨y, z⟩ := Σ_j y_j z_j to identify col_n(K) with (col_n(K))*, i.e., we identify the two spaces by means of the K-isomorphism v ∈ col_n(K) ↦ (w ∈ col_n(K) ↦ ⟨w, v⟩ ∈ K) ∈ (col_n(K))*.

Corollary 6.8⁴: Suppose:

(a) B ∈ gl(n, K);

(b) b ∈ col_n(K);

(c) (z_j)_{j=1}^n is a basis of col_n(K) consisting of solutions of the adjoint equation

(i)   x' − B^τ x = 0

of

(ii)  x' + Bx = 0;

(d) ∫⟨b, z_j⟩ ∈ K for j = 1, ..., n; and

(e) (y_i)_{i=1}^n is a basis of col_n(K) satisfying

(iii) ⟨y_i, z_j⟩ = 1 if i = j, and 0 otherwise.

Then (y_j)_{j=1}^n is a basis of solutions of the homogeneous equation (ii) and the vector

(iv)  y := Σ_j (∫⟨b, z_j⟩) y_j

is a solution of the inhomogeneous system

(v)   x' + Bx = b.

⁴ For a classical account of this result see, e.g., [Poole, Chapter III, §10, pp. –]. In fact the treatment in this reference was the inspiration for our formulation of this corollary.

The appearance of the integrals in (iv) explains why solutions of (i) are called integrating factors of (ii) (and vice-versa, since, as already noted, (ii) may be regarded as the adjoint equation of (i)).

The result is immediate from Proposition 6.7. However, it is a simple enough matter to give a direct proof, and we therefore do so.

Proof: Hypothesis (e) identifies (z_j)_{j=1}^n with the dual basis of (y_i)_{i=1}^n. In particular, it allows us to view (z_j)_{j=1}^n as a basis of (col_n(K))*.

To prove that the y_j satisfy (ii) simply note from (iii) and (i) that

      0 = δ⟨y_i, z_j⟩ = ⟨y_i', z_j⟩ + ⟨y_i, z_j'⟩ = ⟨y_i', z_j⟩ + ⟨y_i, B^τ z_j⟩
        = ⟨y_i', z_j⟩ + ⟨By_i, z_j⟩ = ⟨y_i' + By_i, z_j⟩.

Since (z_j)_{j=1}^n is a basis (of (col_n(K))*) it follows that

(vi)  y_j' + By_j = 0,   j = 1, ..., n.

Next observe, as in (iii) of the proof of Proposition 6.7, that for any b ∈ col_n(K) condition (iii) implies

      b = Σ_j ⟨b, z_j⟩ y_j.

It then follows from (vi) that

      y' = Σ_j ( (∫⟨b, z_j⟩)' y_j + (∫⟨b, z_j⟩) y_j' )
         = Σ_j ( ⟨b, z_j⟩ y_j + (∫⟨b, z_j⟩)(−By_j) )
         = −B Σ_j (∫⟨b, z_j⟩) y_j + Σ_j ⟨b, z_j⟩ y_j
         = −By + b.

Corollary 6.8 was formulated so as to make the role of the adjoint equation evident. The following alternate formulation is easier to apply in practice.

Corollary 6.9: Suppose B ∈ gl(n, K) and M ∈ GL(n, K) is a fundamental matrix solution of

(i)   x' + Bx = 0.

Denote the j-th columns of M and (M^τ)^{-1} by y_j and z_j respectively, and suppose b ∈ col_n(K) and ∫⟨b, z_j⟩ ∈ K for j = 1, ..., n. Then

(ii)  y := Σ_j (∫⟨b, z_j⟩) y_j

is a solution of the inhomogeneous system

(iii) x' + Bx = b.

Proof: By Proposition 6.6, (M^τ)^{-1} is a fundamental matrix solution of the adjoint equation, so by Proposition 5.3 its columns z_j form a basis of col_n(K) consisting of solutions thereof; and from (M^τ)^{-1} = (M^{-1})^τ and M^{-1}M = I we see that ⟨y_i, z_j⟩ = 1 if i = j and 0 otherwise. The result is now evident from Corollary 6.8.

Finally, consider the case of an n-th order linear equation

(6.10) u^{(n)} + l_1 u^{(n-1)} + ··· + l_{n-1} u' + l_n u = 0.

In this instance "the adjoint equation" generally refers to the n-th order linear equation

(6.11) (−1)^n v^{(n)} + (−1)^{n-1}(l_1 v)^{(n-1)} + ··· + (−1)(l_{n-1} v)' + l_n v = 0,

e.g., the adjoint equation of

(6.12) u'' + l_1 u' + l_2 u = 0

is

(6.13) v'' − l_1 v' + (l_2 − l_1') v = 0.

Examples 6.14:

(a) The adjoint equation of Bessel's equation

      y'' + (1/x) y' + (1 − ν²/x²) y = 0

is

      z'' − (1/x) z' + (1 − (ν² − 1)/x²) z = 0.

(b) The adjoint equation of any second-order equation of the form y'' + l_2 y = 0 is the identical equation (despite the fact that the two equations describe differential structures on spaces dual to one another).

To understand why the "adjoint" terminology is used with (6.11) first convert (6.10) to the first-order form (3.3) and write the corresponding adjoint equation accordingly, i.e., as

(6.15) x' − B^τ x = 0,   B^τ = ( 0    0   ···  0    l_n     )
                               ( −1   0   ···  0    l_{n-1} )
                               ( 0    −1  ···  0    l_{n-2} )
                               ( ⋮              ⋮    ⋮      )
                               ( 0    0   ···  −1   l_1     ).

The use of the terminology is then explained by the following result.

Proposition 6.16: A column vector x = (x_1, ..., x_n)^τ ∈ col_n(K) is a solution of (6.15) if and only if x_n is a solution of (6.11) and

(i)   x_{n-j} = (−1)^j x_n^{(j)} + Σ_{i=0}^{j-1} (−1)^i (l_{j-i} x_n)^{(i)}   for j = 1, ..., n−1.

Proof: If x = (x_1, ..., x_n)^τ satisfies (6.15) then

(ii)  x_j' = −x_{j-1} + l_{n+1-j} x_n   for j = 1, ..., n, where x_0 := 0.

It follows that

      x_n'  = −x_{n-1} + l_1 x_n,
      x_n'' = −x_{n-1}' + (l_1 x_n)' = −(−x_{n-2} + l_2 x_n) + (l_1 x_n)'
            = (−1)² x_{n-2} + (−1) l_2 x_n + (l_1 x_n)',
      x_n^{(3)} = (−1)² x_{n-2}' + (−1)(l_2 x_n)' + (l_1 x_n)''
                = (−1)²(−x_{n-3} + l_3 x_n) + (−1)(l_2 x_n)' + (l_1 x_n)''
                = (−1)³ x_{n-3} + (−1)² l_3 x_n + (−1)(l_2 x_n)' + (l_1 x_n)'',

and by induction (on j) that

      x_n^{(j)} = (−1)^j x_{n-j} + Σ_{i=0}^{j-1} (−1)^{j-1-i} (l_{j-i} x_n)^{(i)},   j = 1, ..., n.

This is equivalent to (i), and equation (6.11) amounts to the case j = n.

Conversely, suppose x_n is a solution of (6.11) and that (i) holds. We must show that (ii) holds or, equivalently, that

      x_{n-(j+1)} = −x_{n-j}' + l_{j+1} x_n   for j = 0, ..., n−1.

This, however, is immediate from (i). Indeed, we have

      −x_{n-j}' + l_{j+1} x_n
        = −( (−1)^j x_n^{(j+1)} + Σ_{i=0}^{j-1} (−1)^i (l_{j-i} x_n)^{(i+1)} ) + l_{j+1} x_n
        = (−1)^{j+1} x_n^{(j+1)} + Σ_{i=1}^{j} (−1)^i (l_{j+1-i} x_n)^{(i)} + l_{j+1} x_n
        = (−1)^{j+1} x_n^{(j+1)} + Σ_{i=0}^{j} (−1)^i (l_{j+1-i} x_n)^{(i)}
        = x_{n-(j+1)}.

For completeness we record the n-th order formulation of Corollary 6.9.

Proposition 6.17 ("Variation of Constants"): Suppose l_1, ..., l_n ∈ K and {y_1, ..., y_n} ⊂ K is a collection of solutions of the n-th order equation

(i)   u^{(n)} + l_1 u^{(n-1)} + ··· + l_{n-1} u' + l_n u = 0

linearly independent over K_C. Let

      M := ( y_1          y_2          ···  y_n          )
           ( y_1'         y_2'         ···  y_n'         )
           ( y_1^{(2)}    y_2^{(2)}    ···  y_n^{(2)}    )
           ( ⋮                              ⋮            )
           ( y_1^{(n-1)}  y_2^{(n-1)}  ···  y_n^{(n-1)}  )

and let (z_1, ..., z_n) denote the n-th row of the matrix (M^τ)^{-1}. Suppose k ∈ K is such that ∫kz_j ∈ K for j = 1, ..., n. Then

(ii)  y := Σ_j (∫kz_j) y_j

is a solution of the inhomogeneous equation

(iii) u^{(n)} + l_1 u^{(n-1)} + ··· + l_{n-1} u' + l_n u = k.

Proof: Convert (i) to a first-order system as in (3.3) and note from Propositions 1.6 and 3.4 that M is a fundamental matrix solution. Denote the j-th column of M by ŷ_j and apply Corollary 6.9 with b = (0, ..., 0, k)^τ and y_j (in that statement) replaced by ŷ_j so as to achieve a solution ŷ of ŷ' + Bŷ = b. Now write ŷ = (y, ŷ_2, ..., ŷ_n)^τ and eliminate ŷ_2, ..., ŷ_n in the final row of this system by expressing these entities in terms of y and derivatives thereof: the result is (iii).

Since our formulation of Proposition 6.17 is not quite standard⁵, a simple example seems warranted.

⁵ Cf. [C-L, Chapter 3, §6, Theorem 6.4, p. 87].

Example 6.18 : For K = C(x) with derivation d/dx we consider the inhomogeneous second-order equation

(i) u'' + (2/x) u' - (6/x^2) u = x^3 + 4x,

and for the associated homogeneous equation u'' + (2/x) u' - (6/x^2) u = 0 take y_1 = x^2 and y_2 = 1/x^3 so as to satisfy the hypothesis of Proposition 6.17. In the notation of that proposition we have

M =
| x^2   x^{-3}   |
| 2x   -3x^{-4}  |
,   (M^τ)^{-1} =
| 3/(5x^2)   2x^3/5 |
| 1/(5x)    -x^4/5  |
,

and from the second matrix we see that z_1 = 1/(5x), z_2 = -x^4/5. A solution to (i) is therefore given by

y = ( ∫ (1/(5x))(x^3 + 4x) ) x^2 + ( ∫ (-x^4/5)(x^3 + 4x) ) (1/x^3)
  = (x^3/15 + 4x/5) x^2 - (x^8/40 + 2x^6/15)(1/x^3)
  = x^5/24 + 2x^3/3,

as is easily checked directly.
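The computation in Example 6.18 can be verified with a computer algebra system. The following sketch (sympy is our choice of tool, not one used in the text) rebuilds M and the z_j, carries out the antiderivatives, and confirms that the result solves the inhomogeneous equation:

```python
import sympy as sp

x = sp.symbols('x')

# Homogeneous solutions from Example 6.18 and the matrix M of Proposition 6.17
y1, y2 = x**2, 1/x**3
M = sp.Matrix([[y1, y2], [sp.diff(y1, x), sp.diff(y2, x)]])

# (z_1, z_2) is the last row of (M^tau)^{-1}; k is the right-hand side
z = (M.T).inv().row(1)
k = x**3 + 4*x

# Variation of constants: y = sum_j (antiderivative of k*z_j) * y_j
y = sp.expand(sp.integrate(k*z[0], x)*y1 + sp.integrate(k*z[1], x)*y2)
print(y)  # x**5/24 + 2*x**3/3

# Verify that y solves u'' + (2/x)u' - (6/x**2)u = k
lhs = sp.diff(y, x, 2) + (2/x)*sp.diff(y, x) - (6/x**2)*y
print(sp.simplify(lhs - k))  # 0
```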

7. Cyclic Vectors

Throughout the section K is a differential field and (V, D) is a differential K-module of dimension n ≥ 1. Define D^0 := id_V (the identity operator on V), D^1 := D, and D^k := D ∘ D^{k-1} for k > 1. Using (1.1) and induction on n ≥ 1 we see that for any k ∈ K and v ∈ V we have the Leibniz rule

(7.1) D^n(kv) = Σ_{j=0}^{n} \binom{n}{j} k^{(j)} D^{n-j} v.

A vector v ∈ V is cyclic (w.r.t. D), or is D-cyclic, if {v, Dv, ..., D^{n-1}v} is a basis of V.

Example 7.2 : Suppose K has characteristic 0 and contains an element k such that k' = 1. Moreover, assume V admits a basis {e_1, ..., e_n} of horizontal vectors. Then the vector

v := Σ_{j=0}^{n-1} (k^j/j!) e_{j+1}

is cyclic. Indeed, by induction one sees that

D^l v = Σ_{j=l}^{n-1} (k^{j-l}/(j-l)!) e_{j+1}

for l = 1, ..., n-1, and the claim follows easily.

The hypotheses of Example 7.2 are too restrictive for most applications, but we will see that cyclic vectors always exist when K has characteristic 0, the inclusion K_C ⊂ K is proper (i.e., the derivation on K is nontrivial), and the field of constants K_C is algebraically closed. The goal of the section is a proof of this assertion⁶, but it seems preferable to first indicate why such vectors might be of interest.

⁶ The proof we offer is due to J. Kovacic (unpublished). However, any errors that appear are the responsibility of R. Churchill. For a constructive proof, and references to alternate proofs, see [C-K].
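Example 7.2 is easy to see in coordinates. With respect to a horizontal basis, D differentiates coordinate vectors entrywise, so for K = Q(x), k = x and n = 3 (our choices, not the text's) the matrix with columns v, Dv, D^2 v is triangular with 1's on the diagonal; a sketch:

```python
import sympy as sp

x = sp.symbols('x')
n = 3

# Example 7.2 with k = x (so k' = 1) and n = 3.  With respect to a horizontal
# basis e_1, ..., e_n, D acts on coordinate vectors as entrywise d/dx.
v = sp.Matrix([x**j / sp.factorial(j) for j in range(n)])  # coordinates of v

cols, w = [], v
for _ in range(n):
    cols.append(w)
    w = w.applyfunc(lambda f: sp.diff(f, x))  # w -> Dw

C = sp.Matrix.hstack(*cols)  # columns v, Dv, D^2 v
print(C.det())  # 1, so {v, Dv, D^2 v} is a basis and v is cyclic
```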

Proposition 7.3 : The following assertions are equivalent:

(a) D : V → V admits a cyclic vector;

(b) there is a basis e of V such that the defining e-equation of D has the form

(i) x' +
| 0    -1    0   ...   0       |
| 0     0   -1   ...   0       |
| ...                  ...     |
| 0     0    0   ...  -1       |
| p_0  p_1  p_2  ...  p_{n-1}  |
x = 0;

(c) there is a basis representation of D which can be converted to the n-th order form

(ii) y^{(n)} + p_{n-1} y^{(n-1)} + ... + p_1 y' + p_0 y = 0.

Less formally: finding a cyclic vector for D is equivalent to expressing D in terms of an n-th order homogeneous linear differential equation.

Proof : (a) ⇔ (b) : From the definition of a cyclic vector one sees that D admits such a vector v if and only if there is a basis (v, e_2, ..., e_n) of V such that the associated defining matrix has the form

| 0  0  ...  0  -p_0     |
| 1  0  ...  0  -p_1     |
| 0  1  ...  0  -p_2     |
| ...           ...      |
| 0  0  ...  1  -p_{n-1} |

Since this matrix is the negative transpose of that in (i), the asserted equivalence is evident from the duality results of §6.
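The passage between (i) and (ii) is the familiar companion-matrix bookkeeping: with x = (y, y', ..., y^{(n-1)})^τ the first n-1 rows of x' + Bx = 0 say that each entry is the derivative of the previous one, and the last row is the scalar equation. A sketch for n = 3 (our choice; the p_j are left symbolic and taken constant only for brevity):

```python
import sympy as sp

x = sp.symbols('x')
n = 3
p0, p1, p2 = sp.symbols('p0 p1 p2')  # the p_j, taken constant here for brevity

# The matrix of equation (i): -1's on the superdiagonal, p_0 ... p_{n-1} below
B = sp.Matrix([[0, -1, 0],
               [0, 0, -1],
               [p0, p1, p2]])

y = sp.Function('y')(x)
X = sp.Matrix([y.diff(x, i) for i in range(n)])  # x = (y, y', y'')^tau

residual = X.diff(x) + B*X
# The first n-1 rows vanish identically ...
print(residual[0], residual[1])  # 0 0
# ... and the last row is the n-th order equation (ii):
print(residual[2])  # y''' + p2*y'' + p1*y' + p0*y (up to printing order)
```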

(b) ⇔ (c) : This equivalence has already been discussed: see the paragraph surrounding (3.3).

We turn our attention to the existence problem. Although Proposition 7.3 is phrased in terms of cyclic vectors for D, it suffices, since D ≅ D**, to concentrate on the existence of cyclic vectors for D*.

A D-invariant subspace W ⊂ V is a differential subspace, or D-subspace, of V. As the reader can easily check, the intersection of any family of such subspaces is again such a subspace. In particular, the intersection ⟨S⟩ of those differential subspaces containing a nonempty subset S ⊂ V is a D-subspace; this is the subspace differentially generated by S. When S = {s_1, ..., s_r} ⊂ V is finite the notation ⟨{s_1, ..., s_r}⟩ is abbreviated to ⟨s_1, ..., s_r⟩. In particular, ⟨v⟩ denotes the subspace differentially generated by a single vector v ∈ V.

Proposition 7.4 : When 1 ≤ m ≤ n, W ⊂ V is an m-dimensional D-subspace, and v ∈ W, the following statements are equivalent:

(a) ⟨v⟩ = W;

(b) {v, Dv, ..., D^{m-1}v} is a basis of W.

Proof : (a) ⇒ (b) : If (b) is false there must be scalars k_j ∈ K, not all zero, such that Σ_{j=0}^{m-1} k_j D^j v = 0. Letting p denote the maximal j ≤ m-1 satisfying k_j ≠ 0 we can write this dependence relation in the form D^p v = Σ_{j=0}^{p-1} k̂_j D^j v, from which we easily see that D^{p+s} v must be in the span of {v, Dv, ..., D^{p-1}v} for all s ≥ 0. We conclude that dim_K(⟨v⟩) ≤ p < m = dim_K(W), hence that ⟨v⟩ ≠ W, and we have a contradiction.

(b) ⇒ (a) : Obvious.

Corollary 7.5 : A vector v ∈ V is cyclic if and only if V = ⟨v⟩.

Our proof of the existence of cyclic vectors requires four lemmas.

Lemma 7.6 : Suppose w, v ∈ V are non-zero vectors satisfying both V = ⟨w, v⟩ and ⟨v⟩ ≠ V. Then :

(a) V = ⟨w⟩ + ⟨v⟩;

(b) ⟨w⟩ ∩ ⟨v⟩ = {0} if and only if dim_K(V) = dim_K(⟨v⟩) + dim_K(⟨w⟩); and

(c) the inclusion ⟨w⟩ ∩ ⟨v⟩ ⊂ ⟨w⟩ is proper.

Proof : (a) : ⟨w⟩ + ⟨v⟩ is a differential subspace of V containing the set {w, v}, and ⟨w, v⟩ = V is the intersection of all such subspaces. The equality follows.

(b) and (c) : By (a) (and elementary linear algebra) we have

dim_K(V) = dim_K(⟨w⟩) + dim_K(⟨v⟩) - dim_K(⟨w⟩ ∩ ⟨v⟩).

Assertion (b) follows immediately, and if (c) fails the equality reduces to dim_K(V) = dim_K(⟨v⟩), contradicting V ≠ ⟨v⟩.

Lemma 7.7 : Suppose K_C is algebraically closed, of characteristic 0, and the inclusion K_C ⊂ K is proper. Let W ⊂ V be a nontrivial differential subspace of V, let 1 ≤ r ∈ Z, and let w_0, ..., w_r ∈ W be subject only to the restriction w_r ≠ 0. Then there is an element l ∈ K such that Σ_{j=0}^{r} l^{(j)} w_j ≠ 0.

Without the characteristic 0 assumption the proper inclusion hypothesis on K_C ⊂ K can fail, e.g., when K is a finite field one has K_C = K.

Proof : Choose any k ∈ K \ K_C. It is not difficult to see that when k is algebraic over K_C one has k ∈ K_C, contrary to hypothesis, and we conclude that k must be transcendental over K_C. In particular, the collection {1, k, ..., k^r} must be linearly independent over K_C.

Fix a basis (e_1, e_2, ..., e_n) of V, write w_j = Σ_{i=1}^{n} a_{ij} e_i, and choose l ∈ K at random. Then from

Σ_{j=0}^{r} l^{(j)} w_j = Σ_j l^{(j)} Σ_i a_{ij} e_i = Σ_i ( Σ_j a_{ij} l^{(j)} ) e_i

we see that Σ_{j=0}^{r} l^{(j)} w_j = 0 ⟺ Σ_j a_{ij} l^{(j)} = 0 for i = 1, ..., n. In other words, Σ_{j=0}^{r} l^{(j)} w_j = 0 if and only if l is a solution of the system

a_{1r} y^{(r)} + a_{1,r-1} y^{(r-1)} + ... + a_{10} y = 0
a_{2r} y^{(r)} + a_{2,r-1} y^{(r-1)} + ... + a_{20} y = 0
...
a_{nr} y^{(r)} + a_{n,r-1} y^{(r-1)} + ... + a_{n0} y = 0.

But from Corollary 4.3 we know that the solution space of any one (and therefore all) of these r-th order homogeneous linear differential equations has K_C-dimension

at most r. From the previous paragraph we conclude that for any k ∈ K \ K_C at least one of 1, k, ..., k^r cannot be a solution, and the lemma is thereby established.

Lemma 7.8 : Assume K_C is algebraically closed of characteristic 0 and the inclusion K_C ⊂ K is proper. Suppose w, v ∈ V are non-zero and satisfy ⟨w⟩ ∩ ⟨v⟩ = {0}. Then there is an element l ∈ K with the property that for v̂ := v + lw one has

(a) ⟨w, v̂⟩ = ⟨w, v⟩ and

(b) dim_K(⟨v̂⟩) > dim_K(⟨v⟩).

Proof : (a) The equality holds for any l ∈ K.

(b) Let r := dim_K(⟨v⟩). Then {v, Dv, ..., D^{r-1}v} is a basis of this space (by Proposition 7.4(b)) and the collection {v, Dv, ..., D^{r-1}v, D^r v} is therefore linearly dependent over K. In particular we can find scalars a_0, ..., a_r ∈ K, where w.l.o.g. a_r = 1, such that Σ_{j=0}^{r} a_j D^j v = 0. Now set

w_j := Σ_{i=j}^{r} \binom{i}{j} a_i D^{i-j} w,  j = 0, 1, ..., r,

and note that w_r = w ≠ 0. It follows from Lemma 7.7 that we can find an element l ∈ K such that

(i) 0 ≠ Σ_{j=0}^{r} l^{(j)} w_j = Σ_{j=0}^{r} l^{(j)} Σ_{i=j}^{r} \binom{i}{j} a_i D^{i-j} w.

Define v̂ := v + lw. Suppose (b) is false. Then there is an integer 1 ≤ s ≤ r, and elements b_i ∈ K for i = 0, ..., s, where w.l.o.g. b_s = 1, such that Σ_{i=0}^{s} b_i D^i v̂ = 0, i.e., such that

0 = Σ_{i=0}^{s} b_i D^i v̂ = Σ_{i=0}^{s} b_i D^i v + Σ_{i=0}^{s} b_i Σ_{j=0}^{i} \binom{i}{j} l^{(j)} D^{i-j} w,

where in computing the final term we have used (7.1). The first term on the right is in ⟨v⟩ while the second is in ⟨w⟩, and since ⟨w⟩ ∩ ⟨v⟩ = {0} it follows that both must vanish. Immediate consequences of this vanishing are:

s = r (because Σ_{i=0}^{s} b_i D^i v = 0 and b_s = 1);

a_i = b_i for i = 0, 1, ..., r; and

0 = Σ_{i=0}^{r} a_i Σ_{j=0}^{i} \binom{i}{j} l^{(j)} D^{i-j} w = Σ_{j=0}^{r} l^{(j)} Σ_{i=j}^{r} \binom{i}{j} a_i D^{i-j} w = Σ_{j=0}^{r} l^{(j)} w_j.

This contradicts (i), and (b) is thereby established.

When W ⊂ V is a D-subspace the quotient space V/W inherits a differential structure in the expected way, i.e.,

[v] := v + W ↦ [Dv] := Dv + W.

This quotient differential structure is useful for induction arguments, such as the proof of the following result.

Lemma 7.9 : Suppose n = dim_K(V) > 1 and every differential K-space of lower positive dimension admits a cyclic vector. Then for any 0 ≠ w ∈ V there is a v ∈ V such that ⟨w, v⟩ = V.

Proof : If ⟨w⟩ = V take v = 0; if not, give V/⟨w⟩ the quotient differential structure and let π : V → V/⟨w⟩ denote the quotient mapping. By assumption V/⟨w⟩ contains a cyclic vector [v̂], and for any v ∈ π^{-1}([v̂]) we then have V = ⟨w, v⟩.

Theorem 7.10 (The Cyclic Vector Theorem) : Suppose K_C is algebraically closed of characteristic 0 and the inclusion K_C ⊂ K is proper. Then V contains a cyclic vector.

Proof : We argue by induction on n = dim_K(V), omitting the trivial case n = 1. We assume n > 1, and that the result holds for all nontrivial differential K-spaces of dimension strictly less than n.

Choose 0 ≠ w_1 ∈ V at random. If ⟨w_1⟩ = V we are done; otherwise we invoke Lemma 7.9 (and the induction hypothesis) to guarantee the existence of a vector v_1 ∈ V such that ⟨w_1, v_1⟩ = V. We may assume that

(i) dim_K(⟨v_1⟩) > dim_K(V) - dim_K(⟨w_1⟩),

and (as a consequence) that

(ii) ⟨w_1⟩ ∩ ⟨v_1⟩ ≠ {0}.

Indeed, if (i) fails then ⟨w_1⟩ ∩ ⟨v_1⟩ = {0} by Lemma 7.6(b), and we can use Lemma 7.8 to replace v_1 with a vector v̂_1 such that ⟨w_1, v̂_1⟩ = ⟨w_1, v_1⟩ = V and dim_K(⟨v̂_1⟩) > dim_K(⟨v_1⟩). Inequality (i) is then evident from dim_K(⟨v_1⟩) = dim_K(V) - dim_K(⟨w_1⟩).

If ⟨v_1⟩ = V we are done. Otherwise we use (ii) to find some 0 ≠ w_2 ∈ ⟨w_1⟩ ∩ ⟨v_1⟩, noting from Lemma 7.6(c) that dim_K(⟨w_2⟩) < dim_K(⟨w_1⟩) (which implies that w_2

cannot be cyclic). Repeating the argument of the previous two paragraphs with w_2 replacing w_1 we then produce a vector v_2 ∈ V such that

V = ⟨w_2, v_2⟩,  dim_K(⟨v_2⟩) > dim_K(V) - dim_K(⟨w_2⟩),  ⟨w_2⟩ ∩ ⟨v_2⟩ ≠ {0}.

If ⟨v_2⟩ = V we are done; otherwise we repeat the construction once again, etc. The result is a sequence of subspaces ⟨w_j⟩ with strictly decreasing dimensions and a sequence of vectors v_j satisfying

dim_K(⟨v_j⟩) > dim_K(V) - dim_K(⟨w_j⟩).

But this inequality is impossible if we reach dim_K(⟨w_j⟩) = 0 (because ⟨v_j⟩ ⊂ V), and we conclude that the iteration terminates after finitely many steps. Since the only requirement for continuing is that v_j not be cyclic, this proves the theorem.
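Testing a given vector for cyclicity is a routine rank computation. In the sketch below (our own toy example, not one from the text) we work over K = Q(x) and represent D through its e-matrix B, so that, with the conventions of the Introduction, D sends a coordinate vector w to w' + Bw; this coordinate formula is an assumption of the sketch.

```python
import sympy as sp

x = sp.symbols('x')

# A toy differential module over K = Q(x): D acts on coordinate vectors
# (w.r.t. a fixed basis e with e-matrix B) by  w |-> w' + B*w.
# B is a made-up example, not one appearing in the text.
B = sp.Matrix([[0, 1], [0, 0]])
n = B.shape[0]

def D(w):
    return w.applyfunc(lambda f: sp.diff(f, x)) + B*w

def is_cyclic(v):
    """v is cyclic iff the matrix with columns v, Dv, ..., D^(n-1)v has rank n."""
    cols, w = [], v
    for _ in range(n):
        cols.append(w)
        w = D(w)
    return sp.Matrix.hstack(*cols).rank() == n

print(is_cyclic(sp.Matrix([1, 0])))  # False: here D sends (1,0)^tau to 0
print(is_cyclic(sp.Matrix([0, 1])))  # True
```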

8. Extensions of Differential Structures

Here K is a differential field with derivation k ↦ k', V is a K-space (i.e., a vector space over K) of dimension n, and D : V → V is a differential structure. Primes will also be used to indicate the derivation on any differential extension field L ⊃ K.

Recall⁷ that when L ⊃ K is an extension field of K (not necessarily differential) the tensor product L ⊗_K V over K admits an L-space structure characterized by

l · (m ⊗_K v) = (lm) ⊗_K v.

This structure will always be assumed. By means of the K-embedding

(8.1) v ∈ V ↦ 1 ⊗ v ∈ L ⊗_K V

one views V as a K-subspace of L ⊗_K V when the latter is considered as a K-space. In particular, any (ordered) basis e of V can be regarded as a subset of L ⊗_K V.

Proposition 8.2 ( Extension of the Base ) : Assuming the notation of the previous paragraph, any basis of the K-space V is also a basis of the L-space L ⊗_K V. In particular,

(i) dim_K V = dim_L (L ⊗_K V).

Proof : See, e.g., [Lang, Chapter XVI, §4, Proposition 4.1, p. 623].

Proposition 8.3 : Suppose W, V̂ and Ŵ are finite-dimensional⁸ K-spaces and T : V → W and T̂ : V̂ → Ŵ are K-linear mappings. Then there is a K-linear mapping T ⊗_K T̂ : V ⊗_K V̂ → W ⊗_K Ŵ characterized by

(i) (T ⊗_K T̂)(v ⊗_K v̂) = Tv ⊗_K T̂v̂,  v ⊗_K v̂ ∈ V ⊗_K V̂.

Proof⁹ : There is a standard characterization of the tensor product V ⊗_K V̂ in terms of K-bilinear mappings of V × V̂ into K-spaces Y, e.g., see [Lang, Chapter XVI, §1, p. 602]. The proposition results from considering the K-bilinear mapping (v, v̂) ∈ V × V̂ ↦ Tv ⊗_K T̂v̂ ∈ W ⊗_K Ŵ.

⁷ As a general reference for the remarks in this paragraph see, e.g., [Lang, Chapter XVI, §4, particularly Example 2]. Except for references to bases, most of what we say does not require V to be finite-dimensional.

⁸ The finite-dimensionality hypothesis is not needed; it is assumed only because this is a standing hypothesis for V.

⁹ We offer only a quick sketch. The result is more important for our purposes than a formal proof, and filling in all the details would lead us too far afield.

Proposition 8.4 : To any differential field extension L ⊃ K there corresponds a unique differential structure D_L : L ⊗_K V → L ⊗_K V extending D : V → V, and this structure is characterized by the property

(i) D_L(l ⊗_K v) = l' ⊗_K v + l ⊗_K Dv,  l ⊗_K v ∈ L ⊗_K V.

Recall from (8.1) that we are viewing V as a K-subspace of L ⊗_K V by identifying V with its image under the embedding v ↦ 1 ⊗_K v. Assuming (i) we have D_L(1 ⊗_K v) = 1 ⊗_K Dv ≃ Dv for any v ∈ V, and this is the meaning of "D_L extending D." In the proof we denote the derivation l ↦ l' by δ : L → L, and we also write the restriction δ|_K as δ.

One is tempted to prove the proposition by invoking Proposition 8.3 so as to define mappings δ ⊗_K id_V : L ⊗_K V → L ⊗_K V and id_L ⊗_K D : L ⊗_K V → L ⊗_K V and to then set D_L := δ ⊗_K id_V + id_L ⊗_K D. Unfortunately, Proposition 8.3 does not apply since D is not K-linear.

Proof¹⁰ : The way around the problem is to first use the fact that D is K_C-linear; one can then conclude from Proposition 8.3 (with K replaced by K_C) that a K_C-linear mapping D̂ : L ⊗_{K_C} V → L ⊗_{K_C} V is defined by

(ii) D̂ := δ ⊗_{K_C} id_V + id_L ⊗_{K_C} D.

The next step is to define Y ⊂ L ⊗_{K_C} V to be the K_C-subspace generated by all vectors of the form lk ⊗_{K_C} v - l ⊗_{K_C} kv, where l ∈ L, k ∈ K and v ∈ V. Then from the calculation

D̂(lk ⊗_{K_C} v - l ⊗_{K_C} kv)
= δ(lk) ⊗_{K_C} v + lk ⊗_{K_C} Dv - δ(l) ⊗_{K_C} kv - l ⊗_{K_C} D(kv)
= l'k ⊗_{K_C} v + lk' ⊗_{K_C} v + lk ⊗_{K_C} Dv - l' ⊗_{K_C} kv - l ⊗_{K_C} (k'v + kDv)
= (l'k ⊗_{K_C} v - l' ⊗_{K_C} kv) + (lk' ⊗_{K_C} v - l ⊗_{K_C} k'v) + (lk ⊗_{K_C} Dv - l ⊗_{K_C} kDv)

¹⁰ Footnote 9 applies here also.

we see that Y is D̂-invariant, and D̂ therefore induces a K_C-linear mapping

D̄ : (L ⊗_{K_C} V)/Y → (L ⊗_{K_C} V)/Y

which by (ii) satisfies

(iii) D̄([l ⊗_{K_C} v]) = [l' ⊗_{K_C} v] + [l ⊗_{K_C} Dv],

where the bracket [ ] denotes the equivalence class (i.e., coset) of the accompanying element. Now observe that when L ⊗_{K_C} V is viewed as an L-space (resp. K-space), Y becomes an L-subspace (resp. a K-subspace), and it follows from (iii) that D̄ is a differential structure when the L-space (resp. K-space) structure is assumed.

In view of the K-space structure on (L ⊗_{K_C} V)/Y the K-bilinear mapping¹¹

(iv) (l, v) ↦ [l ⊗_{K_C} v]

induces a K-linear mapping T : L ⊗_K V → (L ⊗_{K_C} V)/Y which one verifies to be a K-isomorphism. It then follows from (iii) and (iv) that the mapping D_L := T^{-1} ∘ D̄ ∘ T : L ⊗_K V → L ⊗_K V satisfies (i), and it follows that D_L is a differential structure on the L-space L ⊗_K V.

As for uniqueness, suppose Ď : L ⊗_K V → L ⊗_K V is any differential structure extending D, i.e., having the property

Ď(1 ⊗_K v) = 1 ⊗_K Dv,  v ∈ V.

Then for any l ⊗_K v ∈ L ⊗_K V one has

Ď(l ⊗_K v) = Ď(l · (1 ⊗_K v)) = l' · (1 ⊗_K v) + l · Ď(1 ⊗_K v) = l' ⊗_K v + l · (1 ⊗_K Dv) = l' ⊗_K v + l ⊗_K Dv = D_L(l ⊗_K v),

hence Ď = D_L.

When considered over the differential field C(z) = (C(z), d/dz) the linear differential equation y' = y has only the trivial solution, but if we allow solutions from the differential field extension C(z)(e^z) = (C(z)(e^z), d/dz) this is no longer the case. At the conceptual level, allowing solutions from a differential field extension simply means considering extensions of the given differential structure. However, at the computational level these extensions play no role, as one sees from the following result.

¹¹ Which we note is not L-bilinear, since for v ∈ V the product lv is only defined when l ∈ K.

Proposition 8.5 : Suppose e is a basis of V and

(i) x' + Bx = 0

is the defining e-equation for D. Let L ⊃ K be a differential field extension and consider e as a basis for the L-space L ⊗_K V. Then the defining e-equation for the extended differential structure D_L : L ⊗_K V → L ⊗_K V is also (i).

Proof : Since D_L extends D, the e-matrices of these two differential structures are the same.

A differential field extension L ⊃ K has no new constants if L_C = K_C. (Note that L_C ⊃ K_C is automatic.)

Proposition 8.6 : Suppose e and ê are two bases of V and

(i) x' + Bx = 0 and (ii) x' + Ax = 0

are the defining e and ê-equations of D respectively. Let L ⊃ K be a no new constants differential field extension in which each equation admits a fundamental matrix solution. Then the field extensions of K generated by the entries of these fundamental matrix solutions are the same.

Proof : In view of the discussion surrounding (2.14) we can assume A and B are related by

A = PBP^{-1} - P'P^{-1}, where P ∈ GL(n, K).

Let M, N ∈ gl(n, L) be fundamental matrix solutions of (i) and (ii) respectively and set M̂ := PM. Then from

M̂' = (PM)' = P'M + PM' = P'M + P(-BM) = -PBP^{-1}M̂ + P'P^{-1}M̂

we see that

0 = M̂' + (PBP^{-1} - P'P^{-1})M̂ = M̂' + AM̂,

and we conclude that M̂ ∈ GL(n, L) is also a fundamental matrix solution of (ii). By Proposition 5.5 we have PM = M̂ = NC for some C ∈ gl(n, L_C), and we can therefore write

(iii) M = P^{-1}NC.

The entries of P^{-1} are in K, and by the no new constants hypothesis the same holds for the entries of C. The result follows.
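The matrix identity driving this proof is easy to check symbolically. The sketch below (the matrices B and P and the extension are our own illustration, not taken from the text) takes the fundamental matrix M of x' + Bx = 0, gauges it by an invertible P over Q(x), and confirms that PM solves the transformed equation with A = PBP^{-1} - P'P^{-1}:

```python
import sympy as sp

x = sp.symbols('x')

# x' + Bx = 0 with B = diag(-1, -2): fundamental matrix M = diag(e^x, e^(2x)),
# whose entries lie in the extension L = C(x)(e^x, e^(2x)).
B = sp.diag(-1, -2)
M = sp.diag(sp.exp(x), sp.exp(2*x))

# A change of basis P in GL(2, C(x)) (a made-up example) and the gauged matrix
P = sp.Matrix([[1, x], [0, 1]])
A = P*B*P.inv() - P.diff(x)*P.inv()

Mhat = P*M
residual = Mhat.diff(x) + A*Mhat
print(sp.simplify(residual))  # the zero matrix: P*M solves (ii)
```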


More information

MATH 240 Spring, Chapter 1: Linear Equations and Matrices

MATH 240 Spring, Chapter 1: Linear Equations and Matrices MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear

More information

Homework 4 Solutions

Homework 4 Solutions Homework 4 Solutions November 11, 2016 You were asked to do problems 3,4,7,9,10 in Chapter 7 of Lang. Problem 3. Let A be an integral domain, integrally closed in its field of fractions K. Let L be a finite

More information

Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35

Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35 Honors Algebra 4, MATH 371 Winter 2010 Assignment 4 Due Wednesday, February 17 at 08:35 1. Let R be a commutative ring with 1 0. (a) Prove that the nilradical of R is equal to the intersection of the prime

More information

NOTES (1) FOR MATH 375, FALL 2012

NOTES (1) FOR MATH 375, FALL 2012 NOTES 1) FOR MATH 375, FALL 2012 1 Vector Spaces 11 Axioms Linear algebra grows out of the problem of solving simultaneous systems of linear equations such as 3x + 2y = 5, 111) x 3y = 9, or 2x + 3y z =

More information

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form

Chapter 5. Linear Algebra. A linear (algebraic) equation in. unknowns, x 1, x 2,..., x n, is. an equation of the form Chapter 5. Linear Algebra A linear (algebraic) equation in n unknowns, x 1, x 2,..., x n, is an equation of the form a 1 x 1 + a 2 x 2 + + a n x n = b where a 1, a 2,..., a n and b are real numbers. 1

More information

INTRODUCTION TO LIE ALGEBRAS. LECTURE 2.

INTRODUCTION TO LIE ALGEBRAS. LECTURE 2. INTRODUCTION TO LIE ALGEBRAS. LECTURE 2. 2. More examples. Ideals. Direct products. 2.1. More examples. 2.1.1. Let k = R, L = R 3. Define [x, y] = x y the cross-product. Recall that the latter is defined

More information

Some notes on linear algebra

Some notes on linear algebra Some notes on linear algebra Throughout these notes, k denotes a field (often called the scalars in this context). Recall that this means that there are two binary operations on k, denoted + and, that

More information

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory.

GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. GRE Subject test preparation Spring 2016 Topic: Abstract Algebra, Linear Algebra, Number Theory. Linear Algebra Standard matrix manipulation to compute the kernel, intersection of subspaces, column spaces,

More information

Representations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III

Representations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III Representations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III Lie algebras. Let K be again an algebraically closed field. For the moment let G be an arbitrary algebraic group

More information

FILTERED RINGS AND MODULES. GRADINGS AND COMPLETIONS.

FILTERED RINGS AND MODULES. GRADINGS AND COMPLETIONS. FILTERED RINGS AND MODULES. GRADINGS AND COMPLETIONS. Let A be a ring, for simplicity assumed commutative. A filtering, or filtration, of an A module M means a descending sequence of submodules M = M 0

More information

10. Smooth Varieties. 82 Andreas Gathmann

10. Smooth Varieties. 82 Andreas Gathmann 82 Andreas Gathmann 10. Smooth Varieties Let a be a point on a variety X. In the last chapter we have introduced the tangent cone C a X as a way to study X locally around a (see Construction 9.20). It

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

Solution to Homework 1

Solution to Homework 1 Solution to Homework Sec 2 (a) Yes It is condition (VS 3) (b) No If x, y are both zero vectors Then by condition (VS 3) x = x + y = y (c) No Let e be the zero vector We have e = 2e (d) No It will be false

More information

1 Linear transformations; the basics

1 Linear transformations; the basics Linear Algebra Fall 2013 Linear Transformations 1 Linear transformations; the basics Definition 1 Let V, W be vector spaces over the same field F. A linear transformation (also known as linear map, or

More information

implies that if we fix a basis v of V and let M and M be the associated invertible symmetric matrices computing, and, then M = (L L)M and the

implies that if we fix a basis v of V and let M and M be the associated invertible symmetric matrices computing, and, then M = (L L)M and the Math 395. Geometric approach to signature For the amusement of the reader who knows a tiny bit about groups (enough to know the meaning of a transitive group action on a set), we now provide an alternative

More information

Cover Page. The handle holds various files of this Leiden University dissertation

Cover Page. The handle   holds various files of this Leiden University dissertation Cover Page The handle http://hdl.handle.net/1887/57796 holds various files of this Leiden University dissertation Author: Mirandola, Diego Title: On products of linear error correcting codes Date: 2017-12-06

More information

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p.

j=1 x j p, if 1 p <, x i ξ : x i < ξ} 0 as p. LINEAR ALGEBRA Fall 203 The final exam Almost all of the problems solved Exercise Let (V, ) be a normed vector space. Prove x y x y for all x, y V. Everybody knows how to do this! Exercise 2 If V is a

More information

Chapter 2: Linear Independence and Bases

Chapter 2: Linear Independence and Bases MATH20300: Linear Algebra 2 (2016 Chapter 2: Linear Independence and Bases 1 Linear Combinations and Spans Example 11 Consider the vector v (1, 1 R 2 What is the smallest subspace of (the real vector space

More information

Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu

Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu Eigenwerte, Eigenvektoren, Diagonalisierung Vorlesungsnotizen zu Mathematische Methoden der Physik I J. Mark Heinzle Gravitational Physics, Faculty of Physics University of Vienna Version 5/5/2 2 version

More information

MAT 242 CHAPTER 4: SUBSPACES OF R n

MAT 242 CHAPTER 4: SUBSPACES OF R n MAT 242 CHAPTER 4: SUBSPACES OF R n JOHN QUIGG 1. Subspaces Recall that R n is the set of n 1 matrices, also called vectors, and satisfies the following properties: x + y = y + x x + (y + z) = (x + y)

More information

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b

2 so Q[ 2] is closed under both additive and multiplicative inverses. a 2 2b 2 + b . FINITE-DIMENSIONAL VECTOR SPACES.. Fields By now you ll have acquired a fair knowledge of matrices. These are a concrete embodiment of something rather more abstract. Sometimes it is easier to use matrices,

More information

Infinite-Dimensional Triangularization

Infinite-Dimensional Triangularization Infinite-Dimensional Triangularization Zachary Mesyan March 11, 2018 Abstract The goal of this paper is to generalize the theory of triangularizing matrices to linear transformations of an arbitrary vector

More information

Topics in linear algebra

Topics in linear algebra Chapter 6 Topics in linear algebra 6.1 Change of basis I want to remind you of one of the basic ideas in linear algebra: change of basis. Let F be a field, V and W be finite dimensional vector spaces over

More information

Existence of a Picard-Vessiot extension

Existence of a Picard-Vessiot extension Existence of a Picard-Vessiot extension Jerald Kovacic The City College of CUNY jkovacic@verizon.net http:/mysite.verizon.net/jkovacic November 3, 2006 1 Throughout, K is an ordinary -field of characteristic

More information

Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu

Diagonalisierung. Eigenwerte, Eigenvektoren, Mathematische Methoden der Physik I. Vorlesungsnotizen zu Eigenwerte, Eigenvektoren, Diagonalisierung Vorlesungsnotizen zu Mathematische Methoden der Physik I J. Mark Heinzle Gravitational Physics, Faculty of Physics University of Vienna Version /6/29 2 version

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8.

Linear Algebra M1 - FIB. Contents: 5. Matrices, systems of linear equations and determinants 6. Vector space 7. Linear maps 8. Linear Algebra M1 - FIB Contents: 5 Matrices, systems of linear equations and determinants 6 Vector space 7 Linear maps 8 Diagonalization Anna de Mier Montserrat Maureso Dept Matemàtica Aplicada II Translation:

More information

LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD. To Professor Wolfgang Schmidt on his 75th birthday

LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD. To Professor Wolfgang Schmidt on his 75th birthday LINEAR EQUATIONS WITH UNKNOWNS FROM A MULTIPLICATIVE GROUP IN A FUNCTION FIELD JAN-HENDRIK EVERTSE AND UMBERTO ZANNIER To Professor Wolfgang Schmidt on his 75th birthday 1. Introduction Let K be a field

More information

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 1. I. Foundational material

SOLUTIONS TO EXERCISES FOR. MATHEMATICS 205A Part 1. I. Foundational material SOLUTIONS TO EXERCISES FOR MATHEMATICS 205A Part 1 Fall 2014 I. Foundational material I.1 : Basic set theory Problems from Munkres, 9, p. 64 2. (a (c For each of the first three parts, choose a 1 1 correspondence

More information

55 Separable Extensions

55 Separable Extensions 55 Separable Extensions In 54, we established the foundations of Galois theory, but we have no handy criterion for determining whether a given field extension is Galois or not. Even in the quite simple

More information

Algebra Workshops 10 and 11

Algebra Workshops 10 and 11 Algebra Workshops 1 and 11 Suggestion: For Workshop 1 please do questions 2,3 and 14. For the other questions, it s best to wait till the material is covered in lectures. Bilinear and Quadratic Forms on

More information

The Cyclic Decomposition of a Nilpotent Operator

The Cyclic Decomposition of a Nilpotent Operator The Cyclic Decomposition of a Nilpotent Operator 1 Introduction. J.H. Shapiro Suppose T is a linear transformation on a vector space V. Recall Exercise #3 of Chapter 8 of our text, which we restate here

More information

Structure of rings. Chapter Algebras

Structure of rings. Chapter Algebras Chapter 5 Structure of rings 5.1 Algebras It is time to introduce the notion of an algebra over a commutative ring. So let R be a commutative ring. An R-algebra is a ring A (unital as always) together

More information

2. Intersection Multiplicities

2. Intersection Multiplicities 2. Intersection Multiplicities 11 2. Intersection Multiplicities Let us start our study of curves by introducing the concept of intersection multiplicity, which will be central throughout these notes.

More information

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality

Hilbert spaces. 1. Cauchy-Schwarz-Bunyakowsky inequality (October 29, 2016) Hilbert spaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ [This document is http://www.math.umn.edu/ garrett/m/fun/notes 2016-17/03 hsp.pdf] Hilbert spaces are

More information

SOLUTION SETS OF RECURRENCE RELATIONS

SOLUTION SETS OF RECURRENCE RELATIONS SOLUTION SETS OF RECURRENCE RELATIONS SEBASTIAN BOZLEE UNIVERSITY OF COLORADO AT BOULDER The first section of these notes describes general solutions to linear, constant-coefficient, homogeneous recurrence

More information

Liouville s Theorem on Integration in Terms of Elementary Functions

Liouville s Theorem on Integration in Terms of Elementary Functions Liouville s Theorem on Integration in Terms of Elementary Functions R.C. Churchill Prepared for the Kolchin Seminar on Differential Algebra (KSDA) Department of Mathematics, Hunter College, CUNY October,

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

18.S34 linear algebra problems (2007)

18.S34 linear algebra problems (2007) 18.S34 linear algebra problems (2007) Useful ideas for evaluating determinants 1. Row reduction, expanding by minors, or combinations thereof; sometimes these are useful in combination with an induction

More information

Topological vectorspaces

Topological vectorspaces (July 25, 2011) Topological vectorspaces Paul Garrett garrett@math.umn.edu http://www.math.umn.edu/ garrett/ Natural non-fréchet spaces Topological vector spaces Quotients and linear maps More topological

More information

ALGEBRAIC GEOMETRY COURSE NOTES, LECTURE 2: HILBERT S NULLSTELLENSATZ.

ALGEBRAIC GEOMETRY COURSE NOTES, LECTURE 2: HILBERT S NULLSTELLENSATZ. ALGEBRAIC GEOMETRY COURSE NOTES, LECTURE 2: HILBERT S NULLSTELLENSATZ. ANDREW SALCH 1. Hilbert s Nullstellensatz. The last lecture left off with the claim that, if J k[x 1,..., x n ] is an ideal, then

More information

18.312: Algebraic Combinatorics Lionel Levine. Lecture 22. Smith normal form of an integer matrix (linear algebra over Z).

18.312: Algebraic Combinatorics Lionel Levine. Lecture 22. Smith normal form of an integer matrix (linear algebra over Z). 18.312: Algebraic Combinatorics Lionel Levine Lecture date: May 3, 2011 Lecture 22 Notes by: Lou Odette This lecture: Smith normal form of an integer matrix (linear algebra over Z). 1 Review of Abelian

More information

4.6 Bases and Dimension

4.6 Bases and Dimension 46 Bases and Dimension 281 40 (a) Show that {1,x,x 2,x 3 } is linearly independent on every interval (b) If f k (x) = x k for k = 0, 1,,n, show that {f 0,f 1,,f n } is linearly independent on every interval

More information

EIGENVALUES AND EIGENVECTORS 3

EIGENVALUES AND EIGENVECTORS 3 EIGENVALUES AND EIGENVECTORS 3 1. Motivation 1.1. Diagonal matrices. Perhaps the simplest type of linear transformations are those whose matrix is diagonal (in some basis). Consider for example the matrices

More information

Math 113 Practice Final Solutions

Math 113 Practice Final Solutions Math 113 Practice Final Solutions 1 There are 9 problems; attempt all of them. Problem 9 (i) is a regular problem, but 9(ii)-(iii) are bonus problems, and they are not part of your regular score. So do

More information

Groups. 3.1 Definition of a Group. Introduction. Definition 3.1 Group

Groups. 3.1 Definition of a Group. Introduction. Definition 3.1 Group C H A P T E R t h r e E Groups Introduction Some of the standard topics in elementary group theory are treated in this chapter: subgroups, cyclic groups, isomorphisms, and homomorphisms. In the development

More information

(6) For any finite dimensional real inner product space V there is an involutive automorphism α : C (V) C (V) such that α(v) = v for all v V.

(6) For any finite dimensional real inner product space V there is an involutive automorphism α : C (V) C (V) such that α(v) = v for all v V. 1 Clifford algebras and Lie triple systems Section 1 Preliminaries Good general references for this section are [LM, chapter 1] and [FH, pp. 299-315]. We are especially indebted to D. Shapiro [S2] who

More information

October 25, 2013 INNER PRODUCT SPACES

October 25, 2013 INNER PRODUCT SPACES October 25, 2013 INNER PRODUCT SPACES RODICA D. COSTIN Contents 1. Inner product 2 1.1. Inner product 2 1.2. Inner product spaces 4 2. Orthogonal bases 5 2.1. Existence of an orthogonal basis 7 2.2. Orthogonal

More information

where m is the maximal ideal of O X,p. Note that m/m 2 is a vector space. Suppose that we are given a morphism

where m is the maximal ideal of O X,p. Note that m/m 2 is a vector space. Suppose that we are given a morphism 8. Smoothness and the Zariski tangent space We want to give an algebraic notion of the tangent space. In differential geometry, tangent vectors are equivalence classes of maps of intervals in R into the

More information

MAT 445/ INTRODUCTION TO REPRESENTATION THEORY

MAT 445/ INTRODUCTION TO REPRESENTATION THEORY MAT 445/1196 - INTRODUCTION TO REPRESENTATION THEORY CHAPTER 1 Representation Theory of Groups - Algebraic Foundations 1.1 Basic definitions, Schur s Lemma 1.2 Tensor products 1.3 Unitary representations

More information