Numerische Mathematik


Numer. Math.

Generalized polar coordinates on Lie groups and numerical integrators

Stein Krogstad · Hans Z. Munthe-Kaas · Antonella Zanna

Received: 18 August 2008 / Revised: 12 May 2009
© Springer-Verlag 2009

Abstract  Motivated by developments in numerical Lie group integrators, we introduce a family of local coordinates on Lie groups denoted generalized polar coordinates. Fast algorithms are derived for the computation of the coordinate maps, their tangent maps and the inverse tangent maps. In particular we discuss algorithms for all the classical matrix Lie groups and optimal complexity integrators for n-spheres.

Mathematics Subject Classification (2000)  22E15 General properties and structure of real Lie groups · 17B40 Automorphisms, derivations, other operators · 57S25 Groups acting on specific manifolds

1 Introduction

Lie group methods for integration of differential equations have been an active area of research for more than a decade [3,10,21,26,28,30,32,33]. Such methods are constructed by composing Lie group actions on a manifold. Due to their accuracy and structure preserving properties, these methods have found applications in many areas of computational science, such as inverse eigenvalue problems [9], image processing [7], NMR spectroscopy [18], computational mechanics [24], neural learning [4], independent component analysis [35], micromagnetics [25], Sturm–Liouville problems [16], Evans functions [27], non-linear dynamical systems [38] and image registration [23].

S. Krogstad, SINTEF ICT, Forskningsveien 1, Oslo, Norway. Stein.Krogstad@sintef.no
H. Z. Munthe-Kaas · A. Zanna, Matematisk Institutt, University of Bergen, Johannes Brunsgt 12, 5008 Bergen, Norway. Antonella.Zanna@math.uib.no, Hans.Munthe-Kaas@math.uib.no

In many applications the efficient computation of coordinate maps on Lie groups is an important issue. A family of numerical integrators for Lie groups, based on local coordinates, was presented in [14] (see Algorithm 1). These methods are built on two fundamental components: (i) a local coordinate map Ψ from a Lie algebra to a Lie group; and (ii) the inverse tangent map of Ψ. These two maps will be the focus of the present paper. Analytic coordinate maps include the exponential map [30], the Cayley map, and more generally diagonal Padé approximants to the exponential. It is well known that for certain groups (e.g. SL(n)), the only analytic map from the algebra to the group is the exponential mapping [15]. Matrix splitting techniques yield non-analytic coordinate maps. Among these, the coordinates of the second kind, studied by Owren and Marthinsen [34], show excellent computational cost for certain groups with semisimple Lie algebras over C. Their theory is based on the concept of admissible ordered bases (AOB) and, according to the authors, should also be able to handle particular real Lie algebras over R. However, the theory does not apply, for instance, to the case of real skew-symmetric matrices (corresponding to the real orthogonal group, one of the most recurrent in applications), in the sense that the natural basis of skew-symmetric matrices, E_ij = e_i e_j^T − e_j e_i^T, does not give rise to an AOB. In this paper another family of coordinates based on matrix splittings is studied. By recursively applying generalized polar decompositions of the Lie algebra [31,37], we obtain coordinates on all the classical matrix groups, where both the coordinate maps and the (forward and inverse) tangent maps can be computed efficiently. Let us briefly review the generalized polar coordinates as defined in Sect. 3.
Consider a nested sequence of Lie algebras g = g_0 ⊇ g_1 ⊇ ··· ⊇ g_k derived from a sequence of linear maps σ_i : g_i → g_i, i = 0,...,k−1, such that the following relations hold:

    σ_i^2 = I
    σ_i([X, Y]) = [σ_i(X), σ_i(Y)]
    Range((1/2)(I + σ_i)) = g_{i+1}.

An arbitrary element Z ∈ g is decomposed as

    Z = P_0 + P_1 + ··· + P_{k−1} + K_k,    (1)

where the components are computed as follows. Let K_0 = Z, and for i = 0,...,k−1 let

    K_{i+1} = (1/2)(I + σ_i)K_i ∈ g_{i+1},
    P_i = K_i − K_{i+1} = (1/2)(I − σ_i)K_i.
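The recursion above is straightforward to prototype. The following illustrative Python sketch (ours, not from the paper) instantiates the σ_i as the rank-1 inner automorphisms σ_i(Z) = S_i Z S_i with S_i = I − 2 e_i e_i^T discussed later in Sect. 4, for which the final component K is diagonal:

```python
import numpy as np

def gpd_split(Z, sigmas):
    """Decomposition (1): Z = P_0 + ... + P_{k-1} + K_k, with
    K_{i+1} = (I + sigma_i)/2 applied to K_i, and P_i = K_i - K_{i+1}."""
    Ps, K = [], Z
    for sigma in sigmas:
        K_next = 0.5 * (K + sigma(K))   # (1/2)(I + sigma_i) K_i
        Ps.append(K - K_next)           # (1/2)(I - sigma_i) K_i
        K = K_next
    return Ps, K

# Rank-1 inner automorphisms sigma_i(Z) = S_i Z S_i, S_i = I - 2 e_i e_i^T
n = 4
rng = np.random.default_rng(0)
Z = rng.standard_normal((n, n))

def make_sigma(i):
    S = np.eye(n); S[i, i] = -1.0
    return lambda X: S @ X @ S

Ps, K = gpd_split(Z, [make_sigma(i) for i in range(n - 1)])
assert np.allclose(sum(Ps) + K, Z)          # components sum back to Z
assert np.allclose(K, np.diag(np.diag(K)))  # final algebra is diagonal
```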

The coordinate maps to be studied in this paper are of the form

    Ψ(Z) = exp(P_0) exp(P_1) ··· exp(P_{k−1}) exp(K_k).    (2)

It is assumed that the computations in the last Lie algebra g_k can be done fast, either because of low dimensionality or because it has a special structure (diagonal or block diagonal matrices). The computations involving the matrices P_i and the tangent maps rely on the structure of 2-cyclic matrices, to be discussed in the sequel.

The paper is structured as follows. In Sect. 2 we review basic results of linear algebra, matrix Lie theory and numerical Lie group integrators. Section 3 introduces the generalized polar coordinates and develops the theory of their tangent maps, while in Sect. 4 the classical matrix groups are discussed in detail. These two sections are the core of the paper, in which the theory for the novel coordinates and for their tangent and inverse tangent maps is studied in detail and useful computational formulas are derived. In Sect. 5 we consider numerical examples, and finally, in Sect. 6 we present some short concluding remarks.

2 Preliminaries

2.1 Coordinates in Lie group integrators

In order to motivate the theory in the sequel, we briefly review a class of numerical integrators introduced in [29,30]. We present the theory in the concrete context of matrix Lie groups; the generalization to general Lie groups is discussed in [20]. Let g denote a matrix Lie algebra, i.e. a family of square matrices closed under linear combinations and matrix commutators, [A, B] = AB − BA. Let G denote the Lie group of g, defined as the set of matrices obtained by taking matrix exponentials of elements of g and products of these exponentials. The Lie group is closed under matrix products and matrix inversions. Let M denote a linear space and · : G × M → M an action of G on M, defined as a map satisfying g·(h·y) = (gh)·y for all g, h ∈ G, y ∈ M. The action induces a product · : g × M → TM, where TM denotes the tangents of M, via

    V·y = d/dt|_{t=0} exp(tV)·y.
We consider differential equations evolving on M, written in the form [30]

    y'(t) = f(y(t))·y(t),    (3)

where y(t) ∈ M and f : M → g.

Let Ψ : g → G denote a smooth mapping from a Lie algebra into a Lie group such that Ψ(0) = e, where e is the identity in G. Let dΨ_Z denote the right trivialized tangent of Ψ at a point Z ∈ g, i.e.

    d/ds|_{s=0} Ψ(Z + s δZ) = dΨ_Z(δZ) Ψ(Z).

It follows easily that dΨ_Z is a linear map from g to itself. We assume that dΨ_0 = I, thus dΨ_Z is invertible for Z sufficiently close to 0. The map Ψ defines a diffeomorphism between a neighbourhood of 0 ∈ g and a neighbourhood of e ∈ G. Via the translations on G, we may extend Ψ to an atlas of coordinate charts covering the whole of G. Using the action of G on M, we may also via Ψ obtain a coordinate atlas on M. For numerical algorithms based on such coordinates, it is often essential that we can compute both Ψ and the inverse tangent map dΨ_Z^{-1} efficiently. As an example, consider the class of numerical Lie group integrators for (3), introduced and developed in [14,30]. Given y_n ∈ M, a timestep h ∈ R, and the coefficients {a_{i,j}}_{i,j=1}^s and {b_j}_{j=1}^s of an s-stage Runge–Kutta method, we step from y_n ≈ y(t_n) to y_{n+1} ≈ y(t_n + h) in the following manner:

Algorithm 1
    for i = 1,...,s
        U_i = sum_{j=1}^s a_{i,j} K̃_j
        K_i = h f(Ψ(U_i)·y_n)
        K̃_i = dΨ_{U_i}^{-1}(K_i)
    end
    y_{n+1} = Ψ(sum_{j=1}^s b_j K̃_j)·y_n

Various coordinate maps have been proposed and studied in the literature. The exponential mapping is used in [30] and the Cayley map in [26]. Owren and Marthinsen [34] develop the theory of coordinates of the second kind. Given a basis {V_i} for a d-dimensional Lie algebra g, an element Z = sum_{i=1}^d z_i V_i maps to

    Ψ(Z) = exp(z_1 V_1) exp(z_2 V_2) ··· exp(z_d V_d).

Owren and Marthinsen introduce special classes of nice bases, so-called AOBs, and show that for such bases the map Ψ(Z) and dΨ_Z^{-1} can be computed in O(d^{3/2}) operations, d = dim g. Using the representation theory of semisimple Lie algebras, they show that for certain semisimple Lie algebras, AOBs can be obtained from Chevalley bases.
In the cases where AOBs are found, they report favourable numerical experiments indicating that the resulting numerical algorithms are between two and six times faster than corresponding algorithms based on using Ψ(Z) = exp(Z); see also [5,6]. Unfortunately, it is not known whether AOBs can be found for all classical matrix Lie groups. In particular there are still open problems with several of the real matrix groups, such as the real orthogonal group SO(n, R) with algebra so(n, R), consisting of real skew-symmetric matrices.
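To make Algorithm 1 concrete, the following illustrative Python sketch (ours, not from the paper) instantiates it with s = 1, a_{1,1} = 0, b_1 = 1 and Ψ = exp, which gives the Lie–Euler method: then U_1 = 0, so dΨ_{U_1}^{-1} = I and no tangent-map correction is needed. The action is matrix–vector multiplication; since the chosen f maps into so(3), the numerical flow consists of orthogonal matrices and preserves the norm of y exactly.

```python
import numpy as np
from scipy.linalg import expm

def lie_euler(f, y0, h, steps):
    """One-stage instance of Algorithm 1 (a_{1,1} = 0, b_1 = 1, Psi = expm)."""
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y = expm(h * f(y)) @ y     # y_{n+1} = Psi(h f(y_n)) . y_n
    return y

# Example vector field: f(y) is skew-symmetric, so exp(h f(y)) is orthogonal
def f(y):
    return np.array([[0.0, y[2], -y[1]],
                     [-y[2], 0.0, y[0]],
                     [y[1], -y[0], 0.0]])

y = lie_euler(f, [1.0, 0.0, 0.0], 0.01, 100)
assert abs(np.linalg.norm(y) - 1.0) < 1e-10   # evolution stays on the sphere
```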

In this paper we will introduce coordinates based on generalized polar decompositions of g, and we will show that this framework leads to fast computable coordinates for many Lie algebras, among these all the classical matrix Lie algebras. The basis for our approach is a collection of results from the theory of symmetric spaces, as presented in [31]. We now review some linear algebra and basic elements of the theory of symmetric spaces needed for the present purpose.

2.2 Projections, involutions and 2-cyclic matrices

By a projection matrix on a vector space V we mean a linear map Π : V → V such that Π^2 = Π. Unless explicitly stated we will not require projections to be orthogonal (i.e. Π^T = Π). By an involutive matrix we mean a linear map S : V → V such that S^2 = I. These two concepts are naturally linked by the following lemma, whose proof is trivial.

Lemma 1  To any projection matrix Π there corresponds an involutive matrix S = I − 2Π. Conversely, to any involutive S there correspond two projection matrices Π^-_S and Π^+_S defined by

    Π^-_S = (1/2)(I − S)    (4)
    Π^+_S = (1/2)(I + S).    (5)

These satisfy the following relations:

    Π^-_S + Π^+_S = I    (6)
    Π^-_S Π^+_S = Π^+_S Π^-_S = 0    (7)
    S Π^-_S = −Π^-_S    (8)
    S Π^+_S = Π^+_S.    (9)

Thus V splits in the direct sum of two subspaces, the ±1 eigenspaces of S, where Π^±_S are the projections onto these. Note that if S is involutive then so is −S, the latter corresponding to the opposite identification of the +1 and −1 eigenspaces. Thus, at the moment there seems to be no fundamental difference between these two subspaces. We will, however, later return to involutive automorphisms on Lie algebras, where the +1 and −1 eigenspaces play fundamentally different roles.

Definition 1  A matrix K : V → V is block diagonal with respect to an involution S on V if

    SKS = K,    (10)

and a matrix P : V → V is 2-cyclic with respect to S if

    SPS = −P.    (11)

Any matrix M can be split in a sum of a 2-cyclic P and a block diagonal K,

    M = P + K,    (12)

where

    P = (1/2)(M − SMS)    (13)
    K = (1/2)(M + SMS).    (14)

To understand the structure of these matrices, it is convenient to represent linear operators M : V → V in 2×2 block partitioned form in the following manner. The matrix M splits naturally in four parts:

    M = (Π^-_S + Π^+_S) M (Π^-_S + Π^+_S)
      = Π^-_S M Π^-_S + Π^-_S M Π^+_S + Π^+_S M Π^-_S + Π^+_S M Π^+_S.    (15)

The partitioning of M with respect to S is defined by dividing V in an upper block corresponding to the range of Π^-_S and a lower block corresponding to the range of Π^+_S. Thus

    M = [ M_{−−}  M_{−+} ]
        [ M_{+−}  M_{++} ],    (16)

where M_{ij} = Π^i_S M Π^j_S restricted to the appropriate subspaces. In partitioned form S becomes

    S = [ −I  0 ]
        [  0  I ].

Thus, K consists of the diagonal blocks of M, while P consists of the off-diagonal blocks. Efficient computation of analytic functions of 2-cyclic matrices will be crucial to our algorithms.

Theorem 1  Let SPS = −P, where S^2 = I. Let Δ = P^2 Π^-_S. For an analytic function ψ(x) we have

    ψ(P) = ψ(0)I + ψ_1(Δ)Π^-_S P + P ψ_1(Δ)Π^-_S + P ψ_2(Δ)Π^-_S P + ψ_2(Δ)Δ,    (17)

where

    ψ_1(x) = (ψ(√x) − ψ(−√x)) / (2√x)    (18)
    ψ_2(x) = (ψ(√x) + ψ(−√x) − 2ψ(0)) / (2x).    (19)

Proof  Involutivity of S implies SP = −PS, hence Π^±_S P = P Π^∓_S. By induction it is now easy to verify that for any k ≥ 0 we have

    P^{2k+1} = Δ^k Π^-_S P + P Δ^k Π^-_S,
    P^{2k+2} = Δ^{k+1} + P Δ^k Π^-_S P.

Letting ψ(x) = sum_{i≥0} α_i x^i, we find from these formulae that

    ψ(P) = α_0 I + sum_{k≥0} α_{2k+1} (Δ^k Π^-_S P + P Δ^k Π^-_S)
                 + sum_{k≥0} α_{2k+2} (Δ^{k+1} + P Δ^k Π^-_S P).

We define ψ_1(x) = sum_{k≥0} α_{2k+1} x^k and ψ_2(x) = sum_{k≥0} α_{2k+2} x^k, and derive (18) and (19) by straightforward manipulation of the series for ψ(x).

Note that if P is partitioned with respect to S as

    P = [ 0  B ]
        [ A  0 ],    (20)

then

    Δ = P^2 Π^-_S = [ BA  0 ]
                    [ 0   0 ].    (21)

Thus

    ψ_i(Δ) = [ ψ_i(BA)  0       ]
             [ 0        ψ_i(0)I ].

Since BA is a p×p matrix, p = Rank(Π^-_S), we are mainly interested in cases where p is small, and the case p = 1 is particularly important.

Corollary 1  If p = 1 then Δ = θ Π^-_S for a scalar θ, and

    ψ(P) = ψ(0)I + ψ_1(θ)P + ψ_2(θ)P^2,    (22)

where ψ_1 and ψ_2 are given in (18)–(19).
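Corollary 1 (proved below) is easy to verify numerically. For ψ = exp, (18)–(19) give ψ_1(x) = sinh(√x)/√x and ψ_2(x) = (cosh(√x) − 1)/x, so (22) can be checked against a general-purpose matrix exponential; an illustrative sketch (ours) with p = 1 and S = diag(−1, 1,...,1):

```python
import numpy as np
from scipy.linalg import expm

# Rank-1 2-cyclic matrix: P carries the off-diagonal first row and column,
# so SPS = -P with S = diag(-1, 1, ..., 1) and p = Rank(Pi^-_S) = 1.
n = 5
rng = np.random.default_rng(2)
a, b = rng.standard_normal(n - 1), rng.standard_normal(n - 1)
P = np.zeros((n, n))
P[0, 1:], P[1:, 0] = b, a
S = np.eye(n); S[0, 0] = -1.0
assert np.allclose(S @ P @ S, -P)        # P is 2-cyclic, cf. (11)

theta = b @ a                            # Delta = theta * Pi^-_S, cf. (21)
s = np.sqrt(complex(theta))              # complex sqrt handles theta < 0
psi1 = np.real(np.sinh(s) / s)                        # psi_1 for psi = exp
psi2 = np.real((np.cosh(s) - 1.0) / complex(theta))   # psi_2 for psi = exp
# Corollary 1 with psi = exp (psi(0) = 1):
assert np.allclose(np.eye(n) + psi1 * P + psi2 * (P @ P), expm(P))
```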

Proof  If Δ = θ Π^-_S then

    ψ_1(Δ)Π^-_S P + P ψ_1(Δ)Π^-_S = ψ_1(θ)(Π^-_S P + P Π^-_S) = ψ_1(θ)P.

Similarly,

    P ψ_2(Δ)Π^-_S P + ψ_2(Δ)Δ = ψ_2(θ)(P Π^-_S P + Π^-_S P^2) = ψ_2(θ)P^2.

2.3 Involutive automorphisms on Lie algebras

An involutive automorphism on a Lie algebra g is an involutive map σ : g → g such that

    σ([U, V]) = [σ(U), σ(V)].    (23)

Corresponding to the automorphism σ on g there is an automorphism on the Lie group G, which plays an important role in the theory of symmetric spaces. In this paper we will, however, not need this automorphism on G, and we omit it from the discussion. Note that if σ is an automorphism, then −σ is not an automorphism. Thus in this case we can clearly distinguish between the +1 and −1 eigenspaces of σ; they play fundamentally different roles in the theory. Let Π^±_σ be the projections on the ±1 eigenspaces of σ, given in (4)–(5). Denote these spaces by

    p = Range((1/2)(I − σ)) = { P ∈ g : σ(P) = −P }
    k = Range((1/2)(I + σ)) = { K ∈ g : σ(K) = K }.

The subspace k is a Lie subalgebra of g, while p forms a Lie triple system, meaning that it is closed under double brackets, [P_1, [P_2, P_3]] ∈ p for all P_1, P_2, P_3 ∈ p. More generally, the spaces p and k satisfy the following odd–even parity rules under brackets (compare with the multiplication table of −1 and +1):

    [k, k] ⊆ k    (24)
    [p, k] ⊆ p    (25)
    [p, p] ⊆ k.    (26)

The decomposition we have introduced is thus closely related to the Cartan decomposition, see [17]. However, the Cartan decomposition requires σ to be a Cartan involution, whose definition involves non-degeneracy of a bilinear form derived from the

Cartan–Killing form. For the applications in this paper the Cartan property of σ is not needed, thus we have considerable freedom in choosing a suitable σ. Corresponding to the additive splitting g = p ⊕ k there exists a multiplicative splitting of G. Any element g ∈ G sufficiently close to I can be written as a product g = exp(P) exp(K), where P ∈ p and K ∈ k. The element exp(P) ∈ G belongs to a symmetric space, while exp(K) belongs to a Lie subgroup of G. For example, if G = GL(n) and σ(Z) = −Z^T, then the splitting g = p + k corresponds to writing a general matrix as the sum of a symmetric and a skew-symmetric matrix. The corresponding product splitting of G is the polar decomposition, where a matrix is written as a product of a symmetric positive definite matrix and an orthogonal matrix. In general, if σ is any involutive automorphism, we will refer to the decomposition as a generalized polar decomposition; see [31] for more details.

3 Generalized polar coordinates (GPC)

The coordinates we are studying in this paper are recursively defined as follows.

Definition 2  GPC denotes any coordinate map from a Lie algebra to a Lie group obtained by the following recursion:
– The exponential map exp : g → G is GPC on G.
– Let σ be an involutive automorphism on g and let Π^±_σ be defined by (4)–(5). Let k = Range(Π^+_σ) ⊆ g and G^σ ⊆ G be the corresponding subalgebra and subgroup. If Φ : k → G^σ is GPC on G^σ, then the map Ψ : g → G defined as

    Ψ(Z) = exp(Π^-_σ Z) Φ(Π^+_σ Z)    (27)

is GPC on G.

The coordinates are completely determined by a sequence of involutive automorphisms {σ_i}_{i=0}^{k−1}, giving rise to a sequence of subalgebras g = g_0 ⊇ g_1 ⊇ ··· ⊇ g_k, where g_{i+1} = Range(Π^+_{σ_i}) and σ_i : g_i → g_i. By unfolding the recursion, we obtain the equivalent form of the coordinate map given in (1)–(2). Note that we let the −1 eigenspace appear on the left and the +1 eigenspace on the right in (27). This is important if we want to compute right trivialized tangents.
If we instead want to work with left trivializations, we must reverse the definition and let the +1 eigenspace appear on the left. As a trivial example consider G = C^* (the multiplicative group of nonzero complex numbers), let σ(z) = z̄ (complex conjugation) and let Φ(K) = exp(K). If Z = X + iY then Π^-_σ Z = iY, Π^+_σ Z = X and Ψ(Z) = exp(iY) exp(X) = r exp(iθ), where r = exp(X) and θ = Y. This yields (classical) polar coordinates on C^*.

3.1 The coordinate map

To obtain efficient algorithms for computing the coordinate map, we assume that σ is a low rank inner automorphism, defined as follows.

Definition 3  An automorphism σ : g → g of the form

    σ(Z) = SZS, where S^2 = I,    (28)

is called an inner automorphism.^1 By the rank of an inner automorphism we mean p = Rank(Π^-_S).

If σ is given by (28) then P = Π^-_σ(Z) = (1/2)(Z − SZS) is 2-cyclic with respect to S, i.e. SPS = −P. Theorem 1 yields the following formula for computing exp(P).

Theorem 2  Let SPS = −P, where S^2 = I, and let Δ = P^2 Π^-_S. Then

    exp(P) = I + ψ_1(Δ)Π^-_S P + P ψ_1(Δ)Π^-_S + P ψ_2(Δ)Π^-_S P + ψ_2(Δ)Δ,    (29)

where

    ψ_1(x) = sinh(√x)/√x = sin(√(−x))/√(−x)  for x ≠ 0,  ψ_1(0) = 1,    (30)
    ψ_2(x) = 2 sinh^2(√x/2)/x = 2 sin^2(√(−x)/2)/(−x)  for x ≠ 0,  ψ_2(0) = 1/2.    (31)

Proof  From (18) we find ψ_1(x) = (exp(√x) − exp(−√x))/(2√x) = sinh(√x)/√x. Similarly, from (19), ψ_2(x) = (cosh(√x) − 1)/x, the equivalent form (31) being numerically better for small x.

See the notes after Theorem 1 for a discussion of the structure of Δ. To compute ψ_i(Δ), practical algorithms can be devised using the theory of minimal polynomials of the underlying linear operator; these will be discussed in the sequel. For the rank 1 case we establish explicit formulae.

Corollary 2  Let SPS = −P, where p = Rank(Π^-_S) = 1. Let the scalar θ be given by P^2 Π^-_S = θ Π^-_S. Then

    exp(P) = I + P + (1/2)P^2                                            if θ = 0,
    exp(P) = I + (sin√(−θ)/√(−θ)) P + (2 sin^2(√(−θ)/2)/(−θ)) P^2       if θ ∈ R, θ < 0,
    exp(P) = I + (sinh√θ/√θ) P + (2 sinh^2(√θ/2)/θ) P^2                 if θ ∈ R, θ > 0, or if θ is complex.    (32)

^1 If G is a matrix group, then σ induces the group automorphism G ∋ g ↦ SgS ∈ G. Although it is not necessary that S ∈ G, it must be an element of some larger group containing G as a subgroup, and on this larger group the automorphism is properly of inner type. We stick to the name inner also when S ∉ G.
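A concrete instance of the θ < 0 branch of (32): for a skew-symmetric 3×3 matrix P, formula (32) reduces to the classical Rodrigues formula. An illustrative numerical check (ours, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

def hat(x):
    """Skew-symmetric matrix with hat(x) v = x cross v."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

x = np.array([0.3, -1.1, 0.7])
P = hat(x)
t = np.linalg.norm(x)            # here theta = -t**2 < 0: the sin branch of (32)
R = np.eye(3) + (np.sin(t) / t) * P + (2 * np.sin(t / 2)**2 / t**2) * (P @ P)
assert np.allclose(R, expm(P))         # Rodrigues formula matches exp(P)
assert np.allclose(R @ R.T, np.eye(3)) # the result is orthogonal
```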

It should be noted that this is exactly the Rodrigues formula for skew-symmetric 3×3 matrices. Indeed, let

    P = [  0    −x_3   x_2 ]
        [  x_3    0   −x_1 ]
        [ −x_2   x_1    0  ]

and let S = I − 2zz^T/z^Tz, where z is a vector such that z^T x = 0. Then SPS = −P. One finds that θ = −x^T x, from which the Rodrigues formula follows.

3.2 The tangent map

In this section we develop the formulae for computing the tangent map of Ψ and its inverse. Whereas the theory of the previous section regarded elements of g as matrices (linear operators in R^{n×n} acting on R^n), we are now concerned with dΨ_Z and dΨ_Z^{-1}, which belong to the space of all linear operators from g to itself, denoted End(g). If g is represented as n×n matrices, then End(g) could be represented as n^2×n^2 matrices. Inversion of such linear operators by Gaussian elimination would cost O(n^6) operations, but we are seeking algorithms of complexity at most O(n^3). To achieve this, we cannot rely on a matrix representation of End(g), but must rather work directly with operators (projections, involutions) as outlined in Sect. 2.2. The theory of this section relies on the odd–even splitting of the subspaces g = p ⊕ k in (24)–(26). We will develop several formulae for the tangent map of Ψ and its inverse. In the following, let σ be an involutive automorphism on g, and let P ∈ p ⊆ g, i.e. σ(P) = −P. Define the linear operator ad_P : g → g as

    ad_P(Z) = [P, Z].    (33)

Theorem 3  Let Z = P + K = Π^-_σ Z + Π^+_σ Z and let Ψ(Z) = exp(P) Φ(K) be as in (27). The right trivialized tangent of Ψ and its inverse are given as

    dΨ_Z = dexp_P Π^-_σ + Ad_{exp(P)} dΦ_K Π^+_σ
         = ( ((exp(u) − 1)/u) Π^-_σ + exp(u) Π^+_σ )( Π^-_σ + dΦ_K Π^+_σ )    (34)

    dΨ_Z^{-1} = ( Π^-_σ + dΦ_K^{-1} Π^+_σ )( ((1 + (u − 1)cosh(u))/sinh(u)) Π^-_σ + (1 − u) Π^+_σ ),    (35)

where u = ad_P.
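Before turning to the proof, the first form of (34) can be validated by finite differences. The sketch below (illustrative, ours) uses a one-level coordinate map with Φ = exp on k and evaluates dexp as a truncated series of iterated commutators:

```python
import numpy as np
from scipy.linalg import expm

ad = lambda P, X: P @ X - X @ P

def dexp(P, dP, terms=30):
    """Right-trivialized tangent of exp: dexp_P(dP) = sum_k ad_P^k(dP)/(k+1)!."""
    out, W, fact = np.zeros_like(dP), dP.copy(), 1.0
    for k in range(terms):
        fact *= k + 1                 # (k+1)!
        out += W / fact
        W = ad(P, W)
    return out

n, eps = 4, 1e-5
rng = np.random.default_rng(5)
S = np.eye(n); S[0, 0] = -1.0         # rank-1 inner automorphism
split = lambda Z: (0.5 * (Z - S @ Z @ S), 0.5 * (Z + S @ Z @ S))
Psi = lambda Z: expm(split(Z)[0]) @ expm(split(Z)[1])   # Psi(Z) = exp(P) exp(K)

Z = 0.3 * rng.standard_normal((n, n))
dZ = rng.standard_normal((n, n))
P, K = split(Z)
dP, dK = split(dZ)
# right-trivialized directional derivative, by central differences
lhs = (Psi(Z + eps * dZ) - Psi(Z - eps * dZ)) / (2 * eps) @ np.linalg.inv(Psi(Z))
E = expm(P)
rhs = dexp(P, dP) + E @ dexp(K, dK) @ np.linalg.inv(E)   # first form of (34)
assert np.allclose(lhs, rhs, atol=1e-7)
```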

Proof  The first form of (34) follows from

    dΨ_Z(δZ) Ψ(Z) = d/ds|_{s=0} exp(P + s δP) Φ(K + s δK)
      = dexp_P(δP) exp(P) Φ(K) + exp(P) dΦ_K(δK) Φ(K)
      = ( dexp_P(δP) + exp(P) dΦ_K(δK) exp(−P) ) Ψ(Z).

Let u = ad_P. Using dexp_P = (exp(u) − 1)/u, Ad_{exp(P)} = exp(ad_P), (Π^-_σ)^2 = Π^-_σ and Π^+_σ Π^-_σ = 0, we obtain the second form of (34). To verify (35), we observe from (24)–(26) that if ψ(x) is an analytic function with odd and even parts ψ(x) = ψ_o(x) + ψ_e(x),

    ψ_o(x) = (1/2)(ψ(x) − ψ(−x)),  ψ_e(x) = (1/2)(ψ(x) + ψ(−x)),

then

    Π^-_σ ψ(u) Π^-_σ = ψ_e(u) Π^-_σ,  Π^+_σ ψ(u) Π^-_σ = ψ_o(u) Π^-_σ,
    Π^-_σ ψ(u) Π^+_σ = ψ_o(u) Π^+_σ,  Π^+_σ ψ(u) Π^+_σ = ψ_e(u) Π^+_σ.

Thus

    Π^-_σ ((exp(u) − 1)/u) Π^-_σ = (sinh(u)/u) Π^-_σ,
    Π^+_σ ((exp(u) − 1)/u) Π^-_σ = ((cosh(u) − 1)/u) Π^-_σ,
    Π^-_σ exp(u) Π^+_σ = sinh(u) Π^+_σ,
    Π^+_σ exp(u) Π^+_σ = cosh(u) Π^+_σ.

Using these formulae, we find that

    ( ((1 + (u − 1)cosh(u))/sinh(u)) Π^-_σ + (1 − u) Π^+_σ )( ((exp(u) − 1)/u) Π^-_σ + exp(u) Π^+_σ )
        = Π^-_σ + Π^+_σ = I.

Since Φ acts only on the subalgebra k, we have Π^-_σ dΦ_K Π^+_σ = 0 and Π^+_σ dΦ_K Π^+_σ = dΦ_K Π^+_σ, from which we get

    ( Π^-_σ + dΦ_K^{-1} Π^+_σ )( Π^-_σ + dΦ_K Π^+_σ ) = Π^-_σ + Π^+_σ = I,

thus (35) is verified.

Using the same odd–even parity argument as in the proof of Theorem 3, we obtain the following result:

Corollary 3  The partitioning of dΨ_Z and dΨ_Z^{-1} with respect to σ is

    dΨ_Z = [ sinh(u)/u        sinh(u) ] [ I  0    ]
           [ (cosh(u) − 1)/u  cosh(u) ] [ 0  dΦ_K ]    (36)

    dΨ_Z^{-1} = [ I  0         ] [ u cosh(u)/sinh(u)       −u ]
                [ 0  dΦ_K^{-1} ] [ (1 − cosh(u))/sinh(u)    1 ],    (37)

where u = ad_P and P = Π^-_σ Z, restricted to the appropriate subspaces.

For efficient computation of the tangent map, it is essential to develop fast algorithms for computing analytic functions of ad_P. First we use the theory of 2-cyclic matrices to simplify (35).

Lemma 2  If σ(P) = −P, then ad_P is 2-cyclic with respect to σ, i.e.

    σ ad_P σ = −ad_P.    (38)

Proof  Let Z ∈ g be arbitrary. We have

    σ ad_P σ(Z) = σ([P, σ(Z)]) = [σ(P), σ^2(Z)] = [−P, Z] = −ad_P(Z).

Theorem 4  Let σ be an arbitrary involutive automorphism. Let Z = P + K where P = Π^-_σ Z, let u = ad_P and Δ = ad_P^2 Π^-_σ. Then

    dΨ_Z^{-1} = ( Π^-_σ + dΦ_K^{-1} Π^+_σ )( I − u(ψ_1(Δ)Π^-_σ + Π^+_σ) + ψ_2(Δ)Π^-_σ ),    (39)

where

    ψ_1(x) = tanh(√x/2)/√x = tan(√(−x)/2)/√(−x)  for x ≠ 0,  ψ_1(0) = 1/2,    (40)
    ψ_2(x) = √x/tanh(√x) − 1 = √(−x)/tan(√(−x)) − 1  for x ≠ 0,  ψ_2(0) = 0.    (41)

Proof  Starting from (35), we use Lemma 2 together with Theorem 1. Since u Π^-_σ = Π^+_σ u and Π^+_σ Π^-_σ = 0, all terms of the form ψ_i(Δ) u Π^-_σ vanish, and the result follows by a straightforward symbolic computation.

To compute terms of the type ψ_i(Δ)W, good complexity algorithms can be obtained using the theory of minimal polynomials and the knowledge of the eigenspace structure of Δ. We commence by mentioning the following result, which is an immediate consequence of Theorem 5 below.

Corollary 4  Let σ(P) = SPS = −P, where Rank(Π^-_S) = p. Let Δ = P^2 Π^-_S be non-defective of rank p, with d ≤ p distinct non-zero eigenvalues {µ_i}_{i=1}^d. Then Δ̂ = ad_P^2 Π^-_σ is non-defective with the 1 + d + d^2 distinct eigenvalues

    0,  {µ_i}_i,  {(√µ_i − √µ_j)^2}_{i>j},  {(√µ_i + √µ_j)^2}_{i≤j}.    (42)

Thus, the minimal polynomial of Δ̂ is the monic polynomial of degree 1 + d + d^2 with zeros in the points (42). In the rank-1 case, we can obtain explicit formulae. If the scalar θ given by P^2 Π^-_S = θ Π^-_S is nonzero, then the minimal polynomial for Δ̂ = ad_P^2 Π^-_σ becomes

    q(x) = x(x − θ)(x − 4θ).

From (39) we obtain:

Corollary 5  Let σ be a rank-1 involutive automorphism. Let Z = P + K where P = Π^-_σ Z, let u = ad_P, and let θ be the scalar given by P^2 Π^-_S = θ Π^-_S, as in Corollary 2. If θ ≠ 0 then

    dΨ_Z^{-1} = ( Π^-_σ + dΦ_K^{-1} Π^+_σ )( I − u((ψ̂_1(θ) + ψ̂_2(θ)u^2)Π^-_σ + Π^+_σ) + (ψ̂_3(θ)u^2 + ψ̂_4(θ)u^4)Π^-_σ ),    (43)

where

    ψ̂_1(θ) = (8 tanh(√θ/2) − tanh(√θ)) / (6√θ)
    ψ̂_2(θ) = (tanh(√θ) − 2 tanh(√θ/2)) / (6 θ^{3/2})
    ψ̂_3(θ) = (√θ (15 coth(√θ) − tanh(√θ)) − 15) / (12θ)
    ψ̂_4(θ) = (3 + √θ (tanh(√θ) − 3 coth(√θ))) / (12θ^2).

It should be noted that u^2 and u^4 act on p = Range(Π^-_σ), yielding also a result in p. If g ⊆ R^{n×n}, then the cost of computing the terms u^2 and u^4 is O(n). The main work is the computation of the single u acting on p → k, with a cost of O(n^2).

Now we return to the general rank p case and the computation of ψ(Δ̂) via eigenspace projections. Let P = Π^-_σ Z and Δ = P^2 Π^-_S. For our present purpose it is not important whether or not the µ_i are distinct, so we just assume that Δ is non-defective with p eigenvalues µ_i for i = 1,...,p. Let the corresponding left and right eigenvectors

be denoted y_i^T and x_i,

    y_i^T Δ = µ_i y_i^T,  Δ x_i = µ_i x_i,

normalized such that y_i^T x_j = δ_{i,j}. The following statement is verified by straightforward computation.

Lemma 3  P has 2p left and right eigenvectors with nonzero eigenvalues, given as

    v_{±i} = (1/√2)(x_i ± P x_i/√µ_i)        for i = 1,...,p    (44)
    w_{±i}^T = (1/√2)(y_i^T ± y_i^T P/√µ_i)  for i = 1,...,p,    (45)

satisfying

    P v_{±i} = ±√µ_i v_{±i},  w_{±i}^T P = ±√µ_i w_{±i}^T,
    w_j^T v_k = δ_{j,k}  for j, k ∈ {±1,...,±p}.

Lemma 4  If Δ = P^2 Π^-_S is non-defective and of rank p, then P is non-defective.

Proof  In Lemma 3, we found 2p independent right eigenvectors v_{±i}. Referring to the partitioned form of P in (20), we have that B is a p×(n−p) matrix with linearly independent rows. Thus there are n − 2p zero right eigenvectors of P of the form (0  x^T)^T, where x ∈ Ker(B). We have found n independent right eigenvectors of P.

For i = 1,...,p let the eigenspace projections of P be given as

    Π_{±i} = v_{±i} w_{±i}^T    (46)
    Π_0 = I − sum_{i=1}^p (Π_i + Π_{−i}).    (47)

Now we are ready to formulate the main theorem describing the eigenspace structure of Δ̂. Note that k ⊆ ker(Δ̂) and Δ̂(p) ⊆ p, thus we can restrict the discussion to the action of Δ̂ on p.

Theorem 5  Let σ be a rank p inner automorphism, and let σ(P) = −P. If Δ = P^2 Π^-_S is non-defective and of rank p, then Δ̂ = ad_P^2 Π^-_σ is non-defective with a complete

list of eigenspace projections Λ_{0,0}, Λ_{i,0}, Λ^-_{i,j}, Λ^+_{i,j} : p → p, expressed in terms of an arbitrary W ∈ p as

    Λ_{0,0}(W) = Π_0 W Π_0    (48)
    Λ_{i,0}(W) = (Π_i + Π_{−i}) W Π_0 + Π_0 W (Π_i + Π_{−i})   for i = 1,...,p    (49)
    Λ^-_{i,j}(W) = Π_i W Π_j + Π_{−i} W Π_{−j}                 for i, j = 1,...,p and i > j    (50)
    Λ^+_{i,j}(W) = Π_i W Π_{−j} + Π_{−i} W Π_j                 for i, j = 1,...,p and i ≥ j.    (51)

The corresponding eigenvalues are given via

    Δ̂ Λ_{0,0}(W) = 0    (52)
    Δ̂ Λ_{i,0}(W) = µ_i Λ_{i,0}(W)    (53)
    Δ̂ Λ^-_{i,j}(W) = (√µ_i − √µ_j)^2 Λ^-_{i,j}(W)    (54)
    Δ̂ Λ^+_{i,j}(W) = (√µ_i + √µ_j)^2 Λ^+_{i,j}(W).    (55)

Proof  For i, j we compute

    ad_P(Π_i W Π_j) = P Π_i W Π_j − Π_i W Π_j P = (√µ_i − √µ_j) Π_i W Π_j.

Hence, under the action of ad_P^2, the two projections W ↦ Π_i W Π_j and W ↦ Π_{−i} W Π_{−j} correspond to the same eigenvalue (√µ_i − √µ_j)^2. These can be combined into the single projection Λ^-_{i,j}(W) = Π_i W Π_j + Π_{−i} W Π_{−j}. Now, since P Π_{±i} = ±√µ_i Π_{±i} and σ(P) = −P, we find that σ(Π_{±i}) = Π_{∓i}. If W ∈ p we have σ(W) = −W and hence

    σ(Λ^-_{i,j}(W)) = σ(Π_i W Π_j + Π_{−i} W Π_{−j}) = Π_{−i}(−W)Π_{−j} + Π_i(−W)Π_j = −Λ^-_{i,j}(W).

Thus W ∈ p implies Λ^-_{i,j}(W) ∈ p. We conclude that σ Λ^-_{i,j}(W) = −Λ^-_{i,j}(W), which yields Δ̂ Λ^-_{i,j}(W) = (√µ_i − √µ_j)^2 Λ^-_{i,j}(W). The others follow by a similar computation. To check that we have a complete list of eigenspace projections, we use sum_{j=−p}^p Π_j = I to decompose W as

    W = ( sum_{j=−p}^p Π_j ) W ( sum_{k=−p}^p Π_k );

each term Π_j W Π_k appears exactly once in (48)–(51).
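For the rank-1 case, the resulting spectrum {0, θ, 4θ} of ad_P^2 on p (Corollary 4 with d = p = 1) can be confirmed numerically by assembling the matrix of ad_P^2 on a basis of p; an illustrative sketch (ours):

```python
import numpy as np

n = 5
rng = np.random.default_rng(6)
a, b = rng.standard_normal(n - 1), rng.standard_normal(n - 1)
P = np.zeros((n, n)); P[0, 1:] = b; P[1:, 0] = a   # rank-1 2-cyclic matrix
theta = b @ a

def embed(c, d):
    """Element [[0, d^T], [c, 0]] of p = Range(Pi^-_sigma)."""
    W = np.zeros((n, n)); W[0, 1:] = d; W[1:, 0] = c
    return W

ad2 = lambda W: P @ (P @ W - W @ P) - (P @ W - W @ P) @ P   # ad_P^2(W)

# Assemble ad_P^2 as a 2(n-1) x 2(n-1) matrix on the basis of p
cols = []
for k in range(2 * (n - 1)):
    e = np.zeros(2 * (n - 1)); e[k] = 1.0
    V = ad2(embed(e[:n - 1], e[n - 1:]))
    cols.append(np.concatenate([V[1:, 0], V[0, 1:]]))
M = np.array(cols).T
for lam in np.linalg.eigvals(M):
    # every eigenvalue lies in {0, theta, 4*theta}
    assert min(abs(lam), abs(lam - theta), abs(lam - 4 * theta)) < 1e-8
```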

The computation of these projections should be done with care in order to keep a favourable complexity. It should be noted that for W ∈ p,

    Λ^-_{i,j}(W) = Π_i W Π_j + Π_{−i} W Π_{−j} = 2 Π^-_σ(Π_i W Π_j).

If g consists of n×n matrices, then for i, j ≠ 0 the computation of Λ_{i,j}(W) is O(np). The 0-projection Π_0 is typically a high rank matrix, and its computation must be done indirectly. After having computed Λ^-_{i,j}(W) and Λ^+_{i,j}(W) for non-zero i, j, we find the remaining projections as

    W̃ = W − sum_{i>j} Λ^-_{i,j}(W) − sum_{i≥j} Λ^+_{i,j}(W)
    Λ_{k,0}(W) = (Π_k + Π_{−k}) W̃
    Λ_{0,k}(W) = W̃ (Π_k + Π_{−k})
    Λ_{0,0}(W) = W̃ − sum_{k=1}^p ( Λ_{k,0}(W) + Λ_{0,k}(W) ).

The total complexity of computing ψ_i(Δ̂)W becomes O(np^3), which is a minor contribution to the total cost as long as p < n. If the matrix P is close to singular or defective, a Schur decomposition can be used instead.

4 GPCs for matrix groups and symmetric spaces

For the classical matrix groups we can obtain GPCs with computational complexity O(n^3) by recursively applying the splitting (27). For symmetric spaces our techniques may yield algorithms with optimal complexity, e.g. O(n) for problems on n-spheres. We illustrate this by some examples. In Definition 2 and the following discussion, we assumed the existence of a nested family of algebras g ⊇ g_1 ⊇ ··· ⊇ g_k and automorphisms σ_i : g_i → g_i. For matrix algebras g ⊆ gl(n) it may be convenient to define σ_i on g rather than on g_i, in which case we must make sure that the {σ_i} define a nested sequence of Lie algebras.

Lemma 5  Let g ⊆ gl(n) be a Lie algebra of n×n matrices. Let S_i ∈ gl(n), i = 1,...,k, be involutive matrices such that σ_i(Z) = S_i Z S_i are involutive automorphisms on g and the {S_i}_{i=1}^k commute,

    S_i S_j = S_j S_i  for i, j ∈ {1,...,k}.    (56)

Then the projections Π^+_{σ_i} = (1/2)(I + σ_i) define a nested sequence of Lie algebras g ⊇ g_1 ⊇ ··· ⊇ g_k, where g_i = Π^+_{σ_{i−1}} Π^+_{σ_{i−2}} ··· Π^+_{σ_1}(g), such that the restrictions σ_i|_{g_i} are automorphisms on g_i.

Proof  If the {S_i} commute, then also the {σ_i} commute, and hence so do the projections Π^+_{σ_i}. Consequently σ_i Π^+_{σ_j} = Π^+_{σ_j} σ_i, and we conclude that σ_i(g_i) = g_i. The rest of the lemma is obvious.

Important examples of such nested sequences are S_i = I − 2e_i e_i^T, which defines automorphisms for the Lie algebras gl(n) and so(n), and S_i = I − 2(e_i e_i^T + e_{i+m} e_{i+m}^T), which yields automorphisms on the symplectic algebra sp(2m).

4.1 Low rank GPC for matrix Lie groups

In this section we give explicit formulae for the tangent maps of some low rank generalized polar coordinate maps, and comment on their arithmetical complexity. For details on the implementation of the coordinate map we refer to [37]. The simplest approach is to use rank-1 splittings, in particular those resulting from the inner automorphisms σ_i(M) = S_i M S_i with S_i = I − 2e_i e_i^T. We thus obtain a splitting of Z ∈ g:

    Z = P_1 + P_2 + ··· + P_{n−1} + K_{n−1},

where, for K_0 := Z and for i = 1,...,n−1, we have P_i = Π^-_{σ_i}(K_{i−1}) and K_i = Π^+_{σ_i}(K_{i−1}). We thus use the map

    Ψ(Z) = exp(P_1) exp(P_2) ··· exp(P_{n−1}) exp(K_{n−1}).    (57)

Note, however, that this approach does not work for every Lie group: one needs to make sure that the summands P_i and K_i both lie in the Lie algebra, so that the coordinate map Ψ(Z) resides in the Lie group. As we shall see in the sequel, the symplectic group requires a rank-two splitting in order for the coordinate map to reside in the Lie group. For simplicity we consider the first step in the recursive splitting above. Let Z, Ẑ ∈ g, denote their splittings Z = P + K and Ẑ = P̂ + K̂, and write

    P = [ 0   b^T ]        P̂ = [ 0   d^T ]
        [ a   0   ],            [ c   0   ].    (58)

Then by Theorem 3, for the coordinate map Ψ(Z) = exp(P) Φ(K), its inverse tangent applied to Ẑ splits as follows:

    p part:  (I + ψ_2(ad_P^2)) P̂ − ad_P K̂    (59)
    k part:  dΦ_K^{-1}( K̂ − ad_P(ψ_1(ad_P^2) P̂) ),    (60)

where ψ_1 and ψ_2 are given in (40)–(41). Using the eigenspace projections of Theorem 5 for Δ̂ = ad_P^2 Π^-_σ, we obtain the following expression for ψ_i(ad_P^2) P̂:

    ψ_i(ad_P^2) P̂ = [ 0  y^T ]
                     [ x  0   ],    (61)

where

    x = ψ_i(θ)c + ( 2φ_i(4θ)(b^T c − d^T a) − φ_i(θ) b^T c ) a    (62)
    y = ψ_i(θ)d − ( 2φ_i(4θ)(b^T c − d^T a) + φ_i(θ) d^T a ) b.    (63)

Here θ = b^T a and φ_i, i = 1, 2, are the functions φ_i(x) = (ψ_i(x) − ψ_i(0))/x. (φ_i(0) = ψ_i'(0) is defined since ψ_i is analytic.) Letting [x, y] = fad2(ψ, a, b, c, d) denote the function giving the vectors x and y above, we get the following algorithm for the tangent map (in pseudo-code).

Algorithm 2
    % In: n×n matrices Z and Ẑ
    % Out: Ẑ overwritten with dΨ_Z^{-1}(Ẑ)
    for k = 1 : n−1
        a := Z(k+1:n, k);  b := Z(k, k+1:n)^T;
        c := Ẑ(k+1:n, k);  d := Ẑ(k, k+1:n)^T;
        [x_1, y_1] := fad2(ψ_1, a, b, c, d);
        [x_2, y_2] := fad2(ψ_2, a, b, c, d);
        Ẑ(k+1:n, k) := c + x_2 − a Ẑ(k, k) + Ẑ(k+1:n, k+1:n) a;
        Ẑ(k, k+1:n) := d^T + y_2^T − b^T Ẑ(k+1:n, k+1:n) + Ẑ(k, k) b^T;
        Ẑ(k, k) := Ẑ(k, k) − b^T x_1 + y_1^T a;
        Ẑ(k+1:n, k+1:n) := Ẑ(k+1:n, k+1:n) − a y_1^T + x_1 b^T;
    end

4.1.1 The general linear group and the special linear group

Recall that the general linear group GL(n) consists of the n×n matrices with nonzero determinant, and that the special linear group SL(n) consists of the matrices with determinant equal to one. Their corresponding Lie algebras gl(n) and sl(n) consist of the n×n matrices and the n×n matrices with zero trace, respectively. For both Lie algebras it is easy to see that for a rank-one splitting Z = P + K, with S as above, both P and K lie in the Lie algebra, and thus the coordinate map also resides in the Lie group.
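The rank-1 splitting and the map (57) are easy to prototype. The illustrative Python sketch below (ours, not the paper's implementation; each factor is evaluated with a general-purpose expm rather than Corollary 2) checks that Ψ(Z) lands in SL(n) for trace-free Z:

```python
import numpy as np
from scipy.linalg import expm

# Sketch of the rank-1 GPC map (57) on sl(n). Each P_i carries row i and
# column i (off-diagonal) of the remaining part; all factors are trace-free,
# so every exponential factor, and hence Psi(Z), has determinant one.
n = 5
rng = np.random.default_rng(3)
Z = rng.standard_normal((n, n))
Z -= (np.trace(Z) / n) * np.eye(n)        # project onto sl(n)

G = np.eye(n)
K = Z.copy()
for i in range(n - 1):
    P = np.zeros((n, n))
    P[i, i+1:], P[i+1:, i] = K[i, i+1:], K[i+1:, i]   # P_i = Pi^-_{sigma_i} K_{i-1}
    K[i, i+1:], K[i+1:, i] = 0.0, 0.0                 # K_i = Pi^+_{sigma_i} K_{i-1}
    G = G @ expm(P)
G = G @ expm(K)                                       # K_{n-1} is diagonal here
assert np.allclose(K, np.diag(np.diag(K)))
assert abs(np.linalg.det(G) - 1.0) < 1e-6             # Psi(Z) lies in SL(n)
```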
By studying Algorithm 2 one sees that at stage k we perform two matrix–vector products, two outer products of vectors and two matrix additions, which amounts to a complexity of 8(n−k)^2. Moreover, the operation count of order (n−k) can with a small effort be reduced to 19(n−k). Summing from k = 1 to n, the overall cost of computing the tangent map is (8/3) n^3 + O(n^2).

The orthogonal group

For the orthogonal group O(n) and its corresponding Lie algebra so(n), the formulas above simplify considerably. Let P and P̂ be the matrices (58), but with b and d replaced by −a and −c respectively. Then ψ_i(ad_P^2) P̂ has the form

    ψ_i(ad_P^2) P̂ = [0, −x^T; x, 0],    (64)

with

    x = ψ_i(θ) c + φ_i(θ)(a^T c) a.    (65)

Also the overall algorithm simplifies. By exploiting the skew symmetry of the output matrix, one sees that at stage k one needs one matrix–vector product, one vector outer product, and two half matrix additions, i.e. 4(n−k)^2 operations. The (n−k) coefficient can be reduced to 8, and summing up we achieve the computational cost (4/3) n^3 + 2 n^2 + O(n).

The upper triangular group

We also include the upper triangular group in this discussion, although the formulas turn out rather trivial. By setting the vectors a and c in the matrices (58) equal to zero, it is clear that ad_P P̂ = 0, and thus ψ_i(ad_P^2) P̂ = P̂. At step k in the algorithm there is only one triangular matrix–vector product, which may be carried out in (n−k)^2 operations. The rest of the computation requires about 3(n−k) operations, and overall we end up with a cost of (1/3) n^3 + O(n^2).

The symplectic group

Recall that the symplectic group SP(2m) consists of the matrices X satisfying

    X J X^T = J,    J = [0, I; −I, 0].

Its Lie algebra sp(2m) is characterized by the matrices Z obeying Z J + J Z^T = 0. Note that a rank-1 splitting of the matrix Z will not fulfill this property, i.e. for Z = P + K we will in general have P J + J P^T ≠ 0. Instead consider the involutive S = I − 2 e_1 e_1^T − 2 e_{m+1} e_{m+1}^T. It is easily seen that the splitting Z = P + K according to S has the desired property, and that by applying suitable permutation matrices

(exchanging row/column 2 and m+1), the matrix P has the form

    P = [0, B^T; A, 0],

where A and B are (2m−2) × 2 matrices (and similarly P̂, with blocks C and D). Note that, because of the symmetries in P, the matrix B is completely determined by A. Furthermore one sees that B^T A is just a scalar times the 2 × 2 identity matrix, i.e. B^T A = θ I, and thus ψ_i(ad_P^2) P̂ is given by

    ψ_i(ad_P^2) P̂ = [0, Y^T; X, 0],    (66)

where

    X = ψ_i(θ) C + A( −2 φ_i(4θ)(B^T C − D^T A) − φ_i(θ) B^T C ),    (67)
    Y = ψ_i(θ) D + B(  2 φ_i(4θ)(B^T C − D^T A) − φ_i(θ) D^T A ).    (68)

Omitting the details, we can also in this case exploit the symmetry of the matrices involved, and obtain the same computational cost for the tangent map as for the orthogonal group, i.e. (4/3) n^3 + O(n^2) for n = 2m.

Summarizing, for the most common real matrix Lie algebras the leading term of the computational complexity of the tangent map is:

                  sl(n), gl(n)    o(n)         tu(n)        sp(n)
    Tangent map   (8/3) n^3       (4/3) n^3    (1/3) n^3    (4/3) n^3

5 Numerical experiments

5.1 Finding a solution of the Schur–Horn majorization problem

As an application, we consider an inverse eigenvalue problem with prescribed entries along the main diagonal [8], which arises in conjunction with the Schur–Horn theorem in linear algebra. Before presenting the theorem, we recall that a vector a ∈ R^n is said to majorize λ ∈ R^n if, assuming the orderings

    a_{j_1} ≤ a_{j_2} ≤ ⋯ ≤ a_{j_n},    λ_{m_1} ≤ λ_{m_2} ≤ ⋯ ≤ λ_{m_n},

the following relationship holds:

    Σ_{i=1}^{k} λ_{m_i} ≤ Σ_{i=1}^{k} a_{j_i},    for k = 1, 2, ..., n,    (69)

with equality for k = n [19].

Theorem 6 (Schur–Horn [19]) An Hermitian matrix H with eigenvalues λ and diagonal elements a exists if and only if a majorizes λ. Moreover, if a majorizes λ, the matrix H can be chosen to be symmetric.

Given λ and a majorizing λ, we wish to find such a symmetric matrix H. One possibility is to construct an isospectral flow (a matrix flow which leaves the eigenvalues of the initial matrix unchanged) which converges to a target symmetric matrix H, similar to diag(λ) and with a as diagonal elements. Setting Λ = diag(λ), on the isospectral manifold

    M_λ = { X : X = U Λ U^T,  U U^T = I }

of symmetric matrices with eigenvalues λ, the problem can be reformulated as an optimization problem: find a matrix H that minimizes the quadratic function

    φ(X) = (x_{1,1} − a_{j_1})^2 + ⋯ + (x_{n,n} − a_{j_n})^2.    (70)

Following [36], we consider φ as a function of U and notice that φ(U) = tr[(X − A) diag(X − A)], where, for convenience, we have set A = diag(a). By direct computation,

    ∂φ/∂U = −2 [X, diag(X − A)] U

(here the square bracket denotes the usual matrix commutator), which leads to the double-bracket isospectral flow

    Ẋ = 2 [[X, diag(X − A)], X].    (71)

It is not a good idea to solve (71) numerically directly, since it is well known that standard ODE methods cannot preserve isospectrality [2]. Instead, assuming that X_n ∈ M_λ is known, in each interval [t_n, t_{n+1}] one solves for the matrix U obeying the Lie-group differential equation

    U̇ = 2 [X, diag(X − A)] U,    U(t_n) = I,    (72)

in tandem with the transformation X = U(t) X_n U(t)^T. As long as the solution of (72) is orthogonal (or unitary, in the complex setting), the numerical approximation X_n has eigenvalues λ and it converges to a solution H minimizing (70).

In the numerical experiments we chose λ and a from random symmetric matrices with distribution N(0, 1) obtained from the Matlab function randn.

Fig. 1 Left: Convergence of ‖diag(X_n) − a‖ for the isospectral matrix X_n = U_n X_{n−1} U_n^T towards a prescribed diagonal, for five randomly chosen initial data λ, a ∈ R^25. The orthogonal matrix U_n is computed solving the ODE (72) with the GPC method. Right: Histogram of the number of iterations needed for convergence for the same problem, for a sample of 1,000 initial data (λ, a)_i, i = 1, ..., 1,000.

The initial value X_0 is set to X_0 = Q^T Λ Q, where Q is a randomly chosen orthogonal matrix. Since we are interested in convergence to a fixed point of (72), the local error is not of concern, and thus a first order method works as well as a higher order method. We have used the Lie–Euler method with constant stepsize h. We use the GPC coordinate map (57) and also the matrix exponential for comparison. Running the two methods on a set {λ, a}_i, i = 1, 2, ..., 1,000, we found that the difference in rate of convergence between the GPC map and the exponential map was negligible. In Fig. 1, on the right, the histogram plot for the 1,000 trials is presented. Among the trials there were two cases outside the region of the plot, with a maximum of 550 iterations until convergence. In this particular case, with 25 × 25 matrices, the evaluation of the exponential map requires more than six times the number of operations required for the GPC map, and thus considerable savings are obtained.

5.2 Computing all Lyapunov exponents for a ring of oscillators

As a numerical example where the tangent map is needed, we consider computing all Lyapunov exponents of the following system:

    ÿ   = −α (y^2 − 1) ẏ − ω^2 y,
    ẍ_1 = −d_1 ẋ_1 − β [ V′(x_1 − x_n) − V′(x_2 − x_1) ] + σ y,
    ẍ_i = −d_i ẋ_i − β [ V′(x_i − x_{i−1}) − V′(x_{i+1} − x_i) ],    i = 2, ..., n.

It describes a ring of n damped oscillators with amplitudes x_i and periodic boundary conditions x_{n+1} = x_1. The ring is forced externally by y(t), the periodic space coordinate of the limit cycle of a van der Pol oscillator.
The parameters α, β, ω, σ and d_i are chosen as in [1] to obtain several positive exponents, i.e. α = 1, β = 1, ω = 1.82 and σ = 4. The damping parameters d_i are set to one constant value for i odd and another for i even. The potential function is V(x) = x^2/2 + x^4/4. The experiment is done with n = 5. The Lyapunov exponents give the rates of exponential divergence

or convergence of initially nearby orbits, and can be found by considering the system ẋ = f(x) linearized about a trajectory x(t):

    Ẏ = A(t) Y,    Y(0) = Y_0,    (73)

where A(t) = df(x(t)). The Lyapunov exponents are now given as the logarithms of the eigenvalues of the Oseledec matrix

    Λ_x = lim_{t→∞} ( Y(t)^T Y(t) )^{1/(2t)}.    (74)

A technique using continuous QR-factorization [11,12] attempts to calculate the orthonormal factor Q(t) in the QR-decomposition of Y(t). It can be shown that Q(t) obeys the (Lie group) differential equation

    Q̇ = Q H(t, Q),    H(t, Q) = tril(Q^T A Q) − tril(Q^T A Q)^T.    (75)

Here tril(M) denotes the function setting the upper triangular part of M to zero. Since the columns of Q(t) are drawn towards the directions of the largest Lyapunov exponents, it is crucial that the numerical solution stays orthonormal; using standard methods the numerical solution will typically blow up after some time. Thus it is a good idea to use methods which conserve the orthogonality automatically. Given Q(t), it can be shown that the Lyapunov exponents can be obtained from the diagonal elements of the limit matrix

    lim_{t→∞} (1/t) ∫_0^t Q(τ)^T A(τ) Q(τ) dτ.    (76)

In numerical computations it is necessary to truncate the above expression at some finite time T to obtain approximations to the exponents. We have used the trapezoidal rule to approximate the integral (76). Since the tangent vectors in (75) are represented by an element of the Lie algebra multiplied from the left rather than the right, we use a left version of the coordinate map and the corresponding left trivialized tangent map. Letting Ψ be the map (57), we define for Z ∈ so(n)

    Ψ̃(Z) = Ψ(−Z)^{−1} = exp(K_{n−1}) exp(P_{n−1}) ⋯ exp(P_1).    (77)

The left trivialized tangent dΨ̃_Z of the map Ψ̃ is then given by the relation

    Ψ̃(Z) dΨ̃_Z(δZ) = ∂/∂s |_{s=0} Ψ̃(Z + sδZ) = ∂/∂s |_{s=0} Ψ(−Z − sδZ)^{−1}    (78)
                   = −Ψ(−Z)^{−1} dΨ_{−Z}(−δZ) Ψ(−Z) Ψ(−Z)^{−1}    (79)
                   = Ψ̃(Z) dΨ_{−Z}(δZ),

where in step (78)–(79) we have used d/dt (M^{−1}) = −M^{−1} (dM/dt) M^{−1}. Thus the left trivialized tangent of the map Ψ̃ is simply given as dΨ̃_Z = dΨ_{−Z}. Also, by noting that ad^2_{−P} = ad^2_P, we see that dΨ̃_Z^{−1} is given by the formula in Theorem 4 with just a single sign change in front of u. This is similar to the left trivialized tangent of the exponential map, given as d exp_{−Z}.

Fig. 2 Lyapunov exponents for a ring of five damped oscillators, plotted as functions of time

We use the classical fourth order Runge–Kutta method both for the computation of the trajectory and as the underlying method for the Lie group integrator solving (75), the latter with step size h = 0.01. A randomly chosen x_0 is used as initial value, and the system is integrated from t = 0 to t = 4,000. For our choice of damping parameters d_i, the exponents should add up to

    Σ_{k=1}^{2n} λ_k = div f(x) = −Σ_{j=1}^{n} d_j.

In the numerical computations the error of this sum is of order 10^{−8}. Moreover, it is shown in [13] that for constant damping d the exponents are distributed symmetrically around −d/2. In our case one can also show that the exponents come in pairs (λ_i, −λ_i − c_i), i = 1, ..., n, where each c_i satisfies d_min ≤ c_i ≤ d_max. This property is clearly seen in Fig. 2. We also performed the experiment using the matrix exponential and its left trivialized tangent, with similar qualitative results. However, the cost of the overall algorithm increased dramatically: counting floating point operations, the overall cost when using the GPC map was far lower than when using the exponential map. For comparison we also implemented the Cayley map (see [20]).
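The projection H(t, Q) appearing in (75) is straightforward to implement. A minimal pure-Python sketch (helper names are ours) also illustrates that H is skew-symmetric by construction, so that Q̇ = Q H stays tangent to the orthogonal group:

```python
# H(t,Q) = tril(Q^T A Q) - tril(Q^T A Q)^T from Eq. (75).
# tril keeps the lower triangle (incl. diagonal); the diagonal cancels
# in the difference, so H is skew-symmetric, i.e. lies in so(n).

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

def tril(M):
    n = len(M)
    return [[M[i][j] if j <= i else 0.0 for j in range(n)] for i in range(n)]

def H_of(Q, A):
    M = tril(matmul(transpose(Q), matmul(A, Q)))
    Mt = transpose(M)
    n = len(M)
    return [[M[i][j] - Mt[i][j] for j in range(n)] for i in range(n)]

Q = [[1.0, 0.0], [0.0, 1.0]]    # the identity is orthogonal
A = [[1.0, 2.0], [3.0, 4.0]]
H = H_of(Q, A)                  # tril(A) - tril(A)^T = [[0, -3], [3, 0]]
```

Feeding this H to any Lie group integrator with an so(n)-to-O(n) coordinate map (GPC, exponential, or Cayley) keeps Q orthonormal to machine accuracy.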

In this example we computed all exponents of our system. There has been a lot of work on computing only the few largest Lyapunov exponents of dynamical systems [12]; this is possible by considering a more complicated form of Eq. (75) on the Stiefel manifold. In [22] the GPC approach is adapted to equations on the Stiefel manifold in such a manner that favorable complexity is achieved.

6 Concluding remarks

We have presented a new general theory of splitting methods for obtaining coordinates on Lie groups (generalized polar coordinates). These splittings (at the algebra level) are derived using a sequence of nested subalgebras obtained as the ranges of appropriate projection operators derived from involutive algebra automorphisms. Through the exponential map, the nested algebra splitting defines a new family of coordinate maps on the underlying Lie group G. We also derive formulas for the tangent coordinate maps and their inverses, which are necessary when solving ordinary differential equations on the group. We have shown how the coordinate maps, tangent maps, and inverse tangent maps can be computed efficiently for some low-rank splittings. The presented framework has the advantage of being more general than other known coordinates on Lie groups based on splittings (for instance, AOB), and for this reason it allows for applications to larger classes of problems, in particular those with a real semisimple algebra, that could not be directly handled by AOB methods.

References

1. Bridges, T.J., Reich, S.: Computing Lyapunov exponents on a Stiefel manifold. Physica D 156 (2001)
2. Calvo, M.P., Iserles, A., Zanna, A.: Numerical solution of isospectral flows. Math. Comput. 66 (1997)
3. Casas, F., Owren, B.: Cost efficient Lie group integrators in the RKMK class. BIT Numer. Math. 43(4) (2003)
4. Celledoni, E., Fiori, S.: Neural learning by geometric integration of reduced rigid-body equations. J. Comput. Appl. Math. 172(2) (2004)
5. Celledoni, E., Owren, B.: A class of intrinsic schemes for orthogonal integration. SIAM J. Numer. Anal. 40(6) (2002)
6. Celledoni, E., Owren, B.: On the implementation of Lie group methods on the Stiefel manifold. Numer. Algorithms 32(2–4) (2003)
7. Chefd'Hotel, C., Tschumperlé, D., Deriche, R., Faugeras, O.: Regularizing flows for constrained matrix-valued images. J. Math. Imaging Vis. 20(1) (2004)
8. Chu, M.T.: Inverse eigenvalue problems: theory and applications. A series of lectures presented at IRMA, CRN, Bari, Italy, June
9. Chu, M.T., Golub, G.H.: Structured inverse eigenvalue problems. Acta Numer. (2003)
10. Crouch, P.E., Grossman, R.: Numerical integration of ordinary differential equations on manifolds. J. Nonlinear Sci. 3, 1–33 (1993)
11. Dieci, L., Russell, R.D., Van Vleck, E.S.: On the computation of Lyapunov exponents for continuous dynamical systems. SIAM J. Numer. Anal. 34 (1997)
12. Dieci, L., Van Vleck, E.S.: Computation of a few Lyapunov exponents for continuous and discrete dynamical systems. Appl. Numer. Math. 17 (1995)
13. Dressler, U.: Symmetry properties of the Lyapunov spectra of a class of dissipative dynamical systems with viscous damping. Phys. Rev. A 39(4) (1988)
14. Engø, K.: On the construction of geometric integrators in the RKMK class. BIT 40(1) (2000)

15. Feng, K., Shang, Z.-J.: Volume-preserving algorithms for source-free dynamical systems. Numer. Math. 71(4) (1995)
16. Fulton, C., Pearson, D., Pruess, S.: Computing the spectral function for singular Sturm–Liouville problems. J. Comput. Appl. Math. 176(1) (2005)
17. Helgason, S.: Differential Geometry, Lie Groups and Symmetric Spaces. Academic Press, New York (1978)
18. Helmke, U., Hüper, K., Moore, J.B., Schulte-Herbrüggen, T.: Gradient flows computing the C-numerical range with applications in NMR spectroscopy. J. Glob. Optim. 23(3) (2002)
19. Horn, R.A., Johnson, C.R.: Matrix Analysis. Cambridge University Press, Cambridge (1985)
20. Iserles, A., Munthe-Kaas, H.Z., Nørsett, S.P., Zanna, A.: Lie-group methods. Acta Numer. 9 (2000)
21. Iserles, A., Nørsett, S.P.: On the solution of linear differential equations in Lie groups. Phil. Trans. R. Soc. A 357 (1999)
22. Krogstad, S.: A low complexity Lie group method on the Stiefel manifold. BIT 43(1) (2003)
23. Lee, S., Choi, M., Kim, H., Park, F.C.: Geometric direct search algorithms for image registration. IEEE Trans. Image Process. 16(9), 2215 (2007)
24. Lee, T., McClamroch, N.H., Leok, M.: A Lie group variational integrator for the attitude dynamics of a rigid body with applications to the 3D pendulum. In: Proceedings of the 2005 IEEE Conference on Control Applications (CCA) (2005)
25. Lewis, D., Nigam, N.: Geometric integration on spheres and some interesting applications. J. Comput. Appl. Math. 151(1) (2003)
26. Lewis, D., Simo, J.C.: Conserving algorithms for the dynamics of Hamiltonian systems on Lie groups. J. Nonlinear Sci. 4 (1994)
27. Malham, S., Niesen, J.: Evaluating the Evans function: order reduction in numerical methods. Math. Comput. 77(261), 159 (2008)
28. Malham, S.J.A., Wiese, A.: Stochastic Lie group integrators. SIAM J. Sci. Comput. 30(2) (2008)
29. Munthe-Kaas, H.: Runge–Kutta methods on Lie groups. BIT 38(1) (1998)
30. Munthe-Kaas, H.: High order Runge–Kutta methods on manifolds. Appl. Numer. Math. 29 (1999)
31. Munthe-Kaas, H., Quispel, G.R.W., Zanna, A.: Generalized polar decompositions on Lie groups with involutive automorphisms. Found. Comput. Math. 1(3) (2001)
32. Owren, B.: Order conditions for commutator-free Lie group methods. J. Phys. A 39(19) (2006)
33. Owren, B., Marthinsen, A.: Runge–Kutta methods adapted to manifolds and based on rigid frames. BIT 39(1) (1999)
34. Owren, B., Marthinsen, A.: Integration methods based on canonical coordinates of the second kind. Numer. Math. 87(4) (2001)
35. Plumbley, M.D.: Geometrical methods for non-negative ICA: manifolds, Lie groups and toral subalgebras. Neurocomputing 67 (2005)
36. Smith, S.T.: Optimization techniques on Riemannian manifolds. In: Bloch, A. (ed.) Hamiltonian and Gradient Flows, Algorithms and Control. Fields Institute Communications, vol. 3. AMS (1994)
37. Zanna, A., Munthe-Kaas, H.Z.: Generalized polar decompositions for the approximation of the matrix exponential. SIAM J. Matrix Anal. Appl. 23(3) (2002)
38. Zhang, S.Y., Deng, Z.C.: Group preserving schemes for nonlinear dynamic system based on RKMK methods. Appl. Math. Comput. 175(1) (2006)


More information

Bare-bones outline of eigenvalue theory and the Jordan canonical form

Bare-bones outline of eigenvalue theory and the Jordan canonical form Bare-bones outline of eigenvalue theory and the Jordan canonical form April 3, 2007 N.B.: You should also consult the text/class notes for worked examples. Let F be a field, let V be a finite-dimensional

More information

ECE 275A Homework #3 Solutions

ECE 275A Homework #3 Solutions ECE 75A Homework #3 Solutions. Proof of (a). Obviously Ax = 0 y, Ax = 0 for all y. To show sufficiency, note that if y, Ax = 0 for all y, then it must certainly be true for the particular value of y =

More information

Lie Groups for 2D and 3D Transformations

Lie Groups for 2D and 3D Transformations Lie Groups for 2D and 3D Transformations Ethan Eade Updated May 20, 2017 * 1 Introduction This document derives useful formulae for working with the Lie groups that represent transformations in 2D and

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

Let us recall in a nutshell the definition of some important algebraic structure, increasingly more refined than that of group.

Let us recall in a nutshell the definition of some important algebraic structure, increasingly more refined than that of group. Chapter 1 SOME MATHEMATICAL TOOLS 1.1 Some definitions in algebra Let us recall in a nutshell the definition of some important algebraic structure, increasingly more refined than that of group. Ring A

More information

Lie-Poisson pencils related to semisimple Lie agebras: towards classification

Lie-Poisson pencils related to semisimple Lie agebras: towards classification Lie-Poisson pencils related to semisimple Lie agebras: towards classification Integrability in Dynamical Systems and Control, INSA de Rouen, 14-16.11.2012 Andriy Panasyuk Faculty of Mathematics and Computer

More information

The Jordan canonical form

The Jordan canonical form The Jordan canonical form Francisco Javier Sayas University of Delaware November 22, 213 The contents of these notes have been translated and slightly modified from a previous version in Spanish. Part

More information

LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS. 1. Lie groups

LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS. 1. Lie groups LECTURE 16: LIE GROUPS AND THEIR LIE ALGEBRAS 1. Lie groups A Lie group is a special smooth manifold on which there is a group structure, and moreover, the two structures are compatible. Lie groups are

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

Problem set 2. Math 212b February 8, 2001 due Feb. 27

Problem set 2. Math 212b February 8, 2001 due Feb. 27 Problem set 2 Math 212b February 8, 2001 due Feb. 27 Contents 1 The L 2 Euler operator 1 2 Symplectic vector spaces. 2 2.1 Special kinds of subspaces....................... 3 2.2 Normal forms..............................

More information

Notes on nilpotent orbits Computational Theory of Real Reductive Groups Workshop. Eric Sommers

Notes on nilpotent orbits Computational Theory of Real Reductive Groups Workshop. Eric Sommers Notes on nilpotent orbits Computational Theory of Real Reductive Groups Workshop Eric Sommers 17 July 2009 2 Contents 1 Background 5 1.1 Linear algebra......................................... 5 1.1.1

More information

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors

Real symmetric matrices/1. 1 Eigenvalues and eigenvectors Real symmetric matrices 1 Eigenvalues and eigenvectors We use the convention that vectors are row vectors and matrices act on the right. Let A be a square matrix with entries in a field F; suppose that

More information

Mathematical Methods wk 2: Linear Operators

Mathematical Methods wk 2: Linear Operators John Magorrian, magog@thphysoxacuk These are work-in-progress notes for the second-year course on mathematical methods The most up-to-date version is available from http://www-thphysphysicsoxacuk/people/johnmagorrian/mm

More information

Lie algebraic quantum control mechanism analysis

Lie algebraic quantum control mechanism analysis Lie algebraic quantum control mechanism analysis June 4, 2 Existing methods for quantum control mechanism identification do not exploit analytic features of control systems to delineate mechanism (or,

More information

CALCULUS ON MANIFOLDS. 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M =

CALCULUS ON MANIFOLDS. 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M = CALCULUS ON MANIFOLDS 1. Riemannian manifolds Recall that for any smooth manifold M, dim M = n, the union T M = a M T am, called the tangent bundle, is itself a smooth manifold, dim T M = 2n. Example 1.

More information

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Chemistry 5.76 Revised February, 1982 NOTES ON MATRIX METHODS

MASSACHUSETTS INSTITUTE OF TECHNOLOGY Chemistry 5.76 Revised February, 1982 NOTES ON MATRIX METHODS MASSACHUSETTS INSTITUTE OF TECHNOLOGY Chemistry 5.76 Revised February, 198 NOTES ON MATRIX METHODS 1. Matrix Algebra Margenau and Murphy, The Mathematics of Physics and Chemistry, Chapter 10, give almost

More information

On the singular elements of a semisimple Lie algebra and the generalized Amitsur-Levitski Theorem

On the singular elements of a semisimple Lie algebra and the generalized Amitsur-Levitski Theorem On the singular elements of a semisimple Lie algebra and the generalized Amitsur-Levitski Theorem Bertram Kostant, MIT Conference on Representations of Reductive Groups Salt Lake City, Utah July 10, 2013

More information

Numerical Analysis Lecture Notes

Numerical Analysis Lecture Notes Numerical Analysis Lecture Notes Peter J Olver 8 Numerical Computation of Eigenvalues In this part, we discuss some practical methods for computing eigenvalues and eigenvectors of matrices Needless to

More information

Lecture 4 Eigenvalue problems

Lecture 4 Eigenvalue problems Lecture 4 Eigenvalue problems Weinan E 1,2 and Tiejun Li 2 1 Department of Mathematics, Princeton University, weinan@princeton.edu 2 School of Mathematical Sciences, Peking University, tieli@pku.edu.cn

More information

Supplementary Notes March 23, The subgroup Ω for orthogonal groups

Supplementary Notes March 23, The subgroup Ω for orthogonal groups The subgroup Ω for orthogonal groups 18.704 Supplementary Notes March 23, 2005 In the case of the linear group, it is shown in the text that P SL(n, F ) (that is, the group SL(n) of determinant one matrices,

More information

A 2 G 2 A 1 A 1. (3) A double edge pointing from α i to α j if α i, α j are not perpendicular and α i 2 = 2 α j 2

A 2 G 2 A 1 A 1. (3) A double edge pointing from α i to α j if α i, α j are not perpendicular and α i 2 = 2 α j 2 46 MATH 223A NOTES 2011 LIE ALGEBRAS 11. Classification of semisimple Lie algebras I will explain how the Cartan matrix and Dynkin diagrams describe root systems. Then I will go through the classification

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian

2. Linear algebra. matrices and vectors. linear equations. range and nullspace of matrices. function of vectors, gradient and Hessian FE661 - Statistical Methods for Financial Engineering 2. Linear algebra Jitkomut Songsiri matrices and vectors linear equations range and nullspace of matrices function of vectors, gradient and Hessian

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

SYMPLECTIC GEOMETRY: LECTURE 5

SYMPLECTIC GEOMETRY: LECTURE 5 SYMPLECTIC GEOMETRY: LECTURE 5 LIAT KESSLER Let (M, ω) be a connected compact symplectic manifold, T a torus, T M M a Hamiltonian action of T on M, and Φ: M t the assoaciated moment map. Theorem 0.1 (The

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

7 Planar systems of linear ODE

7 Planar systems of linear ODE 7 Planar systems of linear ODE Here I restrict my attention to a very special class of autonomous ODE: linear ODE with constant coefficients This is arguably the only class of ODE for which explicit solution

More information

THE SEMISIMPLE SUBALGEBRAS OF EXCEPTIONAL LIE ALGEBRAS

THE SEMISIMPLE SUBALGEBRAS OF EXCEPTIONAL LIE ALGEBRAS Trudy Moskov. Matem. Obw. Trans. Moscow Math. Soc. Tom 67 (2006) 2006, Pages 225 259 S 0077-1554(06)00156-7 Article electronically published on December 27, 2006 THE SEMISIMPLE SUBALGEBRAS OF EXCEPTIONAL

More information

7. Symmetric Matrices and Quadratic Forms

7. Symmetric Matrices and Quadratic Forms Linear Algebra 7. Symmetric Matrices and Quadratic Forms CSIE NCU 1 7. Symmetric Matrices and Quadratic Forms 7.1 Diagonalization of symmetric matrices 2 7.2 Quadratic forms.. 9 7.4 The singular value

More information

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE

SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE SCHUR IDEALS AND HOMOMORPHISMS OF THE SEMIDEFINITE CONE BABHRU JOSHI AND M. SEETHARAMA GOWDA Abstract. We consider the semidefinite cone K n consisting of all n n real symmetric positive semidefinite matrices.

More information

Topics in Representation Theory: Cultural Background

Topics in Representation Theory: Cultural Background Topics in Representation Theory: Cultural Background This semester we will be covering various topics in representation theory, see the separate syllabus for a detailed list of topics, including some that

More information

Lecture 2: Linear Algebra Review

Lecture 2: Linear Algebra Review EE 227A: Convex Optimization and Applications January 19 Lecture 2: Linear Algebra Review Lecturer: Mert Pilanci Reading assignment: Appendix C of BV. Sections 2-6 of the web textbook 1 2.1 Vectors 2.1.1

More information

16. Local theory of regular singular points and applications

16. Local theory of regular singular points and applications 16. Local theory of regular singular points and applications 265 16. Local theory of regular singular points and applications In this section we consider linear systems defined by the germs of meromorphic

More information

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms

Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms Krylov Subspace Methods for the Evaluation of Matrix Functions. Applications and Algorithms 2. First Results and Algorithms Michael Eiermann Institut für Numerische Mathematik und Optimierung Technische

More information

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix

The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix The Eigenvalue Shift Technique and Its Eigenstructure Analysis of a Matrix Chun-Yueh Chiang Center for General Education, National Formosa University, Huwei 632, Taiwan. Matthew M. Lin 2, Department of

More information

PHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review

PHYS 705: Classical Mechanics. Rigid Body Motion Introduction + Math Review 1 PHYS 705: Classical Mechanics Rigid Body Motion Introduction + Math Review 2 How to describe a rigid body? Rigid Body - a system of point particles fixed in space i r ij j subject to a holonomic constraint:

More information

A SYMBOLIC-NUMERIC APPROACH TO THE SOLUTION OF THE BUTCHER EQUATIONS

A SYMBOLIC-NUMERIC APPROACH TO THE SOLUTION OF THE BUTCHER EQUATIONS CANADIAN APPLIED MATHEMATICS QUARTERLY Volume 17, Number 3, Fall 2009 A SYMBOLIC-NUMERIC APPROACH TO THE SOLUTION OF THE BUTCHER EQUATIONS SERGEY KHASHIN ABSTRACT. A new approach based on the use of new

More information

8.1 Bifurcations of Equilibria

8.1 Bifurcations of Equilibria 1 81 Bifurcations of Equilibria Bifurcation theory studies qualitative changes in solutions as a parameter varies In general one could study the bifurcation theory of ODEs PDEs integro-differential equations

More information

Degenerate Perturbation Theory. 1 General framework and strategy

Degenerate Perturbation Theory. 1 General framework and strategy Physics G6037 Professor Christ 12/22/2015 Degenerate Perturbation Theory The treatment of degenerate perturbation theory presented in class is written out here in detail. The appendix presents the underlying

More information

Representations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III

Representations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III Representations of algebraic groups and their Lie algebras Jens Carsten Jantzen Lecture III Lie algebras. Let K be again an algebraically closed field. For the moment let G be an arbitrary algebraic group

More information

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM

33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM 33AH, WINTER 2018: STUDY GUIDE FOR FINAL EXAM (UPDATED MARCH 17, 2018) The final exam will be cumulative, with a bit more weight on more recent material. This outline covers the what we ve done since the

More information

LECTURE 2: SYMPLECTIC VECTOR BUNDLES

LECTURE 2: SYMPLECTIC VECTOR BUNDLES LECTURE 2: SYMPLECTIC VECTOR BUNDLES WEIMIN CHEN, UMASS, SPRING 07 1. Symplectic Vector Spaces Definition 1.1. A symplectic vector space is a pair (V, ω) where V is a finite dimensional vector space (over

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

ELA

ELA LEFT EIGENVALUES OF 2 2 SYMPLECTIC MATRICES E. MACíAS-VIRGÓS AND M.J. PEREIRA-SÁEZ Abstract. A complete characterization is obtained of the 2 2 symplectic matrices that have an infinite number of left

More information

A connection between number theory and linear algebra

A connection between number theory and linear algebra A connection between number theory and linear algebra Mark Steinberger Contents 1. Some basics 1 2. Rational canonical form 2 3. Prime factorization in F[x] 4 4. Units and order 5 5. Finite fields 7 6.

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM

CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM CANONICAL LOSSLESS STATE-SPACE SYSTEMS: STAIRCASE FORMS AND THE SCHUR ALGORITHM Ralf L.M. Peeters Bernard Hanzon Martine Olivi Dept. Mathematics, Universiteit Maastricht, P.O. Box 616, 6200 MD Maastricht,

More information

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.

Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane. Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 8 Lecture 8 8.1 Matrices July 22, 2018 We shall study

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information