AN EFFICIENT METHOD FOR COMPUTING THE SIMPLEST NORMAL FORMS OF VECTOR FIELDS


International Journal of Bifurcation and Chaos, Vol. 13, No. 1 (2003) 19-46
© World Scientific Publishing Company

AN EFFICIENT METHOD FOR COMPUTING THE SIMPLEST NORMAL FORMS OF VECTOR FIELDS

P. YU and Y. YUAN
Department of Applied Mathematics, University of Western Ontario, London, Ontario, Canada N6A 5B7
pyu@pyu.apmaths.uwo.ca

Received February 2001; Revised November 4, 2001

A computationally efficient method is proposed for computing the simplest normal forms of vector fields. A simple, explicit recursive formula is obtained for general differential equations. The most important feature of the approach is that it obtains the simplest formula, which reduces the computational demand to a minimum. At each order of the normal form computation, the formula generates a set of algebraic equations for computing the normal form and the nonlinear transformation. Moreover, the new recursive method does not require solving large matrix equations; instead it solves linear algebraic equations one by one, and is therefore computationally efficient. In addition, unlike conventional normal form theory, which uses a separate nonlinear transformation at each order, this approach uses a consistent nonlinear transformation through the computations at all orders. This yields a convenient, one-step transformation between the original system and the simplest normal form. The new method can treat general differential equations which are not necessarily assumed to be in a conventional normal form. The method is applied to the Hopf and Bogdanov-Takens singularities, with examples to show the computational efficiency. Maple programs have been developed to provide an automatic procedure for applications.

Keywords: CNF; SNF; efficient computation; recursive algorithm; Hopf bifurcation; B-T singularity; computer algebra; Maple.

1. Introduction

Normal form theory for differential equations can be traced back to original work of more than one hundred years ago, and most of the credit should be given to Poincaré [1879]. The theory has proved useful for transforming a system to a simpler form in order to make the study of the complex dynamical behavior of the system easier. The simpler form is qualitatively equivalent to the original system in the vicinity of a fixed point, and thus greatly simplifies the dynamical analysis (e.g. see [Guckenheimer & Holmes, 1993; Elphick et al., 1987; Nayfeh, 1993; Chow et al., 1994]). The general procedure of conventional (classical) normal form (CNF) theory is to use the linear singularity of a system at an equilibrium to form a Lie bracket operator and then repeatedly employ the operator to remove as many higher order nonlinear terms as possible. However, it has been found that the CNF is not the simplest form and may be further simplified using a similar near-identity nonlinear transformation (e.g. see [Cushman & Sanders, 1986, 1988; Chua & Kokubu, 1988a, 1988b]). The difference between the CNF and the simplest normal form (SNF) can be roughly explained as follows: suppose the vector field has been expanded into vector homogeneous polynomials according to the order of the nonlinear terms, and the normal form computation is carried out order by

order. Let the integer k ≥ 2 denote a general order. In CNF theory the coefficients of the kth order nonlinear transformation are used only to eliminate terms in the kth degree homogeneous polynomial, while in the SNF the coefficients of the kth order nonlinear transformation are used not only to eliminate terms in the kth degree polynomial, but also to remove terms in higher degree polynomials. Since the computation of the SNF is much more involved than that of the CNF, computer algebra systems such as Maple, Mathematica, Reduce, etc. have been used (e.g. see [Algaba, 1998; Yu, 1999; Yu & Yuan, 2001; Yuan & Yu, 2001]). The automatic Maple programs developed in [Yu, 1999; Yu & Yuan, 2001; Yuan & Yu, 2001] can be applied to find the SNFs associated with singularities such as Hopf and generalized Hopf, zero and Hopf, and a double zero. These programs require very little effort from a user to prepare an input file.

The two main steps involved in computing the kth order CNF are: (1) to determine the form of the normal form; and (2) to compute the coefficients of the normal form and the associated nonlinear transformation. The form of a normal form may be found using Takens normal form theory [Takens, 1974], while the computation of the coefficients requires deriving a set of algebraic equations at each order. Many researchers have developed algorithms for determining these algebraic equations (e.g. see [Chua & Kokubu, 1988a, 1988b; Algaba, 1998; Yu, 1999; Kokubu et al., 1996; Wang et al., 2000; Yuan & Yu, 2001; Yu & Yuan, 2001]). However, the approaches presented in these references have a common shortcoming: a basic procedure in the symbolic computation of normal forms is to substitute the obtained lower order (< k) normal form and nonlinear transformation into the original differential equation to yield an expression for the kth order computation, which contains not only the kth order terms, but also lower order (< k) and higher order (> k) terms. One must extract the kth order terms from this expression to obtain the kth order algebraic equation. This unnecessarily increases the computational burden and takes too much computer memory, especially when computing higher order normal forms. Therefore, removing the unnecessary lower and higher order terms from the kth order computation becomes essential in order to reduce the computation time and memory requirement. This is particularly important for computing the SNF, since it needs more memory than is required for computing the CNF.

This paper presents an efficient approach to compute the kth order (k ≥ 2, an arbitrary integer) algebraic equation, which contains only the terms belonging to the kth order equation. Based on the Lie bracket operator, a recursive formula is derived which can be applied to any singularity. Moreover, the new method does not require solving large matrix equations; instead it solves linear algebraic equations one by one, and is therefore computationally efficient. In addition, unlike CNF theory, which uses separate nonlinear transformations at each order, the new approach uses a consistent nonlinear transformation through the computations at all orders. This provides a convenient, one-step transformation between the original system and the SNF, which is particularly useful in real applications. Furthermore, this method can treat general differential equations which are not necessarily described by a CNF.
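To make the bookkeeping issue concrete, the short SymPy sketch below (an illustrative addition with a made-up toy system, not the authors' Maple code) substitutes a single second-order near-identity transformation x = y + h_2(y) into a quadratic planar vector field; the result already mixes terms of orders one through four, which is exactly the material that conventional order-by-order procedures must repeatedly sift through to isolate the order-k part.

```python
# Illustrative sketch: substituting x = y + h2(y) into a toy quadratic system
# produces terms of several different orders at once.  The system and the
# transformation coefficients c1, c2 are hypothetical choices.
import sympy as sp

y1, y2, c1, c2 = sp.symbols('y1 y2 c1 c2')

def f(x1, x2):
    # toy planar vector field with a quadratic nonlinearity
    return sp.Matrix([-x2 + x1**2, x1 + x1*x2])

h2 = sp.Matrix([c1*y1**2, c2*y1*y2])        # second-order near-identity part
x = sp.Matrix([y1, y2]) + h2                # x = y + h2(y)

rhs = sp.expand(f(x[0], x[1])[0])           # first component of f(y + h2(y))
degrees = sorted({sum(m) for m in sp.Poly(rhs, y1, y2).monoms()})
print(degrees)                              # [1, 2, 3, 4]: orders beyond 2 appear
```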
With the aid of the efficient recursive formula, the Hopf and Bogdanov-Takens singularities are considered, showing that the method is indeed computationally efficient and can be applied to find higher order SNFs. In the next section, the efficient computation method is presented and the general explicit recursive formula is derived. Section 3 applies the new approach to find the SNFs of the Hopf and Bogdanov-Takens singularities. Symbolic computation is also discussed in this section. Numerical examples are given in Sec. 4 to show the computational efficiency of the method, and finally, conclusions are drawn in Sec. 5. Maple source codes, and sample input data and computer output, are listed in Appendices A and B for reference.

2. An Efficient Computation Method

In this section, we first derive the general formula for the efficient computation method, which is summarized in a theorem, and then discuss the symbolic computation.

2.1. Theoretical analysis

Consider the general system, described by

\[
\dot{x} = Lx + f(x) \equiv v_1 + f_2(x) + f_3(x) + \cdots + f_k(x) + \cdots, \tag{1}
\]

where x ∈ R^n, v_1 = Lx represents the linear part, and L is the Jacobian matrix, assumed, without

loss of generality, to be in a standard Jordan canonical form. (Usually J is used to denote the Jacobian matrix; here L is used, consistent with the Lie bracket notation.) It is assumed that all eigenvalues of L have zero real parts, implying that the dynamics of system (1) is described on an n-dimensional center manifold. f_k(x) denotes the kth order vector homogeneous polynomial of x. It is further assumed that system (1) has an equilibrium at the origin x = 0.

The basic idea of normal form theory is to find a near-identity nonlinear transformation

\[
x = y + h(y) \equiv y + h_2(y) + h_3(y) + \cdots + h_k(y) + \cdots \tag{2}
\]

such that the resulting system

\[
\dot{y} = Ly + g(y) \equiv Ly + g_2(y) + g_3(y) + \cdots + g_k(y) + \cdots \tag{3}
\]

becomes as simple as possible. Here h_k(y) and g_k(y) denote the kth order vector homogeneous polynomials of y.

To apply normal form theory, first define an operator as follows:

\[
L_k : H_k \to H_k, \qquad U_k \in H_k \mapsto L_k(U_k) = [U_k, v_1] \in H_k, \tag{4}
\]

where H_k denotes the linear vector space consisting of the kth-order vector homogeneous polynomials. The operator [U_k, v_1] is called the Lie bracket, defined as

\[
[U_k, v_1] = Dv_1\, U_k - DU_k\, v_1. \tag{5}
\]

Next, define the space R_k as the range of L_k, and the complementary space of R_k as K_k. Thus,

\[
H_k = R_k \oplus K_k, \tag{6}
\]

and we can then choose bases for R_k and K_k. Consequently, a vector homogeneous polynomial f_k ∈ H_k can be split into two parts: one spanned by the basis of R_k and the other by that of K_k. Normal form theory shows that the part of f_k belonging to R_k can be eliminated, while the part belonging to K_k must be retained; the latter is called the normal form. By applying Takens normal form theory [Takens, 1974], one can find the kth order normal form g_k(y), while the part belonging to R_k can be removed by appropriately choosing the coefficients of the nonlinear transformation h_k(y). The form of the normal form g_k(y) depends upon the basis of the complementary space K_k, which is determined by the linear vector v_1. We may apply the matrix method [Guckenheimer & Holmes, 1993] to find the basis of the space R_k and then determine the basis of the complementary space K_k. Once the basis of K_k is chosen, the form of g_k(y) can be determined; this actually represents the CNF. The idea of further reducing the CNF is to find an appropriate h_k(y) such that some coefficients of g_k(y) can be further eliminated, leading to the SNF.

The fundamental difference between the CNF and the SNF is explained as follows. Roughly speaking, finding the coefficients of the normal form and of the nonlinear transformation amounts to solving a set of linear algebraic equations at each order. Since in general the number of variables (the coefficients of the nonlinear transformation) is larger than the number of algebraic equations, some coefficients of the nonlinear transformation cannot be determined. In CNF theory, the undetermined coefficients are set to zero at each order and, therefore, the nonlinear transformation is simplified. However, in order to further simplify the normal form, the undetermined coefficients should not be set to zero, but should be carried over to higher order equations so that they may be used to eliminate nonlinear terms in higher order normal forms. This is the key idea in computing the SNF. In general, when one applies normal form theory (e.g. Takens normal form theory) to a system, one can find the form of the normal form (i.e. the basis of the complementary space K_k), but not the explicit expressions.
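The operator in Eqs. (4) and (5) is straightforward to experiment with in a computer algebra system. The following SymPy sketch (an illustrative addition; the paper's actual implementation is in Maple) computes [U_k, v_1] = Dv_1 U_k - DU_k v_1 for a hypothetical quadratic field U_2 and the Hopf linear part, and shows that the result again lies in H_2, as required by Eq. (4).

```python
# Sketch of the Lie bracket of Eq. (5) for planar polynomial vector fields.
import sympy as sp

y = sp.Matrix(sp.symbols('y1 y2'))

def lie_bracket(U, v):
    """[U, v] = (Dv) U - (DU) v, with D the Jacobian with respect to y."""
    return (v.jacobian(y) * U - U.jacobian(y) * v).applyfunc(sp.expand)

v1 = sp.Matrix([y[1], -y[0]])          # linear part of a Hopf singularity, v1 = L y
U2 = sp.Matrix([y[0]**2, y[0]*y[1]])   # a hypothetical element of H_2

print(lie_bracket(U2, v1))             # again a quadratic vector field, i.e. in H_2
```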
However, in practical applications, the solutions for the normal form and the nonlinear transformation need to be found explicitly. To achieve this, one may assume a general form of the nonlinear transformation and substitute it back into the differential equation, with the aid of normal form theory, to obtain the kth order algebraic equations by balancing the coefficients of the homogeneous polynomial terms. These algebraic equations are then used to determine the coefficients of the normal form and the nonlinear transformation. Thus, the key step in the computation of the kth order normal form is to find the kth order algebraic equation, which takes most of the computation time and computer memory. Since the solution procedure for finding the kth order algebraic equation in most normal form computation methods produces expressions containing lower order (< k) and higher order (> k) terms as well, it demands considerably more computer memory and computation time. Therefore, from the computational point of view, it is crucial to derive the kth

order algebraic equation which contains only the kth order terms. The following theorem summarizes the result for the new recursive and computationally efficient approach, which can be used to compute the kth order normal form and the associated nonlinear transformation.

Theorem. The recursive formula for computing the coefficients of the simplest normal form and the nonlinear transformation is given by

\[
g_k = f_k + [h_k, v_1] + \sum_{i=2}^{k-1} \Big\{ [h_{k-i+1}, f_i] + Dh_i\,(f_{k-i+1} - g_{k-i+1}) \Big\}
+ \sum_{m=2}^{[k/2]} \frac{1}{m!} \sum_{i=m}^{k-m} D^m f_i
\sum_{\substack{l_1 + l_2 + \cdots + l_m = k-(i-m) \\ 2 \le l_1, l_2, \ldots, l_m \le k-(i-m)-2(m-1)}}
h_{l_1} h_{l_2} \cdots h_{l_m} \tag{7}
\]

for k = 2, 3, ..., where f_k, h_k and g_k are the kth order vector homogeneous polynomials of y (the argument y has been dropped for simplicity): f_k represents the kth order terms of the original system, h_k is the kth order nonlinear transformation, and g_k denotes the kth order normal form.

Remark. The notation D^m f_i h_{l_1} h_{l_2} ... h_{l_m} denotes the mth order terms of the Taylor expansion of f_i(y + h(y)) about y. More precisely,

\[
D^m f_i\, h_{l_1} h_{l_2} \cdots h_{l_m} = D\big(D\big(\cdots D\big((Df_i)\,h_{l_1}\big)h_{l_2}\cdots\big)h_{l_{m-1}}\big)h_{l_m}, \tag{8}
\]

where each differential operator D affects only the function f_i, not h_{l_j} (i.e. h_{l_j} is treated as a constant vector in the process of the differentiation), and thus m ≤ i. Note that at each level of the differentiation, the D operator is actually a Frechét derivative, giving rise to a matrix, which is multiplied with a vector to generate another vector, which then enters the next level of Frechét derivative, and so on.

Proof. First, differentiating Eq. (2) results in

\[
\dot{x} = \dot{y} + Dh(y)\,\dot{y} = (I + Dh(y))\,\dot{y}. \tag{9}
\]

Then, substituting Eqs. (1) and (3) into Eq. (9) yields

\[
Lx + f_2(x) + f_3(x) + \cdots + f_k(x) + \cdots = (I + Dh(y))\big(Ly + g_2(y) + g_3(y) + \cdots + g_k(y) + \cdots\big). \tag{10}
\]

Next, substituting Eq. (2) into Eq. (10) and rearranging the resulting equation gives

\[
\begin{aligned}
g_2(y) + g_3(y) + \cdots + g_k(y) + \cdots
&= Lh(y) - Dh(y)\,Ly - Dh(y)\,g_2(y) - Dh(y)\,g_3(y) - \cdots - Dh(y)\,g_k(y) - \cdots \\
&\quad + f_2(y + h(y)) + f_3(y + h(y)) + \cdots + f_k(y + h(y)) + \cdots \tag{11}
\end{aligned}
\]

which can be rewritten, using Taylor expansions about y, as

\[
\begin{aligned}
g_2(y) + g_3(y) + \cdots + g_k(y) + \cdots
&= f_2(y) + f_3(y) + \cdots + f_k(y) + \cdots
+ \sum_{i=2}^{\infty} \{ Lh_i(y) - Dh_i(y)\,Ly \}
+ \sum_{i=2}^{\infty} Dh(y)\{ f_i(y) - g_i(y) \} \\
&\quad + \sum_{i=2}^{\infty} \{ Df_i(y)\,h(y) - Dh(y)\,f_i(y) \}
+ \frac{1}{2!}\{ D^2 f_2(y)\,h^2(y) + D^2 f_3(y)\,h^2(y) + \cdots \} \\
&\quad + \frac{1}{3!}\{ D^3 f_3(y)\,h^3(y) + D^3 f_4(y)\,h^3(y) + \cdots \}
+ \cdots
+ \frac{1}{k!}\{ D^k f_k(y)\,h^k(y) + D^k f_{k+1}(y)\,h^k(y) + \cdots \} + \cdots \tag{12}
\end{aligned}
\]

Further, one uses the Lie bracket notation and rewrites the Taylor expansion in a component form according to the order of the terms:

\[
\begin{aligned}
\sum_{i=2}^{\infty} g_i(y)
&= \sum_{i=2}^{\infty} f_i(y) + \sum_{i=2}^{\infty} [h_i(y), v_1(y)]
+ \sum_{p=4}^{\infty} \sum_{\substack{i+j=p \\ i,\,j \ge 2}} Dh_j(y)\{ f_i(y) - g_i(y) \} \\
&\quad + \sum_{p=4}^{\infty} \sum_{\substack{i+j=p \\ i,\,j \ge 2}} \{ Df_i(y)\,h_j(y) - Dh_j(y)\,f_i(y) \}
+ \sum_{m=2}^{\infty} \frac{1}{m!} \sum_{i=m}^{\infty} D^m f_i(y)\,\{ h_2(y) + h_3(y) + \cdots \}^m. \tag{13}
\end{aligned}
\]

Finally, we may truncate Eq. (13) at the kth order, which is enough for the proof, and put it in ascending order:

\[
\begin{aligned}
\sum_{i=2}^{k} g_i(y)
&= \sum_{i=2}^{k} f_i(y) + \sum_{i=2}^{k} [h_i(y), v_1(y)]
+ \sum_{j=3}^{k} \sum_{i=2}^{j-1} [h_{j-i+1}(y), f_i(y)]
+ \sum_{j=3}^{k} \sum_{i=2}^{j-1} Dh_i(y)\{ f_{j-i+1}(y) - g_{j-i+1}(y) \} \\
&\quad + \sum_{j=4}^{k} \sum_{m=2}^{[j/2]} \frac{1}{m!} \sum_{i=m}^{j-m} D^m f_i(y)
\sum_{\substack{l_1 + l_2 + \cdots + l_m = j-(i-m) \\ 2 \le l_1, l_2, \ldots, l_m \le j-(i-m)-2(m-1)}}
h_{l_1}(y)\,h_{l_2}(y) \cdots h_{l_m}(y), \tag{14}
\end{aligned}
\]

where the property of the Lie bracket,

\[
[X_i, Y_j] \in H_{i+j-1} \quad \text{for } X_i \in H_i \text{ and } Y_j \in H_j, \tag{15}
\]

has been used. Now, by collecting the terms in Eq. (14) according to their order, one obtains

\[
\begin{aligned}
g_2 &= f_2 + [h_2, v_1], \\
g_3 &= f_3 + [h_3, v_1] + [h_2, f_2] + Dh_2(f_2 - g_2), \\
g_4 &= f_4 + [h_4, v_1] + [h_3, f_2] + [h_2, f_3] + Dh_2(f_3 - g_3) + Dh_3(f_2 - g_2) + \tfrac{1}{2} D^2 f_2\, h_2^2, \tag{16}
\end{aligned}
\]

etc., where the variable y has been dropped for simplicity. For a general k, we have

\[
g_k = f_k + [h_k, v_1] + \sum_{i=2}^{k-1} \{ [h_i, f_{k-i+1}] + Dh_i(f_{k-i+1} - g_{k-i+1}) \}
+ \sum_{m=2}^{[k/2]} \frac{1}{m!} \sum_{i=m}^{k-m} D^m f_i
\sum_{\substack{l_1 + l_2 + \cdots + l_m = k-(i-m) \\ 2 \le l_1, l_2, \ldots, l_m \le k-(i-m)-2(m-1)}}
h_{l_1} h_{l_2} \cdots h_{l_m},
\]

which is Eq. (7), and the proof is thus completed.
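The low-order instances (16) are easy to reproduce in a computer algebra system. The SymPy sketch below (an illustrative stand-in for the Maple implementation, using hypothetical f_2, f_3, h_2, h_3) evaluates g_2 and g_3 exactly as written, with only Frechét derivatives and Lie brackets as building blocks.

```python
# Sketch of the order-2 and order-3 instances (16) of the recursive formula (7).
import sympy as sp

y = sp.Matrix(sp.symbols('y1 y2'))

def D(F, h):
    """Frechet derivative of F applied to the vector h: (DF) h."""
    return F.jacobian(y) * h

def lie_bracket(U, v):
    """[U, v] = (Dv) U - (DU) v."""
    return v.jacobian(y) * U - U.jacobian(y) * v

# hypothetical data: Hopf linear part plus quadratic/cubic terms of (1),
# and trial nonlinear-transformation terms h2, h3
v1 = sp.Matrix([y[1], -y[0]])
f2 = sp.Matrix([y[0]**2, y[0]*y[1]])
f3 = sp.Matrix([y[0]**3, 0])
h2 = sp.Matrix([y[0]*y[1], y[1]**2])
h3 = sp.Matrix([0, y[0]**2*y[1]])

g2 = (f2 + lie_bracket(h2, v1)).applyfunc(sp.expand)
g3 = (f3 + lie_bracket(h3, v1) + lie_bracket(h2, f2)
      + D(h2, f2 - g2)).applyfunc(sp.expand)
print(g2)
print(g3)
```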

It has been observed from Eq. (7) that:

(i) The only operation appearing in the formula is the Frechét derivative, involved in Dh_i, D^m f_i and the Lie bracket [·, ·]. This operation can be easily implemented on computers using a computer algebra system.

(ii) The kth order equation contains all the kth-order terms and only the kth-order terms. The equation is given in a recursive form.

(iii) The kth order equation depends upon the known vector homogeneous polynomials v_1, f_2, f_3, ..., f_k, and upon h_2, h_3, ..., h_{k-1}, as well as g_2, g_3, ..., g_{k-1}, which have been explicitly determined from the lower order equations.

(iv) The equation involves the coefficients of the kth order nonlinear transformation h_k and the coefficients of the kth order normal form g_k. If the jth order (j < k) coefficients of h_j are completely determined from the jth order equation, then the only unknown coefficients in the kth order equation are those of h_k and g_k, which yield the CNF.

(v) If the kth order equation contains lower order coefficients of h_j (j < k) which are undetermined in the lower order (< k) equations, they may be used to eliminate some coefficients of g_k, and thus the CNF can be further simplified.

(vi) For most of the approaches to computing the SNF (e.g. see [Chua & Kokubu, 1988a, 1988b; Algaba, 1998; Wang et al., 2000; Yu, 1999; Yuan & Yu, 2001]), the nonlinear vector field f(x) given in Eq. (1) is assumed to be a CNF in order to simplify the symbolic computations. All the approaches described in the above references generate kth order algebraic equations which contain lower order (< k) as well as higher order (> k) terms. This is extremely time consuming in symbolic computations and it also takes too much computer memory. With the new approach proposed in this paper, the kth order equation contains exactly the kth order terms, which greatly saves computer memory and computational time. Therefore, for our approach, the vector field f(x) can be assumed to be a general analytic function, not necessarily a CNF.

Now, we use Eq. (16) to explain the idea of computing the SNF. Consider the first equation of Eq. (16), for g_2: we split f_2 ∈ H_2 into two parts, one belonging to R_2 and the other to K_2. Obviously, the part belonging to R_2 can be eliminated by the term [h_2, v_1] with an appropriate h_2. The remaining part of f_2, belonging to K_2, is the second order normal form g_2. However, since h_2 ∈ H_2, it must have some coefficients which are not needed for eliminating the part of f_2 belonging to R_2. Setting these unnecessary coefficients to zero leaves the next equation of Eq. (16), for g_3, in the same situation: that is, it only needs [h_3, v_1] to remove the part of f_3 belonging to R_3, since the other two terms [h_2, f_2] and Dh_2(f_2 - g_2) are known expressions. This procedure can be carried out to any higher order equation, resulting in the CNF. However, if, when g_2 is solved, the unnecessary coefficients of h_2 are instead carried over to the next order equation, then it is clear from the second equation of Eq. (16) that the two terms [h_2, f_2] and Dh_2(f_2 - g_2) contain the unnecessary coefficients, which can be used to possibly eliminate a portion or the whole of the part of f_3 which belongs to K_3. If the unnecessary coefficients are not used at this step, they can be carried further over to higher order equations and may simplify higher order normal forms.

2.2. Symbolic computation

The recursive formula given in Eq.
(7) has been used directly to develop a symbolic computation program based on Maple. The main operation involved in the computation is the multiplication of a matrix with a vector (a Lie bracket operation consists of two such multiplications). The computations for the second and the third terms in Eq. (7) (i.e. the Lie bracket and the first summation) are straightforward, while the last summation in Eq. (7) needs careful consideration in order to obtain a minimum number of operations. First, it should be noted that the variable used in the functions h_{l_1}, h_{l_2}, ..., h_{l_m} must be different (i.e. not the y variable) from that of f_i, so that the operator D can treat them as constants in the process of differentiation [see Eq. (8)]. When the differentiations are completed, the variables in these h functions should be changed back to the original variable y. Secondly, in order to obtain the minimum number of operations for the last summation, notice that many terms in the summation are actually the same, because the indices l_1, l_2, ..., l_m may be equal, and because of the following fact:

\[
D\big((Df_i)\,h_1\big)h_2 = D\big((Df_i)\,h_2\big)h_1 \tag{17}
\]

which can be proved by a direct calculation as follows. Writing Df_i = (f_{pj}) with f_{pj} = ∂f_p/∂y_j, f_{pjl} = ∂²f_p/∂y_j∂y_l, h_1 = (h_{11}, h_{12}, ..., h_{1n})^T and h_2 = (h_{21}, h_{22}, ..., h_{2n})^T, we have

\[
\begin{aligned}
D\big((Df_i)h_1\big)h_2
&= D\!\left(
\begin{pmatrix}
f_{11} & f_{12} & \cdots & f_{1n} \\
f_{21} & f_{22} & \cdots & f_{2n} \\
\vdots & & & \vdots \\
f_{n1} & f_{n2} & \cdots & f_{nn}
\end{pmatrix}
\begin{pmatrix} h_{11} \\ h_{12} \\ \vdots \\ h_{1n} \end{pmatrix}
\right) h_2
= D\!\left(
\begin{pmatrix}
\sum_{j=1}^{n} f_{1j}\, h_{1j} \\
\sum_{j=1}^{n} f_{2j}\, h_{1j} \\
\vdots \\
\sum_{j=1}^{n} f_{nj}\, h_{1j}
\end{pmatrix}
\right) h_2 \\
&= \begin{pmatrix}
\sum_{l=1}^{n} \sum_{j=1}^{n} f_{1jl}\, h_{1j}\, h_{2l} \\
\sum_{l=1}^{n} \sum_{j=1}^{n} f_{2jl}\, h_{1j}\, h_{2l} \\
\vdots \\
\sum_{l=1}^{n} \sum_{j=1}^{n} f_{njl}\, h_{1j}\, h_{2l}
\end{pmatrix}
= \begin{pmatrix}
\sum_{j=1}^{n} \big( \sum_{l=1}^{n} f_{1jl}\, h_{2l} \big) h_{1j} \\
\sum_{j=1}^{n} \big( \sum_{l=1}^{n} f_{2jl}\, h_{2l} \big) h_{1j} \\
\vdots \\
\sum_{j=1}^{n} \big( \sum_{l=1}^{n} f_{njl}\, h_{2l} \big) h_{1j}
\end{pmatrix}
= D\big((Df_i)h_2\big)h_1, \tag{18}
\end{aligned}
\]

where the fact that h_{l_j} is not affected by the operator D has been used. Thus, the order of the differentiations in Eq. (8) with respect to the h_{l_j} has no influence. Then, the last summation in Eq. (7) can be rewritten as

\[
\begin{aligned}
\sum_{m=2}^{[k/2]} \frac{1}{m!} \sum_{i=m}^{k-m} D^m f_i
\sum_{\substack{l_1 + l_2 + \cdots + l_m = k-(i-m) \\ 2 \le l_1, \ldots, l_m \le k-(i-m)-2(m-1)}} h_{l_1} h_{l_2} \cdots h_{l_m}
&= \sum_{m=2}^{[k/2]} \frac{1}{m!} \sum_{i=m}^{k-m} D^m f_i
\sum_{\substack{q_1 l_1 + q_2 l_2 + \cdots + q_p l_p = k-(i-m) \\ 2 \le l_1 < l_2 < \cdots < l_p}}
\frac{m!}{q_1!\, q_2! \cdots q_p!}\, h_{l_1}^{q_1} h_{l_2}^{q_2} \cdots h_{l_p}^{q_p} \\
&= \sum_{m=2}^{[k/2]} \sum_{i=m}^{k-m} D^m f_i
\sum_{\substack{q_1 l_1 + q_2 l_2 + \cdots + q_p l_p = k-(i-m) \\ 2 \le l_1 < l_2 < \cdots < l_p}}
\frac{h_{l_1}^{q_1} h_{l_2}^{q_2} \cdots h_{l_p}^{q_p}}{q_1!\, q_2! \cdots q_p!}, \tag{19}
\end{aligned}
\]

where l_1 < l_2 < ⋯ < l_p are the distinct values taken by the original indices and q_j, j = 1, 2, ..., p, are positive integers (their multiplicities), so that q_1 + q_2 + ⋯ + q_p = m.

Based on Eqs. (7) and (19), Maple programs have been developed which require only a simple preparation of an input file by the user. The program is outlined below, and the source codes and sample input and output files can be found in Appendices A and B, respectively.

(1) Read the prepared input file. The input file lists the order of the SNF to be computed, ord, and the case of singularity to be considered.
(2) Separate the different order terms of the given input differential equations.
(3) A procedure for computing the Lie bracket operator.
(4) A procedure for computing the Jacobian matrix and the multiplication of a matrix with a vector.
(5) A procedure for solving a linear algebraic equation.
(6) Obtain the ordered terms for the functions F[k], G[k] and H[k].
(7) For order k, recursively compute the algebraic equations which will be used for finding the coefficients of the SNF and the corresponding nonlinear transformation (NT):
(a) compute the Lie bracket term [h_k, v_1];
(b) compute the summation term Σ_{i=2}^{k-1} {[h_{k-i+1}, f_i] + Dh_i(f_{k-i+1} - g_{k-i+1})};
(c) find the indices q_1, q_2, ..., q_p for the term h_{l_1}^{q_1} h_{l_2}^{q_2} ⋯ h_{l_p}^{q_p}, with 2 ≤ l_1 < l_2 < ⋯ < l_p and l_1 ≤ (k - (i - m))/m, satisfying q_1 l_1 + q_2 l_2 + ⋯ + q_p l_p = k - (i - m), which results in the coefficient m!/(q_1! q_2! ⋯ q_p!) for this term.
(8) Call the subroutine for the case study of the given singularity:
(a) obtain the kth order algebraic equation from the main program, which is stored in the variable cof[i, j, k];
(b) for complex analysis, take the real and imaginary parts of the coefficient cof[i, j, k];
(c) for a suborder k, determine the b coefficients and then solve for the related c coefficients;
(d) return to the main program.
(9) Write the SNF into the output file Nform.

3. Applications of the Method

In this section, we apply the results obtained in the previous section to compute the SNFs for Hopf bifurcation and the Bogdanov-Takens singularity. The computation of the SNFs for these two singularities has been considered before in [Yu, 1999] and [Yuan & Yu, 2001], respectively, and symbolic programs using Maple have also been developed in these two references. However, the methods presented in these references require that the original system be described by a CNF. Moreover, they produce kth-order algebraic equations which contain lower order (< k) as well as higher order (> k) terms, which greatly increases the computational time and takes too much computer memory

when computing higher order normal forms. In order to give a real comparison, we reconsider the two singularities using the new approach and use the examples presented in Sec. 4 to demonstrate the efficiency of the new method.

3.1. The SNF for Hopf bifurcation

In [Yu, 1999] the SNFs for Hopf and generalized Hopf bifurcations are obtained, where real analysis and a real solution procedure are used. In this section, we only consider Hopf bifurcation, for comparison. For the Hopf singularity, the linear part of system (1) is in the general form v_1 = (ω x_2, -ω x_1)^T, i.e. the Jacobian of system (1) has a pair of purely imaginary eigenvalues, ±ωi. Without loss of generality, we may use a time scaling to put it in the normalized form v_1 = (x_2, -x_1)^T. Thus, the Jacobian evaluated at the equilibrium x = 0 is of the form

\[
L = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}. \tag{20}
\]

When the eigenvalues of a Jacobian involve one or more purely imaginary pairs, a complex analysis may simplify the solution procedure. It has actually been noted that the real analysis given in [Yu, 1999] yields coupled algebraic equations, while it will be seen that the complex analysis can decouple the algebraic equations. Thus, introduce the linear transformation

\[
x_1 = \frac{1}{2}(z + \bar{z}), \qquad x_2 = \frac{i}{2}(z - \bar{z}); \qquad \text{i.e.} \quad z = x_1 - i x_2, \quad \bar{z} = x_1 + i x_2, \tag{21}
\]

where i is the imaginary unit, satisfying i² = -1, and \(\bar{z}\) is the complex conjugate of z. Then the operators ∂/∂z and ∂/∂z̄ can be obtained by the chain rule as follows:

\[
\frac{\partial}{\partial z} = \frac{\partial x_1}{\partial z}\frac{\partial}{\partial x_1} + \frac{\partial x_2}{\partial z}\frac{\partial}{\partial x_2}
= \frac{1}{2}\left( \frac{\partial}{\partial x_1} + i \frac{\partial}{\partial x_2} \right), \qquad
\frac{\partial}{\partial \bar{z}} = \frac{1}{2}\left( \frac{\partial}{\partial x_1} - i \frac{\partial}{\partial x_2} \right), \tag{22}
\]

and so the linear part of system (1), v_1, becomes

\[
v_1 = x_2 \frac{\partial}{\partial x_1} - x_1 \frac{\partial}{\partial x_2}
= \frac{i}{2}(z - \bar{z}) \frac{\partial}{\partial x_1} - \frac{1}{2}(z + \bar{z}) \frac{\partial}{\partial x_2}
= \frac{1}{2} i z \left( \frac{\partial}{\partial x_1} + i \frac{\partial}{\partial x_2} \right)
- \frac{1}{2} i \bar{z} \left( \frac{\partial}{\partial x_1} - i \frac{\partial}{\partial x_2} \right)
= i z \frac{\partial}{\partial z} - i \bar{z} \frac{\partial}{\partial \bar{z}}
\; \longleftrightarrow \;
\begin{pmatrix} i z \\ -i \bar{z} \end{pmatrix}. \tag{23}
\]

Indeed, applying the transformation (21) to system (1) yields

\[
\frac{dz}{dt} = i z + f(z, \bar{z}), \qquad \frac{d\bar{z}}{dt} = -i \bar{z} + \bar{f}(z, \bar{z}), \tag{24}
\]

where f is a polynomial in z and \(\bar{z}\) starting from the second order terms, and \(\bar{f}\) is the complex conjugate of f. Here, for convenience, we use the same notation f = (f, \bar{f})^T for the complex analysis. To find the CNF of the Hopf singularity, one may use Takens normal form theory to determine the basis g_k of the complementary space K_k, or Poincaré normal form theory to determine the so-called resonant terms. It is well known that the resonant terms are of the form z^{j+1} \bar{z}^j (e.g. see [Guckenheimer & Holmes, 1993]), and the kth-order normal form is given by

\[
g_k(z, \bar{z}) = \begin{pmatrix}
(b_{1k} + i\, b_{2k})\, z^{(k+1)/2}\, \bar{z}^{(k-1)/2} \\
(b_{1k} - i\, b_{2k})\, z^{(k-1)/2}\, \bar{z}^{(k+1)/2}
\end{pmatrix}, \tag{25}
\]

where b_{1k} and b_{2k} are real coefficients to be determined. It is obvious from Eq. (25) that the normal form contains odd order terms only, as expected. In the CNF computation, the two kth order coefficients b_{1k} and b_{2k} should, in general, be retained in the normal form. In the SNF computation, however, one wants to eliminate one or both of these coefficients by using the nonlinear transformation. Next, assume that f_k(z, \bar{z}) and h_k(z, \bar{z}) are given in the general forms f_k(z, \bar{z}) = (f_k, \bar{f}_k)^T

10 28 P. Yu & Y. Yuan and h k (z, z) =(h k, h k ) T with where A jkl s are known expressions, given by =(a k0 + ia 2k0 )z k +(a (k ) + ia 2(k ) )z k z + +(a 0k + ia 20k )z k, h k =(c k0 + ic 2k0 )z k +(c (k ) + ic 2(k ) )z k z + +(c 0k + ic 20k )z k. (26) Now, by applying the formula g 2 = f 2 +[h 2,v ], one obtains the following second order (i.e. k =2) complex algebraic equations: A 30 =4a 220 a 20 3 a a a 2a 02, A 230 = 2a a a a 02 3 a 2a 202, A 2 = a 20 a 2 + a a 220, A 22 = a 2 + a2 2 a 20a + a 220 a a a2 202, A 2 = a 20 a 2 a a 220 2a a 202 c 220 a 20 + i(c 20 a 220 )=0, +2a 2 a +2a 2 a 02 c 2 a i(c + a 2 )=0, (27) a 202a a 02a 220, (30) 3c 202 a 02 i(3c 02 + a 202 )=0, A 22 = a 2 + a2 2 a 20a a 220 a 2 which do not involve the b coefficients, and the six c coefficients can be used to eliminate all the second order terms, as expected. Solving Eq. (27) yields c 20 = a 220, c = a 2, c 02 = 3 a 202, c 220 = a 20, c 2 = a, c 202 = 3 a 02. (28) For k = 3, applying the formula g 3 = f 3 + [h 3,v ]+[h 2,f 2 ]+Dh 2 (f 2 g 2 ) results in 2c 230 a 30 + i(2c 30 a 230 )=A 30 + ia 230, b 3 + a 2 i(b 23 a 22 )=A 2 + ia 22, 2c 22 + a 2 + i(2c 2 + a 22 )=A 2 + ia 22, 4c a 03 + i(4c 03 + a 203 )=A 03 + ia 203, (29) +2a a 02 +2a 2 a a 20a a 220a 202, A 03 =2a 202 a 20 2a 02 a a 2a a a 202 A 203 = 2a 20 a 02 2a 220 a a a a 2a 202. Note that the four equations given in Eq. (29) are obtained from balancing the coefficients of the terms z 3, z 2 z, zz 2 and z 3, respectively. It is seen from Eq. (29) that the second equation must use the two b coefficients to solve the equation since it does not involve any c coefficients (the coefficients c 2 and c 22 do not appear in the equation), while the other three equations can be solved using c coefficients. The solution to Eq. (29) is obtained below: b 3 = a 2 A 2, b 23 = a 22 A 22, c 230 = 2 (a 30 + A 30 ), c 30 = 2 (a A 230 ), c 22 = 2 (a 2 A 2 ), c 2 = 2 (a 22 A 22 ), (3) c 203 = 4 (a 03 A 03 ), c 03 = 4 (a 203 A 203 ).
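The pattern emerging in these low-order calculations (every even order term removable, exactly one resonant term at each odd order, as in Eq. (25)) follows from a simple eigenvalue count for the operator L_k of Eq. (4): for v_1 = (iz, -iz̄)^T one finds [z^j z̄^l e_1, v_1] = i(1 - j + l) z^j z̄^l e_1, so a monomial in the first component can be removed unless j - l = 1. The small Python sketch below (an illustrative addition, not part of the paper's Maple code) lists the resonant exponent pairs for k = 2, ..., 7.

```python
# Resonance bookkeeping for the Hopf singularity: in the first component a
# monomial z**j * zbar**l with j + l = k survives only if j - l == 1, which
# reproduces the single term z**((k+1)/2) * zbar**((k-1)/2) of Eq. (25).
for k in range(2, 8):
    resonant = [(j, k - j) for j in range(k + 1) if j - (k - j) == 1]
    print(k, resonant)   # empty list for even k; one pair for odd k
```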

11 An Efficient Method for Computing the SNF 29 Similarly, we can obtain the fourth-order equations (k = 4) which are solved using the fourth-order c coefficients. In general, when k is an even number, one may find (k )c 2k0 a k0 + i((k )c k0 a 2k0 )=A k0 + ia 2k0, (k 3)c 2(k ) a (k ) + i((k 3)c (k ) + a 2(k ) )=A (k ) + ia 2(k ),. (k )c 2(k ) + a (k ) + i((k )c (k ) + a 2(k ) )=A (k ) + ia 2(k ), (k +)c 20k + a 0k + i((k +)c 0k + a 20k )=A 0k + ia 20k, where A jkl s are known expressions obtained from previous step equations. This indicates that all even order terms in the original system can be removed using a nonlinear transformation. This certainly agrees with CNF theory. Next, consider k = 5. One can apply the formula given in Eq. (7) to find the following algebraic equations: 4c 250 a 50 + i(4c 50 a 250 )=A 50 + ia 250, 2c 24 a 4 + i(2c 4 a 24 )+α 4 c 2 + β 4 c 22 = A 4 + ia 24, b 5 + a 32 i(b 25 a 232 )+2iα 32 c 2 2iβ 32 c 22 = A 32 + ia 232, (33) 2c a 23 + i(2c 23 + a 223 )+α 23 c 2 + β 23 c 22 = A 23 + ia 223, 4c 24 + a 4 + i(4c 4 + a 24 )+α 4 c 2 + β 4 c 22 = A 4 + ia 24, 6c a 05 + i(6c 05 + a 205 )=A 05 + ia 205, where A jkl s, α jk s and β jk s are known expressions, and in particular, α 32 = a 22 a 2 a2 2 + a 20a a 220 a a a2 202, (34) β 32 = a 2 a 20 a 2 a 220 a. It is seen from Eq. (33) that except for the third equation, all equations can be solved using c coefficients, while for the third equation, b 5 must be used, and b 25 may be eliminated if at least one of the two coefficients α 32 and β 32 is nonzero, which gives the case of Hopf bifurcation. Therefore, we have the following solutions: (32) If β 32 0, then b 25 =0, b 5 = a 32 A 32, c 2 =0, c 22 = a 232 A 232 2β 32 ; If α 32 0, then b 25 =0, b 5 = a 32 A 32, c 22 =0, c 2 = A 232 a 232 2α 32. When α 32 = β 32 = 0, it is a case of generalized Hopf bifurcations. For k = 7, similarly, we use the formula given in Eq. (7) to obtain 6c 270 a 70 + i(6c 70 a 270 )=A 70 + ia 270, 4c 26 a 6 + i(4c 6 a 26 )=A 6 + ia 26, 2c 252 a 52 + i(2c 52 a 252 )+α 52 c 32 + β 52 c 232 = A 52 + ia 252, b 7 + a 43 i(b 27 a 243 )+2(α 43 + iα 32 )c 32 2iβ 43 c 232 = A 43 + ia 243, 2c a 34 + i(2c 34 + a 234 )+α 34 c 32 + β 34 c 232 = A 34 + ia 234, 4c a 25 + i(4c 25 + a 225 )+α 25 c 32 + β 25 c 232 = A 25 + ia 225, 6c 26 + a 6 + i(6c 6 + a 26 )=A 6 + ia 26, 8c a 07 + i(8c 07 + a 207 )=A 07 + ia 207, (35) (36)

12 30 P. Yu & Y. Yuan where A jkl s, α jk s and β jk s are known expressions, andinparticular, α 43 = a 20 a 2 + a 220 a, β 43 = β 32. (37) Equation (36) clearly shows that if both α 43 and β 43 are nonzero, then both b 7 and b 27 can be set zero, and the two nonlinear coefficients c 32 and c 232 are used to solve the fourth equation of Eq. (36). It is easy to see from Eq. (36) that except for the fourth equation, which is obtained from the coefficient of the term z 4 z 3, all other equations can be solved using c coefficients. For the fourth equation, if the following conditions β 32 0 and α 43 0, (38) are satisfied, then one can set both b 7 and b 27 zero, and thus c 32 and c 232 are uniquely determined. Therefore, under condition (38), we have b 7 =0, b 27 =0, c 32 = A 32 a 43, c 232 = a α 32 c 32 A 232. (39) 2α 43 2β 32 Combining the conditions given in Eqs. (35) and (38) shows that β 32 0 is a necessary condition for obtaining the SNF of Hopf bifurcation. Continuing the above procedure to nineth, eleventh, etc., odd order equations, one may find that all the higher order coefficients b k and b 2k (k 7) can be eliminated using the coefficients c m(m ) and c 2m(m ) where m =(k )/2. The detailed proof can be found in the reference [Yu, 999] where generalized Hopf bifurcations are also discussed. Consequently, the SNF of Hopf bifurcation is obtained in the complex form: u = iu +(β 32 + iα 32 )u 2 u + b 5 u 3 u 2, (40) up to any order, where u is a complex conjugate of u, and the equation for u can be directly obtained from Eq. (40). Here, β 32 and α 32 are given in Eq. (34), and b 5 is explicitly given in terms of a coefficients as = a 32 A 32 = a 32 + [a 2 + a (a202 + a 2202)+a ] 20 a a 220 a 2 β 32 +(a 20 a 2 + a 220 a )α 32 + [a 2 + a22 23 ] (a202 + a2202 ) (a 20 a 2 + a 220 a ) 4 3 (a 40a a 240 a 02 ) + a 3 (a 220 2a 2 ) a 23 (a 20 +2a ) a 22 a 2 + a 222 a 3 (a 3a 202 a 23 a 02 ) [ a 30 a 22 + a (a 20 +2a 5 ) ( 3 a 02 + a 2 a 220 2a 2 5 ) 3 a ] 3 (a 20a 02 a 220 a 202 ) [ a 230 a 2 a 2 (a 20 +2a + 5 ) ( 3 a 02 + a a 220 2a ) 3 a ] 3 (a 220a 02 + a 20 a 202 ) + 3 a 03[a 02 (a +2a 20 ) a 202 (a 2 2a 220 )] + 3 a 203[a 202 (a +2a 20 )+a 02 (a 2 2a 220 )] + a 2 [a (a 20 + a 02 )+a 2 (a a 202 )+ 2 ] 3 (a 20a 02 a 220 a 202 ) + a 22 [a 2 (a 20 a 02 ) a (a 220 a 202 )+ 2 ] 3 (a 20a a 220 a 02 ) (a2 20 a2 220 )(a 2a 02 a a 202 ) 4 3 a 20a 220 (a 2 a a a 02 ) +(a 2 a 2 2)(a 20 a a 220 a 02 ) 2a a 2 (a 20 a 02 a 220 a 202 ) (a3 a 202 3a 2 a 2 a 02 3a a 2 2a a 3 2a 02 ). (4)
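A quick symbolic check (an illustrative SymPy addition, not the authors' Maple code) confirms that the complex form u̇ = iu + (β_32 + iα_32) u² ū + b_15 u³ ū² quoted in Eq. (40) reduces to the real amplitude and phase equations stated in Eq. (42) below; since the radial and angular rates do not depend on the phase, it suffices to evaluate at Θ = 0, i.e. u = R.

```python
# Check that Eq. (40) yields R' = beta32*R**3 + b15*R**5 and
# Theta' = 1 + alpha32*R**2, using R' = Re(conj(u)*u')/R and
# Theta' = Im(conj(u)*u')/R**2 evaluated at u = R (real).
import sympy as sp

R = sp.symbols('R', positive=True)
alpha32, beta32, b15 = sp.symbols('alpha32 beta32 b15', real=True)

u = R
udot = sp.I*u + (beta32 + sp.I*alpha32)*u**2*sp.conjugate(u) + b15*u**3*sp.conjugate(u)**2

Rdot = sp.simplify(sp.re(sp.expand(sp.conjugate(u)*udot)) / R)
Thetadot = sp.simplify(sp.im(sp.expand(sp.conjugate(u)*udot)) / R**2)
print(Rdot)       # beta32*R**3 + b15*R**5
print(Thetadot)   # alpha32*R**2 + 1
```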

Equation (40) may be put directly into the real polar form by taking the real and imaginary parts:

\[
\dot{R} = \beta_{32} R^3 + b_{15} R^5, \qquad \dot{\Theta} = 1 + \alpha_{32} R^2. \tag{42}
\]

The symbolic computation of the SNF of Hopf bifurcation follows easily from the above solution procedure. The computation involves finding the normal form coefficients b_{1k}, b_{2k} and the c coefficients from the kth order algebraic equation. The program has been coded using Maple and is listed in Appendix A.

3.2. The SNF for the Bogdanov-Takens singularity

Now, we turn to the Bogdanov-Takens singularity. Suppose the system is described by Eq. (1), and the Jacobian matrix of the system evaluated at the equilibrium x = 0 has a double zero eigenvalue, i.e. L is given by

\[
L = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \tag{43}
\]

which is the non-semisimple case. Then the general form of the kth order vector homogeneous polynomial f_k(y) (k ≥ 2) can be written as

\[
f_k(y) = \begin{pmatrix}
a_{1k0}\, y_1^k + a_{1(k-1)1}\, y_1^{k-1} y_2 + \cdots + a_{11(k-1)}\, y_1 y_2^{k-1} + a_{10k}\, y_2^k \\
a_{2k0}\, y_1^k + a_{2(k-1)1}\, y_1^{k-1} y_2 + \cdots + a_{21(k-1)}\, y_1 y_2^{k-1} + a_{20k}\, y_2^k
\end{pmatrix}, \tag{44}
\]

where y = (y_1, y_2)^T and the a_{ijk}'s are known coefficients. Similarly, the kth-order nonlinear transformation h_k(y) (k ≥ 2) can be put in the form

\[
h_k(y) = \begin{pmatrix}
c_{1k0}\, y_1^k + c_{1(k-1)1}\, y_1^{k-1} y_2 + \cdots + c_{11(k-1)}\, y_1 y_2^{k-1} + c_{10k}\, y_2^k \\
c_{2k0}\, y_1^k + c_{2(k-1)1}\, y_1^{k-1} y_2 + \cdots + c_{21(k-1)}\, y_1 y_2^{k-1} + c_{20k}\, y_2^k
\end{pmatrix}, \tag{45}
\]

where the c_{ijk}'s are unknown coefficients to be determined. The procedure for finding the normal form of the Bogdanov-Takens singularity is, similarly to that for Hopf bifurcation, to solve for the coefficients c_{ijk} and to determine the coefficients b_{k1} and b_{k2}. To find the form of g_k(y) for the Bogdanov-Takens singularity, note that the 2k + 2 basis vectors for the space H_k are

\[
y_1^k \frac{\partial}{\partial y_1},\; y_1^{k-1} y_2 \frac{\partial}{\partial y_1},\; \ldots,\; y_2^k \frac{\partial}{\partial y_1},\;
y_1^k \frac{\partial}{\partial y_2},\; y_1^{k-1} y_2 \frac{\partial}{\partial y_2},\; \ldots,\; y_2^k \frac{\partial}{\partial y_2}, \tag{46}
\]

and the basis of the space R_k can be obtained from [h_k, v_1], where v_1 = (y_2, 0)^T. A direct calculation shows

\[
[h_k, v_1] = Dv_1\, h_k - Dh_k\, v_1
= \begin{pmatrix}
c_{2k0}\, y_1^k + (c_{2(k-1)1} - k c_{1k0})\, y_1^{k-1} y_2 + \cdots + (c_{20k} - c_{11(k-1)})\, y_2^k \\
- k c_{2k0}\, y_1^{k-1} y_2 - (k-1) c_{2(k-1)1}\, y_1^{k-2} y_2^2 - \cdots - c_{21(k-1)}\, y_2^k
\end{pmatrix}, \tag{47}
\]

which clearly indicates that y_1^k (∂/∂y_2) must be a basis vector of the space K_k. Further, notice that the coefficients c_{1k0}, c_{1(k-1)1}, ..., c_{11(k-1)} and c_{2(k-1)1}, c_{2(k-2)2}, ..., c_{21(k-1)} can be chosen arbitrarily, while y_1^k (∂/∂y_1) and y_1^{k-1} y_2 (∂/∂y_2) have the same coefficient c_{2k0}; thus the second basis vector of K_k is either y_1^k (∂/∂y_1) or y_1^{k-1} y_2 (∂/∂y_2). Therefore, the CNF for the Bogdanov-Takens singularity is given either by

\[
\dot{y}_1 = y_2 + \sum_{j=2}^{n} b_{j1}\, y_1^j, \qquad \dot{y}_2 = \sum_{j=2}^{n} b_{j2}\, y_1^j; \tag{48}
\]

or by

\[
\dot{y}_1 = y_2, \qquad \dot{y}_2 = \sum_{j=2}^{n} \big( b_{j1}\, y_1^j + b_{j2}\, y_1^{j-1} y_2 \big). \tag{49}
\]

14 32 P. Yu & Y. Yuan We shall use the second form (49) in this paper, i.e. choose the basis y k(/y 2)andy k y 2 (/y 2 ) for the space K k.thus,thekth-order vector g k (y) (k 2) can be written as ( ) 0 g k (y) = b k y k + b k2y k, (50) y 2 where the coefficients b k and b k2 are to be determined. Moreover, it has been noted from Eq. (47) that only the 2k coefficients c k0, c (k ),..., c 2k0, c 2(k ),..., c 2(k ) areusedinthekth order equations for eliminating the part of f k belonging to R k, while the two coefficients c 0k and c 20k are not used. Setting these two coefficients zero yields the CNF. To obtain the SNF, we let them be carried over to higher order equations to possibly eliminate coefficients b p and/or b p2 (p>k). # #2 #3 #4 #5 # We follow a similar procedure used in the previous subsection for Hopf bifurcation to compute the SNF of the Bogdanov Takens singularity. That is, for each order k, one applies Eq. (7) to obtain the kth order algebraic equations and then solve these equations one by one for the coefficients of normal form and the nonlinear transformation. This case, for which we must use real analysis, needs a careful consideration in solving the algebraic equations. Since the SNF for this singularity has been studied in [Yuan & Yu, 200], which was based on the CNF, we will outline the solution procedure below and only present the part which is different from that discussed before. The proof is similar to that given in [Yuan & Yu, 200] and a reader is referred to the reference for more details. For k = 2, the formula g 2 = f 2 +[h 2,v ] straightforwardly leads to the following algebraic equations, given in the matrix form: c 20 c c 02 c 220 c 2 c 202 a 20 a a 02 =, (5) a 220 b 2 a 2 b 22 a 202 where #, #2,..., #6 indicate the number of the equations. For example, # represents the first equation of Eq. (5), etc. The above six linear algebraic equations can be solved in the following order: (a) first note that c 02 does not appear in the above equations since the third column of the matrix is a zero column; (b) b 2 is uniquely determined from #4 equation which does not involve any c coefficients; (c) c 220 is uniquely determined from # equation; (d) b 22 is uniquely determined from #5 equation; (e) The remaining three equations (#2, #3 and #6) have four c coefficients and c 202 can be chosen arbitrarily. Moreover, note that only two equations, #2 and #6, of the three equations are coupled while #3 equation gives the relation between c and c 202. Thus, from Eq. (5) we obtain the following results: b 2 = a 220, b 22 = a 2 +2a 20, c 20 = 2 (a + a 202 ), (52) c = c a 02, c 220 = a 20, c 2 = a 202, which show that the second order CNF cannot be further simplified since, in general, b 2 0,b 22 0,as expected. It is also noted that c 202 is undetermined at this order while c 02 does not appear in the solution. Similarly, for k = 3, one may apply the formula g 3 = f 3 +[h 3,v ]+[h 2,f 2 ]+Dh 2 (f 2 g 2 )toobtain the following eight third-order equations: # c 30 A 30 a 220 c 202 # c 2 A 2 a 2 c 202 2a 220 c 02 # c 2 A 2 +2a c 202 2(a 20 + a 2 )c 02 # c #5 03 A = 03 +2a 02 c a c 02, (53) c 230 A 230 b 3 # c 22 A 22 b 32 # c 22 A 22 4a 20 c a 220 c 02 # c 203 A a 202 c a 2 c 02

15 An Efficient Method for Computing the SNF 33 where A ijl s are given explicitly in terms of the second and third order a coefficients as follows: A 30 = a 30 + a 20 a 202 a 02 a 220, A 2 = a 2 2a 20 a a a 202 a 02 a a2, A 2 = a 2 + a a 02 +2a 02 a 202, A 03 = a 03, A 230 = a a a 220 a 20 a 2, A 22 = a 22 +2a 02 a a a a 2a 202 4a 20 a 202, A 22 = a 22 + a 02 a 2 +2a 2 202, A 203 = a 203. (54) First, it is observed from Eq. (53) that the matrix has a similar form as that of Eq. (5), and c 03 does not appear in the eight equations. Second, we may follow a similar order as that used for solving Eq. (5) to solve Eq. (53): From #5 equation (solving b 3 ) # equation (solving c 230 ) #6 equation (to determine b 32 ). Then using #4 equation to find c 2 given in terms of c 203 and other coefficients. Finally, from #8 equation (solving c 22 ) #7 equation (solving c 22 ) #3 equation (solving c 2 ) #2 equation (solving c 30 ). The difference between solving Eqs. (53) and (5) is that Eq. (53) involves the undetermined second-order coefficients c 202 and c 02,whichmay be used to eliminate the normal form coefficients. Since #5 equation does not involve c coefficients (A 230 is purely expressed in terms of a jkl s), thus b 3 cannot be eliminated. Consider b 32 involved in #6 equation, one can find that because the c 230 solved from # equation contains the coefficient c 202,thus if a we can choose appropriate c 202 to set b 32 = 0. It should be noted that here we must use c 202 rather than c 02 to remove b 32. c 02 is still undetermined and may be used in next order equations to eliminate higher order normal form coefficients. In fact, it will be seen that c 02 is used in the fourthorder equations. Consequently, we obtain and b 3 = A 230, c 230 = A 30 + a 220 c 202, c 202 = ( A 30 + ) a A 22 b 32 =0, c 2 = c A 03 +2a 02 c a c 02, (55) c 30 c 2 c 22 c 22 ( A 2 A 30 + ) 3 A a2 22 a 220 ( A 2 +2 A 30 + ) = 3 A a 22 a 220 ( A 22 4 A 30 + ) + 3 A a20 22 a 220 ( A A 30 + ) 3 A a a 220 2a 220 2(a 20 + a 2 ) 2a 220 a 2 c 02. (56) The notation in Eq. (55) means that b 32 can be set zero by appropriately choosing the coefficient c 202. It should be pointed out that although the above four equations given in Eq. (56) are written in a matrix form, we do not need to solve matrix equations. The four coefficients can be found by solving the four equations one by one in reverse order. The above procedure can be applied to higher order equations. For k = 4, we apply Eq. (7) to find c 240 = A 40 2a 20 c 30 + a 220 c 2, c 3 = c A 04 +2a 02 c a c 03, as well as the following two groups of equations, called key equations. (57) The first group contains two

16 34 P. Yu & Y. Yuan equations, given by ( ) [ A240 b 4 2a a A 23 b 42 8a 20 + a 2 2a 220 2a 20 2a 220 ] c 30 c 2 c 22 c 22 = ( ) 0, (58) 0 which indeed does not involve the fourth-order c coefficients, and the second group has six equations, written in the matrix form: c c c c c c 23 = + A 3 A 22 A 3 A 222 A 23 A a 220 2(A a a 220 ) 3a 30 2a (a 20 + a 2 )+a 20 a 202 ( a 2 + a a a 2 +2a 20 {a 02 + A 30 + } 3 A 22 )/a a a 220 (3a + a 202 ) a 20 a c 02 2 ( 2 A 30 + ) 3 A 22 +2(a 22 + a 02 a a 22 a 202 )+a a 2 a 22 + a 2 (A 30 + )/ 3 A 22 a 220 c a 220 2(a 20 + a 2 ) 2a a 220 (6a 20 + a 2 ) 2a 202 2a 220 a 2 a 0 0 a 2a 02 a a 02 0 a 2 2a 202 (4a 20 + a 20 ) a c 30 c 2 c 22 c a 220 (4a c a 2 ) c 03, (59) 0 2a 220 a 2 where A ijl s (j +l = 4) are known expressions given in terms of the coefficients a ijk s. It is seen from Eqs. (56) and (58) that only one c coefficient, c 02, can be used to eliminate the normal form coefficients. Further, it can be shown that either b 4 or b 42 can be removed as long as a (which has been required in solving the third-order equations) and a 2 +2a For consistency with the pattern of the third-order equation where b 3 0andb 32 =0,letb 42 =0, then b 4 and c 02 are uniquely determined from the first and second equations of Eq. (58), respectively. Having solved c 02, one can then uniquely determine the remaining six fourth-order coefficients c 40, c 3, c 22, c 23, c 222 and c 23 from Eq. (59) in terms of the coefficients c 203 and c 03. Similarly,

17 An Efficient Method for Computing the SNF 35 these two coefficients c 203 and c 03 are carried over to higher order equations to eliminate higher order normal form coefficients. The above solution procedure is summarized in Table, where the notation has the similar meaning as that given in Eq. (55). The general rule is described as follows: For k =3m +, c 02m b k2 =0, For k =3m +2, c 20(2m+) b k2 =0, For k =3m +3, c 20(2m+2) b k2 =0, c 0(2m+) b k =0. (60) In general, for the kth order equation we find the following matrix equation: 0 0 k 0 k k 0 k 0 k c k0 c (k ) c (k 2)2. c 0k c 2k0 c 2(k ) c 2(k 2)2 c 2(k 3)3. c 20k = v (6) Table. Elimination of N.F. coefficients. N.T. N.F. Coefficient Coefficient Order c 202 b 32 =0 3rd c 02 b 42 =0 4th c 203 b 52 =0 5th c 03 b 6 =0 6th c 204 b 62 =0 6th c 04 b 72 =0 7th c 205 b 82 =0 8th c 05 b 9 =0 9th c 206 b 92 =0 9th.... where the 2(k+)-dimensional vector v contains the expressions which are functions of the coefficients, some of which have been obtained in the previous steps, while one or two are determined in the current order, and others will be determined in higher order equations. The same solution procedure is observed from Eq. (6): the coefficient c 0k does not appear in the equations; the coefficient c 2k0 is solved first from the first equation; the coefficient c 20k is only involved in the (k + )th order equation and can thus be chosen arbitrarily; the remaining 2k equations can be divided into two groups. The first group consists of the (k + 2)th and (k + 3)th equations which contain the two coefficients b k and b k2 ; and the second group has the remaining (2k 2) equations which are used to determine the remaining (2k 2) c coefficients: c k0, c (k ),..., c 2(k 2) and c 2(k ), c 2(k 2)2,..., c 2(k ). The key step in the procedure is to solve the two equations in the first group, since the two coefficients b k and b k2 play the major rule in determining the SNF of this order. There are three cases which are similar to that studied in [Yuan & Yu, 200], and thus we omit the detailed derivations but briefly outline the formulae below. (i) When k =3m +(m ), one finds ( ) A2 k0 b k +F C +F 2 C 2 + +F m C m = D, A 2(k ) b k2 (62) where D represents the known expressions obtained from the previous steps, and

18 36 P. Yu & Y. Yuan k 4 k 4 F = 2a {}}{{}}{ a , (n 2)a a k i 3 k i 3 F i = {}}{{}}{ 0 0 b (i+) (i =2, 3,..., m), 0 0 2b (i+) 0 0 c (k i)0. c C i = 2(k i 2) (i =, 2,..., m) c 2(k i ). c 2(k i ) (63) in which F i s are 2 2(k i ) matrices and C i s are 2(k i ) vectors. Once the above two equations are solved for determining the b k, b k2 as well as the undetermined coefficients c 0m and c 20m from the previous order equations, the second group of equations are then used to find the remaining c coefficients. It can be shown that Eq. (62) may be reduced to ( ) ( ) A2k0 b k + c 02m A 2(k ) b k2 = D 2, (64) where D 2 and denote the known expressions. Equation (64) clearly indicates that we can set b k2 = 0 by explicitly choosing a unique c 02m in terms of the known coefficients, and then b k is also uniquely determined. The procedure to obtain Eq. (64) and the proof is similar to that given in the reference [Yuan & Yu, 200], so the detail is not presented. Similarly, one can show that (ii) When k =3m +2(m ), the equation is ( ) ( ) A2k0 b k + c 202m+ A 2(k ) b k2 = D 3, (65) and thus one may set b k2 = 0 by explicitly solving a unique c 20(2m+) from the above equation in terms of known coefficients. Then b k is uniquely determined. Finally, (iii) When k =3m +3(m ), one obtains ( A2k0 b k A 2(k ) b k2 + ) + ( ) c 02m+ ( ) 0 c 202m+2 = D 4. (66) Unlike the cases (i) and (ii), we may set b k = b k2 = 0 for this case by uniquely determining c 202(m+2) and c 0(2m+) from Eq. (66), given in terms of known coefficients. From the above discussion it is easy to find the pattern of the coefficients of the SNF: when k =3m +or 3m +2, b k2 =0,b k 0, while when n =3k +3, b n = b n2 = 0. The procedure of solving the kth order linear algebraic equation to determine the b and c coefficients is described in Table 2, where COF ij and COF 2ij (i + j = k, 0 i, j k) denote the 2k + 2 coefficients of the binomial y i yj 2. These coefficients are obtained by using the formula g k given in Eq. (7) and balancing the binomial terms y i yj 2. Note in Table 2 that step I has several choices, only one of them is applicable for a given order k. It should be noted that the pattern shown above is same as that obtained in [Yuan & Yu, 200], as expected. However, in [Yuan & Yu, 200] the computation of the SNF is based on the CNF, while here the SNF is derived from a general differential equation. The program for the above solution procedure has been coded in Maple, which is given found in Appendix A.
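The order-by-order pattern just described (b_{k2} removable from the third order on, and both b_{k1} and b_{k2} removable when k is a multiple of three, starting at k = 6) can be encoded in a few lines. The Python snippet below is an illustrative restatement of Eq. (60) and Table 1, not part of the paper's Maple code.

```python
# Which Bogdanov-Takens normal-form coefficients survive at each order k,
# following the elimination pattern of Eq. (60) / Table 1.
def surviving_bt_coefficients(k):
    """Coefficients of g_k in (50) that remain in the SNF at order k >= 2."""
    if k == 2:
        return [f'b_{k}1', f'b_{k}2']   # the second order terms cannot be removed
    if k >= 6 and k % 3 == 0:           # k = 3m + 3 with m >= 1
        return []                       # both b_k1 and b_k2 are eliminated
    return [f'b_{k}1']                  # all other k >= 3: only b_k2 is removed

for k in range(2, 10):
    print(k, surviving_bt_coefficients(k))
```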

19 An Efficient Method for Computing the SNF 37 Table 2. Solution procedure. Coefficient Coefficients Order Step of Binomial Solved Zero b ij s k COF 2 b 22 2 COF 220 b 2 COF 2(k ) c 02m b k2 =0 3and3m +(m ) COF 2k0 b k I COF 2(k ) c 02m+ b k2 =0 3m +2(m ) COF 2k0 b k COF 2(k ) c 02m+2 b k2 =0 3m +3(m ) COF 2k0 c 02m+ b k =0 COF 20k c 2(k ) COF 2(k ) c 22(k 2) II k 2 COF 2(k 2)2 c 2k0 COF (k ) c 2(k 2) COF 2(k 2) c 3(k 3) III k 2 COF (k ) c k0 4. Examples In this section, we shall apply the results presented in the previous section and the Maple programs developed in this paper to compute the SNFs of four examples. The first two examples are for Hopf bifurcation and the other two for the Bogdonov Takens singularity. 4.. Example Consider the following general system with randomly chosen coefficients up to seventh order: ẋ = x 2 + x x x 2 +2x x x2 x x x x3 2 +5x x3 x 2 5x 2 x x x x4 2 2x5 +5x4 x x3 x2 2 + x2 x x x x x x5 x 2 x 4 x x3 x3 2 +2x2 x x x 5 2 2x6 2 +2x7 + x6 x 2 5x 5 x x4 x x 3 x x2 x x x x 7 2, ẋ 2 = x +3x x x 2 +5x x3 +3x 2 x 2 +0x x x x4 2 3 x3 x 2 (67) +0x 2 x2 2 +3x x x4 2 +7x5 3 5 x4 x 2 +7x 2 x x x x5 2 2x6 +5x 5 x x4 x2 2 +7x3 x3 2 +4x2 x4 2 5 x x x6 2 + x7 +5x6 x x5 x x4 x 3 2 3x 3 x x 2 x x x x 7 2.
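For readers who want to set up an experiment of this kind, the sketch below builds an input system of the same shape as Example 1: the Hopf linear part plus randomly chosen polynomial terms up to seventh order. The coefficients are generated on the fly and are not those of Eq. (67); this is an illustrative SymPy stand-in for the input file read by the Maple program.

```python
# Build a planar system x1' = x2 + p1(x1, x2), x2' = -x1 + p2(x1, x2) with
# random polynomial nonlinearities of orders 2..7, in the spirit of Example 1.
import random
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
random.seed(1)

def random_polynomial(min_order=2, max_order=7):
    """Sum of monomials of total degree min_order..max_order with random integer coefficients."""
    return sum(random.randint(-5, 5) * x1**i * x2**(k - i)
               for k in range(min_order, max_order + 1)
               for i in range(k + 1))

f = sp.Matrix([x2 + random_polynomial(), -x1 + random_polynomial()])
print(sp.expand(f[0]))
print(sp.expand(f[1]))
```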


More information

Section 8.2 : Homogeneous Linear Systems

Section 8.2 : Homogeneous Linear Systems Section 8.2 : Homogeneous Linear Systems Review: Eigenvalues and Eigenvectors Let A be an n n matrix with constant real components a ij. An eigenvector of A is a nonzero n 1 column vector v such that Av

More information

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling

Examples include: (a) the Lorenz system for climate and weather modeling (b) the Hodgkin-Huxley system for neuron modeling 1 Introduction Many natural processes can be viewed as dynamical systems, where the system is represented by a set of state variables and its evolution governed by a set of differential equations. Examples

More information

8 Velocity Kinematics

8 Velocity Kinematics 8 Velocity Kinematics Velocity analysis of a robot is divided into forward and inverse velocity kinematics. Having the time rate of joint variables and determination of the Cartesian velocity of end-effector

More information

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T

The goal of this chapter is to study linear systems of ordinary differential equations: dt,..., dx ) T 1 1 Linear Systems The goal of this chapter is to study linear systems of ordinary differential equations: ẋ = Ax, x(0) = x 0, (1) where x R n, A is an n n matrix and ẋ = dx ( dt = dx1 dt,..., dx ) T n.

More information

Putzer s Algorithm. Norman Lebovitz. September 8, 2016

Putzer s Algorithm. Norman Lebovitz. September 8, 2016 Putzer s Algorithm Norman Lebovitz September 8, 2016 1 Putzer s algorithm The differential equation dx = Ax, (1) dt where A is an n n matrix of constants, possesses the fundamental matrix solution exp(at),

More information

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX

Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX Introduction to Quantitative Techniques for MSc Programmes SCHOOL OF ECONOMICS, MATHEMATICS AND STATISTICS MALET STREET LONDON WC1E 7HX September 2007 MSc Sep Intro QT 1 Who are these course for? The September

More information

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in

18.06 Problem Set 8 - Solutions Due Wednesday, 14 November 2007 at 4 pm in 806 Problem Set 8 - Solutions Due Wednesday, 4 November 2007 at 4 pm in 2-06 08 03 Problem : 205+5+5+5 Consider the matrix A 02 07 a Check that A is a positive Markov matrix, and find its steady state

More information

Solutions to Dynamical Systems 2010 exam. Each question is worth 25 marks.

Solutions to Dynamical Systems 2010 exam. Each question is worth 25 marks. Solutions to Dynamical Systems exam Each question is worth marks [Unseen] Consider the following st order differential equation: dy dt Xy yy 4 a Find and classify all the fixed points of Hence draw the

More information

MA22S3 Summary Sheet: Ordinary Differential Equations

MA22S3 Summary Sheet: Ordinary Differential Equations MA22S3 Summary Sheet: Ordinary Differential Equations December 14, 2017 Kreyszig s textbook is a suitable guide for this part of the module. Contents 1 Terminology 1 2 First order separable 2 2.1 Separable

More information

Analysis-3 lecture schemes

Analysis-3 lecture schemes Analysis-3 lecture schemes (with Homeworks) 1 Csörgő István November, 2015 1 A jegyzet az ELTE Informatikai Kar 2015. évi Jegyzetpályázatának támogatásával készült Contents 1. Lesson 1 4 1.1. The Space

More information

Physics 221A Fall 1996 Notes 14 Coupling of Angular Momenta

Physics 221A Fall 1996 Notes 14 Coupling of Angular Momenta Physics 1A Fall 1996 Notes 14 Coupling of Angular Momenta In these notes we will discuss the problem of the coupling or addition of angular momenta. It is assumed that you have all had experience with

More information

Foundations of Matrix Analysis

Foundations of Matrix Analysis 1 Foundations of Matrix Analysis In this chapter we recall the basic elements of linear algebra which will be employed in the remainder of the text For most of the proofs as well as for the details, the

More information

Counting Roots of the Characteristic Equation for Linear Delay-Differential Systems

Counting Roots of the Characteristic Equation for Linear Delay-Differential Systems journal of differential equations 136, 222235 (1997) article no. DE963127 Counting Roots of the Characteristic Equation for Linear Delay-Differential Systems B. D. Hassard* Department of Mathematics, SUNY

More information

Polynomial Solutions of the Laguerre Equation and Other Differential Equations Near a Singular

Polynomial Solutions of the Laguerre Equation and Other Differential Equations Near a Singular Polynomial Solutions of the Laguerre Equation and Other Differential Equations Near a Singular Point Abstract Lawrence E. Levine Ray Maleh Department of Mathematical Sciences Stevens Institute of Technology

More information

JUST THE MATHS UNIT NUMBER DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) A.J.Hobson

JUST THE MATHS UNIT NUMBER DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) A.J.Hobson JUST THE MATHS UNIT NUMBER.5 DIFFERENTIATION APPLICATIONS 5 (Maclaurin s and Taylor s series) by A.J.Hobson.5. Maclaurin s series.5. Standard series.5.3 Taylor s series.5.4 Exercises.5.5 Answers to exercises

More information

Generalized eigenvector - Wikipedia, the free encyclopedia

Generalized eigenvector - Wikipedia, the free encyclopedia 1 of 30 18/03/2013 20:00 Generalized eigenvector From Wikipedia, the free encyclopedia In linear algebra, for a matrix A, there may not always exist a full set of linearly independent eigenvectors that

More information

Linear Algebra: Matrix Eigenvalue Problems

Linear Algebra: Matrix Eigenvalue Problems CHAPTER8 Linear Algebra: Matrix Eigenvalue Problems Chapter 8 p1 A matrix eigenvalue problem considers the vector equation (1) Ax = λx. 8.0 Linear Algebra: Matrix Eigenvalue Problems Here A is a given

More information

Introduction to Matrices

Introduction to Matrices 214 Analysis and Design of Feedback Control Systems Introduction to Matrices Derek Rowell October 2002 Modern system dynamics is based upon a matrix representation of the dynamic equations governing the

More information

arxiv: v1 [hep-ph] 30 Dec 2015

arxiv: v1 [hep-ph] 30 Dec 2015 June 3, 8 Derivation of functional equations for Feynman integrals from algebraic relations arxiv:5.94v [hep-ph] 3 Dec 5 O.V. Tarasov II. Institut für Theoretische Physik, Universität Hamburg, Luruper

More information

2nd-Order Linear Equations

2nd-Order Linear Equations 4 2nd-Order Linear Equations 4.1 Linear Independence of Functions In linear algebra the notion of linear independence arises frequently in the context of vector spaces. If V is a vector space over the

More information

Additive resonances of a controlled van der Pol-Duffing oscillator

Additive resonances of a controlled van der Pol-Duffing oscillator Additive resonances of a controlled van der Pol-Duffing oscillator This paper has been published in Journal of Sound and Vibration vol. 5 issue - 8 pp.-. J.C. Ji N. Zhang Faculty of Engineering University

More information

5.2.2 Planar Andronov-Hopf bifurcation

5.2.2 Planar Andronov-Hopf bifurcation 138 CHAPTER 5. LOCAL BIFURCATION THEORY 5.. Planar Andronov-Hopf bifurcation What happens if a planar system has an equilibrium x = x 0 at some parameter value α = α 0 with eigenvalues λ 1, = ±iω 0, ω

More information

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes

22.3. Repeated Eigenvalues and Symmetric Matrices. Introduction. Prerequisites. Learning Outcomes Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

Linear Algebra: Lecture notes from Kolman and Hill 9th edition.

Linear Algebra: Lecture notes from Kolman and Hill 9th edition. Linear Algebra: Lecture notes from Kolman and Hill 9th edition Taylan Şengül March 20, 2019 Please let me know of any mistakes in these notes Contents Week 1 1 11 Systems of Linear Equations 1 12 Matrices

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M.

Definition 5.1. A vector field v on a manifold M is map M T M such that for all x M, v(x) T x M. 5 Vector fields Last updated: March 12, 2012. 5.1 Definition and general properties We first need to define what a vector field is. Definition 5.1. A vector field v on a manifold M is map M T M such that

More information

Absolute value equations

Absolute value equations Linear Algebra and its Applications 419 (2006) 359 367 www.elsevier.com/locate/laa Absolute value equations O.L. Mangasarian, R.R. Meyer Computer Sciences Department, University of Wisconsin, 1210 West

More information

Assignment 8. [η j, η k ] = J jk

Assignment 8. [η j, η k ] = J jk Assignment 8 Goldstein 9.8 Prove directly that the transformation is canonical and find a generating function. Q 1 = q 1, P 1 = p 1 p Q = p, P = q 1 q We can establish that the transformation is canonical

More information

88 CHAPTER 3. SYMMETRIES

88 CHAPTER 3. SYMMETRIES 88 CHAPTER 3 SYMMETRIES 31 Linear Algebra Start with a field F (this will be the field of scalars) Definition: A vector space over F is a set V with a vector addition and scalar multiplication ( scalars

More information

8. Diagonalization.

8. Diagonalization. 8. Diagonalization 8.1. Matrix Representations of Linear Transformations Matrix of A Linear Operator with Respect to A Basis We know that every linear transformation T: R n R m has an associated standard

More information

B. Differential Equations A differential equation is an equation of the form

B. Differential Equations A differential equation is an equation of the form B Differential Equations A differential equation is an equation of the form ( n) F[ t; x(, xʹ (, x ʹ ʹ (, x ( ; α] = 0 dx d x ( n) d x where x ʹ ( =, x ʹ ʹ ( =,, x ( = n A differential equation describes

More information

ON THE MATRIX EQUATION XA AX = X P

ON THE MATRIX EQUATION XA AX = X P ON THE MATRIX EQUATION XA AX = X P DIETRICH BURDE Abstract We study the matrix equation XA AX = X p in M n (K) for 1 < p < n It is shown that every matrix solution X is nilpotent and that the generalized

More information

Stat 206: Linear algebra

Stat 206: Linear algebra Stat 206: Linear algebra James Johndrow (adapted from Iain Johnstone s notes) 2016-11-02 Vectors We have already been working with vectors, but let s review a few more concepts. The inner product of two

More information

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.)

THEODORE VORONOV DIFFERENTIABLE MANIFOLDS. Fall Last updated: November 26, (Under construction.) 4 Vector fields Last updated: November 26, 2009. (Under construction.) 4.1 Tangent vectors as derivations After we have introduced topological notions, we can come back to analysis on manifolds. Let M

More information

WARPED PRODUCTS PETER PETERSEN

WARPED PRODUCTS PETER PETERSEN WARPED PRODUCTS PETER PETERSEN. Definitions We shall define as few concepts as possible. A tangent vector always has the local coordinate expansion v dx i (v) and a function the differential df f dxi We

More information

Intermediate Jacobians and Abel-Jacobi Maps

Intermediate Jacobians and Abel-Jacobi Maps Intermediate Jacobians and Abel-Jacobi Maps Patrick Walls April 28, 2012 Introduction Let X be a smooth projective complex variety. Introduction Let X be a smooth projective complex variety. Intermediate

More information

MATH 205C: STATIONARY PHASE LEMMA

MATH 205C: STATIONARY PHASE LEMMA MATH 205C: STATIONARY PHASE LEMMA For ω, consider an integral of the form I(ω) = e iωf(x) u(x) dx, where u Cc (R n ) complex valued, with support in a compact set K, and f C (R n ) real valued. Thus, I(ω)

More information

CHARACTERISTIC CLASSES

CHARACTERISTIC CLASSES 1 CHARACTERISTIC CLASSES Andrew Ranicki Index theory seminar 14th February, 2011 2 The Index Theorem identifies Introduction analytic index = topological index for a differential operator on a compact

More information

Introduction to Group Theory

Introduction to Group Theory Chapter 10 Introduction to Group Theory Since symmetries described by groups play such an important role in modern physics, we will take a little time to introduce the basic structure (as seen by a physicist)

More information

2018 Mathematics. Advanced Higher. Finalised Marking Instructions

2018 Mathematics. Advanced Higher. Finalised Marking Instructions National Qualifications 08 08 Mathematics Advanced Higher Finalised Marking Instructions Scottish Qualifications Authority 08 The information in this publication may be reproduced to support SQA qualifications

More information

Repeated Eigenvalues and Symmetric Matrices

Repeated Eigenvalues and Symmetric Matrices Repeated Eigenvalues and Symmetric Matrices. Introduction In this Section we further develop the theory of eigenvalues and eigenvectors in two distinct directions. Firstly we look at matrices where one

More information

REVIEW OF DIFFERENTIAL CALCULUS

REVIEW OF DIFFERENTIAL CALCULUS REVIEW OF DIFFERENTIAL CALCULUS DONU ARAPURA 1. Limits and continuity To simplify the statements, we will often stick to two variables, but everything holds with any number of variables. Let f(x, y) be

More information

7.5 Partial Fractions and Integration

7.5 Partial Fractions and Integration 650 CHPTER 7. DVNCED INTEGRTION TECHNIQUES 7.5 Partial Fractions and Integration In this section we are interested in techniques for computing integrals of the form P(x) dx, (7.49) Q(x) where P(x) and

More information

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2.

a 11 x 1 + a 12 x a 1n x n = b 1 a 21 x 1 + a 22 x a 2n x n = b 2. Chapter 1 LINEAR EQUATIONS 11 Introduction to linear equations A linear equation in n unknowns x 1, x,, x n is an equation of the form a 1 x 1 + a x + + a n x n = b, where a 1, a,, a n, b are given real

More information

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A =

What is A + B? What is A B? What is AB? What is BA? What is A 2? and B = QUESTION 2. What is the reduced row echelon matrix of A = STUDENT S COMPANIONS IN BASIC MATH: THE ELEVENTH Matrix Reloaded by Block Buster Presumably you know the first part of matrix story, including its basic operations (addition and multiplication) and row

More information

Lecture Notes 6: Dynamic Equations Part A: First-Order Difference Equations in One Variable

Lecture Notes 6: Dynamic Equations Part A: First-Order Difference Equations in One Variable University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 54 Lecture Notes 6: Dynamic Equations Part A: First-Order Difference Equations in One Variable Peter J. Hammond latest revision 2017

More information

ON THE EXISTENCE OF HOPF BIFURCATION IN AN OPEN ECONOMY MODEL

ON THE EXISTENCE OF HOPF BIFURCATION IN AN OPEN ECONOMY MODEL Proceedings of Equadiff-11 005, pp. 9 36 ISBN 978-80-7-64-5 ON THE EXISTENCE OF HOPF BIFURCATION IN AN OPEN ECONOMY MODEL KATARÍNA MAKOVÍNYIOVÁ AND RUDOLF ZIMKA Abstract. In the paper a four dimensional

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K. R. MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND Second Online Version, December 1998 Comments to the author at krm@maths.uq.edu.au Contents 1 LINEAR EQUATIONS

More information

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms.

Vector Spaces. Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. Vector Spaces Vector space, ν, over the field of complex numbers, C, is a set of elements a, b,..., satisfying the following axioms. For each two vectors a, b ν there exists a summation procedure: a +

More information

CHAPTER 3 Further properties of splines and B-splines

CHAPTER 3 Further properties of splines and B-splines CHAPTER 3 Further properties of splines and B-splines In Chapter 2 we established some of the most elementary properties of B-splines. In this chapter our focus is on the question What kind of functions

More information

Assignment 10. Arfken Show that Stirling s formula is an asymptotic expansion. The remainder term is. B 2n 2n(2n 1) x1 2n.

Assignment 10. Arfken Show that Stirling s formula is an asymptotic expansion. The remainder term is. B 2n 2n(2n 1) x1 2n. Assignment Arfken 5.. Show that Stirling s formula is an asymptotic expansion. The remainder term is R N (x nn+ for some N. The condition for an asymptotic series, lim x xn R N lim x nn+ B n n(n x n B

More information

MULTIPLIERS OF THE TERMS IN THE LOWER CENTRAL SERIES OF THE LIE ALGEBRA OF STRICTLY UPPER TRIANGULAR MATRICES. Louis A. Levy

MULTIPLIERS OF THE TERMS IN THE LOWER CENTRAL SERIES OF THE LIE ALGEBRA OF STRICTLY UPPER TRIANGULAR MATRICES. Louis A. Levy International Electronic Journal of Algebra Volume 1 (01 75-88 MULTIPLIERS OF THE TERMS IN THE LOWER CENTRAL SERIES OF THE LIE ALGEBRA OF STRICTLY UPPER TRIANGULAR MATRICES Louis A. Levy Received: 1 November

More information

Fast Probability Generating Function Method for Stochastic Chemical Reaction Networks

Fast Probability Generating Function Method for Stochastic Chemical Reaction Networks MATCH Communications in Mathematical and in Computer Chemistry MATCH Commun. Math. Comput. Chem. 71 (2014) 57-69 ISSN 0340-6253 Fast Probability Generating Function Method for Stochastic Chemical Reaction

More information

Least Squares Regression

Least Squares Regression Least Squares Regression Chemical Engineering 2450 - Numerical Methods Given N data points x i, y i, i 1 N, and a function that we wish to fit to these data points, fx, we define S as the sum of the squared

More information

Qualification Exam: Mathematical Methods

Qualification Exam: Mathematical Methods Qualification Exam: Mathematical Methods Name:, QEID#41534189: August, 218 Qualification Exam QEID#41534189 2 1 Mathematical Methods I Problem 1. ID:MM-1-2 Solve the differential equation dy + y = sin

More information

ANALYSIS AND CONTROLLING OF HOPF BIFURCATION FOR CHAOTIC VAN DER POL-DUFFING SYSTEM. China

ANALYSIS AND CONTROLLING OF HOPF BIFURCATION FOR CHAOTIC VAN DER POL-DUFFING SYSTEM. China Mathematical and Computational Applications, Vol. 9, No., pp. 84-9, 4 ANALYSIS AND CONTROLLING OF HOPF BIFURCATION FOR CHAOTIC VAN DER POL-DUFFING SYSTEM Ping Cai,, Jia-Shi Tang, Zhen-Bo Li College of

More information

Kernel Method: Data Analysis with Positive Definite Kernels

Kernel Method: Data Analysis with Positive Definite Kernels Kernel Method: Data Analysis with Positive Definite Kernels 2. Positive Definite Kernel and Reproducing Kernel Hilbert Space Kenji Fukumizu The Institute of Statistical Mathematics. Graduate University

More information

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS

chapter 12 MORE MATRIX ALGEBRA 12.1 Systems of Linear Equations GOALS chapter MORE MATRIX ALGEBRA GOALS In Chapter we studied matrix operations and the algebra of sets and logic. We also made note of the strong resemblance of matrix algebra to elementary algebra. The reader

More information

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems

Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 45 Lecture Notes 6: Dynamic Equations Part C: Linear Difference Equation Systems Peter J. Hammond latest revision 2017 September

More information

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with

In particular, if A is a square matrix and λ is one of its eigenvalues, then we can find a non-zero column vector X with Appendix: Matrix Estimates and the Perron-Frobenius Theorem. This Appendix will first present some well known estimates. For any m n matrix A = [a ij ] over the real or complex numbers, it will be convenient

More information

Econ Slides from Lecture 10

Econ Slides from Lecture 10 Econ 205 Sobel Econ 205 - Slides from Lecture 10 Joel Sobel September 2, 2010 Example Find the tangent plane to {x x 1 x 2 x 2 3 = 6} R3 at x = (2, 5, 2). If you let f (x) = x 1 x 2 x3 2, then this is

More information

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway

Linear Algebra: Lecture Notes. Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway Linear Algebra: Lecture Notes Dr Rachel Quinlan School of Mathematics, Statistics and Applied Mathematics NUI Galway November 6, 23 Contents Systems of Linear Equations 2 Introduction 2 2 Elementary Row

More information

Spectra of Semidirect Products of Cyclic Groups

Spectra of Semidirect Products of Cyclic Groups Spectra of Semidirect Products of Cyclic Groups Nathan Fox 1 University of Minnesota-Twin Cities Abstract The spectrum of a graph is the set of eigenvalues of its adjacency matrix A group, together with

More information

QM and Angular Momentum

QM and Angular Momentum Chapter 5 QM and Angular Momentum 5. Angular Momentum Operators In your Introductory Quantum Mechanics (QM) course you learned about the basic properties of low spin systems. Here we want to review that

More information

Linear Algebra Review (Course Notes for Math 308H - Spring 2016)

Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,

More information

01 Harmonic Oscillations

01 Harmonic Oscillations Utah State University DigitalCommons@USU Foundations of Wave Phenomena Library Digital Monographs 8-2014 01 Harmonic Oscillations Charles G. Torre Department of Physics, Utah State University, Charles.Torre@usu.edu

More information

Econ Lecture 14. Outline

Econ Lecture 14. Outline Econ 204 2010 Lecture 14 Outline 1. Differential Equations and Solutions 2. Existence and Uniqueness of Solutions 3. Autonomous Differential Equations 4. Complex Exponentials 5. Linear Differential Equations

More information

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin

STABILITY OF INVARIANT SUBSPACES OF COMMUTING MATRICES We obtain some further results for pairs of commuting matrices. We show that a pair of commutin On the stability of invariant subspaces of commuting matrices Tomaz Kosir and Bor Plestenjak September 18, 001 Abstract We study the stability of (joint) invariant subspaces of a nite set of commuting

More information

The Matrix Representation of a Three-Dimensional Rotation Revisited

The Matrix Representation of a Three-Dimensional Rotation Revisited Physics 116A Winter 2010 The Matrix Representation of a Three-Dimensional Rotation Revisited In a handout entitled The Matrix Representation of a Three-Dimensional Rotation, I provided a derivation of

More information

Notes on SU(3) and the Quark Model

Notes on SU(3) and the Quark Model Notes on SU() and the Quark Model Contents. SU() and the Quark Model. Raising and Lowering Operators: The Weight Diagram 4.. Triangular Weight Diagrams (I) 6.. Triangular Weight Diagrams (II) 8.. Hexagonal

More information

Subquadratic Space Complexity Multiplication over Binary Fields with Dickson Polynomial Representation

Subquadratic Space Complexity Multiplication over Binary Fields with Dickson Polynomial Representation Subquadratic Space Complexity Multiplication over Binary Fields with Dickson Polynomial Representation M A Hasan and C Negre Abstract We study Dickson bases for binary field representation Such representation

More information