Iterative solution of SPARK methods applied to DAEs


Numerical Algorithms 31: 171–191. Kluwer Academic Publishers. Printed in the Netherlands.

Iterative solution of SPARK methods applied to DAEs

Laurent O. Jay
Department of Mathematics, 14 MacLean Hall, The University of Iowa, Iowa City, IA, USA
E-mail: ljay@math.uiowa.edu; naljay@na-net.ornl.gov

Received 8 January 2001; revised 4 October 2001

* This material is based upon work supported by the National Science Foundation under Grant No. …. This research was also supported in part by NASA Award No. …, JPL subcontract No. ….

In this article a broad class of systems of implicit differential-algebraic equations (DAEs) is considered, including the equations of mechanical systems with holonomic and nonholonomic constraints. Solutions to these DAEs can be approximated numerically by applying a class of super partitioned additive Runge-Kutta (SPARK) methods. Several properties of the SPARK coefficients, satisfied by the family of Lobatto IIIA-B-C-C*-D coefficients, are crucial to deal properly with the presence of constraints and algebraic variables. A main difficulty for an efficient implementation of these methods lies in the numerical solution of the resulting systems of nonlinear equations. Inexact modified Newton iterations can be used to solve these systems. Linear systems of the modified Newton method can be solved approximately with a preconditioned linear iterative method. Preconditioners can be obtained after certain transformations to the systems of nonlinear and linear equations. These transformations rely heavily on specific properties of the SPARK coefficients. A new truly parallelizable preconditioner is presented.

Keywords: differential-algebraic equations, holonomic constraints, inexact modified Newton method, Lobatto coefficients, mechanical systems, nonholonomic constraints, overdetermined DAEs, perturbation index, preconditioning, Runge-Kutta methods, stiffness

AMS subject classification: 65F10, 65H10, 65L05, 65L06, 65L80, 70F20, 70F25, 70H03, 70H45

1. Introduction

In this article a broad class of systems of possibly stiff and implicit differential-algebraic equations (DAEs) is considered, including Hessenberg DAEs of index 1, 2, and 3 [1,5,6,8,9]. These equations encompass the formulation of mechanical systems with mixed constraints of holonomic, nonholonomic, scleronomic, and rheonomic types [7,16,17]. Solutions to these DAEs can be approximated numerically by applying a class of super partitioned additive Runge-Kutta (SPARK) methods, such as the combination of Lobatto IIIA-B-C-C*-D methods [9]. SPARK methods can take advantage of splitting the differential equations into different terms and of partitioning the variables into different classes. Several properties of the SPARK coefficients satisfied by the Lobatto family are essential to treat the constraints and the algebraic variables properly.

A main difficulty for an efficient implementation of these methods lies in the numerical solution of the resulting systems of nonlinear equations. For this purpose inexact modified Newton iterations can be used, extending techniques proposed for the solution of implicit Runge-Kutta (IRK) methods applied to implicit systems of stiff ordinary differential equations (ODEs) [10,12]. Such an extension is not straightforward, as the presence of constraints and algebraic variables adds some extra difficulty. Linear systems of the modified Newton method can be solved approximately with a preconditioned linear iterative method after certain transformations to the systems of nonlinear and linear equations. These transformations rely heavily on specific properties of the SPARK coefficients. For an s-stage SPARK method and stiff DAEs, the decomposition of at most s+1 independent submatrices of the same dimension as the DAEs is required to build an efficient preconditioner.

The main purpose of this paper is to present the steps involved to put the system of linear equations of the modified Newton method in a form such that preconditioning these linear equations can be done in a straightforward manner following the results of [10,12]. In section 2 the class of implicit DAEs considered in this article is presented. In section 3 the definition of SPARK methods applied to these DAEs is given. Some properties of the SPARK coefficients are given which are crucial for an efficient implementation of these methods. In section 4 an approximate Jacobian to the system of nonlinear equations is derived after certain linear transformations. Section 5 describes the steps involved to transform the approximate Jacobian before application of a preconditioner. A new truly parallelizable preconditioner is succinctly presented.

2. The system of implicit differential-algebraic equations

Consider the following class of systems of implicit differential-algebraic equations (DAEs)

  \frac{d}{dt} q(t,y) = v(t,y,z),   (1a)
  \frac{d}{dt} p(t,y,z) = f(t,y,z,u,\gamma,\lambda,\psi),   (1b)
  \frac{d}{dt} c(t,y,z,u) = d(t,y,z,u,\gamma,\lambda,\psi),   (1c)
  m(t,y,z,u,\gamma) = 0,   (1d)
  h(t,y,z) = 0,   (1e)
  g(t,y) = 0,   (1f)

which may present some stiffness. These equations encompass Hessenberg DAEs of index 1, 2, and 3 [1,5,6,8,9]. They also include the formulation of mechanical systems with mixed constraints of holonomic, nonholonomic, scleronomic, and rheonomic types [7,16,17]. In mechanics the quantities q, v, p represent, respectively,

generalized coordinates, generalized velocities, and generalized momenta. The right-hand side f of (1b) contains generalized forces acting on the system, and (1c) describes the dynamics of external variables u. The algebraic variables λ and ψ are Lagrange multipliers associated, respectively, to the nonholonomic constraints (1e) and the holonomic constraints (1f). The equations of constrained systems in mechanics can be derived from Newton's law of motion and the generalized Gauss variational principle of least constraint [13]. It is assumed that the constraints (1f) can also be expressed as

  g(t,y) = r(t, q(t,y)) = 0.   (1g)

The variable t ∈ R is the independent variable and

  y = (y_1, ..., y_{n_y})^T ∈ R^{n_y},  z = (z_1, ..., z_{n_z})^T ∈ R^{n_z},  u = (u_1, ..., u_{n_u})^T ∈ R^{n_u},
  γ = (γ_1, ..., γ_{n_γ})^T ∈ R^{n_γ},  λ = (λ_1, ..., λ_{n_λ})^T ∈ R^{n_λ},  ψ = (ψ_1, ..., ψ_{n_ψ})^T ∈ R^{n_ψ},

  q : R × R^{n_y} → R^{n_y},
  p : R × R^{n_y} × R^{n_z} → R^{n_z},
  c : R × R^{n_y} × R^{n_z} × R^{n_u} → R^{n_u},
  m : R × R^{n_y} × R^{n_z} × R^{n_u} × R^{n_γ} → R^{n_γ},
  h : R × R^{n_y} × R^{n_z} → R^{n_λ},
  g : R × R^{n_y} → R^{n_ψ},
  r : R × R^{n_y} → R^{n_ψ},
  v : R × R^{n_y} × R^{n_z} → R^{n_y},
  f : R × R^{n_y} × R^{n_z} × R^{n_u} × R^{n_γ} × R^{n_λ} × R^{n_ψ} → R^{n_z},
  d : R × R^{n_y} × R^{n_z} × R^{n_u} × R^{n_γ} × R^{n_λ} × R^{n_ψ} → R^{n_u}.

The variables y, z, u are called the differential variables and the variables γ, λ, ψ are called the algebraic variables. The latter often correspond to Lagrange multipliers when the DAEs are derived from some constrained variational principle [7,13]. The initial values y_0, z_0, u_0, γ_0, ψ_0, λ_0 at t_0 are supposed to be given. Some differentiability conditions on the above functions and consistency of the initial values are assumed to ensure existence and uniqueness of the solution. In a neighborhood of the solution the following conditions are supposed to be satisfied:

  q_y is invertible,   (2a)
  p_z is invertible,   (2b)

  c_u is invertible,   (2c)
  m_γ is invertible,   (2d)
  \begin{pmatrix} p_z & f_\lambda & f_\psi \\ h_z & O & O \\ r_q v_z & O & O \end{pmatrix} is invertible.   (2e)

Notice that from (1g), g_y = r_q q_y holds, hence r_q v_z = g_y q_y^{-1} v_z in (2e). Under conditions (2a)–(2c) explicit expressions for the derivatives of the differential variables can be obtained. The differential equations (1a)–(1c) lead to

  q_t(t,y) + q_y(t,y) \frac{dy}{dt} = v(t,y,z),   (3a)
  p_t(t,y,z) + p_y(t,y,z) \frac{dy}{dt} + p_z(t,y,z) \frac{dz}{dt} = f(t,y,z,u,\gamma,\lambda,\psi),   (3b)
  c_t(t,y,z,u) + c_y(t,y,z,u) \frac{dy}{dt} + c_z(t,y,z,u) \frac{dz}{dt} + c_u(t,y,z,u) \frac{du}{dt} = d(t,y,z,u,\gamma,\lambda,\psi).   (3c)

For example, (3a) leads to

  \frac{dy}{dt} = q_y(t,y)^{-1} ( v(t,y,z) - q_t(t,y) ).

Implicit expressions for the algebraic variables can be obtained by application of the implicit function theorem. From (1d), (2d) the algebraic variables γ can be implicitly expressed as γ = γ(t,y,z,u). Differentiating the constraints (1e) once gives

  0 = \frac{d}{dt} h(t,y,z) = h_t(t,y,z) + h_y(t,y,z) \frac{dy}{dt} + h_z(t,y,z) \frac{dz}{dt}.   (4a)

Differentiating the constraints (1g) twice leads successively to

  0 = \frac{d}{dt} g(t,y) = r_t(t,q(t,y)) + r_q(t,q(t,y)) v(t,y,z),   (4b)
  0 = \frac{d^2}{dt^2} g(t,y) = r_{tt}(t,q(t,y)) + 2 r_{tq}(t,q(t,y)) v(t,y,z) + r_{qq}(t,q(t,y)) ( v(t,y,z), v(t,y,z) )
      + r_q(t,q(t,y)) v_t(t,y,z) + r_q(t,q(t,y)) v_y(t,y,z) \frac{dy}{dt} + r_q(t,q(t,y)) v_z(t,y,z) \frac{dz}{dt}.   (4c)

Hence, from (2e), (3b), (4a), (4c), implicit expressions for the algebraic variables λ, ψ can be obtained. The exact solution must satisfy these additional, so-called underlying constraints (4). To be consistent, the initial values y_0, z_0, u_0, γ_0, ψ_0, λ_0 at t_0 must satisfy the whole set of constraints (1d)–(1f), (4). After one more differentiation of the constraints (1d), (4a), (4c), explicit expressions for the derivatives of the algebraic variables can be obtained, forming together with (1a)–(1c) an underlying system of ODEs.
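To make the structure of (1) and of the hidden constraints (4) concrete, the following minimal sketch writes the planar pendulum with a rigid rod (a single holonomic constraint, no nonholonomic constraints and no external variables u) in the shape of (1a), (1b), (1f) and evaluates the velocity-level constraint (4b) at a consistent initial state. The model, the names and the values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Planar pendulum with a rigid rod of unit length and unit mass, written in the
# shape of (1a), (1b), (1f): here q(t,y) = y = (x1, x2), p(t,y,z) = z, v(t,y,z) = z,
# and the only algebraic variable is the multiplier psi of the holonomic constraint.
grav = 9.81

def g(q):                                   # holonomic constraint (1f)
    return np.array([0.5 * (q @ q - 1.0)])

def r_q(q):                                 # constraint Jacobian g_q = r_q
    return q.reshape(1, 2)

def f(q, psi):                              # right-hand side of (1b): applied force minus constraint force
    return np.array([0.0, -grav]) - r_q(q).T @ psi

def velocity_constraint(q, v):              # hidden constraint (4b): r_q(q) v = 0
    return r_q(q) @ v

# A consistent initial state: position on the unit circle, velocity tangent to it.
q0 = np.array([1.0, 0.0])
v0 = np.array([0.0, 0.5])
print(g(q0), velocity_constraint(q0, v0))   # both ~ [0.]
```

A consistent initialization must satisfy both (1f) and the differentiated constraint (4b), which is exactly what the two printed residuals check.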

The overdetermined system of DAEs (1)–(4) is therefore of differential and perturbation index 1 [8]. The constraints (1d)–(1f) are often called the index 1, index 2, and index 3 constraints, respectively, although the notion of index, especially of perturbation index, is more closely related to variables than to equations [8, p. 10]. DAEs of perturbation index less than or equal to 1, such as (1)–(4), are well-posed, contrary to higher perturbation index DAEs such as (1). From a numerical and computational perspective, and from the mathematical point of view of well- and ill-posedness, the perturbation index is certainly one of the most relevant notions of index.

With the equations of mechanical systems in mind, where different types of forces are present (see [7,9,16,17]), decompositions of the right-hand sides of (1a)–(1c) can be considered:

  v(t,y,z) = \sum_{m=1}^{\widehat m} v_m(t,y,z),   (5a)
  f(t,y,z,u,\gamma,\lambda,\psi) = \sum_{m=1}^{\widehat m} f_m(t,y,z,u,\gamma,\lambda,\psi),   (5b)
  d(t,y,z,u,\gamma,\lambda,\psi) = \sum_{m=1}^{\widehat m} d_m(t,y,z,u,\gamma,\lambda,\psi).   (5c)

The functions v_m, f_m, d_m are supposed to have distinct properties and can therefore be numerically treated in a different way. The value m̂ corresponds to the number of different classes of right-hand side terms; this value should be reasonably small. For example, mechanical systems may include different types of forces such as conservative, dissipative, explosive, and highly oscillatory forces, hence typically m̂ = 4. For the application of the numerical methods considered in this paper, the following additional assumptions are made:

  f_1(t,y,z,u,\gamma,\lambda,\psi) = f_1(t,y,z,u,\gamma),  d_1(t,y,z,u,\gamma,\lambda,\psi) = d_1(t,y,z,u,\gamma).   (5d)

This is obviously not a restriction on the system (1) per se, but it is used as a restriction on the application of SPARK methods (see section 3); a small illustration of such a splitting is sketched below.
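As an illustration of the additive splitting (5b), the following hedged sketch decomposes the force acting on a stiff spring pendulum into a nonstiff class f_1 (gravity) and a stiff class f_2 (a strong restoring force along the rod). The model and the constant k are made up for illustration; they are not the paper's example.

```python
import numpy as np

k = 1.0e6              # illustrative stiff spring constant (assumption)

def f1(q):             # nonstiff force class (m = 1), e.g. gravity
    return np.array([0.0, -9.81])

def f2(q):             # stiff force class (m = 2), e.g. a strong spring along the rod
    r = np.linalg.norm(q)
    return -k * (r - 1.0) * q / r

def f(q):              # full right-hand side of (1b), split as in (5b): f = f1 + f2
    return f1(q) + f2(q)

q = np.array([1.001, 0.0])
print(f1(q), f2(q), f(q))
```

In a SPARK method the two classes can then be treated with different coefficient matrices, which is precisely the point of the decomposition (5).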

3. SPARK methods

The application of SPARK methods to the overdetermined system of implicit DAEs (1)–(4) is tentatively given as follows, generalizing the definition of [9]:

Definition 1. One step of an s-stage super partitioned additive Runge-Kutta (SPARK) method applied with stepsize h to the overdetermined system of implicit differential-algebraic equations (1)–(4), satisfying the assumptions, with decompositions (5), reads

  Q_i - q_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{ijm} v_m(T_j,Y_j,Z_j) = 0  for i = 1,...,s,   (6a)
  P_i - p_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{ijm} f_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) = 0  for i = 1,...,s,   (6b)
  C_i - c_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{ijm} d_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) = 0  for i = 1,...,s,   (6c)
  m(T_i,Y_i,Z_i,U_i,\Gamma_i) = 0  for i = 1,...,s,   (6d)
  h(T_i,Y_i,Z_i) = 0  for i = 1,...,s,   (6e)
  r(T_i, q_0 + h \sum_{j=1}^{s} a_{ij1} v(T_j,Y_j,Z_j)) = 0  for i = 1,...,s,   (6f)
  q_1 - q_0 - h \sum_{j=1}^{s} b_j v(T_j,Y_j,Z_j) = 0,   (6g)
  p_1 - p_0 - h \sum_{j=1}^{s} b_j f(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) = 0,   (6h)
  c_1 - c_0 - h \sum_{j=1}^{s} b_j d(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) = 0,   (6i)
  m(t_1,y_1,z_1,u_1,\gamma_1) = 0,   (6j)
  h(t_1,y_1,z_1) = 0,   (6k)
  r(t_1,q_1) = 0,   (6l)
  r_t(t_1,q_1) + r_q(t_1,q_1) v(t_1,y_1,z_1) = 0,   (6m)

where

  q_0 := q(t_0,y_0),  p_0 := p(t_0,y_0,z_0),  c_0 := c(t_0,y_0,z_0,u_0),
  T_i := t_0 + c_i h  for i = 1,...,s,
  Q_i := q(T_i,Y_i)  for i = 1,...,s,

  P_i := p(T_i,Y_i,Z_i)  for i = 1,...,s,
  C_i := c(T_i,Y_i,Z_i,U_i)  for i = 1,...,s,
  t_1 := t_0 + h,
  q_1 := q(t_1,y_1),  p_1 := p(t_1,y_1,z_1),  c_1 := c(t_1,y_1,z_1,u_1).

The RK coefficient matrices of m̂ Runge-Kutta (RK) methods based on the same quadrature formula (b_i, c_i)_{i=1,...,s} are denoted by A_m := (a_{ijm})_{i,j=1,...,s} for m = 1,...,m̂. To ensure existence and uniqueness of the numerical solution, only a certain linear combination of equations (6e), (6k) is actually considered; see equations (14e).

From this tentative definition of SPARK methods results a system of nonlinear equations to be solved for the internal stages Y_i, Z_i, U_i, Γ_i, Λ_i, Ψ_i for i = 1,...,s and for the numerical approximation at t_1 given by y_1, z_1, u_1, γ_1. In general, existence and uniqueness of a solution to these nonlinear equations cannot be shown unless some assumptions on the SPARK coefficients are made. Also, equations (6e), (6k) for the constraints (1e) cannot all be satisfied. The actual definition of SPARK methods is given in definition 5. Accurate values for the algebraic variables γ_1, λ_1, ψ_1 are not necessary for the step-by-step integration. In any case, the accuracy of these algebraic variables does not influence the convergence of the other variables and the properties of SPARK methods, since the values γ_0, λ_0, ψ_0 do not enter explicitly the definition of SPARK methods. For SPARK methods satisfying c_s = 1, the approximations given by γ_1 := Γ_s, λ_1 := Λ_s, ψ_1 := Ψ_s are adequate.

3.1. Properties of SPARK coefficients

In this article I_m denotes the m×m identity matrix, O_{mn} denotes the m×n zero matrix, b := (b_1,...,b_s)^T is the weight vector, e_i := (0,...,0,1,0,...,0)^T is the i-th s-dimensional unit basis vector, and 0_s := (0,...,0)^T is the s-dimensional zero vector. It is assumed that the number of internal stages satisfies s ≥ 2. SPARK methods (6) satisfying the following assumptions are considered:

  e_1^T A_1 = 0_s^T,   (7a)
  e_s^T A_1 = b^T,   (7b)
  e_s^T A_3 = b^T,   (7c)
  A_1 A_m = \begin{pmatrix} 0_s^T \\ N \end{pmatrix}  for m = 2,...,m̂,   (7d)
  \begin{pmatrix} N \\ b^T \end{pmatrix} is invertible,   (7e)
  A_3 is invertible.   (7f)
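The following hedged sketch checks conditions (7) numerically in the simplest case s = 2, using the classical two-stage Lobatto IIIA, IIIB, IIIC and IIIC* coefficient matrices with A_1 = IIIA, A_2 = IIIB, A_3 = IIIC. The coefficient matrices are standard, but the check itself is only an illustration under this reading of (7d)–(7f).

```python
import numpy as np

# Two-stage Lobatto coefficients (c = (0, 1), b = (1/2, 1/2)).
b = np.array([0.5, 0.5])
A1 = np.array([[0.0, 0.0], [0.5, 0.5]])     # Lobatto IIIA
A2 = np.array([[0.5, 0.0], [0.5, 0.0]])     # Lobatto IIIB
A3 = np.array([[0.5, -0.5], [0.5, 0.5]])    # Lobatto IIIC
A4 = np.array([[0.0, 0.0], [1.0, 0.0]])     # Lobatto IIIC*

assert np.allclose(A1[0], 0.0)              # (7a): e_1^T A_1 = 0_s^T
assert np.allclose(A1[-1], b)               # (7b): e_s^T A_1 = b^T
assert np.allclose(A3[-1], b)               # (7c): e_s^T A_3 = b^T

# (7d): A_1 A_m has a zero first row and the same lower block N for all m >= 2.
prods = [A1 @ Am for Am in (A2, A3, A4)]
N = prods[0][1:]
for P in prods:
    assert np.allclose(P[0], 0.0) and np.allclose(P[1:], N)

# (7e), (7f): (N; b^T) and A_3 are invertible.
assert abs(np.linalg.det(np.vstack([N, b]))) > 1e-12
assert abs(np.linalg.det(A3)) > 1e-12
print("conditions (7a)-(7f) hold for the two-stage Lobatto matrices")
```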

These assumptions are satisfied, for example, by the Lobatto family with m̂ = 5 and A_1, A_2, A_3, A_4, A_5 being the RK matrices of the Lobatto IIIA, IIIB, IIIC, IIIC*, IIID coefficients, respectively [9]. The assumptions (7b), (7c) are stiff accuracy conditions. Notice that (7a) is also a direct consequence of (7f) and (7d) for m = 3. Let Ã_1 be the (s-1)×s submatrix of A_1 given by the relation

  A_1 = \begin{pmatrix} 0_s^T \\ \widetilde A_1 \end{pmatrix}.   (8)

The following lemmas will be extremely useful to obtain an efficient implementation of SPARK methods applied to DAEs.

Lemma 2. The relation

  \begin{pmatrix} \widetilde A_1 \\ e_s^T \end{pmatrix}^{-1} \begin{pmatrix} N \\ b^T \end{pmatrix} = A_3   (9)

follows from (7c)–(7f).

Proof. The relation \widetilde A_1 A_3 = N follows from (7d) for m = 3. Hence, together with (7c), it leads to

  \begin{pmatrix} \widetilde A_1 \\ e_s^T \end{pmatrix} A_3 = \begin{pmatrix} N \\ b^T \end{pmatrix}.

The invertibility of the left matrix on the left-hand side follows from the conditions (7e), (7f).

Under the assumptions of lemma 2, the following s×(s+1) matrices are defined:

  Q_{s,s+1} := ( I_s \; 0_s ) + P_{s,s+1},   (10a)

where

  P_{s,s+1} := \begin{pmatrix} \widetilde A_1 \\ e_s^T \end{pmatrix}^{-1} ( O_{s,s-1} \; -e_s \; e_s ) = ( O_{s,s-1} \; -p_s \; p_s ),  p_s := \begin{pmatrix} \widetilde A_1 \\ e_s^T \end{pmatrix}^{-1} e_s.   (10b)

Lemma 3. From (7c)–(7f) it follows that

  Q_{s,s+1} \begin{pmatrix} A_m & 0_s \\ b^T & 0 \end{pmatrix} = ( A_3 \; 0_s )  for m = 2,...,m̂.

Moreover, if in addition (7b) holds, then the following relation is obtained:

  Q_{s,s+1} \begin{pmatrix} A_1 & 0_s \\ b^T & 0 \end{pmatrix} = ( A_1 \; 0_s ).

Proof. For m = 2,...,m̂, by (10),

  Q_{s,s+1} \begin{pmatrix} A_m & 0_s \\ b^T & 0 \end{pmatrix} = ( A_m - p_s ( e_s^T A_m - b^T ) \;\; 0_s ).

From (7d), \widetilde A_1 A_m = N = \widetilde A_1 A_3, and from (7c), e_s^T (A_m - A_3) = e_s^T A_m - b^T; hence, by the invertibility of the matrix appearing in (9) and the definition of p_s,

  A_m - A_3 = p_s ( e_s^T A_m - b^T ),

which gives the first relation. For m = 1 the second relation is a simple consequence of (7b), since then e_s^T A_1 - b^T = 0_s^T.

Lemma 4. Consider the matrix Q_{s,s+1} as defined in (10) and a suitable (s+1)-dimensional vector q_{s+1}. Then the following (s+1)×(s+1) matrix is invertible:

  Q_{s+1,s+1} := \begin{pmatrix} Q_{s,s+1} \\ q_{s+1}^T \end{pmatrix}.   (11)

Proof. The expressions (10) give

  Q_{s,s+1} = \begin{pmatrix} I_{s-1} & -v_{s-1} & v_{s-1} \\ 0_{s-1}^T & 1-c & c \end{pmatrix},   (12)

where p_s = ( v_{s-1}^T, c )^T; hence det Q_{s+1,s+1} = 1 is easily obtained. In fact, the inverse of Q_{s+1,s+1} can be given explicitly by using the factorization Q_{s+1,s+1} = R_{s+1,s+1} S_{s+1,s+1}, where R_{s+1,s+1} and S_{s+1,s+1} are two structured matrices built from p_s whose inverses are available in closed form (13a), (13b).

Hence Q_{s+1,s+1}^{-1} = S_{s+1,s+1}^{-1} R_{s+1,s+1}^{-1} holds, where S_{s+1,s+1}^{-1} and R_{s+1,s+1}^{-1} are known explicitly in terms of p_s.

3.2. The system of nonlinear equations

It is essential to put the system of nonlinear equations in a form such that preconditioning the linear equations of the modified Newton method can be done in a straightforward manner following the results of [10,12]; see section 5. From the assumption (7a) and the consistency condition r(t_0,q_0) = 0, the equation (6f) for i = 1 is automatically satisfied. A consequence of the assumption (7b) is

  r(t_1,q_1) = r(T_s, q_0 + h \sum_{j=1}^{s} a_{sj1} v(T_j,Y_j,Z_j)),

therefore, by (6f) for i = s, equation (6l) is also automatically satisfied. Instead of solving directly the remaining set of equations of (6), some specific linear transformations are applied. The remaining equations are expressed by making use of the matrices Q_{s,s+1} (10) and Q_{s+1,s+1} (11) as follows:

Definition 5. The actual definition of SPARK methods applied to (1)–(4) is taken as

  ( Q_{s+1,s+1} \otimes I_{n_y} ) \begin{pmatrix} Q_1 - q_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{1jm} v_m(T_j,Y_j,Z_j) \\ \vdots \\ Q_s - q_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{sjm} v_m(T_j,Y_j,Z_j) \\ q_1 - q_0 - h \sum_{j=1}^{s} b_j v(T_j,Y_j,Z_j) \end{pmatrix} = 0,   (14a)

  ( Q_{s+1,s+1} \otimes I_{n_z} ) \begin{pmatrix} P_1 - p_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{1jm} f_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \\ \vdots \\ P_s - p_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{sjm} f_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \\ p_1 - p_0 - h \sum_{j=1}^{s} b_j f(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \end{pmatrix} = 0,   (14b)

  ( Q_{s+1,s+1} \otimes I_{n_u} ) \begin{pmatrix} C_1 - c_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{1jm} d_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \\ \vdots \\ C_s - c_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{sjm} d_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \\ c_1 - c_0 - h \sum_{j=1}^{s} b_j d(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \end{pmatrix} = 0,   (14c)

  ( Q_{s+1,s+1} \otimes I_{n_\gamma} ) \begin{pmatrix} m(T_1,Y_1,Z_1,U_1,\Gamma_1) \\ \vdots \\ m(T_s,Y_s,Z_s,U_s,\Gamma_s) \\ m(t_1,y_1,z_1,u_1,\gamma_1) \end{pmatrix} = 0,   (14d)

  ( Q_{s,s+1} \otimes I_{n_\lambda} ) \begin{pmatrix} h(T_1,Y_1,Z_1) \\ \vdots \\ h(T_s,Y_s,Z_s) \\ h(t_1,y_1,z_1) \end{pmatrix} = 0,   (14e)

  \left( \begin{pmatrix} \widetilde A_1 \\ e_s^T \end{pmatrix}^{-1} \otimes I_{n_\psi} \right) \begin{pmatrix} \frac{1}{h} r(T_2, q_0 + h \sum_{j=1}^{s} a_{2j1} v(T_j,Y_j,Z_j)) \\ \vdots \\ \frac{1}{h} r(T_s, q_0 + h \sum_{j=1}^{s} a_{sj1} v(T_j,Y_j,Z_j)) \\ r_t(t_1,q_1) + r_q(t_1,q_1) v(t_1,y_1,z_1) \end{pmatrix} = 0.   (14f)

Equations (14e) correspond only to a linear combination of the constraints (6e), (6k). From (12), equation (6k) is truly the only one preserved among (6e), (6k); this implies that the numerical solution at t_1 still satisfies the constraint (1e). This linear combination (14e) of (6e), (6k) is somehow necessary to ensure existence and uniqueness of the numerical solution, since in (6) there are only s algebraic variables Λ_i, i = 1,...,s, for s+1 sets of equations (6e), (6k). In order to combine the constraints (6f) and (6l) properly, the equations (6f) have been multiplied by 1/h, where h is the stepsize. Since r(t_0,q_0) = 0, this can be interpreted as a finite difference approximation to d r(t,q(t,y(t)))/dt = 0 at T_i:

  \frac{d}{dt} r(T_i, q(T_i, y(T_i))) \approx \frac{ r(T_i, q_0 + h \sum_{j=1}^{s} a_{ij1} v(T_j,Y_j,Z_j)) - r(t_0,q_0) }{h}.

The main reason for these linear transformations is to obtain an advantageous structure of the approximate Jacobian given in section 4 for the construction of efficient preconditioners, to be discussed in section 5. Notice that, because of the factorization (13a), (13b), multiplication of equations (14a)–(14c) by Q_{s+1,s+1} has an implication which is easily interpretable. For example, (14b) can be rewritten as

  ( R_{s+1,s+1} \otimes I_{n_z} ) \begin{pmatrix} P_1 - p_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{1jm} f_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \\ \vdots \\ P_s - p_0 - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} a_{sjm} f_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \\ p_1 - P_s - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} ( b_j - a_{sjm} ) f_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) \end{pmatrix} = 0.   (14g)

One effect is to remove stiffness from the last set of equations, provided the terms causing stiffness are treated with coefficients a_{ijm} satisfying the stiff accuracy condition a_{sjm} = b_j for j = 1,...,s; see (7b), (7c). A set of dummy equations can be appended to (14e), (14f):

  h(t_1,y_1,z_1) - h(T_s,Y_s,Z_s) + \lambda_1 = 0,   (14h)
  r_t(t_1,q_1) + r_q(t_1,q_1) v(t_1,y_1,z_1) - r_t(T_s,Q_s) - r_q(T_s,Q_s) v(T_s,Y_s,Z_s) + \psi_1 = 0.   (14i)

These equations must be taken as a dummy definition of λ_1, ψ_1. Of course, the exact solution λ(t), ψ(t) at t = t_0 + h must not be approximated by λ_1, ψ_1 as given above but, for example, by Λ_s, Ψ_s when c_s = 1. The unknowns λ_1, ψ_1 have been included only for ease of presentation in the derivation hereafter, but they are actually not unknowns

of the nonlinear system (14a)–(14f). In (14) a system of nonlinear equations has been obtained for the following vector collecting all unknowns:

  X := ( Y_1, Z_1, U_1, \Gamma_1, \Lambda_1, \Psi_1, ..., Y_s, Z_s, U_s, \Gamma_s, \Lambda_s, \Psi_s, y_1, z_1, u_1, \gamma_1, \lambda_1, \psi_1 )^T.   (15)

4. Inexact modified Newton iterations for SPARK methods

The iteration schemes proposed to solve the system of nonlinear equations corresponding to the application of IRK methods to DAEs have generally been based on ad hoc modifications of the simplified Newton method [5]. For SPARK methods there are additional difficulties to obtain an efficient implementation due to their additive and partitioned nature. This paper proposes the use of inexact modified Newton iterations or, more precisely, using another terminology, of modified Newton-iterative methods, extending techniques developed in [10,12]. Instead of solving exactly the linear systems of the modified Newton method, they can be solved approximately and iteratively after application of specific linear transformations and the use of a preconditioner.

4.1. Inexact modified Newton iterations

Modified Newton iterations applied to the set of equations (14) read as follows:

  M \Delta X^k = -G(X^k),  X^{k+1} := X^k + \Delta X^k,  k = 0, 1, 2, ...,   (16)

where M is a modified Jacobian, i.e., roughly speaking, an approximation to the exact Jacobian,

  \Delta X := ( \Delta X_1, ..., \Delta X_s, \Delta x_1 )^T,   (17a)
  \Delta X_i := ( \Delta Y_i, \Delta Z_i, \Delta U_i, \Delta \Gamma_i, \Delta \Lambda_i, \Delta \Psi_i )^T  for i = 1,...,s,   (17b)
  \Delta x_1 := ( \Delta y_1, \Delta z_1, \Delta u_1, \Delta \gamma_1, \Delta \lambda_1, \Delta \psi_1 )^T,   (17c)

and G(X) corresponds to the expressions in (14) reordered accordingly. A direct decomposition of the modified Jacobian M may be inefficient. In the inexact modified Newton method the linear systems (16) are solved only approximately, generally by a preconditioned linear iterative method such as preconditioned versions of Richardson or GMRES iterations [3,4,14,15]. This requires the construction of a good preconditioner; see section 5. A sequence of iterates ΔX^k with a residual error r^k := M ΔX^k + G(X^k) is obtained at each iteration. Sufficient a priori and a posteriori conditions to ensure convergence of the inexact modified Newton iterates toward the solution of a system of nonlinear equations have been given in [11]. In combination with a linear iterative method the inexact Newton method is called a modified Newton-iterative method. The use of preconditioned linear iterative methods for the numerical solution of ODEs and DAEs was first considered in the context of implicit multistep methods by Brown et al. [2].
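The following sketch illustrates the generic pattern of (16) combined with an inner preconditioned Richardson iteration: the outer loop reuses a frozen (modified) Jacobian M, and the inner loop solves M ΔX = -G(X) only approximately through a preconditioner applied as apply_Q ≈ M^{-1}. The toy problem, the preconditioner and the tolerances are placeholders; this is not the paper's actual construction for (14).

```python
import numpy as np

def inexact_modified_newton(G, M, apply_Q, x0, n_outer=20, n_inner=5, tol=1e-12):
    """Solve G(x) = 0 with a frozen Jacobian approximation M, as in (16).

    Each outer step solves M dx = -G(x) only approximately, by a few
    preconditioned Richardson iterations with apply_Q(r) ~ M^{-1} r.
    """
    x = x0.copy()
    for _ in range(n_outer):
        g = G(x)
        if np.linalg.norm(g) < tol:
            break
        dx = np.zeros_like(x)
        for _ in range(n_inner):
            residual = -g - M @ dx          # residual of the linear system M dx = -G(x)
            dx = dx + apply_Q(residual)     # preconditioned Richardson update
        x = x + dx                          # modified Newton update
    return x

# Toy usage: a 2x2 nonlinear system, Jacobian frozen at the initial guess.
def G(x):
    return np.array([x[0]**2 + x[1] - 1.0, x[0] - x[1]**3])

x0 = np.array([0.5, 0.75])
M = np.array([[2.0 * x0[0], 1.0], [1.0, -3.0 * x0[1]**2]])
Q = np.linalg.inv(M + 0.1 * np.eye(2))      # deliberately crude preconditioner
print(inexact_modified_newton(G, M, lambda r: Q @ r, x0))
```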

4.2. The simplified Jacobian

In a standard approach the system of nonlinear equations (14) is solved by simplified Newton iterations. This requires the simplified Jacobian, which is the Jacobian at the initial guess. The simplified Jacobian corresponding to the equations (14a)–(14f) with respect to the variables in (15) can be expressed through the following block rows:

  Q_{s+1,s+1} \otimes ( q_y \; O \; O \; O \; O \; O ) - h \sum_{m=1}^{\widehat m} Q_{s+1,s+1} \begin{pmatrix} A_m & 0_s \\ b^T & 0 \end{pmatrix} \otimes ( v_{m,y} \; v_{m,z} \; O \; O \; O \; O ),   (18a)
  Q_{s+1,s+1} \otimes ( p_y \; p_z \; O \; O \; O \; O ) - h \sum_{m=1}^{\widehat m} Q_{s+1,s+1} \begin{pmatrix} A_m & 0_s \\ b^T & 0 \end{pmatrix} \otimes ( f_{m,y} \; f_{m,z} \; f_{m,u} \; f_{m,\gamma} \; f_{m,\lambda} \; f_{m,\psi} ),   (18b)
  Q_{s+1,s+1} \otimes ( c_y \; c_z \; c_u \; O \; O \; O ) - h \sum_{m=1}^{\widehat m} Q_{s+1,s+1} \begin{pmatrix} A_m & 0_s \\ b^T & 0 \end{pmatrix} \otimes ( d_{m,y} \; d_{m,z} \; d_{m,u} \; d_{m,\gamma} \; d_{m,\lambda} \; d_{m,\psi} ),   (18c)
  Q_{s+1,s+1} \otimes ( m_y \; m_z \; m_u \; m_\gamma \; O \; O ),   (18d)
  Q_{s,s+1} \otimes ( h_y \; h_z \; O \; O \; O \; O ),   (18e)
  Q_{s,s+1} \otimes ( r_q v_y \; r_q v_z \; O \; O \; O \; O ) + ( O_{s,s} \; p_s ) \otimes ( r_{tq} q_y + r_{qq}(q_y \cdot, v) \; O \; O \; O \; O \; O ),   (18f)

where the symbol ⊗ denotes the tensor product. The arguments of the expressions q_y, v_{m,y}, etc., which have been omitted, are given by the initial values t_0, y_0, z_0, u_0, γ_0, λ_0, ψ_0. Strictly speaking this is in fact not really the simplified Jacobian, since the initial guess of the iterations is generally not given by the initial values. Nevertheless, the terminology of simplified Jacobian to refer to (18) will be kept, since this is how it is generally called, following, e.g., [6, section IV.8]. The terminology of modified Jacobian would actually be more correct.

4.3. An approximate Jacobian

Some modifications to the simplified Jacobian (18) can actually be used, since it is not necessary to keep the full simplified Jacobian to ensure convergence of the modified

Newton iterates (16). The expression of a so-called approximate Jacobian L is given here. It will be used in the discussion given in section 5 about the construction of preconditioners. It should be noticed that it is not necessary to use exactly the approximate Jacobian presented here to ensure convergence of the whole inexact modified Newton procedure; additional modifications are possible. A main point in the specification of the approximate Jacobian concerns the equations

  q_1 - Q_s - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} ( b_j - a_{sjm} ) v_m(T_j,Y_j,Z_j) = 0,   (19a)
  p_1 - P_s - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} ( b_j - a_{sjm} ) f_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) = 0,   (19b)
  c_1 - C_s - h \sum_{j=1}^{s} \sum_{m=1}^{\widehat m} ( b_j - a_{sjm} ) d_m(T_j,Y_j,Z_j,U_j,\Gamma_j,\Lambda_j,\Psi_j) = 0,   (19c)
  m(t_1,y_1,z_1,u_1,\gamma_1) - m(T_s,Y_s,Z_s,U_s,\Gamma_s) = 0,   (19d)
  h(t_1,y_1,z_1) - h(T_s,Y_s,Z_s) + \lambda_1 = 0,   (19e)
  r_t(t_1,q_1) + r_q(t_1,q_1) v(t_1,y_1,z_1) - r_t(T_s,Q_s) - r_q(T_s,Q_s) v(T_s,Y_s,Z_s) + \psi_1 = 0,   (19f)

which involve the numerical solution at the endpoint t_1; see (14). Denoting minus the left-hand side of equations (19) by r_1, one step of the simplified Newton method for these equations reads

  ( E + F ) ( \Delta x_1 - \Delta X_s ) + J_d \Delta x_1 - h \sum_{m=1}^{\widehat m} ( ( b^T - e_s^T A_m ) \otimes J_m ) \Delta \overline X = r_1,   (20)

with ΔX_i for i = 1,...,s and Δx_1 as in (17), and

  \Delta \overline X := ( \Delta X_1, ..., \Delta X_s )^T,   (21)

  E := \begin{pmatrix} q_y & O & O & O & O & O \\ p_y & p_z & O & O & O & O \\ c_y & c_z & c_u & O & O & O \\ m_y & m_z & m_u & m_\gamma & O & O \\ h_y & h_z & O & O & O & O \\ r_q v_y & r_q v_z & O & O & O & O \end{pmatrix},

  F := \begin{pmatrix} O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ r_{tq} q_y + r_{qq}(q_y \cdot, v) & O & O & O & O & O \end{pmatrix},

  J_m := \begin{pmatrix} v_{m,y} & v_{m,z} & O & O & O & O \\ f_{m,y} & f_{m,z} & f_{m,u} & f_{m,\gamma} & f_{m,\lambda} & f_{m,\psi} \\ d_{m,y} & d_{m,z} & d_{m,u} & d_{m,\gamma} & d_{m,\lambda} & d_{m,\psi} \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \end{pmatrix},

  J_d := \begin{pmatrix} O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & I & O \\ O & O & O & O & O & I \end{pmatrix}.

The quantities y_1, z_1, u_1, γ_1 and Y_s, Z_s, U_s, Γ_s both approximate the exact solution y(t), z(t), u(t), γ(t) at t = t_0 + h = t_1 = T_s. By (19a)–(19d) they are close to each other, provided stiff terms are treated by stiffly accurate RK coefficients. Hence some terms in (20) can be neglected, leading to some sort of fixed-point iterations for y_1, z_1, u_1, γ_1. Since the variables λ_1, ψ_1 are dummy variables which are directly defined by (14h), (14i), they should have no influence on the numerical solution to the system of equations (14a)–(14f). Therefore in (20) the term J_d Δx_1 and the components of r_1 corresponding to (19e), (19f) can be neglected. The dummy values λ_1, ψ_1 can be set explicitly after each modified Newton iteration such that the equations (14h), (14i) are satisfied exactly for the current iterate X^k. This means that the corresponding components of r_1 can be assumed to vanish. Provided stiff terms are treated by stiffly accurate coefficients, i.e., satisfying a_{sjm} = b_j for j = 1,...,s, the linear equation (20) can be simplified, for example, to

  E \Delta x_1 - E \Delta X_s = r_1.   (22)

From (18), the whole set of equations (14) and variables can be reordered according to (15) such that, after application of lemma 3, the corresponding approximate Jacobian L considered here is expressed as follows:

  L := Q_{s+1,s+1} \otimes E - h \begin{pmatrix} A_1 & 0_s \\ 0_s^T & 0 \end{pmatrix} \otimes J_1 - h \begin{pmatrix} A_3 & 0_s \\ 0_s^T & 0 \end{pmatrix} \otimes ( J_0 + J ),   (23)

where, from (5d),

  J_0 := \sum_{m=2}^{\widehat m} \begin{pmatrix} O & O & O & O & O & O \\ O & O & O & O & f_{m,\lambda} & f_{m,\psi} \\ O & O & O & O & d_{m,\lambda} & d_{m,\psi} \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \end{pmatrix} = \begin{pmatrix} O & O & O & O & O & O \\ O & O & O & O & f_\lambda & f_\psi \\ O & O & O & O & d_\lambda & d_\psi \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \end{pmatrix},   (24a)

  J_1 := \begin{pmatrix} v_{1,y} & v_{1,z} & O & O & O & O \\ f_{1,y} & f_{1,z} & f_{1,u} & f_{1,\gamma} & O & O \\ d_{1,y} & d_{1,z} & d_{1,u} & d_{1,\gamma} & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \end{pmatrix},   (24b)

  J := \sum_{m=2}^{\widehat m} \begin{pmatrix} v_{m,y} & v_{m,z} & O & O & O & O \\ f_{m,y} & f_{m,z} & f_{m,u} & f_{m,\gamma} & O & O \\ d_{m,y} & d_{m,z} & d_{m,u} & d_{m,\gamma} & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \\ O & O & O & O & O & O \end{pmatrix}.   (24c)

The particular tensorial structure of the approximate Jacobian L can now be used to obtain good preconditioners; this will be discussed in section 5. It would not be possible to do so if the simplified Jacobian (18) were kept. Some other parts of the approximate Jacobian, such as h_y and r_q v_y, can actually be neglected without jeopardizing convergence of the inexact modified Newton iterates.

5. Preconditioning the approximate Jacobian

In this section the discussion is based on the approximate Jacobian L (23) though, as previously mentioned, other choices are possible. Consider a linear system with matrix L, with ΔX as in (17) and right-hand side R:

  L \Delta X = R,   (25)

decomposed as

  R = ( R_1, ..., R_s, r_1 )^T.
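The Kronecker-product structure of matrices such as (23), and of the reduced matrix K appearing later in (28), is what makes the subsequent preconditioning steps cheap: only a few small coefficient matrices and a few Jacobian blocks of the dimension of the DAE are combined. The following hedged sketch assembles a matrix of the form I_s ⊗ (E - J_0) - h A_1 ⊗ J_1 - h A_3 ⊗ J with numpy.kron; all blocks are random placeholders with toy dimensions and are not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
s, n, h = 3, 4, 0.01                      # stages, block dimension, stepsize (toy values)

A1 = rng.standard_normal((s, s))          # placeholder coefficient matrices A_1, A_3
A3 = rng.standard_normal((s, s))
E_minus_J0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # placeholder for E - J_0
J1 = rng.standard_normal((n, n))          # placeholder Jacobian blocks J_1 and J
J = rng.standard_normal((n, n))

# Matrix with the tensor structure of (28): K = I_s x (E - J_0) - h A_1 x J_1 - h A_3 x J.
K = (np.kron(np.eye(s), E_minus_J0)
     - h * np.kron(A1, J1)
     - h * np.kron(A3, J))
print(K.shape)                            # (s*n, s*n); only s x s and n x n blocks were formed
```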

5.1. Transforming the linear system

As a first step, the relation (22) can be introduced into the first s subequations of (25). A reduced linear system

  \overline L \, \Delta \overline X = \overline R   (26)

is obtained for Δ X̄ given in (21), where

  \overline L := I_s \otimes E - h A_1 \otimes J_1 - h A_3 \otimes ( J_0 + J ),
  \overline R := ( R_1, ..., R_s )^T - p_s \otimes r_1,

by using the decomposition (13) for Q_{s+1,s+1} in (23). As a second step, the quantities Λ_i and Ψ_i for i = 1,...,s can be replaced in (17) as follows:

  ( \Lambda_1, ..., \Lambda_s )^T = h^{-1} ( A_3^{-1} \otimes I_{n_\lambda} ) ( \widehat\Lambda_1, ..., \widehat\Lambda_s )^T,
  ( \Psi_1, ..., \Psi_s )^T = h^{-1} ( A_3^{-1} \otimes I_{n_\psi} ) ( \widehat\Psi_1, ..., \widehat\Psi_s )^T.

Hence a linear system is obtained with matrix K,

  K \Delta x = b,   (27)

where

  K = I_s \otimes ( E - J_0 ) - h A_1 \otimes J_1 - h A_3 \otimes J,   (28)

  E - J_0 = \begin{pmatrix} q_y & O & O & O & O & O \\ p_y & p_z & O & O & -f_\lambda & -f_\psi \\ c_y & c_z & c_u & O & -d_\lambda & -d_\psi \\ m_y & m_z & m_u & m_\gamma & O & O \\ h_y & h_z & O & O & O & O \\ r_q v_y & r_q v_z & O & O & O & O \end{pmatrix}.

Under the assumptions (2) the matrix E - J_0 is invertible. For nonstiff DAEs the terms h A_1 ⊗ J_1 and h A_3 ⊗ J in (28) can be neglected. Once an approximation to the solution of the linear system (26) is obtained, it remains to define an approximation to Δy_1, Δz_1, Δu_1, Δγ_1. The relation (20) or (22) can be used for that purpose. The current iterates for λ_1, ψ_1 can be defined directly from (14h), (14i), but this need not be done explicitly.

The linear system (25) with approximate Jacobian L (23) has been transformed into a form such that preconditioning the linear equations (27) can now be done in a straightforward manner following the results of [10,12]. The preconditioner developed in [10,11] will not be described here. The main drawback of this preconditioner is the

fact that, although its decomposition is parallelizable, the solution of the linear systems involved is not. Instead, a new truly parallel preconditioner is presented in the next subsection.

5.2. A truly parallel preconditioner

In this subsection some recent results of [12] are followed and briefly presented. The linear system (27) is solved approximately by application of linear iterative methods with a preconditioner Q ≈ K^{-1} of the form

  Q := H^{-1} G H^{-1},

where

  H := I_s \otimes ( E - J_0 ) - h \Gamma_1 \otimes J_1 - h \Gamma_3 \otimes J,
  G := I_s \otimes ( E - J_0 ) - h \Delta_1 \otimes J_1 - h \Delta_3 \otimes J.

The coefficient matrices Γ_1 and Γ_3 are chosen to be diagonal,

  \Gamma_1 := \mathrm{diag}(\gamma_{11}, \gamma_{21}, ..., \gamma_{s1}),  \Gamma_3 := \mathrm{diag}(\gamma_{13}, \gamma_{23}, ..., \gamma_{s3}),

with γ_{11} = 0 because of (7a), leading to

  H^{-1} = \begin{pmatrix} H_1^{-1} & & O \\ & \ddots & \\ O & & H_s^{-1} \end{pmatrix},

where

  H_i := E - J_0 - h \gamma_{i1} J_1 - h \gamma_{i3} J  for i = 1,...,s.

The matrices H_i can be decomposed independently, hence in parallel. Solving a linear system with matrix H can also be done in parallel since it is block-diagonal. This is the main advantage of this preconditioner compared to the one presented in [10,11]. The coefficients of Γ_1, Γ_3, Δ_1, and Δ_3 still remain to be fixed to some values. Assuming the coefficients of Γ_1 and of Γ_3 to be given, the coefficients of Δ_1 and Δ_3 are taken as

  \Delta_3 = \Gamma_3 A_3^{-1} \Gamma_3,

and as a corresponding expression for Δ_1 built from Γ_1, \widehat\Gamma_1 := diag(γ_{21}, ..., γ_{s1}), and the submatrix Ã_1 of (8). The coefficient matrices Δ_1 and Δ_3 have been separately determined such that the preconditioner Q is asymptotically correct when considering the Dahlquist test equation y' = λy, Re λ ≤ 0.
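A hedged sketch of how such a block-diagonal preconditioner can be applied: each block H_i is factorized once, independently of the others (hence in parallel over the stages), and one application of Q = H^{-1} G H^{-1} then costs two block-diagonal solves and one multiplication by G. All matrices and coefficients below are random placeholders, not the paper's choices.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(1)
s, n, h = 3, 4, 0.01
EJ0 = np.eye(n) + 0.1 * rng.standard_normal((n, n))   # placeholder for E - J_0
J1 = rng.standard_normal((n, n))
J = rng.standard_normal((n, n))
g1 = np.array([0.0, 0.3, 0.6])                        # gamma_{i1}, with gamma_{11} = 0
g3 = np.array([0.4, 0.5, 0.7])                        # gamma_{i3} > 0
D1 = rng.standard_normal((s, s))                      # placeholders for Delta_1, Delta_3
D3 = rng.standard_normal((s, s))

# Independent factorizations of H_i = E - J_0 - h*gamma_{i1} J_1 - h*gamma_{i3} J.
H_factors = [lu_factor(EJ0 - h * g1[i] * J1 - h * g3[i] * J) for i in range(s)]
G = np.kron(np.eye(s), EJ0) - h * np.kron(D1, J1) - h * np.kron(D3, J)

def solve_H(v):                       # block-diagonal solve, parallelizable over i
    blocks = v.reshape(s, n)
    return np.concatenate([lu_solve(H_factors[i], blocks[i]) for i in range(s)])

def apply_Q(v):                       # preconditioner application: Q v = H^{-1} G H^{-1} v
    return solve_H(G @ solve_H(v))

print(apply_Q(rng.standard_normal(s * n)).shape)      # (s*n,)
```

On a parallel machine each factorization and each block solve can be assigned to one processor; only the multiplication by G requires data exchange between stages.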

The coefficients γ_{i1} for i = 2,...,s and γ_{i3} for i = 1,...,s are free. They are required to satisfy γ_{i1} > 0 for i = 2,...,s and γ_{i3} > 0 for i = 1,...,s, which is a natural assumption to ensure the invertibility of the matrices H_i. These coefficients can be chosen, for example, to minimize

  \sup_{\mathrm{Re}\, z \le 0} \max_{i=1,...,s} | \lambda_i(M(z)) - 1 |,

where z = hλ, M(z) = Q(z) K(z), and λ_i(M(z)) for i = 1,...,s are the s eigenvalues of M(z). When γ_{i1} = γ_1 for i = 2,...,s and γ_{i3} = γ_3 for i = 1,...,s, only one or two matrix decompositions besides E - J_0 are needed. This can be quite advantageous on a serial computer. The cost of computing a matrix-vector product Qv with at least s processors on a parallel computer consists of one decomposition of H_i on each processor, two linear systems with matrix H_i to be solved, one local matrix-vector product with each of the matrices E - J_0, hJ_1, and hJ, and some communication between processors according to the nonzero elements of the coefficient matrices Δ_1 and Δ_3.

6. Conclusion

The approximation of a certain class of DAEs by SPARK methods has been considered. The main difficulty of these methods resides in finding a way to implement them efficiently. Certain linear transformations are applied to the resulting system of nonlinear equations such that an efficient preconditioner to the linear systems of the modified Newton method can be constructed. These linear transformations rely heavily on specific properties of the SPARK coefficients. An approximate Jacobian of the system of nonlinear equations is presented. Based on this approximate Jacobian, a new truly parallelizable preconditioner is given.

References

[1] K.E. Brenan, S.L. Campbell and L.R. Petzold, Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations, SIAM Classics in Applied Mathematics, 2nd ed. (SIAM, Philadelphia, PA, 1996).
[2] P.N. Brown, A.C. Hindmarsh and L.R. Petzold, Using Krylov methods in the solution of large-scale differential-algebraic systems, SIAM J. Sci. Comput.
[3] G. Golub and C.F. van Loan, Matrix Computations, Johns Hopkins Studies in the Mathematical Sciences, 3rd ed. (Johns Hopkins Univ. Press, Baltimore/London, 1996).
[4] A. Greenbaum, Iterative Methods for Solving Linear Systems, Frontiers in Applied Mathematics, Vol. 17 (SIAM, Philadelphia, PA, 1997).
[5] E. Hairer, Ch. Lubich and M. Roche, The Numerical Solution of Differential-Algebraic Systems by Runge-Kutta Methods, Lecture Notes in Mathematics, Vol. 1409 (Springer, Berlin, 1989).
[6] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II. Stiff and Differential-Algebraic Problems, Springer Series in Computational Mathematics, Vol. 14, 2nd revised ed. (Springer, Berlin, 1996).
[7] E.J. Haug, Computer Aided Kinematics and Dynamics of Mechanical Systems, Vol. I: Basic Methods (Allyn and Bacon, Boston, 1989).

[8] L.O. Jay, Runge-Kutta type methods for index three differential-algebraic equations with applications to Hamiltonian systems, Ph.D. thesis, Department of Mathematics, University of Geneva, Switzerland (1994).
[9] L.O. Jay, Structure preservation for constrained dynamics with super partitioned additive Runge-Kutta methods, SIAM J. Sci. Comput. 20 (1998) 416–446.
[10] L.O. Jay, A parallelizable preconditioner for the iterative solution of implicit Runge-Kutta-type methods, J. Comput. Appl. Math.
[11] L.O. Jay, Inexact simplified Newton iterations for implicit Runge-Kutta methods, SIAM J. Numer. Anal. 38 (2000) 1369–1388.
[12] L.O. Jay, Preconditioning and parallel implementation of implicit Runge-Kutta methods, Technical Report, Dept. of Mathematics, University of Iowa, USA (2001), in preparation.
[13] P.J. Rabier and W.C. Rheinboldt, Nonholonomic Motion of Rigid Mechanical Systems from a DAE Viewpoint (SIAM, Philadelphia, PA, 2000).
[14] Y. Saad, Iterative Methods for Sparse Linear Systems (PWS, Boston, MA, 1996).
[15] Y. Saad and M.H. Schultz, GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems, SIAM J. Sci. Statist. Comput.
[16] W.O. Schiehlen, ed., Multibody Systems Handbook (Springer, Berlin, 1990).
[17] W.O. Schiehlen, ed., Advanced Multibody System Dynamics: Simulation and Software Tools (Kluwer Academic, London, 1993).


More information

Efficient numerical methods for the solution of stiff initial-value problems and differential algebraic equations

Efficient numerical methods for the solution of stiff initial-value problems and differential algebraic equations 10.1098/rspa.2003.1130 R EVIEW PAPER Efficient numerical methods for the solution of stiff initial-value problems and differential algebraic equations By J. R. Cash Department of Mathematics, Imperial

More information

Index Reduction and Discontinuity Handling using Substitute Equations

Index Reduction and Discontinuity Handling using Substitute Equations Mathematical and Computer Modelling of Dynamical Systems, vol. 7, nr. 2, 2001, 173-187. Index Reduction and Discontinuity Handling using Substitute Equations G. Fábián, D.A. van Beek, J.E. Rooda Abstract

More information

Exponentially Fitted Error Correction Methods for Solving Initial Value Problems

Exponentially Fitted Error Correction Methods for Solving Initial Value Problems KYUNGPOOK Math. J. 52(2012), 167-177 http://dx.doi.org/10.5666/kmj.2012.52.2.167 Exponentially Fitted Error Correction Methods for Solving Initial Value Problems Sangdong Kim and Philsu Kim Department

More information

A graph theoretic approach to matrix inversion by partitioning

A graph theoretic approach to matrix inversion by partitioning Numerische Mathematik 4, t 28-- t 35 (1962) A graph theoretic approach to matrix inversion by partitioning By FRANK HARARY Abstract Let M be a square matrix whose entries are in some field. Our object

More information

Backward error analysis

Backward error analysis Backward error analysis Brynjulf Owren July 28, 2015 Introduction. The main source for these notes is the monograph by Hairer, Lubich and Wanner [2]. Consider ODEs in R d of the form ẏ = f(y), y(0) = y

More information

Finite Differences for Differential Equations 28 PART II. Finite Difference Methods for Differential Equations

Finite Differences for Differential Equations 28 PART II. Finite Difference Methods for Differential Equations Finite Differences for Differential Equations 28 PART II Finite Difference Methods for Differential Equations Finite Differences for Differential Equations 29 BOUNDARY VALUE PROBLEMS (I) Solving a TWO

More information

8.1 Introduction. Consider the initial value problem (IVP):

8.1 Introduction. Consider the initial value problem (IVP): 8.1 Introduction Consider the initial value problem (IVP): y dy dt = f(t, y), y(t 0)=y 0, t 0 t T. Geometrically: solutions are a one parameter family of curves y = y(t) in(t, y)-plane. Assume solution

More information

Discrete Projection Methods for Incompressible Fluid Flow Problems and Application to a Fluid-Structure Interaction

Discrete Projection Methods for Incompressible Fluid Flow Problems and Application to a Fluid-Structure Interaction Discrete Projection Methods for Incompressible Fluid Flow Problems and Application to a Fluid-Structure Interaction Problem Jörg-M. Sautter Mathematisches Institut, Universität Düsseldorf, Germany, sautter@am.uni-duesseldorf.de

More information

Part IB Numerical Analysis

Part IB Numerical Analysis Part IB Numerical Analysis Definitions Based on lectures by G. Moore Notes taken by Dexter Chua Lent 206 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

More information

On The Numerical Solution of Differential-Algebraic Equations(DAES) with Index-3 by Pade Approximation

On The Numerical Solution of Differential-Algebraic Equations(DAES) with Index-3 by Pade Approximation Appl. Math. Inf. Sci. Lett., No. 2, 7-23 (203) 7 Applied Mathematics & Information Sciences Letters An International Journal On The Numerical Solution of Differential-Algebraic Equations(DAES) with Index-3

More information

Variable Step Runge-Kutta-Nyström Methods for the Numerical Solution of Reversible Systems

Variable Step Runge-Kutta-Nyström Methods for the Numerical Solution of Reversible Systems Variable Step Runge-Kutta-Nyström Methods for the Numerical Solution of Reversible Systems J. R. Cash and S. Girdlestone Department of Mathematics, Imperial College London South Kensington London SW7 2AZ,

More information

Signal Identification Using a Least L 1 Norm Algorithm

Signal Identification Using a Least L 1 Norm Algorithm Optimization and Engineering, 1, 51 65, 2000 c 2000 Kluwer Academic Publishers. Manufactured in The Netherlands. Signal Identification Using a Least L 1 Norm Algorithm J. BEN ROSEN Department of Computer

More information

c 2004 Society for Industrial and Applied Mathematics

c 2004 Society for Industrial and Applied Mathematics SIAM J. SCI. COMPUT. Vol. 26, No. 2, pp. 359 374 c 24 Society for Industrial and Applied Mathematics A POSTERIORI ERROR ESTIMATION AND GLOBAL ERROR CONTROL FOR ORDINARY DIFFERENTIAL EQUATIONS BY THE ADJOINT

More information

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems

Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Block-Triangular and Skew-Hermitian Splitting Methods for Positive Definite Linear Systems Zhong-Zhi Bai State Key Laboratory of Scientific/Engineering Computing Institute of Computational Mathematics

More information

Quadratic SDIRK pair for treating chemical reaction problems.

Quadratic SDIRK pair for treating chemical reaction problems. Quadratic SDIRK pair for treating chemical reaction problems. Ch. Tsitouras TEI of Chalkis, Dept. of Applied Sciences, GR 34400 Psahna, Greece. I. Th. Famelis TEI of Athens, Dept. of Mathematics, GR 12210

More information

From Stationary Methods to Krylov Subspaces

From Stationary Methods to Krylov Subspaces Week 6: Wednesday, Mar 7 From Stationary Methods to Krylov Subspaces Last time, we discussed stationary methods for the iterative solution of linear systems of equations, which can generally be written

More information

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination

On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices

More information

COMPUTATIONAL METHODS AND ALGORITHMS Vol. I - Numerical Analysis and Methods for Ordinary Differential Equations - N.N. Kalitkin, S.S.

COMPUTATIONAL METHODS AND ALGORITHMS Vol. I - Numerical Analysis and Methods for Ordinary Differential Equations - N.N. Kalitkin, S.S. NUMERICAL ANALYSIS AND METHODS FOR ORDINARY DIFFERENTIAL EQUATIONS N.N. Kalitkin Institute for Mathematical Modeling, Russian Academy of Sciences, Moscow, Russia S.S. Filippov Keldysh Institute of Applied

More information

Differential-Algebraic Equations (DAEs) Differential-algebraische Gleichungen

Differential-Algebraic Equations (DAEs) Differential-algebraische Gleichungen Differential-Algebraic Equations (DAEs) Differential-algebraische Gleichungen Bernd Simeon Vorlesung Wintersemester 2012/13 TU Kaiserslautern, Fachbereich Mathematik 1. Examples of DAEs 2. Theory of DAEs

More information

A definition of stiffness for initial value problems for ODEs SciCADE 2011, University of Toronto, hosted by the Fields Institute, Toronto, Canada

A definition of stiffness for initial value problems for ODEs SciCADE 2011, University of Toronto, hosted by the Fields Institute, Toronto, Canada for initial value problems for ODEs SciCADE 2011, University of Toronto, hosted by the Fields Institute, Toronto, Canada Laurent O. Jay Joint work with Manuel Calvo (University of Zaragoza, Spain) Dedicated

More information

Butcher tableau Can summarize an s + 1 stage Runge Kutta method using a triangular grid of coefficients

Butcher tableau Can summarize an s + 1 stage Runge Kutta method using a triangular grid of coefficients AM 205: lecture 13 Last time: ODE convergence and stability, Runge Kutta methods Today: the Butcher tableau, multi-step methods, boundary value problems Butcher tableau Can summarize an s + 1 stage Runge

More information

Mathematics for chemical engineers. Numerical solution of ordinary differential equations

Mathematics for chemical engineers. Numerical solution of ordinary differential equations Mathematics for chemical engineers Drahoslava Janovská Numerical solution of ordinary differential equations Initial value problem Winter Semester 2015-2016 Outline 1 Introduction 2 One step methods Euler

More information

Chebyshev semi-iteration in Preconditioning

Chebyshev semi-iteration in Preconditioning Report no. 08/14 Chebyshev semi-iteration in Preconditioning Andrew J. Wathen Oxford University Computing Laboratory Tyrone Rees Oxford University Computing Laboratory Dedicated to Victor Pereyra on his

More information

Linear Algebra Review (Course Notes for Math 308H - Spring 2016)

Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Linear Algebra Review (Course Notes for Math 308H - Spring 2016) Dr. Michael S. Pilant February 12, 2016 1 Background: We begin with one of the most fundamental notions in R 2, distance. Letting (x 1,

More information

Gaussian interval quadrature rule for exponential weights

Gaussian interval quadrature rule for exponential weights Gaussian interval quadrature rule for exponential weights Aleksandar S. Cvetković, a, Gradimir V. Milovanović b a Department of Mathematics, Faculty of Mechanical Engineering, University of Belgrade, Kraljice

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information