Criteria for the trivial solution of differential algebraic equations with small nonlinearities to be asymptotically stable

Roswitha März, Humboldt-Universität Berlin

Abstract

Differential algebraic equations consisting of a constant coefficient linear part and a small nonlinearity are considered. Conditions that enable linearizations to work well are discussed. In particular, for index-2 differential algebraic equations there results a kind of Perron Theorem that sounds as clear as its classical model, except for the expensive proofs.

1 Introduction

This paper deals with the question whether the zero-solution of the equation

  $Ax'(t) + Bx(t) + h(x'(t), x(t), t) = 0$  (1.1)

is asymptotically stable in the sense of Lyapunov. Equation (1.1) consists of a linear part characterized by the constant matrix coefficients $A, B \in L(\mathbb{R}^m)$ and a small nonlinearity described by the function $h : D \times [0,\infty) \to \mathbb{R}^m$, $D \subseteq \mathbb{R}^m \times \mathbb{R}^m$ open, $0 \in D$, $h(0,0,t) = 0$, $t \in [0,\infty)$. The zero-function solves (1.1) trivially, i.e. the origin represents a stationary solution of (1.1).

The leading coefficient matrix $A$ is not necessarily nonsingular, but if $A$ is so, equation (1.1) represents a regular ordinary differential equation (ODE). For singular matrices $A$, there are differential-algebraic equations (DAEs) under consideration.

The matrix pencil $\{A,B\}$ is assumed to be regular, i.e. the polynomial $p(\lambda) := \det(\lambda A + B)$ does not vanish identically. By $\sigma\{A,B\}$ and $\mathrm{ind}\{A,B\}$ we denote the finite spectrum and the Kronecker index of the pencil $\{A,B\}$, respectively. Recall that $\sigma\{A,B\}$ is the set of the roots of $p(\lambda)$.

The given function $h$ is continuous together with its partial Jacobians $h'_x$, $h'_{x'}$. Moreover, $h$ is small in the following sense. To each $\varepsilon > 0$, there is a $\delta(\varepsilon) > 0$ such that $|x| \le \delta(\varepsilon)$, $|y| \le \delta(\varepsilon)$, $t \in [0,\infty)$ yield

  $|h'_x(y,x,t)| \le \varepsilon, \qquad |h'_{x'}(y,x,t)| \le \varepsilon.$  (1.2)
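The pencil notions just introduced, namely $p(\lambda)$, regularity and the finite spectrum $\sigma\{A,B\}$, can be checked numerically for a given pair $A$, $B$. The following minimal sketch is not part of the original text; it assumes NumPy and SciPy are available, and the $2\times 2$ matrices are arbitrary illustrative data. It samples $p(\lambda)$ as a cheap (non-rigorous) regularity test and obtains the finite generalized eigenvalues from $-Bv = \lambda Av$, i.e. $\det(\lambda A + B) = 0$.

```python
# Illustrative sketch (assumptions: NumPy/SciPy installed; A, B are placeholder data).
import numpy as np
from scipy.linalg import eigvals

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
B = np.array([[1.0, 0.0],
              [0.0, 1.0]])

# Regularity: p(lam) = det(lam*A + B) must not vanish identically.
# Sampling p at a few points is a cheap sufficient check, not a proof.
samples = [np.linalg.det(lam * A + B) for lam in (0.0, 1.0, 2.0, 3.0)]
print("pencil regular (sampled):", any(abs(s) > 1e-12 for s in samples))

# Finite spectrum sigma{A,B}: lam with -B v = lam A v, i.e. det(lam*A + B) = 0.
# Infinite generalized eigenvalues correspond to the singular part of the pencil.
mu = eigvals(-B, A)
print("finite spectrum:", mu[np.isfinite(mu)])
```

For the data shown, the finite spectrum is $\{-1\}$; the remaining generalized eigenvalue is infinite, reflecting the singularity of $A$.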

Clearly, (1.1) covers the well-understood case of regular explicit ODEs

  $x'(t) = Bx(t) + g(x(t), t)$  (1.3)

by $A = -I$, $h(y,x,t) \equiv g(x,t)$. The pencil $\{-I,B\}$ is always regular; further $\mathrm{ind}\{-I,B\} = 0$, $\sigma\{-I,B\} = \sigma(B)$. In this case Perron's Theorem (e.g. [1], [2]) applies immediately. Hence, if $\sigma\{A,B\}$ belongs to $\mathbb{C}^-$, then the trivial solution is asymptotically stable in the sense of Lyapunov.

Does this assertion hold true also in more general cases? If so, to what extent does it hold? Answers should be of great interest, since they constitute the background of further stability considerations via linearization and tracing back linear parts to the constant coefficient case.

Although the classical stability results formed by Poincaré, Perron and Lyapunov (e.g. [1], [2]) date back more than a hundred years, the respective theory for DAEs is rather in its infancy. For so-called transferable DAEs (1.1), in [3] the stability question is reduced to that for an inherent regular ODE relative to a certain invariant subspace. Unfortunately, this inherent state equation is not attainable in practice. On the other hand, criteria by linearization are expected to enable also practical determinations. For autonomous low-index DAEs, stability via linearization is considered e.g. in [4], [5], [6]. Unlike regular ODEs, nonautonomous DAEs involve nontrivial new difficulties in comparison with autonomous ones. A Lyapunov stability criterion for nonautonomous index-1 DAEs (1.1) is proved in [7]. However, even for autonomous DAEs (1.1) with $\mathrm{ind}\{A,B\} > 1$, this index may become an irrelevant detail of (1.1), that is, linearization does not work in those cases (e.g. [4] and Section 2 below).

If the matrix $A$ is nonsingular, then, applying the Implicit Function Theorem, we can transform (1.1) into

  $x'(t) = -A^{-1}Bx(t) + g(x(t), t).$  (1.4)

The pencil $\{A,B\}$ is regular, $\mathrm{ind}\{A,B\} = 0$, $\sigma\{A,B\} = \sigma(-A^{-1}B)$. Again by standard arguments, $\sigma\{A,B\} \subset \mathbb{C}^-$ yields the asymptotic stability of the trivial solution.

Now, let us turn to the more interesting case of $A$ being singular. To make sure that $Ax'(t)$ in (1.1) may be considered as a somewhat leading term in general, we assume the inclusion

  $N := \ker A \subseteq \ker h'_{x'}(y,x,t), \quad (y,x,t) \in D \times [0,\infty)$  (1.5)

to be satisfied. Note that (1.5) holds for trivial reasons if $h(y,x,t)$ does not depend at all on its first argument. Due to condition (1.5), only those components of $x'(t)$ occur in the nonlinear part of (1.1) that are already involved in the leading term $Ax'(t)$.

Next, denote by $P \in L(\mathbb{R}^m)$ any projector matrix along $N$, that is $P^2 = P$, $\ker P = N$. Then $Q := I - P$ projects onto the nullspace $N$, hence $A = A(P+Q) = AP$. It is easily checked that (1.5) implies the identity

  $h(y,x,t) \equiv h(Py,x,t)$  (1.6)

and vice versa. This suggests to reformulate equation (1.1) more precisely as

  $A(Px)'(t) + Bx(t) + h((Px)'(t), x(t), t) = 0.$  (1.7)

In the following we regard equation (1.1) as a shorter notation of (1.7). Naturally, we should now ask for solutions of (1.1) that belong to the class

  $C^1_N := \{x(\cdot) \in C : Px(\cdot) \in C^1\}.$

Only those components of the unknown function are expected to be from $C^1$ whose derivatives are really involved in (1.1). For the other components continuity will do. At this place it should be mentioned that both the class $C^1_N$ and the formulation (1.7) are invariant with respect to the special choice of the projector $P$. For nonsingular $A$, we have trivially $N = \ker A = \{0\}$, $P = I$, thus $C^1_N = C^1$. However, if $A$ is singular, $\mathrm{im}\,P \subset \mathbb{R}^m$ becomes a proper subspace, and $C^1_N$ is in fact a larger class than $C^1$.

Example: Consider the two-dimensional system

  $x_1'(t) + x_1(t) + \alpha(t)x_1(t)^2 = 0,$  (1.8)
  $x_2(t) + \beta(t)x_1(t)^2 + \gamma(t)x_2(t)^2 = 0,$  (1.9)

with scalar functions $\alpha(\cdot)$, $\beta(\cdot)$, $\gamma(\cdot)$ that are continuous and uniformly bounded on $[0,\infty)$. Obviously, all the above assumptions on $h$ are satisfied. In particular, (1.5) holds due to $h'_{x'} = 0$. Further, we have

  $A = \mathrm{diag}(1,0), \quad B = I, \quad \det(\lambda A + B) = \lambda + 1, \quad \mathrm{ind}\{A,B\} = 1,$

and $P = \mathrm{diag}(1,0)$ is a possible choice. The respective class $C^1_N$ consists of all continuous functions $x(\cdot) = (x_1(\cdot), x_2(\cdot))^T$ the first component of which is continuously differentiable.

System (1.8), (1.9) shows once more that looking for $C^1$ solutions instead of those from $C^1_N$ would necessitate more smoothness of the function $h$. However, in view of applications, we aim at lower smoothness conditions whenever possible.

It is evident that the regular ODE (1.8) for $x_1(\cdot)$ can be treated again by standard arguments. Its zero-solution is asymptotically stable. The constraint equation (1.9) determines the second component in dependence of the first one, and $x_1(t) \to 0$ $(t \to \infty)$ yields $x_2(t) \to 0$ $(t \to \infty)$. Obviously, to cover all neighbouring solutions of the trivial one in the complete system (1.8), (1.9) we should vary only the initial data of the first component. Observe that $\sigma\{A,B\} = \{-1\} \subset \mathbb{C}^-$ and that the trivial solution is asymptotically stable in this modified sense.

The example discussed above demonstrates an important peculiarity of DAEs. One has to deal with constraints like (1.9), but also with so-called hidden ones (cf. Section 3 below). Naturally, the initial values $x_0 := x(t_0)$ of solutions satisfy all relevant constraints, i.e., $x_0$ is consistent at $t_0$. However, how should initial value problems be stated? Formulations like

$x(t_0) = x_0$, where $x_0 \in \mathbb{R}^m$ is consistent at $t_0$, are nice but unfit for practical use. In general, one has no idea of how the constraints look. On the other hand, simply stating $x(t_0) = x_0 \in \mathbb{R}^m$ would yield unsolvable problems. In the following, we try to pick up and fix the free integration constants involved by means of a certain projector matrix $\Pi \in L(\mathbb{R}^m)$ that can be computed practically in terms of $A$, $B$. We state

  $\Pi x(t_0) = \Pi x_0, \quad x_0 \in \mathbb{R}^m,$  (1.10)

as initial condition. Note that, in case of nonsingular $A$, we obtain again $\Pi = I$, of course. In example (1.8), (1.9) the choice of $\Pi = P = \mathrm{diag}(1,0)$ is convenient. $\Pi$ depends on the pencil $\{A,B\}$ in general, and on its index in particular.

Definition: The zero-solution of (1.1) is stable in the sense of Lyapunov if there is a certain projector $\Pi \in L(\mathbb{R}^m)$ and, for each $t_0 \ge 0$,
(i) a value $\delta_0 > 0$ can be found such that the initial value problem (1.1), (1.10) with $|\Pi x_0| \le \delta_0$ has a $C^1_N$-solution $x(\cdot; x_0, t_0)$ defined at least on $[t_0,\infty)$, and further
(ii) a value $\varrho(\sigma) > 0$ to each $\sigma$, $0 < \sigma \le \delta_0$, can be found so that $|\Pi x_0| \le \varrho(\sigma)$ yields $|x(t; x_0, t_0)| \le \sigma$ for $t \ge t_0$.

The trivial solution of (1.1) is asymptotically stable in the sense of Lyapunov if it is stable and, for all sufficiently small $|\Pi x_0|$, it holds that

  $x(t; x_0, t_0) \longrightarrow 0 \quad (t \to \infty).$

No doubt, this is a straightforward generalization of the classical notion for regular ODEs, which is recovered by $\Pi = I$. As mentioned before, a respective stability result for (1.1) with an index-1 pencil $\{A,B\}$ is given in [7]. It says that $\sigma\{A,B\} \subset \mathbb{C}^-$ implies the trivial solution to be asymptotically stable, whereby $\Pi = P$ is chosen. In particular, this assertion applies to the special system (1.8), (1.9) and confirms the stability behaviour we discussed before.

It should be mentioned that, in the above Lyapunov stability notion, the projector matrix $\Pi \in L(\mathbb{R}^m)$ can be replaced by any matrix $C \in L(\mathbb{R}^m)$ with the only property $\ker C = \ker \Pi$. This fact can be realized easily by using the relations $C\Pi = C$, $\Pi = \Pi C^+C$, where $C^+ \in L(\mathbb{R}^m)$ indicates the Moore-Penrose inverse of $C$. Hence, in particular, Lyapunov stability does not depend on the special choice of the projector; the only relevant characteristic feature is its nullspace, but that is fully determined by the DAE itself.

One might expect that, in general, $\sigma\{A,B\} \subset \mathbb{C}^-$ yields the trivial solution to be asymptotically stable. In Section 2, this tentative, somewhat coarse conjecture is discussed by means of examples. After that, we derive the main result of the present paper (Theorem 3.3, Section 3), a stability criterion for the index-2 case. Section 4 contains the detailed proofs.
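To illustrate the role of $\Pi$ in the initial condition (1.10), here is a small numerical sketch, not part of the original text, for the example (1.8), (1.9). It assumes SciPy, and the concrete coefficient functions $\alpha$, $\beta$, $\gamma$ are arbitrary bounded choices for illustration only. Only $\Pi x_0 = (x_{1,0}, 0)^T$ is prescribed; $x_2$ is recovered pointwise from the constraint (1.9) by a scalar root find, and both components decay.

```python
# Illustrative sketch (assumptions: SciPy available; alpha, beta, gamma are
# arbitrary bounded choices, not taken from the paper).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

alpha = lambda t: np.sin(t)            # bounded coefficient functions (illustrative)
beta  = lambda t: np.cos(t)
gamma = lambda t: 0.5 / (1.0 + t)

def x1_rhs(t, x1):                     # (1.8): x1' = -x1 - alpha(t) x1^2
    return -x1 - alpha(t) * x1**2

def x2_from_constraint(t, x1):         # (1.9): x2 + beta(t) x1^2 + gamma(t) x2^2 = 0
    g = lambda x2: x2 + beta(t) * x1**2 + gamma(t) * x2**2
    return brentq(g, -1.0, 1.0)        # the small solution branch near the origin

sol = solve_ivp(x1_rhs, (0.0, 20.0), [0.1], dense_output=True)
for t in (0.0, 5.0, 10.0, 20.0):
    x1 = sol.sol(t)[0]
    print(f"t={t:5.1f}  x1={x1: .3e}  x2={x2_from_constraint(t, x1): .3e}")
```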

2 A tentative conjecture and counterexamples

The good experience with regular ODEs and index-1 DAEs of the form (1.1), which correspond to matrix pencils $\{A,B\}$ of index zero or one, gives rise to the tentative conjecture that the origin is an asymptotically stable stationary point if $\sigma\{A,B\} \subset \mathbb{C}^-$. We know this to be true for $\mathrm{ind}\{A,B\} \le 1$. Thereby, we have $\ker \Pi = \ker A$.

If the nonlinearity in (1.1) disappears, i.e., $h(y,x,t) \equiv 0$, the above conjecture also holds true. The projector $\Pi$ projects along the infinite eigenspace of the matrix pencil $\{A,B\}$. As shown in [8], if $\mathrm{ind}\{A,B\} = k$, the projector $\Pi$ can be constructed by a special matrix chain as $\Pi = P_0 P_1 \cdots P_{k-1}$, where $A_0 := A$, $B_0 := B$, $A_{i+1} := A_i + B_i(I - P_i)$, $P_i \in L(\mathbb{R}^m)$ projects along $\ker A_i$, $B_{i+1} := B_i P_i$, $i \ge 0$. Then, the equation $Ax'(t) + Bx(t) = 0$ can be reduced to

  $\Pi x'(t) + \Pi A_k^{-1} B x(t) = 0, \qquad (I - \Pi)x(t) = 0,$

while $\sigma\{A,B\}$ consists of exactly those eigenvalues of $-\Pi A_k^{-1}B$ whose associated eigenspaces belong to $\mathrm{im}\,\Pi$.

Unfortunately, our conjecture is wrong if there are nonlinearities in (1.1), even in the case of $\mathrm{ind}\{A,B\} = 2$.

Example 1 Given the autonomous system

  $x_1' - x_2 = 0,$
  $x_1 - x_2 x_3 = 0,$
  $x_3' - \alpha x_3 = 0,$
  $x_4 - x_2 + x_3 = 0,$  (2.1)

which can be rewritten in the compact form (1.1) by

  $A = \begin{pmatrix}1&0&0&0\\ 0&0&0&0\\ 0&0&1&0\\ 0&0&0&0\end{pmatrix},\quad B = \begin{pmatrix}0&-1&0&0\\ 1&0&0&0\\ 0&0&-\alpha&0\\ 0&-1&1&1\end{pmatrix},\quad h(y,x,t) = \begin{pmatrix}0\\ -x_2x_3\\ 0\\ 0\end{pmatrix}.$

$\alpha \in \mathbb{R}$ is a parameter. This special function $h$ satisfies all conditions we agreed upon in Section 1. Choosing $P = \mathrm{diag}(1,0,1,0)$ we consider

  $A_1 := A + B(I - P) = \begin{pmatrix}1&-1&0&0\\ 0&0&0&0\\ 0&0&1&0\\ 0&-1&0&1\end{pmatrix}.$
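Before the analytic verification of Example 1, which continues below, here is a small numerical sketch of the matrix chain described above, applied to the matrices $A$, $B$ just given. It is illustrative only and not part of the original text: NumPy/SciPy are assumed, $\alpha = -1$ is an arbitrary choice, and $P_i := A_i^+ A_i$ is used as one admissible projector along $\ker A_i$; the resulting index does not depend on this choice.

```python
# Sketch: matrix chain A_0 = A, B_0 = B, A_{i+1} = A_i + B_i(I - P_i),
# B_{i+1} = B_i P_i, with P_i = pinv(A_i) A_i as a projector along ker A_i.
# The Kronecker index is the first k with A_k nonsingular.
import numpy as np
from scipy.linalg import eigvals

alpha = -1.0                                      # illustrative parameter value
A = np.diag([1.0, 0.0, 1.0, 0.0])
B = np.array([[0.0, -1.0,  0.0, 0.0],
              [1.0,  0.0,  0.0, 0.0],
              [0.0,  0.0, -alpha, 0.0],
              [0.0, -1.0,  1.0, 1.0]])

def kronecker_index(A, B, kmax=10, tol=1e-10):
    Ai, Bi = A.copy(), B.copy()
    for k in range(kmax):
        if np.linalg.matrix_rank(Ai, tol=tol) == Ai.shape[0]:
            return k                              # A_k nonsingular: index k
        Pi = np.linalg.pinv(Ai) @ Ai              # projector along ker A_i
        Ai, Bi = Ai + Bi @ (np.eye(len(A)) - Pi), Bi @ Pi
    raise ValueError("no nonsingular A_k found; pencil may be singular")

print("ind{A,B} =", kronecker_index(A, B))        # expected: 2
mu = eigvals(-B, A)
print("finite spectrum:", mu[np.isfinite(mu)])    # expected: [alpha]
```

For these data the chain reports $\mathrm{ind}\{A,B\} = 2$ and the finite spectrum $\{\alpha\}$, in line with the discussion that follows.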

Since $A_1$ is singular, we know that $\mathrm{ind}\{A,B\} > 1$. Further, we realize that

  $\{z \in \mathbb{R}^4 : z \in \ker A_1,\ BPz \in \mathrm{im}\,A_1\} = \{0\}.$

Consequently (cf. [9]), the matrix pencil $\{A,B\}$ has index 2. Furthermore, $p(\lambda) = \det(\lambda A + B) = \lambda - \alpha$, thus $\sigma\{A,B\} = \{\alpha\}$. For $\alpha < 0$, our conjecture promises asymptotic stability for the origin. However, taking a look at the flow picture in the $(x_1,x_2)$-plane we realize immediately that the solutions move away from the origin. Hence, our conjecture is wrong.

[Figure: flow of (2.1) in the $(x_1,x_2)$-plane; the solutions move away from the origin.]

As we shall see below, the problem with Example 1 is that linearization does not work in this case. The DAE (2.1) does not represent an index-2 DAE although we have $\mathrm{ind}\{A,B\} = 2$. System (2.1) is rather a singular index-1 DAE having a singularity at $x_3 = 0$. In Section 3 below we shall formulate convenient structural conditions that enable linearization and exclude this kind of singularities.

Our next example makes clear that, even if linearization works and $\mathrm{ind}\{A,B\} = 2$, additional smoothness and boundedness conditions for $h$ have to be satisfied.

Example 2 Given the DAE

  $x_2' + x_1 = 0,$
  $x_2 + q(t)x_3^2 = 0,$
  $x_3' - \alpha x_3 = 0,$

which can be described in terms of (1.1) as

  $A = \begin{pmatrix}0&1&0\\ 0&0&0\\ 0&0&1\end{pmatrix},\quad B = \begin{pmatrix}1&0&0\\ 0&1&0\\ 0&0&-\alpha\end{pmatrix},\quad h(y,x,t) = \begin{pmatrix}0\\ q(t)x_3^2\\ 0\end{pmatrix}.$  (2.2)

In (2.2), $\alpha < 0$ indicates a parameter and $q(t)$ is a continuous, uniformly bounded, scalar function. Again, all conditions for $h$ given in Section 1 are fulfilled. Moreover, we derive that $\mathrm{ind}\{A,B\} = 2$, $\sigma\{A,B\} = \{\alpha\}$. On the other hand, the last two equations of (2.2) yield

  $x_3(t) = e^{\alpha t}x_3(0), \qquad x_2(t) = -q(t)e^{2\alpha t}x_3(0)^2.$

Considering the first equation from (2.2), i.e. $x_1 = -x_2'$, we learn that continuity of the function $q(\cdot)$ is not adequate for this problem. To obtain just a continuous solution component $x_1$ we have to demand that $q(\cdot)$ is $C^1$. Then we derive

  $x_1(t) = (q'(t) + 2\alpha q(t))e^{2\alpha t}x_3(0)^2.$  (2.3)

An appropriate projector to state the initial condition (1.10) is $\Pi = \mathrm{diag}(0,0,1)$. Obviously, $\Pi = P = \mathrm{diag}(0,1,1)$ would not do in this case. As far as the asymptotic behaviour for $t \to \infty$ is concerned, there are no problems with the solution components $x_2(\cdot)$, $x_3(\cdot)$. However, to make sure that $\sigma\{A,B\} \subset \mathbb{C}^-$ yields also $x_1(t) \to 0$ $(t \to \infty)$, we have to demand the uniform boundedness of $q'(t)$ additionally.

In terms of the function $h$, our additional assumptions mean that $h$ has continuous partial derivatives $h'_t$, $h''_{tx}$, too. Further, the relation

  $h'_t(0,0,t) = 0, \quad t \in [0,\infty),$  (2.4)

as well as the inequality

  $|h''_{tx}(0,x,t)| \le c|x|$ for small $|x|$

with a certain constant $c$ are given. Considering this additional regularity and smallness of the function $h$, now $\sigma\{A,B\} = \{\alpha\} \subset \mathbb{C}^-$ implies the zero-solution to be asymptotically stable. This simple fact will be confirmed once more by Theorem 3.3 below.

We know the above conjecture to be somewhat coarse. To improve it, one has to add
- structural conditions that guarantee linearization to work, but also
- more regularity and smallness conditions for the nonlinearity $h$.

3 A positive result for the case $\mathrm{ind}\{A,B\} = 2$

In this part we study equation (1.1) with an index-2 matrix pencil $\{A,B\}$. For that case, we shall verify our improved conjecture (cf. Section 2) and give precise formulations of all additional assumptions needed. For more clarity, we recall the standard assumptions used in Section 1 and indicate them as (A).

Assumption (A):
(i) $h : D \times [0,\infty) \to \mathbb{R}^m$, $D \subseteq \mathbb{R}^m \times \mathbb{R}^m$ open, $0 \in D$; $h(0,0,t) = 0$ for $t \in [0,\infty)$; $h$ is continuous together with its partial Jacobians $h'_x$, $h'_{x'}$.

(ii) To each small $\varepsilon > 0$, a $\delta(\varepsilon) > 0$ can be found such that $|x| \le \delta(\varepsilon)$, $|x'| \le \delta(\varepsilon)$ yield

  $|h'_x(x',x,t)| \le \varepsilon, \qquad |h'_{x'}(x',x,t)| \le \varepsilon$

uniformly for all $t \in [0,\infty)$.
(iii) $\{A,B\}$ is a regular matrix pencil.
(iv) $N := \ker A \subseteq \ker h'_{x'}(x',x,t)$ for all $(x',x,t) \in D \times [0,\infty)$.

Further, let us formulate certain additional smoothness and smallness conditions as suggested by Example 2 in Section 2.

Assumption (B):
(i) The function $\tilde h$ defined by $\tilde h(x,t) := (I - AA^+)h(0,x,t)$ has also continuous partial derivatives $\tilde h'_t$, $\tilde h''_{tx}$, $\tilde h''_{xx}$.
(ii) $\tilde h'_t(0,t) = 0$ for all $t \in [0,\infty)$.
(iii) A constant $\beta$ can be found so that, with $\delta(\varepsilon)$ from (A), $|x| \le \delta(\varepsilon)$ yields $|\tilde h''_{xt}(x,t)| \le \beta\varepsilon$ for all $t \in [0,\infty)$.
(iv) $\tilde h''_{xx}(x,t)$ is uniformly bounded by a constant.

For the special DAE (2.2) it holds that $AA^+ = \mathrm{diag}(1,0,1)$, $\tilde h(x,t) = (0,\, q(t)x_3^2,\, 0)^T$. If the function $q(\cdot)$ and its derivative $q'(t)$ are uniformly bounded as discussed in Example 2 (Section 2), then this special $\tilde h$ fulfils (B).

The following matrices and subspaces will be used below (cf. [9]):

  $N := \ker A, \quad S := \{z \in \mathbb{R}^m : Bz \in \mathrm{im}\,A\},$
  $Q \in L(\mathbb{R}^m), \ Q^2 = Q, \ \mathrm{im}\,Q = N, \quad P := I - Q,$
  $A_1 := A + BQ, \quad N_1 := \ker A_1, \quad S_1 := \{z \in \mathbb{R}^m : BPz \in \mathrm{im}\,A_1\},$
  $Q_1 \in L(\mathbb{R}^m), \ Q_1^2 = Q_1, \ \mathrm{im}\,Q_1 = N_1, \quad P_1 := I - Q_1,$
  $A_2 := A_1 + BPQ_1,$
  $V \in L(\mathbb{R}^m), \ V^2 = V, \ \mathrm{im}\,V = N \cap S, \quad U := I - V.$

It is well known that the pencil $\{A,B\}$ has index 2 if and only if $A_2$ is nonsingular but $A$, $A_1$ are singular (e.g. [9]). Moreover, if $\{A,B\}$ has index 2, we can use the decomposition $\mathbb{R}^m = N_1 \oplus S_1$. Hence, a convenient choice of the projector $Q_1$ is given by this decomposition. In the following we agree to have, more precisely, $\mathrm{im}\,Q_1 = N_1$, $\ker Q_1 = S_1$, and consequently (e.g. [3], [9]),

  $Q_1 = Q_1A_2^{-1}BP = Q_1A_2^{-1}B, \qquad Q_1Q = 0.$  (3.1)
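The subspaces and projectors just listed can be computed mechanically for a concrete pencil. The sketch below is an illustration, not part of the original text; NumPy/SciPy are assumed and $\alpha = -1$ is an arbitrary value. It builds $Q$, $P$, $A_1$, $N_1$, $S_1$, the canonical $Q_1$ with $\mathrm{im}\,Q_1 = N_1$, $\ker Q_1 = S_1$, and $A_2$ for the index-2 pencil of Example 2, then checks (3.1) numerically and prints $\Pi = PP_1 = \mathrm{diag}(0,0,1)$, the projector used for Example 2 in Section 2.

```python
# Sketch (illustrative; NumPy/SciPy assumed; alpha = -1 arbitrary).
import numpy as np
from scipy.linalg import null_space

alpha = -1.0
A = np.array([[0.0, 1.0, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
B = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, -alpha]])
I = np.eye(3)

def proj_onto_along(R, K):
    """Projector with image span(R) and kernel span(K), columns as bases."""
    T = np.hstack([R, K])
    D = np.diag([1.0] * R.shape[1] + [0.0] * K.shape[1])
    return T @ D @ np.linalg.inv(T)

N  = null_space(A)                                        # ker A
Q  = proj_onto_along(N, null_space(N.T))                  # orthogonal projector onto N
P  = I - Q
A1 = A + B @ Q
N1 = null_space(A1)                                       # ker A_1
S1 = null_space((I - A1 @ np.linalg.pinv(A1)) @ B @ P)    # {z : B P z in im A_1}
Q1 = proj_onto_along(N1, S1)                              # canonical Q_1
P1 = I - Q1
A2 = A1 + B @ P @ Q1

print("A_2 nonsingular (index 2):", np.linalg.matrix_rank(A2) == 3)
print("(3.1) Q1 = Q1 A2^{-1} B  :", np.allclose(Q1, Q1 @ np.linalg.inv(A2) @ B))
print("(3.1) Q1 Q = 0           :", np.allclose(Q1 @ Q, 0))
print("Pi = P P1 =\n", np.round(P @ P1, 12))
```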

The relations (3.1) lead to

  $(PP_1)^2 = PP_1, \quad (PQ_1)^2 = PQ_1, \quad A_2^{-1}A = P_1P, \quad A_2^{-1}B = A_2^{-1}BPP_1 + Q_1 + Q.$

Scaling equation (1.1) by $A_2^{-1}$ yields

  $(PP_1x)'(t) - QQ_1(PQ_1x)'(t) + A_2^{-1}BPP_1x(t) + Q_1x(t) + Qx(t) + A_2^{-1}h((PP_1x)'(t) + (PQ_1x)'(t), x(t), t) = 0.$  (3.2)

We decompose

  $x = Px + Qx = PP_1x + PQ_1x + Qx =: u + v + w$

and decouple (3.2) into the system

  $u'(t) + PP_1A_2^{-1}Bu(t) + PP_1A_2^{-1}h(u'(t)+v'(t), u(t)+v(t)+w(t), t) = 0,$  (3.3)
  $v(t) + PQ_1A_2^{-1}h(u'(t)+v'(t), u(t)+v(t)+w(t), t) = 0,$  (3.4)
  $-QQ_1v'(t) + w(t) + QP_1A_2^{-1}Bu(t) + QP_1A_2^{-1}h(u'(t)+v'(t), u(t)+v(t)+w(t), t) = 0.$  (3.5)

Because of $\mathrm{im}\,QQ_1 = N \cap S$, the latter equation (3.5) splits up further into

  $Uw(t) + UQA_2^{-1}Bu(t) + UQA_2^{-1}h(u'(t)+v'(t), u(t)+v(t)+w(t), t) = 0,$  (3.6)
  $-QQ_1v'(t) + Vw(t) + VQP_1A_2^{-1}Bu(t) + VQP_1A_2^{-1}h(u'(t)+v'(t), u(t)+v(t)+w(t), t) = 0.$  (3.7)

If the nonlinearity $h$ disappears, equation (3.3) simplifies to a regular explicit linear ODE for $u(\cdot)$ that has the invariant subspace $\mathrm{im}\,PP_1$. (3.4) realizes $v(t) = 0$, hence it results that $x(t) = u(t) + w(t) = (I - QP_1A_2^{-1}B)u(t) = (I - QP_1A_2^{-1}BPP_1)PP_1u(t)$. Note that

  $\Pi_{\mathrm{can}} := (I - QP_1A_2^{-1}BPP_1)PP_1$

is also a projector. It holds that $\ker \Pi_{\mathrm{can}} = \ker PP_1 = N \oplus N_1$. Further, $\mathrm{im}\,\Pi_{\mathrm{can}}$ represents the finite eigenspace of the matrix pencil (cf. [8]), while the vectors $Uw$ correspond to that part of the infinite eigenspace that has simple structure. The vectors $Vw$ and $v$ form the respective part for Jordan blocks of order 2.

In [10] we find the relation

  $\mathrm{im}\,A = \ker((PQ_1 + UQ)A_2^{-1}),$  (3.8)

which will become very helpful to realize the appropriate structural conditions below.

Lemma 3.1 Given (A), $\mathrm{ind}\{A,B\} = 2$. Additionally, let

  $\mathrm{im}\,h'_{x'}(x',x,t) \subseteq \mathrm{im}\,A$ for $(x',x,t) \in D \times [0,\infty).$  (3.9)

Then the identity

  $(PQ_1 + UQ)A_2^{-1}h(y,x,t) \equiv (PQ_1 + UQ)A_2^{-1}(I - AA^+)h(0,x,t)$  (3.10)

is valid.

Proof: Due to (3.9) we have

  $(PQ_1 + UQ)A_2^{-1}(h(y,x,t) - h(0,x,t)) = \int_0^1 (PQ_1 + UQ)A_2^{-1}h'_{x'}(sy,x,t)\,y\,ds = 0.$

On the other hand, (3.8) implies

  $(PQ_1 + UQ)A_2^{-1} = (PQ_1 + UQ)A_2^{-1}(I - AA^+).$

Note that condition (3.9) further specifies the possible structure of (1.1). By Lemma 3.1, the equations (3.4) and (3.6) are much simpler now, namely

  $v(t) + PQ_1A_2^{-1}\tilde h(u(t)+v(t)+w(t), t) = 0,$
  $Uw(t) + UQA_2^{-1}Bu(t) + UQA_2^{-1}\tilde h(u(t)+v(t)+w(t), t) = 0.$

We put them together compactly to

  $y(t) + UQA_2^{-1}BPP_1z(t) + (PQ_1 + UQ)A_2^{-1}\tilde h(y(t) + (PP_1 + VQ)z(t), t) = 0,$  (3.11)

where

  $y := v + Uw, \qquad z := u + Vw.$  (3.12)

Clearly, if $x(\cdot)$ satisfies the original DAE (1.1), then (3.11) is satisfied by $y(\cdot) = PQ_1x(\cdot) + UQx(\cdot)$ and $z(\cdot) = PP_1x(\cdot) + VQx(\cdot)$. Equation (3.11) suggests to realize $y$ as a function of $z$ and $t$ by applying the Implicit Function Theorem.

Lemma 3.2 Let (A), (B) as well as (3.9) be given, $\mathrm{ind}\{A,B\} = 2$. Then, for sufficiently small $\varepsilon > 0$ and the corresponding $\delta(\varepsilon) > 0$ from (A), there is a uniquely determined function $f : D(\varepsilon) \times [0,\infty) \to \mathbb{R}^m$,

  $D(\varepsilon) := \{z \in \mathbb{R}^m : (1 + |UQA_2^{-1}BP|)\,|(PP_1 + VQ)z| \le \tfrac14\delta(\varepsilon)\},$

satisfying

(i) $f(z,t) + (PQ_1 + UQ)A_2^{-1}\tilde h(f(z,t) + (PP_1 + VQ)z, t) + UQA_2^{-1}BPP_1z = 0$, $z \in D(\varepsilon)$, $t \in [0,\infty)$.

(ii) $f$ is continuous and has continuous partial derivatives $f'_z$, $f''_{zz}$, $f'_t$, $f''_{zt}$.

(iii) It holds that

  $f(0,t) = 0, \quad f'_t(0,t) = 0, \quad f'_z(0,t) = -UQA_2^{-1}BPP_1,$
  $(PQ_1 + UQ)f(z,t) = f(z,t) = f((PP_1 + VQ)z, t), \quad z \in D(\varepsilon), \ t \in [0,\infty).$

For the proof we refer to Section 4 below. Lemma 3.2 enables us to rewrite (3.11) locally equivalently as

  $y(t) = f(u(t) + Vw(t), t)$

and, in more detail, as

  $v(t) = Py(t) = Pf(u(t) + Vw(t), t),$  (3.13)
  $Uw(t) = Qy(t) = Qf(u(t) + Vw(t), t).$

Considering once more equation (3.5) we know that $v'(t)$ is needed. Our concept of $C^1_N$-solvability means, in terms of the decomposition used, that $u(\cdot) = PP_1x(\cdot)$, $v(\cdot) = PQ_1x(\cdot)$ are from $C^1$ and $w(\cdot) = Qx(\cdot)$ is just continuous. The idea is now to further specify the structure of (1.1) by supposing that

  $Pf(z,t) = Pf(PP_1z, t), \quad z \in D(\varepsilon), \ t \in [0,\infty),$  (3.14)

or, equivalently, $Pf'_z(z,t)VQ \equiv 0$. In other words, $f'_z(z,t)$ is forced to map $N \oplus N_1$ into $N$. By this we meet the natural smoothness of the solution, and we are then allowed to differentiate equation (3.13) with respect to $t$. We derive

  $v'(t) = Pf'_z(u(t),t)u'(t) + Pf'_t(u(t),t).$  (3.15)

Now, expressions for $v(t)$, $v'(t)$, $Uw(t)$ in terms of $u(t)$, $u'(t)$, $Vw(t)$ are available. Inserting them into the equations (3.3) and (3.7), there results a system

  $u'(t) = -PP_1A_2^{-1}Bu(t) + \varphi(u'(t), u(t), Vw(t), t),$
  $Vw(t) = \psi(u'(t), u(t), Vw(t), t),$

that can be transformed locally into a system that reads

  $u'(t) = -PP_1A_2^{-1}Bu(t) + g(u(t), t),$  (3.16)
  $Vw(t) = k(u(t), t).$  (3.17)

Together with

  $v(t) + Uw(t) = f(u(t) + Vw(t), t),$  (3.18)

provided by Lemma 3.2, we arrive at a local decoupling of (1.1). If $x(\cdot; x_0, t_0)$ solves the initial value problem for (1.1) with the initial condition

  $PP_1x(t_0) = PP_1x_0, \quad x_0 \in \mathbb{R}^m,$  (3.19)

with sufficiently small $|PP_1x_0|$, then $u(\cdot) = PP_1x(\cdot; x_0, t_0)$ satisfies the regular ODE (3.16), but also $u(t_0) = PP_1x_0$. Moreover, the components $v(\cdot) = PQ_1x(\cdot; x_0, t_0)$ and $w(\cdot) = Qx(\cdot; x_0, t_0)$ satisfy (3.17), (3.18). The resulting idea is to use such decouplings to construct all neighbouring solutions of the zero-solution.

Let us recall once more the structural conditions used for (1.1) and denote them by (C).

Assumption (C):
(i) $\mathrm{im}\,h'_{x'}(x',x,t) \subseteq \mathrm{im}\,A$, $(x',x,t) \in D \times [0,\infty)$.
(ii) $f'_z(z,t)$ maps $N \oplus N_1$ into $N$ for $z \in D(\varepsilon)$, $t \in [0,\infty)$.

Now we are ready to formulate our main result, which sounds as clear as its classical model.

Theorem 3.3 Given the Assumptions (A), (B), (C) and $\mathrm{ind}\{A,B\} = 2$, $\sigma\{A,B\} \subset \mathbb{C}^-$. Then the trivial solution of (1.1) is asymptotically stable in the sense of Lyapunov with $\Pi := PP_1$.

The proof will be carried out in Section 4.

Remarks:
1) Theorem 3.3 generalizes the results for autonomous DAEs in [6]. This is not as trivial as one might think when coming from the regular ODE case.
2) The nullspace $\ker(PP_1) = N \oplus N_1$ is nothing else but the infinite eigenspace of the pencil $\{A,B\}$. Instead of $PP_1$ for stating the initial conditions we can use any matrix $C$ that has the kernel $N \oplus N_1$.
3) Concerning condition (C), it is somewhat difficult to check its second part in practice. The function $y = f(z,t)$ is only implicitly given by the equation

  $y + UQA_2^{-1}BPP_1z + (PQ_1 + UQ)A_2^{-1}\tilde h(y + (PP_1 + VQ)z, t) = 0.$  (3.20)

We close this section by providing sufficient criteria for (C)(ii) to be valid, which are given in terms of the original data of (1.1). For different index-2 DAEs such criteria are proposed in [4] and [10], respectively.

Lemma 3.4 Let (A), (B) as well as (C)(i) be given, $\mathrm{ind}\{A,B\} = 2$. Then each of the following four conditions implies condition (C)(ii) to be satisfied:
(i) $\tilde h(x,t) = \tilde h(Px,t)$ for $(0,x,t) \in D \times [0,\infty)$.
(ii) $\tilde h(x,t) - \tilde h((PP_1 + PQ_1 + UQ)x, t) \in \mathrm{im}\,A$ for $(0,x,t) \in D \times [0,\infty)$.
(iii) $\tilde h(x,t) - \tilde h(Px,t) \in \mathrm{im}\,A_1$ for $(0,x,t) \in D \times [0,\infty)$.
(iv) $\{z \in \mathbb{R}^m : Bz + h'_x(y,x,t)z \in \mathrm{im}\,A\} \cap N = \{z \in \mathbb{R}^m : Bz \in \mathrm{im}\,A\} \cap N$ for $(y,x,t) \in D \times [0,\infty)$.

The proof is given in Section 4. Although criterion (iv), related to certain subspaces, looks somewhat strange, it seems to be very useful in practice, e.g. in circuit simulation ([11]).

Systems in Hessenberg form are often of special interest, that is

  $x_1' + B_{11}x_1 + B_{12}x_2 + g_1(x_1, x_2, t) = 0,$  (3.21)
  $B_{21}x_1 + g_2(x_1, t) = 0.$  (3.22)

System (3.21), (3.22) is a Hessenberg form equation of size 2 if $B_{21}B_{12}$ is assumed to be nonsingular. With $A = \mathrm{diag}(I, 0)$, $AA^+ = \mathrm{diag}(I, 0)$ we find

  $\tilde h(x,t) = \begin{pmatrix}0\\ g_2(x_1,t)\end{pmatrix},$

further $N = \{(z_1, z_2)^T \in \mathbb{R}^m : z_1 = 0\}$ and thus $\tilde h(x,t) = \tilde h(Px,t)$. Applying Lemma 3.4(i) we conclude the following assertion.

Corollary 3.5 Condition (C) is valid in case of Hessenberg form equations (1.1) of size 2.

4 Proofs

Proof of Lemma 3.2: Let the conditions (A) and (B) be satisfied, further $\mathrm{ind}\{A,B\} = 2$. The structural condition (C)(i) (which is the same as relation (3.9)) is also assumed to be valid. Since $A_2^{-1}$ is a constant nonsingular matrix, the smallness conditions (A)(ii), (B)(iii) apply also to $A_2^{-1}h$, $A_2^{-1}\tilde h$. Hence, there is a $\delta(\varepsilon) > 0$ to each small $\varepsilon > 0$ such that $|x| \le \delta(\varepsilon)$, $|y| \le \delta(\varepsilon)$, $t \in [0,\infty)$ yield

  $|A_2^{-1}h'_x(y,x,t)| \le \varepsilon, \quad |A_2^{-1}h'_{x'}(y,x,t)| \le \varepsilon, \quad |A_2^{-1}\tilde h''_{xt}(x,t)| \le \varepsilon.$  (4.1)

Because of

  $A_2^{-1}h(y,x,t) = A_2^{-1}\{h(y,x,t) - h(0,0,t)\} = \int_0^1 \{A_2^{-1}h'_{x'}(sy,sx,t)y + A_2^{-1}h'_x(sy,sx,t)x\}\,ds$

and

  $A_2^{-1}\tilde h'_t(x,t) = A_2^{-1}\{\tilde h'_t(x,t) - \tilde h'_t(0,t)\} = \int_0^1 A_2^{-1}\tilde h''_{xt}(sx,t)x\,ds,$

from (4.1) it follows immediately that

  $|A_2^{-1}h(y,x,t)| \le \varepsilon(|y| + |x|)$  (4.2)

and

  $|A_2^{-1}\tilde h'_t(x,t)| \le \varepsilon|x|$  (4.3)

hold true for $|x| \le \delta(\varepsilon)$, $|y| \le \delta(\varepsilon)$, $t \in [0,\infty)$.

Consider the function (cf. (3.11))

  $F(y,z,t) := -(PQ_1 + UQ)A_2^{-1}\tilde h(y + (PP_1 + VQ)z, t) - UQA_2^{-1}BPP_1z$
  $\phantom{F(y,z,t) :} = -(PQ_1 + UQ)A_2^{-1}h(0, y + (PP_1 + VQ)z, t) - UQA_2^{-1}BP(PP_1 + VQ)z$

mapping from $\mathbb{R}^m \times \mathbb{R}^m \times \mathbb{R}$ into $\mathbb{R}^m$. Denote $\beta_1 := |PQ_1 + UQ|$ and choose $\varepsilon$ small enough to realize $\beta_1\varepsilon \le \tfrac12$. Then, $F$ is well defined on $B(\varepsilon) \times D^0(\varepsilon) \times [0,\infty)$, where

  $B(\varepsilon) := \{y \in \mathbb{R}^m : |y| \le \tfrac12\delta(\varepsilon)\}, \qquad D^0(\varepsilon) := \{z \in \mathbb{R}^m : |PP_1z + VQz| \le \tfrac12\delta(\varepsilon)\}.$

More precisely, for all $y, \bar y \in B(\varepsilon)$, $z \in D^0(\varepsilon)$, $t \in [0,\infty)$, we have

  $F(0,0,t) = 0,$
  $|F(y,z,t) - F(\bar y,z,t)| \le \beta_1\varepsilon|y - \bar y|,$
  $|F(y,z,t)| \le \beta_1\varepsilon|y + (PP_1 + VQ)z| + |UQA_2^{-1}BP|\,|PP_1z + VQz| \le \tfrac12|y| + (1 + |UQA_2^{-1}BP|)|PP_1z + VQz|.$

Form the set $D(\varepsilon) \subseteq D^0(\varepsilon)$,

  $D(\varepsilon) := \{z \in \mathbb{R}^m : (1 + |UQA_2^{-1}BP|)|PP_1z + VQz| \le \tfrac14\delta(\varepsilon)\},$

such that, for each fixed $z \in D(\varepsilon)$, $t \in [0,\infty)$, $F(\cdot,z,t)$ maps the closed ball $B(\varepsilon)$ into itself. Since $F(\cdot,z,t)$ is contractive with $\varepsilon\beta_1 < 1$, due to Banach's Fixed Point Theorem there is a uniquely determined function

  $f : D(\varepsilon) \times [0,\infty) \longrightarrow B(\varepsilon) \subseteq \mathbb{R}^m$

such that, for all $z \in D(\varepsilon)$, $t \in [0,\infty)$,

  $f(z,t) \equiv F(f(z,t), z, t),$  (4.4)
  $f(0,t) = 0, \quad |f(z,t)| \le \tfrac12\delta(\varepsilon), \quad (PQ_1 + UQ)f(z,t) = f(z,t) = f((PP_1 + VQ)z, t)$

hold true. By the Implicit Function Theorem, the smoothness of $F$ is passed on to the function

$f$. Since $F$ is continuous together with its partial derivatives $F'_y$, $F'_z$, $F'_t$, $F''_{yy}$, $F''_{yz}$, $F''_{yt}$, $F''_{zz}$, $F''_{zt}$ (cf. (A), (B)), the implicitly given function $f$ is also continuous and possesses continuous partial derivatives $f'_z$, $f'_t$, $f''_{zz}$, $f''_{zt}$. Further, with

  $F'_y(y,z,t) = -(PQ_1 + UQ)A_2^{-1}\tilde h'_x(y + (PP_1 + VQ)z, t)$

we have

  $F'_y(0,0,t) = 0, \qquad |F'_y(y,z,t)| \le \beta_1\varepsilon,$

hence the matrix $I - F'_y(y,z,t)$ remains nonsingular uniformly for $y \in B(\varepsilon)$, $z \in D(\varepsilon)$, $t \in [0,\infty)$. The relations

  $f'_z(0,t) = -UQA_2^{-1}BPP_1, \qquad f'_t(0,t) = 0, \quad t \in [0,\infty),$

are obtained immediately by differentiating (4.4) and considering

  $F'_z(0,0,t) = -UQA_2^{-1}BPP_1, \qquad F'_t(0,0,t) = 0.$

Next we derive some further properties of the function $f$ to be used in the proof of Theorem 3.3 below.

Corollary 4.1 With $\beta_3 := |P|\,|PP_1 + VQ - UQA_2^{-1}BPP_1|$, for all $z \in D(\varepsilon)$, $t \in [0,\infty)$, it holds that

  $|f'_t(z,t)| \le \dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,\delta(\varepsilon),$  (4.5)
  $|Pf'_z(z,t)| \le \dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,\beta_3,$  (4.6)
  $f''_{zt}(0,t) = 0.$  (4.7)

Moreover, there are constants $\beta_4$, $\beta_5$ such that

  $|f''_{zz}(z,t)| \le \beta_4$ and $|f''_{zt}(z,t)| \le \varepsilon\beta_5$ for $z \in D(\varepsilon)$, $t \in [0,\infty)$.  (4.8)

Proof: First of all we have

  $|(I - F'_y(y,z,t))^{-1}| \le \dfrac{1}{1 - \varepsilon\beta_1}$

for all $y \in B(\varepsilon)$, $z \in D(\varepsilon)$, $t \in [0,\infty)$. From $f'_t = (I - F'_y)^{-1}F'_t$ and (4.3) we conclude

  $|f'_t(z,t)| \le \dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,|f(z,t) + (PP_1 + VQ)z| \le \dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,\delta(\varepsilon).$

From

  $f'_z = (I - F'_y)^{-1}F'_z = -(I - F'_y)^{-1}(PQ_1 + UQ)A_2^{-1}\tilde h'_x\,(PP_1 + VQ) - (I - F'_y)^{-1}UQA_2^{-1}BPP_1$

we obtain

  $|f'_z(z,t) + UQA_2^{-1}BPP_1| \le \dfrac{1}{1-\varepsilon\beta_1}\,|(PQ_1 + UQ)A_2^{-1}\tilde h'_x(f(z,t) + (PP_1 + VQ)z, t)\{PP_1 + VQ - UQA_2^{-1}BPP_1\}| \le \dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,|PP_1 + VQ - UQA_2^{-1}BPP_1|,$

hence

  $|Pf'_z(z,t)| = |P(f'_z(z,t) + UQA_2^{-1}BPP_1)| \le |P|\,\dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,|PP_1 + VQ - UQA_2^{-1}BPP_1| = \dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,\beta_3.$

Since $f'_z = (PQ_1 + UQ)f'_z = f'_z(PP_1 + VQ)$ and $(PP_1 + VQ)(PQ_1 + UQ) = 0$, we may express the second derivative $f''_{zz}$ simply as

  $f''_{zz} = (I - F'_y)^{-1}F''_{zz}.$

Due to (B)(iv), $F''_{zz}$ is uniformly bounded, therefore $f''_{zz}$ is so, too. Finally, using the above arguments once more we find the expression

  $f''_{zt} = (I - F'_y)^{-1}\{F''_{yt}f'_z + F''_{zt}\} = -(I - F'_y)^{-1}(PQ_1 + UQ)A_2^{-1}\tilde h''_{xt}\{PP_1 + VQ + f'_z\}.$

In particular, $\tilde h''_{xt}(0,t) = 0$ (cf. (4.1)) leads now to $f''_{zt}(0,t) = 0$. Moreover, we may estimate

  $|f''_{zt}(z,t)| \le \dfrac{\varepsilon\beta_1}{1 - \varepsilon\beta_1}\,(|PP_1 + VQ| + |f'_z(z,t)|) \le \beta_5\varepsilon.$

Let us stress once more that, if a $C^1_N$-function $x(\cdot)$ in the neighbourhood of the origin solves the DAE (1.1), then it satisfies also the identity

  $(PQ_1 + UQ)x(t) = f((PP_1 + VQ)x(t), t).$

With the denotations $u := PP_1x$, $v := PQ_1x$, $w := Qx$ this reads

  $v(t) + Uw(t) = f(u(t) + Vw(t), t).$

In particular, it holds that

  $v(t) = Pf(u(t) + Vw(t), t).$

Now, the structural condition (C)(ii) casts the nullspace component out of the function $Pf$, so that

  $v(t) = Pf(u(t), t)$  (4.9)

results. Differentiating yields the expression

  $v'(t) = Pf'_z(u(t),t)u'(t) + Pf'_t(u(t),t).$  (4.10)

Rewrite equation (1.1) scaled by $A_2^{-1}$, that is equation (3.2), as

  $PP_1x'(t) + A_2^{-1}BPP_1x(t) + (I - PP_1)x(t) + QQ_1(PQ_1x)(t) - QQ_1(PQ_1x)'(t) + A_2^{-1}h(PP_1x'(t) + (PQ_1x)'(t), x(t), t) = 0.$  (4.11)

This formulation suggests to replace the terms $PQ_1x$, $(PQ_1x)'$ by means of (4.9) and (4.10), respectively. Then we arrive at the DAE

  $PP_1x'(t) + A_2^{-1}BPP_1x(t) + (I - PP_1)x(t) + H(x'(t), x(t), t) = 0,$  (4.12)

where the nonlinearity $H$ is introduced as the following map from $\mathbb{R}^m \times \mathbb{R}^m \times \mathbb{R}$ into $\mathbb{R}^m$:

  $H(x',x,t) := A_2^{-1}h(PP_1x' + Pf'_z(x,t)PP_1x' + Pf'_t(x,t),\, x,\, t) - QQ_1\{Pf'_z(x,t)PP_1x' + Pf'_t(x,t) - Pf(x,t)\}.$

Recall that $Pf(x,t) = Pf(PP_1x,t)$ due to (C)(ii). For $t \in [0,\infty)$, $|PP_1x'| \le \tfrac12\delta(\varepsilon)$, $PP_1x \in D(\varepsilon)$, the inequalities (4.5), (4.6) yield the estimate

  $|PP_1x' + Pf'_z(x,t)PP_1x' + Pf'_t(x,t)| \le \tfrac12\delta(\varepsilon) + \dfrac{\varepsilon\beta_1}{1-\varepsilon\beta_1}(\beta_3 + |P|)\,\delta(\varepsilon) \le \delta(\varepsilon),$

supposed $\varepsilon$ is chosen small enough to realize

  $\dfrac{\varepsilon\beta_1}{1-\varepsilon\beta_1}(\beta_3 + |P|) \le \tfrac12, \qquad \varepsilon\beta_1 \le \tfrac12.$  (4.13)

By this, the function $H$ is well defined for $|PP_1x'| \le \tfrac12\delta(\varepsilon)$, $|x| \le \delta(\varepsilon)$, $PP_1x \in D(\varepsilon)$, $t \in [0,\infty)$.

Lemma 4.2 Given a regular index-2 matrix pencil $\{A,B\}$, let $\tilde A := PP_1$, $\tilde B := A_2^{-1}BPP_1 + (I - PP_1)$. Then $\{\tilde A,\tilde B\}$ is a regular pencil of index 1 the finite spectrum of which coincides with that of $\{A,B\}$, i.e.,

  $\sigma\{\tilde A,\tilde B\} = \sigma\{A,B\}.$

Proof: Obviously, $\tilde Az = 0$, $\tilde Bz \in \mathrm{im}\,\tilde A$ imply $z = 0$, i.e., $\{\tilde A,\tilde B\}$ is regular and $\mathrm{ind}\{\tilde A,\tilde B\} = 1$. Consider $(\lambda A + B)z = 0$, or equivalently, $(\lambda A_2^{-1}A + A_2^{-1}B)z = 0$, i.e.,

  $(\lambda(PP_1 - QQ_1) + A_2^{-1}BPP_1 + Q_1 + Q)z = 0.$  (4.14)

Multiplying by $PQ_1$ gives $PQ_1z = 0$, further $QQ_1z = QQ_1PQ_1z = 0$. Thus (4.14) simplifies to

  $\lambda PP_1z + A_2^{-1}BPP_1z + Qz = 0.$

On the other hand, starting with $(\lambda\tilde A + \tilde B)z = 0$, i.e.,

  $\lambda PP_1z + A_2^{-1}BPP_1z + PQ_1z + Qz = 0,$

we also find that $PQ_1z = 0$ has to be true. At this place let us mention that the eigenvectors associated with finite eigenvalues have the property $z = PP_1z + Qz = (I - QP_1A_2^{-1}BP)PP_1z$, and $\mathrm{im}\,(I - QP_1A_2^{-1}BP)PP_1$ represents the finite eigenspace, which has dimension $\dim \mathrm{im}\,PP_1 = m - \dim N - \dim(N \cap S)$.

Proof of Theorem 3.3: As we have shown above, each small $C^1_N$-solution of (1.1) satisfies (4.12). Conversely, if $x(\cdot)$ is a $C^1_N$-solution of (4.12), by multiplying by $(PQ_1 + UQ)$ we find the relation

  $(PQ_1 + UQ)x(t) + (PQ_1 + UQ)A_2^{-1}\tilde h(x(t), t) + UQA_2^{-1}BPP_1x(t) = 0$

to be satisfied, hence

  $(PQ_1 + UQ)x(t) = f((PP_1 + VQ)x(t), t),$

and, due to the structural condition (C)(ii),

  $PQ_1x(t) = Pf(PP_1x(t), t),$  (4.15)
  $(PQ_1x)'(t) = Pf'_z(PP_1x(t), t)(PP_1x)'(t) + Pf'_t(PP_1x(t), t).$

Then we have

  $H(x'(t), x(t), t) = A_2^{-1}h(PP_1x'(t) + (PQ_1x)'(t), x(t), t) - QQ_1\{(PQ_1x)'(t) - PQ_1x(t)\}$
  $\phantom{H(x'(t), x(t), t)} = A_2^{-1}h(x'(t), x(t), t) - QQ_1x'(t) + QQ_1x(t).$

Inserting this expression into (4.12) we obtain (4.11), i.e., each small $C^1_N$ solution of (4.12) satisfies the original DAE (1.1).

The function $H$ has a priori the property

  $N \oplus N_1 = \ker PP_1 \subseteq \ker H'_{x'}(x',x,t),$

such that the identity $H(x',x,t) \equiv H(PP_1x',x,t)$ is valid. Naturally, the solutions of (4.12) belong to the class

  $C^1_{N \oplus N_1} := \{x \in C : PP_1x \in C^1\},$

which is larger than $C^1_N = \{x \in C : PP_1x \in C^1,\ PQ_1x \in C^1\}$. From this point of view, the solutions of (1.1) seem to be smoother than those of (4.12). However, the representation (4.15) shows clearly that, due to the smoothness of $f$, which results from (A), (B), (C), each solution of (4.12) has a continuously differentiable component $PQ_1x(t)$, too, i.e. each solution of (4.12) has $C^1_N$-regularity. Consequently, (1.1) and (4.12) are equivalent.

Now we show that the respective stability result for the index-1 case ([7], Lemma 4.1) applies to equation (4.12). By construction, it holds that

  $H(0,0,t) = 0, \quad t \in [0,\infty).$

$H$ is continuous with continuous derivatives

  $H'_{x'} = A_2^{-1}h'_{x'}\{PP_1 + Pf'_zPP_1\} - QQ_1Pf'_zPP_1,$  (4.16)
  $H'_x = A_2^{-1}h'_{x'}\{Pf''_{zz}(PP_1x')PP_1 + Pf''_{tz}PP_1\} + A_2^{-1}h'_x - QQ_1\{Pf''_{zz}(PP_1x')PP_1 + Pf''_{tz}PP_1 - Pf'_zPP_1\}.$  (4.17)

It remains to show that the nonlinearity $H$ is small in the sense of (A)(ii). In the following, whenever we apply the inequalities (4.1), we choose $\delta(\varepsilon)$ small enough such that $\delta(\varepsilon) \le \varepsilon$ becomes true. For $\varepsilon > 0$ satisfying (4.13) and $|PP_1x'| \le \tfrac12\delta(\varepsilon)$, $|x| \le \delta(\varepsilon)$, $PP_1x \in D(\varepsilon)$, $t \in [0,\infty)$, we find

  $|H'_{x'}(x',x,t)| \le \varepsilon\Big(|PP_1| + \dfrac{\varepsilon\beta_1}{1-\varepsilon\beta_1}\beta_3\Big) + |QQ_1|\dfrac{\varepsilon\beta_1}{1-\varepsilon\beta_1}\beta_3 \le \varepsilon(|PP_1| + \beta_3 + 2\beta_1\beta_3|QQ_1|) =: c_1\varepsilon,$

  $|H'_x(x',x,t)| \le \varepsilon + \varepsilon|P|\big(\beta_4\tfrac12\delta(\varepsilon) + \varepsilon\beta_5\big)|PP_1| + |QQ_1|\Big(|P|\big(\beta_4\tfrac12\delta(\varepsilon) + \varepsilon\beta_5\big)|PP_1| + \dfrac{\varepsilon\beta_1}{1-\varepsilon\beta_1}\beta_3|PP_1|\Big) \le c_2\varepsilon.$

The condition $PP_1x \in D(\varepsilon)$ means $(1 + |UQA_2^{-1}BP|)|PP_1x| \le \tfrac14\delta(\varepsilon)$. If $PP_1 = 0$, it is satisfied trivially for all $x \in \mathbb{R}^m$. Denote $\tilde\delta(\varepsilon) := \delta(\varepsilon)$ if $PP_1 = 0$, but otherwise

  $\tilde\delta(\varepsilon) := \min\{1,\ (1 + |UQA_2^{-1}BP|)^{-1}|PP_1|^{-1}\}\,\tfrac14\delta(\varepsilon).$

Then, with $c := \max\{c_1, c_2\}$, the inequalities

  $|H'_{x'}(x',x,t)| \le c\varepsilon, \qquad |H'_x(x',x,t)| \le c\varepsilon$

are fulfilled for all

  $|PP_1x'| \le \tilde\delta(\varepsilon), \quad |x| \le \tilde\delta(\varepsilon), \quad t \in [0,\infty).$

Now [7], Lemma 4.1, applies to (4.12). Since $\sigma\{\tilde A,\tilde B\} \subset \mathbb{C}^-$ by Lemma 4.2 above, the trivial solution of (4.12) is asymptotically stable. Thereby, we can put $\Pi = PP_1$. In any case, $\Pi$ has to have the nullspace $\ker \Pi = N \oplus N_1$, which represents the infinite eigenspace of the index-1 pencil $\{\tilde A,\tilde B\}$ as well as of the index-2 pencil $\{A,B\}$.

Proof of Lemma 3.4: (i) Recall that for the projectors $V \in L(\mathbb{R}^m)$ onto $N \cap S = \mathrm{im}\,QQ_1$ we have $QV = V$, $VQ = QVQ$. The property $\tilde h(x,t) \equiv \tilde h(Px,t)$ simplifies equation (3.20) to

  $y = -(PQ_1 + UQ)A_2^{-1}\tilde h(Py + PP_1z, t) - UQA_2^{-1}BPP_1z,$

that means, the function $y = f(z,t)$ implicitly given by (3.20) depends on $PP_1z$ only, i.e. $f(z,t) \equiv f(PP_1z, t)$; thus $Pf(z,t) \equiv Pf(PP_1z, t)$.

(ii) Since $\tilde h(x,t) - \tilde h((I - VQ)x, t)$ belongs to $\mathrm{im}\,A$, we may use the identity

  $(PQ_1 + UQ)A_2^{-1}\tilde h(x,t) \equiv (PQ_1 + UQ)A_2^{-1}\tilde h((PP_1 + PQ_1 + UQ)x, t).$

Now, (3.20) reads

  $y = -(PQ_1 + UQ)A_2^{-1}\tilde h((PP_1 + PQ_1 + UQ)y + PP_1z, t) - UQA_2^{-1}BPP_1z.$

Again it results that $f(z,t) \equiv f(PP_1z, t)$ holds a priori.

(iii) $\tilde h(x,t) - \tilde h(Px,t) \in \mathrm{im}\,A_1$ yields $PQ_1A_2^{-1}\tilde h(x,t) \equiv PQ_1A_2^{-1}\tilde h(Px,t)$, hence $PQ_1A_2^{-1}\tilde h'_x \equiv PQ_1A_2^{-1}\tilde h'_xP$. Multiplying the equation for $f'_z$,

  $f'_z = -(PQ_1 + UQ)A_2^{-1}\tilde h'_x\{f'_z + (PP_1 + VQ)\} - UQA_2^{-1}BPP_1,$

by $P$ we obtain

  $Pf'_z = -PQ_1A_2^{-1}\tilde h'_x\{f'_z + (PP_1 + VQ)\} = -PQ_1A_2^{-1}\tilde h'_x\{Pf'_z + PP_1\}.$

Consequently, we have $Pf'_z(z,t) \equiv Pf'_z(z,t)PP_1$.

(iv) Denote $w := VQx$ and compute

  $h(x',x,t) - h(x', (I - VQ)x, t) = \int_0^1 h'_x(x', x - (1-s)w, t)\,ds\ w.$

Because of $w \in N \cap S$ and $N \cap S = N \cap \{z \in \mathbb{R}^m : Bz + h'_x(x', x - (1-s)w, t)z \in \mathrm{im}\,A\}$, we know that also

  $Bw + h'_x(x', x - (1-s)w, t)w \in \mathrm{im}\,A, \quad s \in [0,1],$

is fulfilled. Therefore

  $h(x',x,t) - h(x', (I - VQ)x, t) + BVQx \in \mathrm{im}\,A,$

hence

  $(PQ_1 + UQ)A_2^{-1}h(x',x,t) = (PQ_1 + UQ)A_2^{-1}h(x', (I - VQ)x, t) + (PQ_1 + UQ)A_2^{-1}BVQx.$

Because of $(PQ_1 + UQ)A_2^{-1}BVQ = 0$ we may conclude

  $(PQ_1 + UQ)A_2^{-1}\tilde h(x,t) = (PQ_1 + UQ)A_2^{-1}\tilde h((I - VQ)x, t)$

and finally use the same arguments as we did for (ii).

References

[1] E.A. Coddington and N. Levinson: Theory of ordinary differential equations. McGraw-Hill, New York, 1955.
[2] M. Farkas: Periodic motions. Springer-Verlag, New York, 1994.
[3] E. Griepentrog and R. März: Differential-algebraic equations and their numerical treatment. Teubner-Texte Math. 88, Leipzig, 1986.
[4] R. März: On quasilinear index 2 differential-algebraic equations. Seminarberichte der Humboldt-Univ. Berlin, Fachbereich Mathematik, Nr. 9-1, 39-6, 199.
[5] R. März: Practical Lyapunov stability criteria for differential algebraic equations. Banach Center Publications 9, 45-66, 1994.
[6] C. Tischendorf: On the stability of solutions of autonomous index-1 tractable and quasilinear index-2 tractable DAEs. Circuits Systems Signal Process. 13, 139-154, 1994.
[7] R. Lamour, R. März and R. Winkler: How Floquet theory applies to index-1 differential-algebraic equations. Preprint 96-15, Humboldt-Univ. Berlin, Institut für Mathematik, 1996.
[8] R. März: Canonical projectors for linear differential algebraic equations. Comput. Math. Appl.
[9] E. Griepentrog and R. März: Basic properties of some differential-algebraic equations. Z. Anal. Anwendungen 8, 5-4, 1989.
[10] C. Tischendorf: Solution of index-2 differential-algebraic equations and its application in circuit simulation. Dissertation, Humboldt-Univ. Berlin, Institut für Mathematik, 1996.
[11] C. Tischendorf: personal communication.
[12] F.L. Lewis: A survey of linear singular systems. Circuits Systems Signal Process. 5, 3-36, 1986.