Numerical Treatment of Unstructured Differential-Algebraic Equations with Arbitrary Index

Numerical Treatment of Unstructured Differential-Algebraic Equations with Arbitrary Index. Peter Kunkel (Leipzig). SDS2003, Bari-Monopoli, 22.–25.06.2003

Outline
I Theoretical Basis: 1 Problem Description, 2 Linear Problems with Constant Coefficients, 3 Linear Problems with Variable Coefficients, 4 Nonlinear Problems
II Numerical Procedures: 5 Discretizations, 6 Linear Initial Value Problems, 7 Nonlinear Initial Value Problems, 8 Boundary Value Problems, 9 Literature/Software
Note: The slides may be more detailed than one would usually expect and prefer. However, we hope that in this way they will provide a kind of terse manuscript for the interested reader. The latest version of the slides can be found at http://www.math.uni-leipzig.de/~kunkel/bari.html

1.1 Application 1 Electronic circuits

1.2 Application 2 Multibody systems

1.3 Problem Description The equations governing the temporal evolution of a (discrete) system in general have the form F(t, x, ẋ) = 0, a so-called differential-algebraic equation (DAE), with F: I × D_x × D_ẋ → ℝ^m sufficiently smooth, I ⊆ ℝ a compact interval, D_x, D_ẋ ⊆ ℝ^n open. This includes over- and underdetermined problems arising, e. g., from 1. control problems, 2. redundancies in the model generation. In addition, we typically have initial conditions x(t_0) = x_0 with t_0 ∈ I, x_0 ∈ D_x, or boundary conditions r(x(t̲), x(t̄)) = 0 with r: D_x × D_x → ℝ^d sufficiently smooth for some problem-specific d and I = [t̲, t̄].

1.4 Problem Description (cont.) Linearization of F along a given x* ∈ C¹(I, ℝ^n) yields E(t)ẋ = A(t)x + f(t) with E(t) = F_ẋ(t, x*(t), ẋ*(t)), A(t) = −F_x(t, x*(t), ẋ*(t)), f(t) = −F(t, x*(t), ẋ*(t)). If F does not depend explicitly on t and if x* is constant, then E and A are constant, yielding a problem of the form Eẋ = Ax + f(t) with E, A ∈ ℝ^{m,n}.
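
As a complement, here is a minimal numerical sketch (not part of the slides) that forms E(t), A(t), f(t) by one-sided finite differences from a user-supplied residual F; the helper name linearize and the toy residual are illustrative assumptions, and in practice one would use exact derivatives.

```python
import numpy as np

def linearize(F, t, xs, dxs, eps=1e-7):
    """Finite-difference linearization of F(t, x, xdot) = 0 along (xs, dxs):
    returns E = F_xdot, A = -F_x and f = -F, evaluated at (t, xs, dxs), so that
    E*dx/dt = A*x + f is the linearized DAE in the perturbation variables."""
    F0 = np.asarray(F(t, xs, dxs), dtype=float)
    m, n = F0.size, xs.size
    E, A = np.empty((m, n)), np.empty((m, n))
    for j in range(n):
        e = np.zeros(n); e[j] = eps
        E[:, j] = (F(t, xs, dxs + e) - F0) / eps
        A[:, j] = -(F(t, xs + e, dxs) - F0) / eps
    return E, A, -F0

# toy residual: F(t, x, xdot) = (xdot_1 - x_2, x_1^2 + x_2^2 - 1)
F = lambda t, x, dx: np.array([dx[0] - x[1], x[0]**2 + x[1]**2 - 1.0])
E, A, f = linearize(F, 0.0, np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```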

1.5 Basic Notions We use the following basic definitions. Definition 1. A function x ∈ C¹(I, ℝ^n) is called a solution of the DAE if x satisfies the DAE pointwise. 2. The function x is called a solution of the initial or boundary value problem if x furthermore satisfies the initial or boundary condition. 3. A DAE is called solvable if it has at least one solution. 4. An initial condition is called consistent with the DAE if the associated IVP has at least one solution. 5. In the linear case, an inhomogeneity f is called consistent with (E, A) if the associated DAE is solvable. Remark In the discussion of linear DAEs we use ℂ instead of ℝ.

2.1 Linear DAEs with Constant Coefficients In this section, we consider Eẋ = Ax + f(t) with E, A ∈ ℂ^{m,n} and f ∈ C(I, ℂ^m) sufficiently smooth. Rescaling of the equation and the unknown by nonsingular matrices does not change the solution behaviour. Definition Two matrix pairs (E_1, A_1) and (E_2, A_2) are called (strongly) equivalent if there are nonsingular matrices P ∈ ℂ^{m,m} and Q ∈ ℂ^{n,n} such that E_2 = P E_1 Q, A_2 = P A_1 Q. A corresponding normal form is the so-called Kronecker canonical form (KCF). It exhibits all properties of a linear DAE with constant coefficients.

2.2 Kronecker Canonical Form Theorem Let E, A ∈ ℂ^{m,n}. There are nonsingular matrices P ∈ ℂ^{m,m} and Q ∈ ℂ^{n,n} such that (for all λ ∈ ℂ) P(λE − A)Q = diag(L_ε1, ..., L_εp, M_η1, ..., M_ηq, J_ϱ1, ..., J_ϱr, N_σ1, ..., N_σs), where 1. L_εj ∈ ℂ^{εj, εj+1}, εj ∈ ℕ_0, is the bidiagonal pencil λ[0 I_εj] − [I_εj 0], 2. M_ηj ∈ ℂ^{ηj+1, ηj}, ηj ∈ ℕ_0, is the bidiagonal pencil λ[I_ηj; 0] − [0; I_ηj], 3. J_ϱj ∈ ℂ^{ϱj, ϱj}, ϱj ∈ ℕ, λ_j ∈ ℂ, is the pencil λI_ϱj − J(λ_j) with an upper Jordan block J(λ_j) for the eigenvalue λ_j, 4. N_σj ∈ ℂ^{σj, σj}, σj ∈ ℕ, is the pencil λN − I_σj with an upper nilpotent Jordan block N.

2.3 Regularity Definition Let E, A ∈ ℂ^{n,n}. The matrix pair (E, A) is called regular if p(λ) = det(λE − A) does not vanish identically. Theorem Let (E, A) be regular. Then the KCF has the form ([I 0; 0 N], [J 0; 0 I]), where N is nilpotent and J and N are in Jordan canonical form. Definition The nilpotency index ν of N is called the index of (E, A). Remark The solution of the DAE Nẋ = x + f(t) is given by x(t) = −Σ_{i=0}^{ν−1} N^i f^{(i)}(t). Theorem Let (E, A) be regular of index ν and f ∈ C^ν(I, ℂ^n). Then every associated IVP with consistent initial condition is uniquely solvable.
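
The explicit solution formula for the nilpotent part can be checked symbolically. The following SymPy snippet is a quick sanity check under arbitrary choices (a nilpotent block of size 3, so ν = 3, and a made-up inhomogeneity); it is an illustration, not part of the original material.

```python
import sympy as sp

t = sp.symbols('t')
N = sp.Matrix([[0, 1, 0], [0, 0, 1], [0, 0, 0]])      # nilpotent block, index nu = 3
f = sp.Matrix([sp.sin(t), t**2, sp.exp(t)])           # arbitrary smooth inhomogeneity

nu = 3
x = -sum(((N**i) * f.diff(t, i) for i in range(nu)), sp.zeros(3, 1))  # x = -sum N^i f^(i)
print(sp.simplify(N * x.diff(t) - x - f))             # residual of N*xdot = x + f: zero vector
```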

3.1 Linear DAEs with Variable Coefficients In this section, we consider E(t)ẋ = A(t)x + f(t) with E, A ∈ C(I, ℂ^{m,n}), f ∈ C(I, ℂ^m) sufficiently smooth. Rescaling of the equation and the unknown can now be done by pointwise nonsingular matrix functions. Definition Two pairs (E_1, A_1) and (E_2, A_2) of matrix functions are called (globally) equivalent if there are pointwise nonsingular matrix functions P ∈ C⁰(I, ℂ^{m,m}) and Q ∈ C¹(I, ℂ^{n,n}) such that E_2 = P E_1 Q, A_2 = P A_1 Q − P E_1 Q̇. For fixed t̂ ∈ I and given P(t̂), Q(t̂) and Q̇(t̂), we can always find suitable functions P and Q that assume these values. This gives the following local version of global equivalence. Definition Two matrix pairs (E_1, A_1) and (E_2, A_2) are called (locally) equivalent if there are nonsingular P ∈ ℂ^{m,m}, Q ∈ ℂ^{n,n} and an arbitrary R ∈ ℂ^{n,n} such that E_2 = P E_1 Q, A_2 = P A_1 Q − P E_1 R.

3.2 Local Canonical Form Theorem Let E, A ∈ ℂ^{m,n} and let T be a basis of kernel E, Z a basis of corange E = kernel E^H, T′ a basis of cokernel E = range E^H, and V a basis of corange(Z^H A T). Then the quantities r = rank E (rank), a = rank(Z^H A T) (algebraic part), s = rank(V^H Z^H A T′) (strangeness), d = r − s (differential part), u = n − r − a (undetermined part), v = m − r − a − s (vanishing part) are invariant under local equivalence, and (E, A) is locally equivalent to the pair ( [I_s 0 0 0; 0 I_d 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0], [0 0 0 0; 0 0 0 0; 0 0 I_a 0; I_s 0 0 0; 0 0 0 0] ) with block column sizes s, d, a, u and block row sizes s, d, a, s, v.
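
At a fixed point the characteristic values can be computed numerically from SVD-based bases of the subspaces named above. The sketch below is an illustration only (helper names are made up, and no claim is made about robust rank decisions); it follows the definitions of r, a, s, d, u, v literally.

```python
import numpy as np

def _rank(M, tol=1e-10):
    return 0 if min(M.shape) == 0 else int(np.linalg.matrix_rank(M, tol))

def local_invariants(E, A, tol=1e-10):
    """Local characteristic values (r, a, s, d, u, v) of a matrix pair (E, A)."""
    m, n = E.shape
    U, sv, Vh = np.linalg.svd(E)
    r = int(np.sum(sv > tol))
    T = Vh[r:, :].conj().T        # basis of kernel E
    Tp = Vh[:r, :].conj().T       # basis of cokernel E = range E^H
    Z = U[:, r:]                  # basis of corange E = kernel E^H
    ZAT = Z.conj().T @ A @ T
    a = _rank(ZAT, tol)
    if min(ZAT.shape) == 0:
        V = np.eye(m - r)         # corange of an empty block is the whole space
    else:
        V = np.linalg.svd(ZAT)[0][:, a:]   # basis of corange(Z^H A T)
    s = _rank(V.conj().T @ Z.conj().T @ A @ Tp, tol)
    return r, a, s, r - s, n - r - a, m - r - a - s

# example with strangeness: E = diag(1, 0), A = [[0, 1], [1, 0]] gives (r, a, s) = (1, 0, 1)
E = np.diag([1.0, 0.0]); A = np.array([[0.0, 1.0], [1.0, 0.0]])
print(local_invariants(E, A))
```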

3.3 Global Canonical Form Turning back to pairs of matrix functions, we get functions r, a, s: I → ℕ_0 of characteristic values. Theorem Let E, A ∈ C(I, ℂ^{m,n}) be sufficiently smooth and suppose that r(t) ≡ r, a(t) ≡ a, s(t) ≡ s for the local invariants of (E(t), A(t)). Then (E, A) is globally equivalent to the pair ( [I_s 0 0 0; 0 I_d 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0], [0 A_12 0 A_14; 0 0 0 A_24; 0 0 I_a 0; I_s 0 0 0; 0 0 0 0] ).

3.4 Reduction Process The DAE is thus transformed to ẋ_1 = A_12(t)x_2 + A_14(t)x_4 + f_1(t), ẋ_2 = A_24(t)x_4 + f_2(t), 0 = x_3 + f_3(t), 0 = x_1 + f_4(t), 0 = f_5(t). Differentiating the fourth equation and eliminating ẋ_1 gives a modified DAE with the same solution set. The corresponding pair of matrix functions reads ( [0 0 0 0; 0 I_d 0 0; 0 0 0 0; 0 0 0 0; 0 0 0 0], [0 A_12 0 A_14; 0 0 0 A_24; 0 0 I_a 0; I_s 0 0 0; 0 0 0 0] ). Thus we can define a sequence (E_i, A_i) starting with (E_0, A_0) = (E, A) by iteratively transforming to the above canonical form (assuming constant ranks) and performing the elimination. Theorem The invariants r_i, a_i, s_i of (E_i, A_i) are characteristic for (E, A). Because of r_{i+1} = r_i − s_i, the iterative process becomes stationary when the index µ = min{i ∈ ℕ_0 | s_i = 0}, the so-called strangeness index, is reached.

3.5 Existence and Uniqueness Theorem Let µ be well-defined for (E, A) and let f ∈ C^µ(I, ℂ^m). Then the solutions of the associated DAE correspond one-to-one (via a pointwise nonsingular matrix function) to the solutions of a DAE of the form ẋ_1 = A_13(t)x_3 + f_1(t) (d_µ differential equations), 0 = x_2 + f_2(t) (a_µ algebraic equations), 0 = f_3(t) (v_µ consistency conditions), where the f_i depend on f, ḟ, ..., f^{(µ)}. If f ∈ C^{µ+1}(I, ℂ^m), we have: 1. The given DAE is solvable iff f_3 = 0. 2. An initial condition is consistent iff in addition it implies x_2(t_0) = −f_2(t_0). 3. The corresponding IVP is uniquely solvable iff in addition u_µ = 0.

3.6 Global Canonical Form (cont.) Theorem Let µ be well-defined for (E, A) with characteristic values r_i, a_i, s_i, i = 0, ..., µ. Furthermore, let w_0 = v_0, w_{i+1} = v_{i+1} − v_i and c_0 = a_0 + s_0, c_{i+1} = s_i − w_{i+1}. Then (E, A) is globally equivalent to a pair of the form ( [I_{d_µ} 0 0 0; 0 0 0 F; 0 0 0 0], [0 0 0 0; 0 0 0 G; 0 0 I_{a_µ} 0] ) with block upper bidiagonal F = [0 F_µ; ⋱ ⋱; 0 F_1; 0] and G = [0 G_µ; ⋱ ⋱; 0 G_1; 0]. The entries F_i and G_i are of size (w_i, c_{i−1}) and (c_i, c_{i−1}), respectively, and satisfy rank [F_i; G_i] = c_i + w_i = s_{i−1} (≤ c_{i−1}). In particular, the matrix functions F_i and G_i together have full row rank.

3.7 Exceptional Points Theorem Let M ∈ C(I, ℂ^{m,n}). Then there are open intervals I_j ⊆ I, j ∈ ℕ, with ⋃_{j∈ℕ} Ī_j = I, I_i ∩ I_j = ∅ for i ≠ j, and integers r_j ∈ ℕ_0, j ∈ ℕ, with rank M(t) = r_j for all t ∈ I_j. Applying this property to the above construction of the sequence (E_i, A_i), we immediately obtain the following result. Theorem Let E, A ∈ C(I, ℂ^{m,n}) be sufficiently smooth. Then there are open intervals I_j, j ∈ ℕ, as above such that the strangeness index is well-defined for (E, A) restricted to I_j for every j ∈ ℕ.

3.8 Derivative Arrays For higher index problems the solution depends on derivatives not only of f but also of E and A. In addition to the original DAE we therefore use derivatives of it, yielding augmented DAEs M_l(t)ż_l = N_l(t)z_l + g_l(t), l ∈ ℕ_0, for z_l = (x, ẋ, ..., x^{(l)}) with the so-called derivative arrays M_l = [E; Ė − A, E; Ë − 2Ȧ, 2Ė − A, E; ⋮ ⋱; E^{(l)} − lA^{(l−1)}, ..., lĖ − A, E] (block lower triangular), N_l = [A, 0, ..., 0; Ȧ, 0, ..., 0; ⋮; A^{(l)}, 0, ..., 0]. Theorem Let the sufficiently smooth (E, A) and (Ẽ, Ã) be globally equivalent with well-defined strangeness index µ. Then the corresponding matrix pairs (M_l(t), N_l(t)) and (M̃_l(t), Ñ_l(t)) are locally equivalent for every t ∈ I.

3.9 Differentiation Index The idea leading to the differentiation index is based on the question whether it is possible to extract an ODE for x from some augmented DAE. Obviously we must restrict ourselves to the case m = n. Definition A block matrix M ∈ ℂ^{kn,ln} of (n, n)-blocks is called 1-full if there is a nonsingular R ∈ ℂ^{kn,kn} such that RM = [I_n 0; 0 H]. Definition Let (E, A) be sufficiently smooth with derivative arrays (M_l, N_l). The smallest number ν ∈ ℕ_0, if it exists, for which M_ν is pointwise 1-full and has constant rank, is called the differentiation index of (E, A).
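
Numerically, 1-fullness can be tested via ranks: M is 1-full exactly when rank M = n + rank of M with its first n columns removed. The snippet below is an illustrative sketch (hand-built constant-coefficient derivative arrays, not from the slides) applying this test to the pair E = diag(1, 0), A = [0 1; 1 0], which has differentiation index 2.

```python
import numpy as np

def is_one_full(M, n, tol=1e-10):
    """1-fullness test for a block matrix with (n, n)-blocks: the first n
    components of z are determined by M z iff rank(M) == n + rank(M[:, n:])."""
    return np.linalg.matrix_rank(M, tol) == n + np.linalg.matrix_rank(M[:, n:], tol)

E = np.array([[1.0, 0.0], [0.0, 0.0]]); A = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.zeros((2, 2))
M1 = np.block([[E, Z], [-A, E]])                       # derivative array M_1 (constant coefficients)
M2 = np.block([[E, Z, Z], [-A, E, Z], [Z, -A, E]])     # derivative array M_2
print(is_one_full(M1, 2), is_one_full(M2, 2))          # False, True -> differentiation index 2
```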

3.10 Differentiation Index (cont.) Theorem The differentiation index is invariant under global equivalence transformations. Remark If the differentiation index ν is well-defined, then there is a pointwise nonsingular smooth R ∈ C(I, ℂ^{(ν+1)n,(ν+1)n}) with RM_ν = [I_n 0; 0 H], implying the so-called underlying ODE ẋ = [I_n 0 ⋯ 0]R(t)N_ν(t)[I_n 0 ⋯ 0]^H x + [I_n 0 ⋯ 0]R(t)g_ν(t). Theorem Let E, A ∈ C(I, ℂ^{n,n}) be sufficiently smooth and suppose that 1. for every sufficiently smooth f ∈ C(I, ℂ^n) the corresponding DAE is solvable, 2. the solution is unique for every consistent initial condition, and 3. the solution depends smoothly on the data. Then the differentiation index of (E, A) is well-defined.

3.11 Hypothesis Hypothesis Let (E, A) be sufficiently smooth. There are integers µ, a and d with a + d = n such that 1. rank M_µ(t) = (µ + 1)n − a for all t ∈ I ⇒ there is a smooth Ẑ_2 of maximal rank a with orthonormal columns and Ẑ_2^H M_µ = 0 on I, 2. rank Ẑ_2^H(t)N_µ(t)[I_n 0 ⋯ 0]^H = a for all t ∈ I ⇒ there is a smooth T̂_2 of maximal rank d with orthonormal columns and Ẑ_2^H N_µ[I_n 0 ⋯ 0]^H T̂_2 = 0 on I, 3. rank E(t)T̂_2(t) = d for all t ∈ I ⇒ there is a smooth Ẑ_1 of maximal rank d with orthonormal columns and Ẑ_1^H E T̂_2 nonsingular on I. Theorem The above hypothesis is invariant under global equivalence transformations.
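
At a single point t the three conditions can be realized with SVDs and a QR factorization, as in the following sketch; the function and variable names are my own, and the smooth continuation of the bases along t that the hypothesis requires is not addressed here.

```python
import numpy as np

def reduction_bases(Mmu, Nmu, E, n, tol=1e-10):
    """Pointwise bases Z2, T2, Z1 with orthonormal columns realizing the
    three conditions of the hypothesis (assuming the ranks behave as stated)."""
    U, sv, _ = np.linalg.svd(Mmu)
    r = int(np.sum(sv > tol))
    Z2 = U[:, r:]                                  # corange of M_mu, a columns
    A2 = Z2.conj().T @ Nmu[:, :n]                  # Z2^H N_mu [I_n 0 ... 0]^H
    _, sv2, Vh2 = np.linalg.svd(A2)
    a = int(np.sum(sv2 > tol))
    T2 = Vh2[a:, :].conj().T                       # kernel of A2, d = n - a columns
    Z1, _ = np.linalg.qr(E @ T2)                   # orthonormal basis of range(E T2)
    return Z2, T2, Z1                              # Z1^H E T2 is then nonsingular

# strangeness-free example (mu = 0): E = diag(1, 0), A = I, so M_0 = E, N_0 = A
E = np.diag([1.0, 0.0]); A = np.eye(2)
Z2, T2, Z1 = reduction_bases(E, A, E, n=2)
```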

3.12 Properties Theorem Let (E, A) be sufficiently smooth with well-defined differentiation index ν. Then (E, A) satisfies the above hypothesis with d = n − a and µ = 0, a = 0 for ν = 0, and µ = ν − 1, a = corank M_{ν−1} otherwise. Remark Let (E, A) satisfy the above hypothesis. Setting Ê_1 = Ẑ_1^H E, Â_1 = Ẑ_1^H A, Â_2 = Ẑ_2^H N_µ[I_n 0 ⋯ 0]^H, f̂_1 = Ẑ_1^H f, f̂_2 = Ẑ_2^H g_µ, the original DAE implies the reduced DAE Ê_1(t)ẋ = Â_1(t)x + f̂_1(t) (d differential equations), 0 = Â_2(t)x + f̂_2(t) (a algebraic equations), which is strangeness-free, i. e., has vanishing strangeness index. Theorem Let (E, A) satisfy the above hypothesis. Then the reduced DAE has the same solutions as the original DAE.

3.13 Properties (cont.) Remark The reduced DAE has differentiation index at most one due to the fact that there is a splitting x = (x_1, x_2) such that 0 = Â_21(t)x_1 + Â_22(t)x_2 + f̂_2(t) can be solved for x_2, i. e., x_2 = −Â_22(t)^{−1}(Â_21(t)x_1 + f̂_2(t)), and the remaining Ê_11(t)ẋ_1 + Ê_12(t)ẋ_2 = Â_11(t)x_1 + Â_12(t)x_2 + f̂_1(t), with x_2 and ẋ_2 eliminated, can be solved for ẋ_1.

3.14 Additional Topics Further research includes (or will include) DAE operators, generalized inverses, weak (distributional) solutions, global canonical form for regular singular problems, control problems, regularization by feedback.

4.1 Nonlinear DAEs In this section, we consider F(t, x, ẋ) = 0 with F ∈ C(I × D_x × D_ẋ, ℝ^n), D_x, D_ẋ ⊆ ℝ^n open, sufficiently smooth. Generalizing the linear case, we aim for a hypothesis that is invariant under (parametrized) diffeomorphisms in the domain and image of F. The corresponding augmented DAEs are given by F_l(t, x, ẋ, ..., x^{(l+1)}) = 0 with F_l(t, x, ẋ, ..., x^{(l+1)}) = ( F(t, x, ẋ); (d/dt)F(t, x, ẋ); ...; (d/dt)^l F(t, x, ẋ) ).

4.2 Hypothesis Hypothesis Let F be sufficiently smooth and L_µ = F_µ^{−1}({0}). There are integers µ, a and d with a + d = n such that 1. rank F_{µ;ẋ,...,x^{(µ+1)}} = (µ + 1)n − a on L_µ ⇒ there is a smooth Ẑ_2 of maximal rank a with orthonormal columns and Ẑ_2^T F_{µ;ẋ,...,x^{(µ+1)}} = 0 on L_µ, 2. rank Ẑ_2^T F_{µ;x} = a on L_µ ⇒ there is a smooth T̂_2 of maximal rank d with orthonormal columns and Ẑ_2^T F_{µ;x} T̂_2 = 0 on L_µ, 3. rank F_ẋ T̂_2 = d on L_µ ⇒ there is a smooth Ẑ_1 of maximal rank d with orthonormal columns and Ẑ_1^T F_ẋ T̂_2 nonsingular on L_µ.

4.3 Invariances Theorem Let F satisfy the above hypothesis with µ, a and d, and let F̃ be given by F̃(t, x̃, ẋ̃) = F(t, Q(t, x̃), Q_t(t, x̃) + Q_x̃(t, x̃)ẋ̃) with sufficiently smooth Q ∈ C(I × ℝ^n, ℝ^n), where Q(t, ·) is bijective for every t ∈ I and Q_x̃(t, x̃) is nonsingular for every (t, x̃) ∈ I × ℝ^n. Then F̃ satisfies the above hypothesis with µ, a and d. Theorem Let F satisfy the above hypothesis with µ, a and d, and let F̃ be given by F̃(t, x, ẋ) = P(t, x, ẋ, F(t, x, ẋ)) with sufficiently smooth P ∈ C(I × ℝ^n × ℝ^n × ℝ^n, ℝ^n), where P(t, x, ẋ, ·) is bijective with P(t, x, ẋ, 0) = 0 and P_w(t, x, ẋ, 0) nonsingular for every (t, x, ẋ) ∈ I × ℝ^n × ℝ^n. Then F̃ satisfies the above hypothesis with µ, a and d.

4.4 Reduced Problem Let (t_0, x_0, y_0) ∈ L_µ and T_{2,0} = T̂_2(t_0, x_0, y_0), Z_{1,0} = Ẑ_1(t_0, x_0, y_0), Z_{2,0} = Ẑ_2(t_0, x_0, y_0). Let [Z_{2,0}, Z′_{2,0}] be orthogonal and T_{1,0} a basis of kernel Z′_{2,0}^T F_{µ;ẋ,...,x^{(µ+1)}}(t_0, x_0, y_0). The nonlinear equation F_µ(t, x, y) − Z_{2,0}α = 0, T_{1,0}^T(y − y_0) = 0 for (t, x, y, α) is locally solvable for (y, α). In particular, it defines a function F̂_2 according to α = F̂_2(t, x). Furthermore, set F̂_1(t, x, ẋ) = Z_{1,0}^T F(t, x, ẋ). The DAE F̂_1(t, x, ẋ) = 0 (d differential equations), F̂_2(t, x) = 0 (a algebraic equations) is called the associated reduced DAE.

4.5 Solvability Remark The reduced DAE has differentiation index at most one in the sense that there is a splitting x = (x_1, x_2) such that F̂_2(t, x_1, x_2) = 0 can be solved for x_2, say by x_2 = G_2(t, x_1), and the remaining F̂_1(t, x_1, G_2(t, x_1), ẋ_1, G_{2;t}(t, x_1) + G_{2;x_1}(t, x_1)ẋ_1) = 0 can be solved for ẋ_1, say by ẋ_1 = G_1(t, x_1). Remark Together with x_1(t_0) = x_{0,1}, x_0 = (x_{0,1}, x_{0,2}), the reduced DAE yields a (locally defined) unique solution. This solution can be extended until the boundary of L_µ is reached.

4.6 Solvability (cont.) Remark An initial condition x(t_0) = x_0 is consistent with the reduced DAE if there is a y_0 with F_µ(t_0, x_0, y_0) = 0. Theorem Let F satisfy the above hypothesis with µ, a and d. Then every solution of the original DAE also solves the reduced DAE. If F furthermore satisfies the hypothesis with µ + 1, a and d, then every solution of the reduced DAE also solves the original DAE. Remark In the latter case, for a given solution x ∈ C¹(I, ℝ^n) there exists a function P ∈ C(I, ℝ^{(µ+1)n}) that coincides with ẋ in the first n components and satisfies F_µ(t, x(t), P(t)) = 0 for all t ∈ I.

4.7 Example Let F(t, x, ẋ) = 0 be given by ẋ_1 = x_4, ẋ_2 = x_5, ẋ_3 = x_6, ẋ_4 = −2x_1x_7, ẋ_5 = −2x_2x_7, ẋ_6 = −1 + x_7, 0 = x_3 − x_1² − x_2², describing the motion of a mass point on a parabola under the influence of gravity. It satisfies the above hypothesis with µ = 2, d = 4, and a = 3. The equation F_µ = 0 implies the constraints 0 = x_3 − x_1² − x_2², 0 = x_6 − 2x_1x_4 − 2x_2x_5, 0 = −1 + x_7 − 2x_4² + 4x_1²x_7 − 2x_5² + 4x_2²x_7. A possible reduced DAE reads ẋ_1 = x_4, ẋ_2 = x_5, 0 = x_6 − 2x_1x_4 − 2x_2x_5, ẋ_4 = −2x_1x_7, ẋ_5 = −2x_2x_7, 0 = −1 + x_7 − 2x_4² + 4x_1²x_7 − 2x_5² + 4x_2²x_7, 0 = x_3 − x_1² − x_2².
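
The hidden constraints quoted above can be reproduced by symbolic differentiation along the dynamics. A small SymPy sketch (an illustration with made-up helper names, not from the slides):

```python
import sympy as sp

t = sp.symbols('t')
x = [sp.Function(f'x{i}')(t) for i in range(1, 8)]

# dynamic equations xdot_i = rhs_i for the mass point on the parabola x3 = x1^2 + x2^2
rhs = {x[0]: x[3], x[1]: x[4], x[2]: x[5],
       x[3]: -2*x[0]*x[6], x[4]: -2*x[1]*x[6], x[5]: -1 + x[6]}

def dt(expr):
    """Total time derivative, substituting the dynamic equations."""
    return sp.expand(expr.diff(t).subs({xi.diff(t): rhs[xi] for xi in rhs}))

g0 = x[2] - x[0]**2 - x[1]**2   # position-level constraint
g1 = dt(g0)                     # velocity-level constraint: x6 - 2*x1*x4 - 2*x2*x5
g2 = dt(g1)                     # constraint fixing x7: -1 + x7 - 2*x4**2 + 4*x1**2*x7 - 2*x5**2 + 4*x2**2*x7
print(g1); print(g2)
```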

4.8 Additional Topics Further research includes (or will include) over- and underdetermined systems, control problems, regularization by (piecewise linear) feedback, structured problems, conservation laws.

5.1 Discretizations Direct discretization of higher index problems (i. e., problems with µ ≥ 1) that have no special structure usually does not lead to convergent methods. A DAE that satisfies one of the above hypotheses can (at least theoretically) be transformed to a DAE with µ = 0 having the same solutions. Definition The k-step BDF discretization of F(t, x, ẋ) = 0 has the form F(t_i, x_i, D_h x_i) = 0, where t_i = t_0 + ih with given stepsize h > 0 and D_h x_i = (1/h) Σ_{l=0}^{k} α_l x_{i−l}. The coefficients α_l are fixed by the condition ‖ẋ(t_i) − (1/h) Σ_{l=0}^{k} α_l x(t_{i−l})‖ ≤ Ch^k for sufficiently smooth x, C independent of h. Theorem Given a DAE with µ = 0 and a consistent initial condition x(t_0) = x_0 with corresponding sufficiently smooth solution x ∈ C¹(I, ℝ^n). Then the k-step BDF method with 1 ≤ k ≤ 6 is convergent of order k, i. e., for x_N with N = (t − t_0)/h, t ∈ I fixed, we have ‖x(t) − x_N‖ ≤ Ch^k with C independent of h.
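
The order conditions determine the BDF coefficients from a small Vandermonde-type linear system. A sketch (the function name is an arbitrary choice):

```python
import numpy as np

def bdf_coefficients(k):
    """Coefficients alpha_0, ..., alpha_k of the k-step BDF operator
    D_h x_i = (1/h) * sum_l alpha_l * x_{i-l}, fixed by the order conditions
    sum_l alpha_l * (-l)**j = delta_{j,1} for j = 0, ..., k."""
    V = np.array([[(-l) ** j for l in range(k + 1)] for j in range(k + 1)], dtype=float)
    rhs = np.zeros(k + 1); rhs[1] = 1.0
    return np.linalg.solve(V, rhs)

for k in range(1, 7):            # BDF is convergent (zero-stable) only for k <= 6
    print(k, bdf_coefficients(k))
# k = 1: [ 1, -1]        (implicit Euler)
# k = 2: [ 1.5, -2, 0.5]
```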

5.2 Discretizations (cont.) Remark Runge-Kutta methods and their variants in general require the reformulation of F(t, x, ẋ) = 0 with µ = 0 into ẋ = y, 0 = F(t, x, y), which then has µ = 1 but special structure. Remark There is a class of partitioned Runge-Kutta methods which use the special structure of the reduced DAEs; see the treatment of boundary value problems. In the numerical treatment of initial value problems we shall concentrate on the BDF discretization. Idea Discretize the reduced DAE.

6.1 Linear Initial Value Problems Let E(t)ẋ = A(t)x + f(t) (with real data) satisfy the corresponding hypothesis. The reduced DAE reads Ê_1(t)ẋ = Â_1(t)x + f̂_1(t) (d differential equations), 0 = Â_2(t)x + f̂_2(t) (a algebraic equations) with Ê_1 = Ẑ_1^T E, Â_1 = Ẑ_1^T A, Â_2 = Ẑ_2^T N_µ[I_n 0 ⋯ 0]^T, f̂_1 = Ẑ_1^T f, f̂_2 = Ẑ_2^T g_µ.

6.2 Linear Initial Value Problems (cont.) Consistency of initial values Let x̃_0 ∈ ℝ^n be an estimate for an initial value at t_0 ∈ I. Then x_0 ∈ ℝ^n fixed by ‖x_0 − x̃_0‖_2 = min! s. t. 0 = Â_2(t_0)x_0 + f̂_2(t_0), i. e. x_0 = (I − Â_2(t_0)⁺Â_2(t_0))x̃_0 − Â_2(t_0)⁺f̂_2(t_0), is consistent at t_0. Integration step BDF discretization of the reduced DAE yields Ê_1(t_i)D_h x_i = Â_1(t_i)x_i + f̂_1(t_i), 0 = Â_2(t_i)x_i + f̂_2(t_i), which has to be solved for x_i. The corresponding coefficient matrix reads [(α_0/h)Ê_1(t_i) − Â_1(t_i); Â_2(t_i)] and is nonsingular for sufficiently small h. Remark We cannot produce (globally) smooth functions Ẑ_1 and Ẑ_2 by numerical techniques. The methods presented here, however, are not influenced by possibly non-smooth choices of orthogonal bases.
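
Both the consistent initialization and one integration step amount to small dense linear algebra problems once Ê_1, Â_1, Â_2, f̂_1, f̂_2 are available. A minimal NumPy sketch under that assumption (helper names and the tiny example are made up; the coefficients alpha are the BDF coefficients from the previous sketch):

```python
import numpy as np

def consistent_initial_value(A2, f2, x0_guess):
    """Closest consistent value: minimize ||x0 - x0_guess||_2 subject to
    A2 @ x0 + f2 = 0, via x0 = (I - A2^+ A2) x0_guess - A2^+ f2."""
    A2p = np.linalg.pinv(A2)
    return (np.eye(A2.shape[1]) - A2p @ A2) @ x0_guess - A2p @ f2

def bdf_step(E1, A1, f1, A2, f2, x_hist, alpha, h):
    """One k-step BDF step for the reduced DAE E1*xdot = A1*x + f1, 0 = A2*x + f2;
    x_hist = [x_{i-1}, ..., x_{i-k}], alpha = (alpha_0, ..., alpha_k)."""
    k = len(x_hist)
    known = sum(alpha[l] * x_hist[l - 1] for l in range(1, k + 1))
    J = np.vstack([(alpha[0] / h) * E1 - A1, A2])        # nonsingular for small h
    rhs = np.concatenate([f1 - (E1 @ known) / h, -f2])
    return np.linalg.solve(J, rhs)

# tiny reduced DAE: xdot_1 = x_2, 0 = x_2 - 1  (d = a = 1)
E1 = np.array([[1.0, 0.0]]); A1 = np.array([[0.0, 1.0]]); f1 = np.array([0.0])
A2 = np.array([[0.0, 1.0]]); f2 = np.array([-1.0])
x0 = consistent_initial_value(A2, f2, np.zeros(2))                       # -> [0, 1]
x1 = bdf_step(E1, A1, f1, A2, f2, [x0], np.array([1.0, -1.0]), h=0.1)    # implicit Euler step
```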

7.1 Nonlinear Initial Value Problems Let F(t, x, ẋ) = 0 satisfy the corresponding hypothesis. The (locally defined) reduced DAE reads F̂_1(t, x, ẋ) = 0 (d differential equations), F̂_2(t, x) = 0 (a algebraic equations), with F̂_1(t, x, ẋ) = Z_{1,0}^T F(t, x, ẋ) and F̂_2 defined implicitly via F_µ(t, x, y) − Z_{2,0}F̂_2(t, x) = 0.

7.2 Nonlinear Initial Value Problems (cont.) Consistency of initial values Let (t_0, x̃_0, ỹ_0) ∈ ℝ^{(µ+2)n+1} be an estimate for a point in L_µ. Solve F_µ(t_0, x_0, y_0) = 0 for (x_0, y_0), say by the Gauß-Newton method, starting with (x̃_0, ỹ_0). Integration step BDF discretization of the reduced DAE yields Z_{1,0}^T F(t_i, x_i, D_h x_i) = 0, F_µ(t_i, x_i, y_i) = 0, which has to be solved for (x_i, y_i), say by the Gauß-Newton method. Remark In both cases it is known that the Jacobian at the solution has full row rank (provided h is sufficiently small in the second case), implying quadratic convergence of the Gauß-Newton method. Moreover, in the second case the part x_i of the solution is uniquely determined.
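
For an underdetermined system with full row rank Jacobian, the Gauß-Newton step is the minimum-norm correction, which a least-squares solve delivers directly. A generic sketch (not the GENDA implementation; the circle equation below merely stands in for F_µ = 0):

```python
import numpy as np

def gauss_newton(G, J, z0, tol=1e-12, maxit=50):
    """Gauss-Newton iteration z <- z - J(z)^+ G(z) for an underdetermined system
    G(z) = 0 with full row rank Jacobian J(z); lstsq gives the minimum-norm step."""
    z = np.asarray(z0, dtype=float)
    for _ in range(maxit):
        g = G(z)
        if np.linalg.norm(g) < tol:
            break
        dz = np.linalg.lstsq(J(z), g, rcond=None)[0]
        z = z - dz
    return z

# hypothetical example: find a consistent point on the circle x1^2 + x2^2 = 1
G = lambda z: np.array([z[0]**2 + z[1]**2 - 1.0])
J = lambda z: np.array([[2*z[0], 2*z[1]]])
print(gauss_newton(G, J, [2.0, 1.0]))
```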

8.1 Boundary Value Problems In this section, we consider F(t, x, ẋ) = 0, r(x(t̲), x(t̄)) = 0, where F satisfies the corresponding hypothesis. Let x* ∈ C¹([t̲, t̄], ℝ^n) be a solution of the BVP, i. e., let F(t, x*(t), ẋ*(t)) = 0 on [t̲, t̄], F_µ(t, x*(t), P(t)) = 0 on [t̲, t̄], r(x*(t̲), x*(t̄)) = 0, with some P: I → ℝ^{(µ+1)n}.

8.2 Regularity We can globally define a reduced BVP F̂_1(t, x, ẋ) = 0, F̂_2(t, x) = 0, r(x(t̲), x(t̄)) = 0. Linearization around x* ∈ C¹([t̲, t̄], ℝ^n) yields Ê_1(t)ẋ = Â_1(t)x, 0 = Â_2(t)x, Ĉx(t̲) + D̂x(t̄) = 0, where Ê_1(t) = F̂_{1;ẋ}(t, x*(t), ẋ*(t)), Â_1(t) = −F̂_{1;x}(t, x*(t), ẋ*(t)), Â_2(t) = F̂_{2;x}(t, x*(t)), Ĉ = r_{x_a}(x*(t̲), x*(t̄)), D̂ = r_{x_b}(x*(t̲), x*(t̄)) (the partial derivatives of r with respect to its first and second argument). Definition The solution x* ∈ C¹([t̲, t̄], ℝ^n) is called regular if the above linearized BVP only admits the trivial solution.

8.3 Multiple Shooting Given a grid t̲ = t_0 < t_1 < ⋯ < t_{N−1} < t_N = t̄, N ∈ ℕ, together with (sufficiently good) estimates (x_i, y_i) ∈ ℝ^{(µ+2)n}, i = 0, ..., N. The nonlinear systems F_µ(t_i, x̂_i, ŷ_i) = 0, T_{2,i}^T(x̂_i − x_i) = 0 (d equations), T_{1,i}^T(ŷ_i − y_i) = 0 (a equations), solved for (x̂_i, ŷ_i) with appropriately chosen T_{1,i}, T_{2,i}, define local projections S_i, i = 0, ..., N, with (t_i, x̂_i, ŷ_i) ∈ L_µ, (x̂_i, ŷ_i) = S_i(x_i, y_i). Moreover, because of the unique solvability of initial value problems F(t, x, ẋ) = 0, x(t_i) = x̂_i with x̂_i sufficiently close to x*(t_i), we have transfer functions Φ_i: (x̂_i, ŷ_i) ↦ x(t_{i+1}).

8.4 Multiple Shooting (cont.) The multiple shooting system then reads F_µ(t_i, x_i, y_i) = 0, i = 0, ..., N, T_{2,i+1}^T(x_{i+1} − Φ_i(S_i(x_i, y_i))) = 0, i = 0, ..., N − 1, r(x_0, x_N) = 0. This system can be solved by a Gauß-Newton-like method such that 1. we only need to integrate d trajectories for the approximation of the Jacobian, 2. the only arising global linear system is cyclic and of size (N + 1)d. If the solution is regular, the convergence rate is superlinear.

8.5 Collocation Given knots 0 < ϱ_1 < ⋯ < ϱ_k < 1, 0 = σ_0 < ⋯ < σ_k = 1, k ∈ ℕ, and collocation points t_ij = t_i + h_iϱ_j (j = 1, ..., k), s_ij = t_i + h_iσ_j (j = 0, ..., k), i = 0, ..., N − 1, the collocation system reads Z_{1,ij}^T F(t_ij, x_π(t_ij), ẋ_π(t_ij)) = 0 for j = 1, ..., k, i = 0, ..., N − 1, F_µ(s_ij, x_π(s_ij), y_ij) = 0 for j = 1, ..., k, i = 0, ..., N − 1, and for j = 0, i = 0, r(x_π(t̲), x_π(t̄)) = 0, for (x_π, y_ij) ∈ (P_{k+1,π} ∩ C⁰([t̲, t̄], ℝ^n)) × ℝ^{(kN+1)(µ+1)n}. This system can be solved by a Gauß-Newton-like method such that the main part of an iteration consists of the solution of a linear BVP where no quantities y_ij are needed. If the solution is regular, the convergence rate is superlinear.

8.6 Collocation (cont.) Existence and uniqueness of collocation solutions Let x* be a regular solution of the BVP. Then for sufficiently small h = max_{i=0,...,N−1}(t_{i+1} − t_i) the collocation system has a locally unique solution x_π with ‖x* − x_π‖_{C⁰} ≤ Ch^k. Superconvergence For Gauß knots ϱ_j and Lobatto knots σ_j we even have ‖x* − x_π‖_{C⁰} ≤ Ch^{k+1} and max_{i=0,...,N} ‖x*(t_i) − x_π(t_i)‖ ≤ Ch^{2k}.

9.1 Literature/Software Related literature
[1] S. L. Campbell: A general form for solvable linear time varying singular systems of differential equations. SIAM J. Sci. Stat. Comput. 6, 334–348 (1985)
[2] S. L. Campbell, E. Griepentrog: Solvability of general differential algebraic equations. SIAM J. Sci. Comput. 16, 257–270 (1995)
[3] P. Kunkel, V. Mehrmann: Canonical forms for linear differential-algebraic equations with variable coefficients. J. Comput. Appl. Math. 56, 225–251 (1994)
[4] P. Kunkel, V. Mehrmann: A new class of discretization methods for the solution of linear differential-algebraic equations. SIAM J. Numer. Anal. 33, 1941–1961 (1996)
[5] P. Kunkel, V. Mehrmann: Local and global invariants of linear differential algebraic equations and their relation. Electr. Trans. Numer. Anal. 4, 138–157 (1996)
[6] P. Kunkel, V. Mehrmann: Regular solutions of nonlinear differential-algebraic equations and their numerical determination. Numer. Math. 79, 581–600 (1998)
[7] P. Kunkel, V. Mehrmann: Analysis of over- and underdetermined nonlinear differential-algebraic systems with application to nonlinear control problems. Math. Contr. Sign. Syst. 14, 233–256 (2001)

9.2 Literature/Software (cont.)
[8] P. Kunkel, V. Mehrmann, R. Stöver: Multiple shooting for unstructured nonlinear differential-algebraic equations of arbitrary index. Institut für Mathematik, TU Berlin, Techn. Report 751-02 (2002)
[9] P. Kunkel, V. Mehrmann, R. Stöver: Symmetric collocation for unstructured nonlinear differential-algebraic equations of arbitrary index. Zentrum für Technomathematik, Uni Bremen, Techn. Report 02-12 (2002)
Software
[1] P. Kunkel, V. Mehrmann, W. Rath, J. Weickert: GELDA: A software package for the solution of GEneral Linear Differential Algebraic equations. Fachbereich Mathematik, TU Chemnitz, Techn. Report SPC 95 8 (1995)
[2] P. Kunkel, V. Mehrmann, I. Seufer: GENDA: A software package for the solution of GEneral Nonlinear Differential-Algebraic equations. Institut für Mathematik, TU Berlin, Techn. Report 730-02 (2002)