

Reduced SQP Methods for Large-Scale Optimal Control Problems in DAE with Application to Path Planning Problems for Satellite Mounted Robots

Volker H. Schulz

PhD thesis at the University of Heidelberg, published as: Preprint 96-12, Interdisziplinäres Zentrum für Wissenschaftliches Rechnen, Universität Heidelberg, 1996.

Gutachter (referees): Prof. Dr. Hans Georg Bock, Prof. Dr. Gabriel Wittum
Tag der Einreichung (date of submission): 13 November 1995
Tag der mündlichen Prüfung (date of oral examination): 6 February 1996

Acknowledgements

This thesis has been prepared during my work at the Interdisciplinary Center for Scientific Computing (IWR) of the Ruprecht-Karls-Universität Heidelberg. I would like to thank Prof. H. G. Bock for providing a very creative and inspiring atmosphere within his research group, of which I am lucky enough to be a member. I am indebted to him and especially to Dr. J. P. Schlöder for much invaluable advice and many stimulating discussions. I would also like to thank Prof. R. W. Longman for the collaboration on satellite mounted robotics, the outcome of which is part of this thesis. I'd like to give thanks to my colleagues and friends Dr. Marc Steinbach, Reinhold von Schwerin and Michael Winckler for fruitful joint work and many useful suggestions and critical comments. I would like to thank Dr. M. Zedd for providing realistic data of the shuttle and its remote manipulator system, and all other people who helped me during my work on this thesis. Finally I wish to thank the graduate program "Modellierung und Wissenschaftliches Rechnen in Mathematik und Naturwissenschaften" (modeling and scientific computing in mathematics and the natural sciences) of the IWR both for travel support and additional hardware.

This book is dedicated to my wife Petra. She not only proofread the final text, but helped very much to get things done with her continued and loving encouragement.

Contents

1 Introduction
  1.1 The mathematical problem formulation
  1.2 Notational conventions
2 The Collocation Discretization
  2.1 Collocation for two point BVP in ODE
    2.1.1 Choice of collocation points
    2.1.2 The polynomial representation
    2.1.3 A tempting combination
  2.2 Collocation for BVP in DAE with invariants
    2.2.1 DAE models from mechanics
    2.2.2 Collocation discretization of two point BVP in DAE
    2.2.3 Exploiting invariants
3 Partially reduced SQP methods
  3.1 From feasible path methods to reduced SQP
  3.2 The reduced gradient seen as a directional derivative
  3.3 RSQP methods and their convergence properties
    3.3.1 Position of function evaluations
    3.3.2 Information used for the update of the reduced Hessian
  3.4 The concept of partially reduced SQP methods
  3.5 Local convergence of PRSQP methods
    3.5.1 Basic assumptions
    3.5.2 Linear convergence
    3.5.3 Superlinear convergence
  3.6 Local convergence of PRSQP methods with structure preserving updates
  3.7 Globalization strategies
4 Direct PRSQP methods for optimal control problems in DAE
  4.1 The indirect approach
  4.2 Direct methods and the boundary value problem approach
    4.2.1 The discretized problem
  4.3 A new family of direct methods for optimal control problems
  4.4 Aspects of the numerical realization
    4.4.1 A PRSQP algorithm
    4.4.2 Recursions
    4.4.3 PRSQP with block updates
  4.5 Mesh selection for the states
  4.6 Mesh selection for the controls
5 Path Planning for Satellite Mounted Robots
  5.1 Modeling a Satellite Mounted Robot
  5.2 Theoretical Results
    5.2.1 Minimum attitude disturbance paths
    5.2.2 Minimum time paths
    5.2.3 Minimum maximal acceleration paths
    5.2.4 Principal plane paths
    5.2.5 Circles as shortest paths in principal planes
6 Numerical results
  6.1 Technical preliminaries
  6.2 Optimal control of ground based robots
  6.3 Optimal paths for the shuttle RMS
    6.3.1 2D paths
    6.3.2 3D paths
Summary
A Quaternions
B Dimensions of the shuttle and its RMS
Bibliography

Chapter 1

Introduction

Er-Kennen: Er-Rechnung einer Realität. (Cognition: the computation of a reality.)
Heinz von Foerster, 1973 [44]

Large scale nonlinear optimization is a vibrant research area of growing importance. A main source of large scale optimization problems is the optimization of systems whose behavior can be described by differential equations. A typical field of application lies in engineering. Due to recent developments in numerical simulation techniques and increasing capacities of computers there is a natural demand for optimization, which some think of as the ultimate aim of all simulation. Mathematical optimization may therefore be considered a new key technology [26].

General purpose optimization codes from software libraries mostly work best for small scale problems which are not too nonlinear. There is a lack of efficient general purpose solvers for medium and large scale problems. The reason is that such problems possess certain structures which have to be considered for efficiency reasons, but are generalizable only to a certain extent. Therefore special optimization methods have to be developed for special problem classes. Such structures arise, e.g., from discretizations of differential equations. Thus the successful solution of optimization problems in such applications requires a blend of several methodologies and techniques: discretization and solution techniques for discretized problems, optimization theory and techniques, and modeling and theoretical investigations of the application. This exhibits the interdisciplinary character of work in this field, which thus is a distinctive part of scientific computing.

In the present thesis new efficient optimization methods are developed for optimal control problems in differential algebraic equations (DAE), and they are applied to path planning problems for satellite mounted robots. Each constituent of the solution of large optimization problems from applications mentioned above is taken into consideration. For each item and their combinations new aspects, and for some of them completely new points of view, are presented.

The main topic of this thesis is the introduction of the new class of partially reduced sequential quadratic programming (PRSQP) methods for large scale optimization problems and their application to optimal control problems in DAE. A second central point of this thesis is the investigation of path planning problems for satellite mounted robots using the new optimization methods, and the provision of a basic understanding of the nature of the solutions to these path planning problems.

The whole thesis is motivated by the above application problem, which is reflected on in this introduction. The mathematical formulation, however, is much easier if the theoretical ground is laid down first, to be built on up until the solution of the motivating application problem. So the main results of the thesis are now briefly described in a top down manner, where figure 1.1 may help to keep track of the actual position within the thesis.

Figure 1.1: Dependency structure of the thesis' chapters

The topic of path planning for robots, which receives considerable attention for ground based robots [77, 76, 113, 114], is significantly more complicated in space. The reason is that the process of manipulating a load at one end of the robot can cause

the base of the robot to change its attitude: the robot-satellite system is nonholonomic. Therefore one aims at the construction of maneuvers of the robot such that the final attitude of the satellite matches its initial attitude. The first solutions for this problem, presented by Longman in [85], are feasible, but complex and undesirable. Together with Bock and Longman, the author of this thesis developed in [108] the optimal control problem approach to overcome this deficit. Its basic idea is to formulate an optimal control problem with the configuration constraints for the satellite attitude as boundary conditions and an objective functional describing the properties of desired paths. Though this approach seems to be quite natural from a mathematical point of view, it is unusual from an engineering standpoint, mainly because of a supposed lack of efficient solution methods for optimal control problems. In chapter 6 numerical results for optimal paths are presented using realistic data of the space shuttle.

Numerical results, however, are hardly intelligible if there is no basic theoretical understanding of the nature of optimal paths. Therefore in chapter 5 we discuss several models and present novel theoretical results about the properties of optimal paths for satellite mounted robots, which are necessary for a thorough understanding of the numerical results presented in chapter 6. Chapters 5 and 4 serve as the groundwork for the numerical results in chapter 6, which on the other hand also presents performance results of the numerical methods developed in chapter 4.

There are two distinct approaches to the treatment of optimal control problems in DAE, both of which are investigated in chapter 4. The indirect approach formulates necessary conditions for the solution of optimal control problems in terms of a boundary value problem. Based on classical conditions for ordinary differential equations (ODE) [31, 72, 23], necessary conditions for DAE of index 1 are derived. From a practical point of view the direct approach is much more preferable. Its fundamental principle is to discretize the control function and to employ nonlinear programming techniques for the solution of the resulting finite dimensional optimization problem. The first direct methods were typically gradient-type algorithms based on single shooting, see, e.g., Breakwell, Speyer, and Bryson [30] or Miele [88]. A major breakthrough in solving real-life problems by direct methods has been achieved by Bock and coworkers with the development of the boundary value problem approach [20, 21, 24, 102]. This approach is characterized by a discretization of the system states using, e.g., multiple shooting or collocation. The discretization equations are treated as side conditions in the optimization problem and thus solved simultaneously with the optimization problem. The code MUSCOD [94, 24, 108, 27] is based on multiple shooting in connection with a structured SQP (Sequential Quadratic Programming) method and high-rank block updates of the Hessian of the Lagrangian. It has been successfully applied, e.g., to trajectory optimization problems in robotics [77, 76, 114, 108]. Another multiple shooting code, based on this algorithm and its theory, has later been developed in [73]. Some direct collocation methods are described by Betts [16] and von Stryk [118]; both use a collocation discretization critically discussed in section 2.1.3. Direct reduced SQP methods for

discretized optimal control problems are described, e.g., in [80, 17, 70]. These methods do not consider inequality constraints or allow only for box constraints on the controls. Thus MUSCOD is among the codes defining the state of the art in this area. In chapter 4 direct methods are presented which employ reduced SQP methods based on the partial reduction concept introduced in chapter 3. These methods are able to take into account inequality constraints for the controls as well as for the states. In fact a whole new family of such methods is introduced, of which one implementation (OCPRSQP) is described in detail. It is compared with MUSCOD in section 6.2. Furthermore, the practically important issue of adaptivity is addressed in chapter 4: an analysis of the influence of the discretization error of the states of the dynamical system is performed, and new mesh selection strategies for the control discretization are presented. Chapter 4 is based on discretization considerations in chapter 2 and optimization considerations in chapter 3.

Chapter 3 contains the description of PRSQP methods as the main new concept at the mathematical core of this thesis. It is concerned with the numerical solution of large structured nonlinear optimization problems, as they result, e.g., from discretizations of optimal control problems as described in chapter 4. In contrast to SQP methods, reduced SQP methods deal only with the projected Hessian of the Lagrangian, which is a matrix of the size of the "true" degrees of freedom of the problem, and which is known to be positive semidefinite at a solution point. Depending on the structure of the optimization problem, reduced SQP methods possess a great efficiency potential compared to SQP methods. The disadvantage is that problems with inequalities, or without a global chart of the "true" degrees of freedom, are scarcely tractable by these methods. After a thorough review of existing reduced SQP methods, PRSQP methods are introduced in chapter 3, which avoid these difficulties by combining the advantage of reduced SQP methods (small quadratic subproblems) with the advantage of (full) SQP methods (convenient treatment of inequality constraints). The basic idea of these methods is to formulate the reduced SQP method only w.r.t. those constraints which allow a straightforward parameterization, and to treat the remaining constraints in the same way as usual SQP methods do, but reduced onto the kernel of the first mentioned constraints. Indeed SQP and reduced SQP methods can be considered as special cases of PRSQP methods. Convergence proofs are given. The application of structure preserving update formulas for the reduced Hessian is considered as well. Based on the theory developed in this chapter, a large scale shape optimization problem [106, 111] has already been successfully solved, which is briefly referred to in section 3.2.

In chapter 2 an overview of the collocation discretization of boundary value problems in ODE and DAE is given, mainly in a formulation established by Ascher and coworkers (e.g., [6, 8, 10]). Furthermore, the notion of a structure preserving collocation implementation is informally introduced. Typical implementations [16, 118] for optimal control problems are shown to be not structure preserving and thus to potentially increase the degree of nonlinearity of the optimization problem. Finally, the techniques for the efficient exploitation of invariants, which frequently appear if higher index DAE are reduced to

lower index, developed in [110], are discussed and compared with related techniques.

The remainder of this introductory chapter is devoted to the definition of the mathematical problem class to be treated in this thesis and to the explanation of some notational conventions.

1.1 The mathematical problem formulation

There are various formulations of optimal control problems for dynamical systems. The basic problem treated in this thesis is of the form ($t \in [0,T]$):

$$\min \; \phi(y(T), T) \quad (1.1)$$

subject to

$$\dot y_1(t) = f(y_1(t), y_2(t), u(t)), \quad (1.2)$$
$$0 = g(y_1(t), y_2(t), u(t)), \quad (1.3)$$
$$r(y_1(0), y_2(0), y_1(T), y_2(T)) = 0, \quad (1.4)$$
$$u_i^{\min} \le u_i(t) \le u_i^{\max}, \quad i = 1, \dots, n_u, \quad (1.5)$$
$$s(y_1(t), y_2(t), u(t)) \ge 0. \quad (1.6)$$

Here $\phi$ denotes the real valued objective criterion to be minimized. The semi-explicit differential algebraic equation (DAE) (1.2, 1.3) describes the system dynamics for the time dependent differential ($y_1(t) \in \mathbb{R}^{n_{y_1}}$) and algebraic ($y_2(t) \in \mathbb{R}^{n_{y_2}}$) state variables and the time dependent control variables ($u(t) \in \mathbb{R}^{n_u}$). The DAE is of index $l \in \mathbb{N}$ if $l$ is the minimal number of differentiations of (1.3) w.r.t. time $t$ such that the resulting system can be algebraically transformed to an ordinary differential equation. We only consider DAE of index 1, i.e., $\partial g / \partial y_2$ is always nonsingular. In (1.4), $n_r$ boundary conditions are formulated, which have to be satisfied by the solution of the optimal control problem. Equation (1.6) specifies $n_s$ path constraints. All problem functions $\phi, f, g, r, s$ are assumed to be at least twice continuously differentiable, and to possess higher order derivatives when required:

$$f \in C^2(\mathbb{R}^{n_{y_1}} \times \mathbb{R}^{n_{y_2}} \times \mathbb{R}^{n_u}, \mathbb{R}^{n_{y_1}}), \quad g \in C^2(\mathbb{R}^{n_{y_1}} \times \mathbb{R}^{n_{y_2}} \times \mathbb{R}^{n_u}, \mathbb{R}^{n_{y_2}}),$$
$$r \in C^2(\mathbb{R}^{n_{y_1}} \times \mathbb{R}^{n_{y_2}} \times \mathbb{R}^{n_{y_1}} \times \mathbb{R}^{n_{y_2}}, \mathbb{R}^{n_r}), \quad s \in C^2(\mathbb{R}^{n_{y_1}} \times \mathbb{R}^{n_{y_2}} \times \mathbb{R}^{n_u}, \mathbb{R}^{n_s}),$$
$$\phi \in C^2(\mathbb{R}^{n_y} \times \mathbb{R}, \mathbb{R}).$$

This formulation of the optimal control problem is not the most general one. In the following we comment on possible generalizations and reformulations.

Non-autonomous problems can be included in the context above by defining an additional differential state variable $y_{1,n_{y_1}+1}$ satisfying the initial value problem $\dot y_{1,n_{y_1}+1}(t) = 1$, $y_{1,n_{y_1}+1}(0) = 0$. In a similar way parameters $p \in \mathbb{R}^{n_p}$ can be included by using a trivial ordinary differential equation $\dot p = 0$.
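The augmentation described above can be sketched numerically. The following is a minimal illustration in Python (the right hand side and all names are our own, not code from the thesis): a clock state with derivative 1 makes a non-autonomous system autonomous, and a parameter is carried along with derivative 0.

```python
# Minimal sketch (illustrative names): making a non-autonomous ODE
# autonomous by appending a clock state with derivative 1, and carrying
# a parameter p via the trivial equation pdot = 0, as described above.

def f_nonautonomous(t, y, p):
    # example right hand side depending explicitly on t (illustrative)
    return p * t - y

def f_augmented(state):
    # state = (y, t_clock, p); returns the derivatives of all three
    y, t_clock, p = state
    return (f_nonautonomous(t_clock, y, p), 1.0, 0.0)

# the clock state reproduces t and p stays constant: explicit Euler
state = (0.0, 0.0, 2.0)
h, n = 0.001, 1000
for _ in range(n):
    d = f_augmented(state)
    state = tuple(s + h * ds for s, ds in zip(state, d))
assert abs(state[1] - 1.0) < 1e-9   # clock equals n*h = 1
assert state[2] == 2.0              # parameter unchanged
```

Any ODE integrator can then be applied to the augmented autonomous system without special treatment of $t$ or $p$.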

In theoretical considerations the final time $T$ can be fixed or free. For numerical computations it is preferable to reformulate a problem with free final time to one with fixed final time, e.g., in the following way: the final time $T$ is treated as a new system parameter and the system time is substituted by $t := T\tau$, where $\tau \in [0,1]$. Thus the right hand side in (1.2) also has to be multiplied by $T$ in order to describe the dynamics in the new time variable.

The Mayer cost function in (1.1) is one of the most typical cost functions. Another type of similar importance is the Lagrange cost function

$$\min \int_0^T L(y_1(t), y_2(t), u(t)) \, dt$$

for fixed final time $T$, which models a continuous dependency of the cost on the whole trajectory. Sometimes a combination of both appears in the literature, the Bolza cost function

$$\min \; \phi(y_1(T), y_2(T)) + \int_0^T L(y_1(t), y_2(t), u(t)) \, dt.$$

Problems formulated with these alternative cost functions can be equivalently reformulated as optimal control problems with a Mayer cost functional.

Many dynamical systems are formulated as DAE of index $> 1$. In that case we assume that index reduction is applied, employing successive differentiations of the algebraic constraint (1.3) w.r.t. time $t$. Resulting invariants can be exploited in the numerical solution of the optimal control problem as described in section 2.2 and in [110]. The index of a DAE as defined above is the differential index. There are other definitions, like the perturbation index [65], which are equivalent to the differential one in the case of semi-explicit DAE.

Typical DAE from chemical engineering [27] are quasi-linear and semi-explicit:

$$B(y_1(t), y_2(t), u(t)) \, \dot y_1(t) = f(y_1(t), y_2(t), u(t)), \quad B \text{ nonsingular},$$
$$0 = g(y_1(t), y_2(t), u(t)).$$

This form can be conveniently implemented in a collocation discretization, but it makes the formulas very clumsy. The box constraints on the controls (1.5) can be considered as special cases of the conditions (1.6). Since they are usually treated differently, they are formulated explicitly.
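The free-final-time transformation above can be sketched as follows (a minimal illustration with made-up dynamics, not code from the thesis): after substituting $t = T\tau$, the chain rule gives $dy/d\tau = T f(y, u)$, so integrating the scaled right hand side over $\tau \in [0,1]$ reproduces the original dynamics over $t \in [0,T]$, with $T$ acting as an ordinary parameter.

```python
import math

# Sketch (illustrative): free final time T enters as a parameter after
# the substitution t = T * tau; by the chain rule dy/dtau = T * f(y, u).

def f(y, u):
    # example dynamics ydot = -y + u on the original time scale
    return -y + u

def f_scaled(y, u, T):
    # dynamics in the new time variable tau in [0, 1]
    return T * f(y, u)

# check: explicit Euler over tau in [0, 1] with T = 2 approximates
# the exact solution y(t = 2) = exp(-2) of ydot = -y, y(0) = 1
n, y = 200000, 1.0
for _ in range(n):
    y += (1.0 / n) * f_scaled(y, 0.0, 2.0)
assert abs(y - math.exp(-2.0)) < 1e-4
```

In an optimization context $T$ is then simply one more variable of the discretized problem, exactly as with the trivial parameter dynamics $\dot p = 0$ above.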

1.2 Notational conventions

Here we summarize some notational conventions which are used throughout the thesis. The vector spaces used are of the type $\mathbb{R}^n$ with a norm $\|\cdot\|$. Linear mappings $A : \mathbb{R}^n \to \mathbb{R}^m$ are considered matrices $A \in \mathbb{R}^{m \times n}$. If not otherwise indicated, the identity matrix is denoted by $I \in \mathbb{R}^{n \times n}$. The kernel and the range of a matrix $A$ are denoted by $\mathcal{N}(A)$ and $\mathcal{R}(A)$ respectively. The norms for matrices are assumed to be induced by the corresponding vector space norms. The derivative of a differentiable function $g : \mathbb{R}^n \to \mathbb{R}^m : x \mapsto g(x)$ is in many cases denoted by an index or by the corresponding capital letter (here $G$), so that $g_x =: G$. The derivative of a scalar function $f : \mathbb{R}^n \to \mathbb{R} : x \mapsto f(x)$ is also called a gradient and is then considered a column vector, so that $\nabla_x f := f_x^\top = (\partial f / \partial x)^\top$.


Chapter 2

The Collocation Discretization

Collocation is one of the two main discretization methods for boundary value problems (BVP) in ordinary differential equations (ODE) and differential algebraic equations (DAE). In this chapter we recall basic definitions and results for collocation, as established by Ascher and coworkers. Furthermore, critical aspects of special well known implementations are discussed and the numerical exploitation of invariants is demonstrated.

2.1 Collocation for two point BVP in ODE

In this section the basic properties of the collocation method for ODE are recalled, following the lines of [6]. We consider the following parameter dependent two point BVP in the interval $[0,T]$:

$$\dot y(t) = f(y(t), p), \quad (2.1)$$
$$r(y(0), y(T), p) = 0, \quad (2.2)$$

where $y(t) \in \mathbb{R}^{n_y}$, $p \in \mathbb{R}^{n_p}$, and $f \in C^2(\mathbb{R}^{n_y} \times \mathbb{R}^{n_p}, \mathbb{R}^{n_y})$, $r \in C^2(\mathbb{R}^{n_y} \times \mathbb{R}^{n_y} \times \mathbb{R}^{n_p}, \mathbb{R}^{n_r})$, with $n_r = n_y + n_p$. The collocation discretization is based on the approximation of the solution of the ODE (2.1) by a piecewise polynomial of degree $k$ on a mesh

$$0 = \tau_1 < \dots < \tau_{m-1} < \tau_m = T. \quad (2.3)$$

On each of these subintervals the solution is approximated by a polynomial

$$y_j(t; \xi_j) = \sum_{s=1}^{k+1} \xi_{j,s} \, \phi_s\!\left(\frac{t - \tau_j}{\tau_{j+1} - \tau_j}\right), \quad t \in [\tau_j, \tau_{j+1}], \quad (2.4)$$

with $\xi_{j,s} \in \mathbb{R}^{n_y}$, $s = 1, \dots, k+1$, $j = 1, \dots, m-1$, where $\{\phi_s\}_{s=1}^{k+1}$ is an appropriately chosen basis of the space $P_{k+1}[0,1]$ of polynomials of order $k+1$ (i.e., degree $k$) on the interval $[0,1]$. To determine the coefficients $\{\xi_{j,s}\}$, the

approximate solution is required to satisfy the ODE (2.1) on a subdivision of this mesh, the collocation points $t_{jl} := \tau_j + \rho_l (\tau_{j+1} - \tau_j)$, $\rho_l \in [0,1]$:

$$\dot y_j(t_{jl}; \xi_j) = f(y_j(t_{jl}; \xi_j), p), \quad l = 1, \dots, k, \; j = 1, \dots, m-1, \quad (2.5)$$

and to be continuous at the mesh points:

$$y_j(\tau_{j+1}; \xi_j) - y_{j+1}(\tau_{j+1}; \xi_{j+1}) = 0, \quad j = 1, \dots, m-1. \quad (2.6)$$

Here the continuity condition in the case $j = m-1$ means that we define variables $y_m$ approximating $y(T)$ and require $y_{m-1}(\tau_m; \xi_{m-1}) - y_m = 0$. Additionally, the boundary conditions have to be satisfied as well:

$$r(y_1(\tau_1; \xi_1), y_m, p) = 0. \quad (2.7)$$

In each collocation interval $y_j(\tau_{j+1}; \xi_j)$ can be interpreted as the result of one step of an implicit Runge-Kutta method starting from $y_j(\tau_j; \xi_j)$. The corresponding IRK scheme is

$$\begin{array}{c|ccc} \rho_1 & a_{11} & \dots & a_{1k} \\ \vdots & \vdots & & \vdots \\ \rho_k & a_{k1} & \dots & a_{kk} \\ \hline & b_1 & \dots & b_k \end{array} \qquad a_{il} = \int_0^{\rho_i} L_l(s) \, ds, \quad b_i = \int_0^1 L_i(s) \, ds, \quad 1 \le i, l \le k,$$

$$L_l(s) = \prod_{\substack{i=1 \\ i \ne l}}^{k} (s - \rho_i) \Big/ \prod_{\substack{i=1 \\ i \ne l}}^{k} (\rho_l - \rho_i).$$

These special Runge-Kutta schemes are called collocation schemes. The convergence properties of the collocation discretization are determined by the convergence properties of the corresponding IRK method, which results from a related quadrature formula, observing the fact that in the interval $[\tau_j, \tau_{j+1}]$ with length $h_j := \tau_{j+1} - \tau_j$:

$$y(\tau_{j+1}) = y(\tau_j) + h_j \int_0^1 \dot y(\tau_j + s h_j) \, ds = y(\tau_j) + h_j \sum_{l=1}^{k} \underbrace{\int_0^1 L_l(s) \, ds}_{b_l} \, f(y(\tau_j + \rho_l h_j), p) + h_j \, O(h_j^q).$$

Here $q$ is the maximal order of polynomials that are integrated exactly by the quadrature formula defined by $\{b_l\}$ and the collocation points $\{\rho_l\}$. If the collocation points are well chosen, the order of the error, $q$, is greater than $k+1$, which is referred to as the superconvergence property, since $k+1$ is the order which can be expected for any quadrature formula defined by collocation points with $\rho_i \ne \rho_j$, $\forall i \ne j$. Due to Radau [99] the highest order achievable is $q = 2k$. This is the case for the Gaussian points, which are defined as the roots of the polynomials $\{P_k\}$ given by the recursion

$$(k+1) P_{k+1}(t) = (2k+1)(2t-1) P_k(t) - k P_{k-1}(t), \quad P_0(t) \equiv 1, \; P_1(t) = 2t - 1. \quad (2.8)$$

The roots of $P_k$ are symmetric w.r.t. the midpoint of the interval $[0,1]$. For $k = 1$ ($\rho_1 = 1/2$) this results in the implicit midpoint rule. For $k = 3$ the IRK scheme is:
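As an aside, the recursion (2.8) is easy to evaluate numerically. The following sketch (our own illustration, assuming numpy is available) builds the shifted Legendre polynomials from the recursion and extracts the Gauss collocation points as the roots of $P_k$.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Sketch: shifted Legendre polynomials on [0, 1] via the recursion (2.8),
#   (k+1) P_{k+1}(t) = (2k+1)(2t-1) P_k(t) - k P_{k-1}(t),
# with P_0(t) = 1 and P_1(t) = 2t - 1; the Gauss collocation points
# rho_1 < ... < rho_k are the roots of P_k.

def shifted_legendre(k):
    """Coefficients (low to high) of the shifted Legendre polynomial P_k."""
    p_prev, p_curr = np.array([1.0]), np.array([-1.0, 2.0])
    if k == 0:
        return p_prev
    for n in range(1, k):
        p_next = P.polysub((2 * n + 1) * P.polymul([-1.0, 2.0], p_curr),
                           n * p_prev) / (n + 1)
        p_prev, p_curr = p_curr, p_next
    return p_curr

def gauss_points(k):
    return np.sort(P.polyroots(shifted_legendre(k)).real)

# k = 1 gives the implicit midpoint rule, and the points are symmetric
# w.r.t. the midpoint 1/2, as stated above
assert np.allclose(gauss_points(1), [0.5])
assert np.allclose(gauss_points(3), 1.0 - gauss_points(3)[::-1])
```

For $k = 2$ this yields the points $(3 \mp \sqrt 3)/6 \approx 0.2113, 0.7887$, the classical two point Gauss nodes shifted to $[0,1]$.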

$$\begin{array}{c|ccc}
\frac12 - \frac{\sqrt{15}}{10} & \frac{5}{36} & \frac{2}{9} - \frac{\sqrt{15}}{15} & \frac{5}{36} - \frac{\sqrt{15}}{30} \\
\frac12 & \frac{5}{36} + \frac{\sqrt{15}}{24} & \frac{2}{9} & \frac{5}{36} - \frac{\sqrt{15}}{24} \\
\frac12 + \frac{\sqrt{15}}{10} & \frac{5}{36} + \frac{\sqrt{15}}{30} & \frac{2}{9} + \frac{\sqrt{15}}{15} & \frac{5}{36} \\
\hline
 & \frac{5}{18} & \frac{4}{9} & \frac{5}{18}
\end{array} \qquad \text{(Gauss)}$$

The order $q = 2k - 1$ is achieved for the roots of the polynomials $\{P_k + \gamma P_{k-1}, \; \gamma \in \mathbb{R}\}$, which gives room to fix one of the collocation points to a preassigned value. For $\rho_1 = 0$ (collocation points = roots of $P_k + P_{k-1}$) or $\rho_k = 1$ (collocation points = roots of $P_k - P_{k-1}$) this results in the Radau collocation schemes, whose simplest members ($k = 1$) are the Euler method and the backward Euler method. For $k = 3$ the IRK schemes are (Radau IA and Radau IIA):

$$\begin{array}{c|ccc}
0 & \frac19 & \frac{-1-\sqrt6}{18} & \frac{-1+\sqrt6}{18} \\
\frac{6-\sqrt6}{10} & \frac19 & \frac{88+7\sqrt6}{360} & \frac{88-43\sqrt6}{360} \\
\frac{6+\sqrt6}{10} & \frac19 & \frac{88+43\sqrt6}{360} & \frac{88-7\sqrt6}{360} \\
\hline
 & \frac19 & \frac{16+\sqrt6}{36} & \frac{16-\sqrt6}{36}
\end{array} \qquad \text{(Radau IA)}$$

$$\begin{array}{c|ccc}
\frac{4-\sqrt6}{10} & \frac{88-7\sqrt6}{360} & \frac{296-169\sqrt6}{1800} & \frac{-2+3\sqrt6}{225} \\
\frac{4+\sqrt6}{10} & \frac{296+169\sqrt6}{1800} & \frac{88+7\sqrt6}{360} & \frac{-2-3\sqrt6}{225} \\
1 & \frac{16-\sqrt6}{36} & \frac{16+\sqrt6}{36} & \frac19 \\
\hline
 & \frac{16-\sqrt6}{36} & \frac{16+\sqrt6}{36} & \frac19
\end{array} \qquad \text{(Radau IIA)}$$

For the next lower order $q = 2k - 2$ it is possible to choose values for two collocation points. If one assigns $\rho_1 = 0$ and $\rho_k = 1$, this results in the Lobatto points, which are the roots of $P_k - P_{k-2}$. The corresponding IRK scheme for $k = 3$ is

$$\begin{array}{c|ccc}
0 & 0 & 0 & 0 \\
\frac12 & \frac{5}{24} & \frac13 & -\frac{1}{24} \\
1 & \frac16 & \frac23 & \frac16 \\
\hline
 & \frac16 & \frac23 & \frac16
\end{array} \qquad \text{(Lobatto IIIA)}$$

For a more thorough discussion of quadrature formulas see [29]. Under mild regularity assumptions concerning unique solvability and conditioning of the BVP, the following convergence properties can be proved (for details see [3]):

convergence for all $t \in [\tau_j, \tau_{j+1}]$:

$$y^{(i)}(t) - y_\Delta^{(i)}(t) = h_j^{k-i+1} \, y^{(k+1)}(\tau_j) \, \psi^{(i)}\!\left(\frac{t - \tau_j}{h_j}\right) + O(h_j^{k-i+2}) + O(h^q), \quad 0 \le i \le k, \quad (2.9)$$

$$\psi(\sigma) = \frac{1}{k!} \int_0^\sigma \prod_{l=1}^{k} (s - \rho_l) \, ds,$$
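The tableau entries $a_{il} = \int_0^{\rho_i} L_l(s)\,ds$ and $b_i = \int_0^1 L_i(s)\,ds$ defined earlier can be generated mechanically from any set of collocation points. The following sketch (our own illustration, assuming numpy) does this and checks the Lobatto points $(0, 1/2, 1)$ against the Lobatto IIIA tableau.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Sketch: building the Butcher tableau of a collocation scheme from its
# points via a_il = int_0^{rho_i} L_l(s) ds and b_i = int_0^1 L_i(s) ds,
# where L_l are the Lagrange basis polynomials of the points.

def collocation_tableau(rho):
    k = len(rho)
    A, b = np.zeros((k, k)), np.zeros(k)
    for l in range(k):
        # Lagrange basis polynomial L_l with L_l(rho_i) = delta_{il}
        c = np.array([1.0])
        for i in range(k):
            if i != l:
                c = P.polymul(c, [-rho[i], 1.0]) / (rho[l] - rho[i])
        antider = P.polyint(c)          # antiderivative vanishing at 0
        b[l] = P.polyval(1.0, antider)
        for i in range(k):
            A[i, l] = P.polyval(rho[i], antider)
    return A, b

# the Lobatto points (0, 1/2, 1) reproduce the Lobatto IIIA tableau
A, b = collocation_tableau([0.0, 0.5, 1.0])
assert np.allclose(b, [1/6, 2/3, 1/6])
assert np.allclose(A, [[0, 0, 0], [5/24, 1/3, -1/24], [1/6, 2/3, 1/6]])
```

Applied to the Gauss or Radau points, the same routine reproduces the corresponding tableaus.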

and superconvergence at the mesh points $\tau_j$ and for the parameters:

$$y(\tau_j) - y_\Delta(\tau_j) = O(h^q), \quad p - p_\Delta = O(h^q), \quad (2.10)$$

where $h = \max_j h_j$ and the superscripts in parentheses denote derivatives w.r.t. time.

2.1.1 Choice of collocation points

The choice of the collocation points influences the stability and convergence properties of the resulting collocation method. Algebraic stability is one of the relevant stability criteria; for collocation schemes algebraic stability is equivalent to AN-stability. All three alternatives investigated here (Gauss, Radau, Lobatto) are A-stable, but only the Gauss and Radau schemes are algebraically stable (see [5]). On the other hand, collocation at Gaussian points is reported in [6] to lose its superconvergence properties if applied to singularly perturbed problems. Furthermore, the schemes possess different decoupling properties for the increasing and decreasing modes of a system of ODE. Radau IIA yields a stable integration for rapidly increasing modes as defined in [6], while Radau IA does so for the reverse. Since in nonlinear ODE explicit decoupling of increasing and decreasing modes is rarely applicable, symmetric schemes are preferable, since they read the same in both directions. As Ascher [6] points out, collocation at Gaussian points offers better decoupling properties than collocation at Lobatto points. Since there are no "optimal" collocation points for all cases, a practical collocation method should offer the user a selection of several collocation methods.

2.1.2 The polynomial representation

There are various possibilities to represent the piecewise polynomial collocation approximation. The most popular ones are B-splines, Hermite splines and the Runge-Kutta basis [13]. They have similar properties w.r.t. storage requirements and conditioning of the resulting linear system. We choose the so called Runge-Kutta representation because of its notational ease. The collocation solution in the interval $[\tau_j, \tau_{j+1}]$ is represented in the following way:

$$y_\Delta(t) = y_j + h_j \sum_{l=1}^{k} z_{jl} \, \psi_l\!\left(\frac{t - \tau_j}{h_j}\right) \quad \forall t \in [\tau_j, \tau_{j+1}], \quad (2.11)$$

where

$$y_j := y_\Delta(\tau_j) \in \mathbb{R}^{n_y}, \quad z_{jl} := \dot y_\Delta(\tau_j + \rho_l h_j) \in \mathbb{R}^{n_y}, \quad h_j := \tau_{j+1} - \tau_j,$$
$$\psi_l \in P_{k+1}[0,1] : \; \psi_l(0) = 0, \quad \dot\psi_l(\rho_i) = \begin{cases} 1, & \text{if } l = i, \\ 0, & \text{else.} \end{cases}$$

The name Runge-Kutta representation is motivated by the fact that the variables $\{z_{jl}\}_{l=1}^{k}$ are the stages of the corresponding IRK scheme, which can be written as:

$$\begin{array}{c|ccc} \rho_1 & \psi_1(\rho_1) & \dots & \psi_k(\rho_1) \\ \vdots & \vdots & & \vdots \\ \rho_k & \psi_1(\rho_k) & \dots & \psi_k(\rho_k) \\ \hline 1 & \psi_1(1) & \dots & \psi_k(1) \end{array}$$

2.1.3 A tempting combination

In particular in the context of optimization problems for DAE systems, a special combination of collocation points and local polynomial representation is widely used (see, e.g., [14, 118, 16]): the combination of Hermite splines of order 4 with 3 Lobatto points in a special formulation of the resulting discretization equations ($H_1L_3$, see below). The tempting aspect of this combination lies in the fact that it yields a relatively small number of variables to be determined numerically. Hermite splines of order 4 in the interval $[\tau_j, \tau_{j+1}]$ are defined as the interpolating polynomial of the states $y_j := y(\tau_j)$, $y_{j+} := y(\tau_{j+1})$ and the derivatives $z_j := \dot y(\tau_j)$, $z_{j+} := \dot y(\tau_{j+1})$ at the mesh points. The equations for collocation at the Lobatto points are then:

collocation conditions:

$$z_j = f(y_j, p), \quad (2.12)$$
$$\dot y_\Delta(\tau_j + \tfrac{h_j}{2}) = f\big(y_\Delta(\tau_j + \tfrac{h_j}{2}), p\big), \text{ i.e.,}$$
$$0 = y_j - y_{j+} + \frac{h_j}{6} \left[ z_j + z_{j+} + 4 f\Big( \frac{y_j + y_{j+}}{2} + \frac{h_j}{8} (z_j - z_{j+}), \, p \Big) \right], \quad (2.13)$$
$$z_{j+} = f(y_{j+}, p), \quad (2.14)$$

continuity conditions:

$$y_{j+} = y_{j+1}. \quad (2.15)$$

In the following the system (2.12)-(2.15) will be referred to as the $H_4L_3$ system. Obviously equation (2.15) can be solved immediately by substituting $y_{j+}$ with $y_{j+1}$ in the equations (2.13) and (2.14). From the equations (2.13, 2.14) it now follows that $z_{j+} = z_{j+1}$. Using this in (2.13) leads to the system:

$$z_j = f(y_j, p), \quad (2.12)$$
$$0 = y_j - y_{j+1} + \frac{h_j}{6} \left[ z_j + z_{j+1} + 4 f\Big( \frac{y_j + y_{j+1}}{2} + \frac{h_j}{8} (z_j - z_{j+1}), \, p \Big) \right]. \quad (2.13')$$

The system (2.12, 2.13') will be referred to as the $H_2L_3$ system. Again, $H_2L_3$ can be reduced by substituting $z_j$ from (2.12) into (2.13'), leading to the equation

$$0 = y_j - y_{j+1} + \frac{h_j}{6} \left[ f(y_j, p) + f(y_{j+1}, p) + 4 f\Big( \frac{y_j + y_{j+1}}{2} + \frac{h_j}{8} \big(f(y_j, p) - f(y_{j+1}, p)\big), \, p \Big) \right]$$

18 16 2 The Collocation Discretization as the only equation to be solved in the intervall [ j ; j+1 ] This will be referred to as H 1 L 3 It is widely used because of its small number of necessary variables { only fy j g instead of fy j ; y j+ ; z j ; z j+ g But there are severe drawbacks in nonlinear systems with parameters where Newtontype iterations have to be applied, as is shown in the following example: Example 21: Consider the ODE-BVP _y = y 2 p; y(0) = 1; y(1) = 5 Its solution parameter is p = 0:8 Here we want to investigate the eect of the dierent formulations above on the nonlinearity of the resulting discretization equations Therefore the BVP is solved on an equidistant mesh with 20 meshpoints by using the dierent discretization formulations Then the iterations are performed again, but started for the y- and z-values at the discrete solution and for the p-value at the values p 0 in the following table The necessary numbers of Newton-iterations to solve the corresponding nonlinear equations are listed The iteration process is stopped, when knewton-incrementk 2 =(#variables) 10?4 # Newton-iterations p H 4 L H 2 L H 1 L Obviously H 4 L 3 and H 2 L 3 need only 1 Newton-iteration, while H 1 L 3 behaves completely dierent This indicates that in H 1 L 3 the smaller amount of variables is bought by an increase of the nonlinearity of the problem This eect is explained by the following theorem Theorem 21 Consider a parameter dependent ODE-BVP, which is linear wrt the parameter p, of the type _y = G(y)p + g(y); R(y(t 0 ); y(t f ))p + r(y(t 0 ); y(t f )) = 0; (2:16) where G(y) 2 R nnp ; r(y(t 0 ); y(t f ); p) 2 R n+np and R(y(t 0 ); y(t f )) 2 R (n+np)np The BVP is assumed to be well dened If the collocation equations dening the solution of the discretized BVP are linear wrt p as well, then the following is valid: A Newton iteration started on the collocation solution of the BVP, but with an arbitrary parameter value p 0, converges after one iteration step Proof: 
Define x ∈ R^{n_x} as the vector of all collocation variables. The whole nonlinear system of discretization equations is assumed to be of the type

    F(x, p) = A(x) p + a(x) = 0,   (2.17)

where A(x) ∈ R^{(n_x+n_p)×n_p}. Since the start values x₀ for x are assumed to be the solution values of the system, but p is assumed to be started at p₀ = p̂ + Δp, where p̂ is the solution parameter value, one gets the Taylor expansion

    F(x₀, p₀) = F(x₀, p̂) + (∂F/∂p)(x₀, p̂) Δp = A(x₀) Δp,   (2.18)

since F(x₀, p̂) = 0. The Newton step (δx, δp) is determined by the linearization

    (∂F/∂x)(x₀, p₀) δx + A(x₀) δp = −F(x₀, p₀) = −A(x₀) Δp,   (2.19)

where the last equality follows from (2.18). According to the regularity assumptions this linear system of equations has a unique solution, which is obviously (δx, δp) = (0, −Δp). ∎

Remark: This "one-step convergence" theorem can be generalized to parameter identification problems in DAE [104].

The theorem explains the differences between the systems H_4L_3, H_2L_3 and H_1L_3 in example 2.1: H_1L_3 does not preserve the structure in (2.16). Additionally, it is easily seen that H_1L_3 hardly preserves any structure possibly inherent in the right hand side of the ODE.

2.2 Collocation for BVP in DAE with invariants

Differential-algebraic equations (DAE) are widely used to model dynamical systems. Often the algebraic part of these equations models linking conditions or is used as a shortcut for otherwise rather complicated and expensive models. The following sections give examples for DAE models in mechanics and summarize the discretization of DAE-BVP using collocation, with the aim to use it in an optimization context.

2.2.1 DAE models from mechanics

Beginning with the Euler-Lagrange equations for mechanical multibody systems, we examine some typical types of higher index DAE. In mechanical systems, the differential variables representing the states of the system are usually denoted by p, and the algebraic variables by λ. Defining T = T(p, ṗ) as the kinetic energy of a mechanical system, the Euler-Lagrange equations are of the form

    d/dt (∂T/∂ṗ)(p, ṗ) − (∂T/∂p)(p, ṗ) = Q(p, ṗ) + G(p)ᵀ λ,   (2.20)

    g(p) = 0,   (2.21)

where p ∈ R^{n_p} are the generalized coordinates, ṗ the velocities, Q(p, ṗ) the exterior and interior forces and λ the Lagrange multipliers associated with the constraint equations g, with G := ∂g/∂p. The constraints g model links in the model which are not yet satisfied by the coordinates p. Performing the differentiations in (2.20), the system (2.20, 2.21) results in

    M(p) p̈ = f(p, ṗ) + G(p)ᵀ λ,   (2.22)

    g(p) = 0,   (2.23)

where M(p) := ∂²T/∂ṗ² and f := Q − (∂²T/(∂p ∂ṗ)) ṗ. This is a DAE of index 3, which is called the descriptor form for multibody systems. Details about the proper numerical treatment of such systems for simulation purposes can be found in [1, 2, 9, 42, 46, 28] and in the context of optimization in [110, 112].

If we introduce the generalized momenta q := (∂T/∂ṗ)(p, ṗ) as additional variables in (2.20, 2.21), we get the system

    q̇ − (∂T/∂p)(p, ṗ) − Q(p, ṗ) − G(p)ᵀ λ = 0,   (2.24)

    (∂T/∂ṗ)(p, ṗ) − q = 0,   (2.25)

    g(p) = 0,   (2.26)

which is due to the "Lagrangian approach" in [100]. The algebraic variables here are λ and ṗ. This formulation saves tedious calculations if the dynamical equations are to be derived by hand. As is pointed out in [100], it is also a very sparse formulation and therefore computationally less expensive. Since the resulting equations are less involved, they may also give more theoretical insight, which is used, e.g., in chapter 5. If there are no constraint equations (2.26) (and therefore no term G(p)ᵀλ in (2.24)), the DAE (2.24, 2.25) is of index 1.

2.2.2 Collocation discretization of two point BVP in DAE

We consider the two point DAE boundary value problem in the interval [0, T]:

    ẏ₁ = f(y₁, y₂),   (2.27)

    0 = g(y₁, y₂),   (2.28)

    r(y₁(0), y₂(0), y₁(T), y₂(T)) = 0,   (2.29)

where the time (t) dependent variables y₁(t) ∈ R^{n_y1} are the differential variables, y₂(t) ∈ R^{n_y2} the algebraic variables and f ∈ C²(R^{n_y1} × R^{n_y2}, R^{n_y1}), g ∈ C²(R^{n_y1} × R^{n_y2}, R^{n_y2}), r ∈ C²(R^{n_y1} × R^{n_y2} × R^{n_y1} × R^{n_y2}, R^{n_r}) with n_r = n_y1. Following the lines of [10], again we define a collocation grid

    τ: 0 = τ₁ < τ₂ < … < τ_{m−1} < τ_m = T   (2.30)

and collocation points {ρ₁, …, ρ_k} ⊂ [0, 1]. The differential variables are discretized in the interval [τ_j, τ_{j+1}] by

    y₁(t) = y_{1,j} + h_j Σ_{l=1}^{k} z_{jl} ψ_l((t − τ_j)/h_j)   ∀ t ∈ [τ_j, τ_{j+1}],   (2.31)

where

    y_{1,j} := y₁(τ_j) ∈ R^{n_y1},   z_{jl} := ẏ₁(τ_j + ρ_l h_j) ∈ R^{n_y1},   h_j := τ_{j+1} − τ_j,

    ψ_l ∈ P_{k+1}[0, 1]:   ψ_l(0) = 0,   ψ̇_l(ρ_i) = 1 if l = i, 0 else.

The algebraic variables are discretized by the vectors

    x_{jl} ∈ R^{n_y2},   l = 1, …, k,   j = 1, …, m − 1,

representing the solution values y₂(t_{jl}), t_{jl} := τ_j + ρ_l h_j, at the collocation points. A polynomial interpolation of {x_{j1}, …, x_{jk}} yields an approximation of y₂(t) in the whole interval [τ_j, τ_{j+1}]. In the following considerations and in chapter 4 the symbols "z" and "x" will also be used with only one index, defined as

    z_j := (z_{j1}ᵀ, …, z_{jk}ᵀ)ᵀ,   x_j := (x_{j1}ᵀ, …, x_{jk}ᵀ)ᵀ.

The collocation discretization of the boundary value problem (2.27–2.29) consists of the following system of equations:

collocation conditions:

    z_{jl} = f(y_{1,j} + h_j Σ_{s=1}^{k} z_{js} ψ_s(ρ_l), x_{jl}),   l = 1, …, k,   j = 1, …, m − 1,   (2.32)

    0 = g(y_{1,j} + h_j Σ_{s=1}^{k} z_{js} ψ_s(ρ_l), x_{jl}),   (2.33)

continuity conditions for y₁:

    y_{1,j} + h_j Σ_{s=1}^{k} z_{js} ψ_s(1) − y_{1,j+1} = 0,   j = 1, …, m − 1,   (2.34)

and boundary conditions:

    r(y_{1,1}, y₂(0), y_{1,m}, y₂(T)) = 0.   (2.35)

As in (2.6), the continuity condition in the case j = m −
1 means that we define variables y_{1,m} approximating y₁(T) and formulate the condition with these variables. These equations are well defined for index 1 systems. In the case of DAE of index 2, the boundary conditions have to be consistent with the derivative (w.r.t. time t) of the algebraic condition. Furthermore, for a straightforward implementation one has to require ρ_l ≠ 0, which excludes Lobatto schemes. In [66] a rather complicated workaround for this special case is suggested. According to [4, 66], the following orders of the global error of the differential (y₁) and algebraic (y₂) variables at the mesh points are achieved for a collocation discretization with k collocation points:

                    Gaussian points            Radau points              Lobatto points
                    y₁          y₂             y₁           y₂           y₁           y₂
    index 1         O(h^2k)     O(h^(k+1))     O(h^(2k−1))  O(h^(2k−1))  O(h^(2k−2))  O(h^(2k−2))
    index 2
      (k odd)       O(h^(k+1))  O(h^(k−1))     O(h^(2k−1))  O(h^k)       O(h^(2k−2))  O(h^(k−1))
      (k even)      O(h^k)      O(h^(k−2))     O(h^(2k−1))  O(h^k)       O(h^(2k−2))  O(h^k)

In the case of DAE of index 2 the superconvergence property for Gaussian collocation is lost. The reason is that in this discretization the mesh points are no collocation points, at which the algebraic conditions are required. In order to retain the superconvergence property also for collocation schemes with ρ_k ≠ 1, Ascher and Petzold [7, 8] introduce projected collocation methods. These methods are defined in [8] only in the context of quasilinearization of the DAE. In the concept of quasilinearization, however, the order of discretization and linearization is reversed compared with the boundary value approach (see below): using quasilinearization, the DAE is first linearized and then discretized. Let y₁, y₂ be the solution of the linearized DAE of index 2

    ẏ₁(t) = F₁(t) y₁(t) + F₂(t) y₂(t) + f(t),   (2.36)

    0 = G(t) y₁(t) + g(t).   (2.37)

Then the differentiable part of the solution of the projected collocation method, ŷ_{1,j}, at the mesh points τ_j is defined by the equations

    ŷ_{1,j} = y_{1,j} + F₂(τ_j) λ_j,   (2.38)

    0 = G(τ_j) ŷ_{1,j} + g(τ_j),   (2.39)

where y_{1,j} are the solution values of the unprojected collocation discretization of the linearized DAE and the λ_j are additional variables in order to define the projection. Each Newton iteration of a projected collocation method therefore contains two steps: (1) compute the solution y of the linearized BVP, (2) project the values at the mesh points according to (2.38, 2.39). These iterations are shown to be locally quadratically convergent. By using this special kind of projection, which is not orthogonal to the manifold g(y) ≡ 0, the usual superconvergence properties (e.g., O(h^2k) for Gaussian collocation) are shown to hold. Collocation methods with ρ_k = 1 are
equivalent to their projected versions defined above. In the context of optimization the two step iteration above is not advisable, since one needs the DAE discretization in the form of a system of equations and not of an algorithm. However, it is not hard to see that for the DAE (2.27, 2.28) nonlinear conditions of the type

    ŷ_{1,j} = y_{1,j} + (∂f/∂y₂)(ŷ_{1,j}, y₂(τ_j)) λ_j,

    0 = g(ŷ_{1,j})

could serve to construct a system of equations which is equivalent to the original formulation of projected collocation. But it includes additional variables (ŷ_{1,j}, λ_j) and involves second derivatives of f in the linearization of these equations. Another way of ensuring that known equations for the states are satisfied at the mesh points is explained in the following section for invariants; it can be applied to the enforcement of the algebraic equations at the mesh points as well.

2.2.3 Exploiting invariants

Up to now we have only explained how to discretize DAE of index up to 2. However, in section 2.2.1 we mentioned that in practice there are models involving DAE of index at least 3. Direct discretization of higher index DAE does not lead to stable schemes; therefore, in these cases index reduction is suggested. This means that the algebraic part of the DAE is differentiated w.r.t. time as often as necessary in order to obtain a DAE of index 1 or 2. For multibody descriptor models, repeated differentiation of the position constraints (2.23) leads to the constraints on velocity (2.40) and acceleration level (2.41):

    0 = G(p) ṗ,   (2.40)

    0 = G(p) p̈ + ((∂/∂p)(G(p) ṗ)) ṗ.   (2.41)

Equations (2.22, 2.41) form a DAE of index 1. The original constraint (2.23) and its derivative (2.40) serve as invariants. There may, of course, be other sources of invariants, e.g., energy conservation, geometric invariants or the conservation of the Hamiltonian in optimal control problems. Here we define invariants of the solution of the DAE-BVP (2.27–2.29) of index 1 as additional equations

    h(y₁(t)) = 0,   t ∈ I ⊆ [0, T],   (2.42)

known to be satisfied in some subinterval I by the exact solution of the boundary value problem. For the sake of clarity of the presentation, the invariant in (2.42) is formulated depending only on the differential variables, which is of course true for invariants resulting from index reduction. Using only the index 1 formulation (2.22, 2.41) for the numerical
solution of multibody descriptor form DAE, one may observe the well known drift phenomenon (see, e.g., [1, 2, 28, 42]): due to an accumulation of discretization errors the numerical solution no longer satisfies the invariants, which leads to unacceptable errors in the solution. Therefore many efforts have been undertaken to devise techniques for the numerical conservation of these invariants. These range from Baumgarte techniques [15, 110, 9] over symplectic integration schemes for Hamiltonian systems [82] and additionally projecting Runge-Kutta schemes [9] up to general projecting integration schemes [1, 42, 2] for initial value problems and boundary value problems [110]. The adequate exploitation of invariants has a stabilizing effect on the numerical solution of the DAE. Besides, there is an additional payoff with regard to conditioning in the case of multistage projection BVP methods. This is motivated by the following lemma.

Lemma 2.2 Consider the linear system of equations for x₁, x₂ ∈ Rⁿ

    W x₁ = x₂,   W ∈ R^{n×n} nonsingular,   (2.43)

where x₁ is to be expressed in terms of x₂. Assume that x₁ and x₂ satisfy the relation

    H x₁ = H x₂   (2.44)

for some matrix H ∈ R^{n_h×n} of full rank with n_h < n. Then there is an orthogonal matrix Q ∈ R^{n×n} with Qᵀ Q = I, so that

    Qᵀ W Q = [ I        0      ]
             [ Zᵀ W Y   Zᵀ W Z ],   (2.45)

where the columns of Z ∈ R^{n×(n−n_h)} form an orthogonal basis of N(H).

Proof: Consider a QR-factorization of the matrix Hᵀ,

    Hᵀ = [Y Z] [ R ]  =:  Q [ R ]
               [ 0 ]        [ 0 ].

Then there is a unique representation of the vectors x₁ and x₂ as

    x_i = Y y_i + Z z_i,   i = 1, 2.   (2.46)

From (2.44) follows y₁ = y₂ and therefore

    y₁ = Yᵀ x₁ = Yᵀ x₂ = Yᵀ W x₁ = Yᵀ W (Y y₁ + Z z₁),

yielding

    Yᵀ W [Y Z] = [I 0].   ∎

The linear system (2.43) can be interpreted as a linearized continuity condition for a discretized ODE, while (2.44) represents linear invariants. By using the decomposition (2.46), the linear system (2.43) with the property (2.44) can be solved in the following way:

    Zᵀ W Z z₁ = z₂ − Zᵀ W Y R⁻ᵀ H x₂,   where y₁ = R⁻ᵀ H x₂.

The condition number of Zᵀ W Z is not worse than that of W itself, because only orthogonal transformations are involved. On the other hand, Zᵀ W Z often is better conditioned than W, which is confirmed by numerical experience [110]. Thus lemma 2.2 shows that the knowledge of an invariant can also be exploited for the conditioning of linear systems. Similar systems as in lemma 2.2 arise in the multistage least squares approach proposed in [110]. Since a DAE discretization consisting of collocation conditions, continuity conditions, boundary conditions and invariants is formally overdetermined, we introduce a

ranking of the conditions, leading to a weaker formulation of the continuity conditions in the form of least squares conditions. Using the notation introduced in (2.30–2.35), the original continuity condition (2.34) of the collocation discretization is replaced by the condition

    min_{y_{1,j+1}} ‖ y_{1,j} + h_j Σ_{s=1}^{k} z_{js} ψ_s(1) − y_{1,j+1} ‖₂²   (2.47)

    s.t. h(y_{1,j+1}) = 0,   (2.48)

if the invariant h (2.42) is known to hold at τ_{j+1}, i.e., τ_{j+1} ∈ I. This formulation is equivalent to the orthogonal projection of the continuity conditions onto the invariant manifold M = {y | h(y) = 0}. Therefore its solution coincides with the solution of the invariant conservation technique proposed in [8], which involves an additional projection step as in (2.38, 2.39), with (∂h/∂y₁)ᵀ instead of F₂ in (2.38) and h instead of g in (2.39). The least squares formulation above has two advantages: it is generalizable to optimization boundary value problems in a straightforward way, and it exploits possible gains in the conditioning of the overall linear system due to the presence of invariants. By employing a generalized Gauss-Newton method [22, 102], the local least squares problems (2.47, 2.48) are linearized "under the norm", resulting in the linear least squares problems

    min_{Δy_{1,j+1}} ‖ Δy_{1,j} + Ψ_j Δz_j − Δy_{1,j+1} + c_j ‖₂²   (2.49)

    s.t. H_{j+1} Δy_{1,j+1} + h_{j+1} = 0,   (2.50)

where c_j := y_{1,j} + h_j Σ_{s=1}^{k} z_{js} ψ_s(1) − y_{1,j+1}, Ψ_j := h_j [ψ₁(1)I, …, ψ_k(1)I], H_{j+1} := (∂h/∂y₁)(y_{1,j+1}) and h_{j+1} := h(y_{1,j+1}). A QR-decomposition of

    Hᵀ_{j+1} = [Y_{j+1} Z_{j+1}] [ R_{j+1} ]
                                 [    0    ]

leads to the equivalent linear system

    Zᵀ_{j+1} Δy_{1,j} + Zᵀ_{j+1} Ψ_j Δz_j − Zᵀ_{j+1} Δy_{1,j+1} + Zᵀ_{j+1} c_j = 0,   (2.51)

    H_{j+1} Δy_{1,j+1} + h_{j+1} = 0.   (2.52)

This system replaces the usual linearized continuity conditions at each node. If the invariant holds at τ_j as well, we transform the variables Δy_{1,l}, l = j, j + 1, so that

    Δy_{1,l} = Y_l Δd_l^Y + Z_l Δd_l^Z,   l = j, j + 1,

where Y_j, Z_j are defined by a QR-factorization of Hᵀ_j := ((∂h/∂y₁)(y_{1,j}))ᵀ. Thus we obtain the projected continuity conditions

    Zᵀ_{j+1} Z_j Δd_j^Z + Zᵀ_{j+1} Ψ_j Δz_j − Δd_{j+1}^Z + Zᵀ_{j+1} c_j − Zᵀ_{j+1} Y_j R⁻ᵀ_j h_j = 0,   (2.53)

where Δd_{j+1}^Y is determined by Rᵀ_{j+1} Δd_{j+1}^Y + h_{j+1} = 0. Thus the projected continuity conditions have the same form as the original ones (2.34).

Remark: The linearization "under the norm" in (2.49, 2.50) describes the tangential space of the nonlinear least squares conditions (2.47, 2.48) only up to an error of the order of the residual in (2.47), which is the discretization error. Therefore, in the context of optimization, the solution of the multistage optimization problem is perturbed by an additional error of the order of the discretization error.
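The elimination behind the projected continuity conditions is a standard null-space method for equality-constrained linear least squares. The following NumPy sketch illustrates the mechanics: it solves min ‖a − dy‖₂ s.t. H dy + h = 0, where a, H and h are randomly generated stand-ins for the linearized quantities of (2.49, 2.50), not data from this thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nh = 5, 2                      # state dimension, number of invariants

# Random stand-ins: a collects the fixed terms of the residual in (2.49),
# H and h play the roles of H_{j+1} and h_{j+1} in (2.50).
a = rng.standard_normal(n)
H = rng.standard_normal((nh, n))
h = rng.standard_normal(nh)

# QR-factorize H^T = [Y Z] [R; 0]; the columns of Z span N(H).
Q, R = np.linalg.qr(H.T, mode="complete")
Y, Z = Q[:, :nh], Q[:, nh:]
Rh = R[:nh, :]

# Range-space part of dy from the constraint: R^T dY + h = 0.
dY = np.linalg.solve(Rh.T, -h)
# Null-space part from the unconstrained least squares problem:
# ||a - (Y dY + Z dZ)|| is minimized by dZ = Z^T a.
dZ = Z.T @ a
dy = Y @ dY + Z @ dZ

# The constraint holds and the residual is orthogonal to N(H).
assert np.allclose(H @ dy + h, 0)
assert np.allclose(Z.T @ (a - dy), 0)
```

Since H = RᵀYᵀ, the constraint fixes only the range-space coordinates of dy, while the least squares residual determines the null-space coordinates; this is exactly the split used in (2.51, 2.52).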

Chapter 3

Partially reduced SQP methods

In this chapter the algorithmic concept of partially reduced SQP (PRSQP) methods is introduced. First we motivate and review the algorithmic features of reduced SQP methods. Then the partial reduction concept is formulated and the convergence properties of the resulting optimization algorithms are investigated. Basic concepts of nonlinear optimization, like necessary and sufficient optimality conditions, are not covered; they can be found in any textbook on nonlinear optimization (e.g., [59, 43]).

3.1 From feasible path methods to reduced SQP

The aim of this section is to give a brief introduction to the basic terminology of reduced SQP (RSQP) methods. We consider the constrained optimization problem

    min_{x,p} f(x, p)   (3.1)

    s.t. c(x, p) = 0 ∈ R^{n_x},   (3.2)

where p ∈ R^{n_p} and x ∈ R^{n_x}, the problem functions f and c are at least twice continuously differentiable and the Jacobian c_x := ∂c/∂x is nonsingular. A common way to solve the optimization problem (3.1, 3.2) is to consider x as a function of p, and thus the optimization problem as an unconstrained one in only the variables p. That means that one uses the implicit function theorem in order to state the existence of a function Φ: R^{n_p} → R^{n_x}, so that x = Φ(p). Then the constrained optimization problem (3.1, 3.2) is transformed into an unconstrained optimization problem:

    min_p f̃(p),   with f̃(p) := f(Φ(p), p).   (3.3)

Methods based on this simple variable reduction process will be called feasible path methods in the sequel. The advantages of this approach are the reduction of the number of optimization variables and the fact that the Hessian of f̃(p) is positive semi-definite at the solution, if there exists a solution at all. The drawback is that at each step of an iterative method for (3.3), the nonlinear system (3.2) has to be solved for x.
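To make the feasible path idea concrete, here is a minimal Python sketch on an invented toy instance of a problem of type (3.1, 3.2); the model functions, the inner Newton solver and the crude finite-difference outer loop are illustrative assumptions, not the methods developed in this thesis.

```python
import numpy as np

# Toy instance (chosen for illustration only):
#   minimize  f(x, p) = x^2 + (p - 1)^2
#   s.t.      c(x, p) = x - p^2 = 0,   so that  Phi(p) = p^2.
def f(x, p): return x**2 + (p - 1.0)**2
def c(x, p): return x - p**2

def solve_state(p, x0=0.0):
    """Inner loop of a feasible path method: Newton on c(x, p) = 0 for x.
    This solve has to be carried out at every outer iterate."""
    x = x0
    for _ in range(50):
        x -= c(x, p) / 1.0          # dc/dx = 1 for this toy model
        if abs(c(x, p)) < 1e-12:
            break
    return x

def f_tilde(p):                      # reduced objective as in (3.3)
    return f(solve_state(p), p)

# Outer loop: descent on the reduced objective, with the gradient
# approximated by central finite differences (a crude stand-in for
# a real optimizer).
p, step, eps = 2.0, 0.05, 1e-6
for _ in range(2000):
    grad = (f_tilde(p + eps) - f_tilde(p - eps)) / (2 * eps)
    p -= step * grad

# Stationarity of the reduced objective: d/dp [p^4 + (p - 1)^2] = 0.
assert abs(4 * p**3 + 2 * (p - 1)) < 1e-6
```

Every evaluation of f̃ requires a complete inner solve of c(x, p) = 0, which already illustrates the cost argument made above.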


More information

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f

290 J.M. Carnicer, J.M. Pe~na basis (u 1 ; : : : ; u n ) consisting of minimally supported elements, yet also has a basis (v 1 ; : : : ; v n ) which f Numer. Math. 67: 289{301 (1994) Numerische Mathematik c Springer-Verlag 1994 Electronic Edition Least supported bases and local linear independence J.M. Carnicer, J.M. Pe~na? Departamento de Matematica

More information

Chapter 2 Optimal Control Problem

Chapter 2 Optimal Control Problem Chapter 2 Optimal Control Problem Optimal control of any process can be achieved either in open or closed loop. In the following two chapters we concentrate mainly on the first class. The first chapter

More information

The collocation method for ODEs: an introduction

The collocation method for ODEs: an introduction 058065 - Collocation Methods for Volterra Integral Related Functional Differential The collocation method for ODEs: an introduction A collocation solution u h to a functional equation for example an ordinary

More information

Technische Universität Dresden Fachrichtung Mathematik. Memory efficient approaches of second order for optimal control problems

Technische Universität Dresden Fachrichtung Mathematik. Memory efficient approaches of second order for optimal control problems Technische Universität Dresden Fachrichtung Mathematik Institut für Wissenschaftliches Rechnen Memory efficient approaches of second order for optimal control problems Dissertation zur Erlangung des zweiten

More information

CHAPTER 5: Linear Multistep Methods

CHAPTER 5: Linear Multistep Methods CHAPTER 5: Linear Multistep Methods Multistep: use information from many steps Higher order possible with fewer function evaluations than with RK. Convenient error estimates. Changing stepsize or order

More information

Contents 1 Introduction and Preliminaries 1 Embedding of Extended Matrix Pencils 3 Hamiltonian Triangular Forms 1 4 Skew-Hamiltonian/Hamiltonian Matri

Contents 1 Introduction and Preliminaries 1 Embedding of Extended Matrix Pencils 3 Hamiltonian Triangular Forms 1 4 Skew-Hamiltonian/Hamiltonian Matri Technische Universitat Chemnitz Sonderforschungsbereich 393 Numerische Simulation auf massiv parallelen Rechnern Peter Benner Ralph Byers Volker Mehrmann Hongguo Xu Numerical Computation of Deating Subspaces

More information

Numerical Integration for Multivariable. October Abstract. We consider the numerical integration of functions with point singularities over

Numerical Integration for Multivariable. October Abstract. We consider the numerical integration of functions with point singularities over Numerical Integration for Multivariable Functions with Point Singularities Yaun Yang and Kendall E. Atkinson y October 199 Abstract We consider the numerical integration of functions with point singularities

More information

t x 0.25

t x 0.25 Journal of ELECTRICAL ENGINEERING, VOL. 52, NO. /s, 2, 48{52 COMPARISON OF BROYDEN AND NEWTON METHODS FOR SOLVING NONLINEAR PARABOLIC EQUATIONS Ivan Cimrak If the time discretization of a nonlinear parabolic

More information

Fraction-free Row Reduction of Matrices of Skew Polynomials

Fraction-free Row Reduction of Matrices of Skew Polynomials Fraction-free Row Reduction of Matrices of Skew Polynomials Bernhard Beckermann Laboratoire d Analyse Numérique et d Optimisation Université des Sciences et Technologies de Lille France bbecker@ano.univ-lille1.fr

More information

Max-Planck-Institut fur Mathematik in den Naturwissenschaften Leipzig H 2 -matrix approximation of integral operators by interpolation by Wolfgang Hackbusch and Steen Borm Preprint no.: 04 200 H 2 -Matrix

More information

MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione

MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione MODELLING OF FLEXIBLE MECHANICAL SYSTEMS THROUGH APPROXIMATED EIGENFUNCTIONS L. Menini A. Tornambe L. Zaccarian Dip. Informatica, Sistemi e Produzione, Univ. di Roma Tor Vergata, via di Tor Vergata 11,

More information

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those

QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE. Denitizable operators in Krein spaces have spectral properties similar to those QUASI-UNIFORMLY POSITIVE OPERATORS IN KREIN SPACE BRANKO CURGUS and BRANKO NAJMAN Denitizable operators in Krein spaces have spectral properties similar to those of selfadjoint operators in Hilbert spaces.

More information

Constrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method.

Constrained optimization. Unconstrained optimization. One-dimensional. Multi-dimensional. Newton with equality constraints. Active-set method. Optimization Unconstrained optimization One-dimensional Multi-dimensional Newton s method Basic Newton Gauss- Newton Quasi- Newton Descent methods Gradient descent Conjugate gradient Constrained optimization

More information

The Lifted Newton Method and Its Use in Optimization

The Lifted Newton Method and Its Use in Optimization The Lifted Newton Method and Its Use in Optimization Moritz Diehl Optimization in Engineering Center (OPTEC), K.U. Leuven, Belgium joint work with Jan Albersmeyer (U. Heidelberg) ENSIACET, Toulouse, February

More information

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by

(f(x) P 3 (x)) dx. (a) The Lagrange formula for the error is given by 1. QUESTION (a) Given a nth degree Taylor polynomial P n (x) of a function f(x), expanded about x = x 0, write down the Lagrange formula for the truncation error, carefully defining all its elements. How

More information

Final Examination. CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University

Final Examination. CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University Final Examination CS 205A: Mathematical Methods for Robotics, Vision, and Graphics (Fall 2013), Stanford University The exam runs for 3 hours. The exam contains eight problems. You must complete the first

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

Numerical Methods for Embedded Optimization and Optimal Control. Exercises

Numerical Methods for Embedded Optimization and Optimal Control. Exercises Summer Course Numerical Methods for Embedded Optimization and Optimal Control Exercises Moritz Diehl, Daniel Axehill and Lars Eriksson June 2011 Introduction This collection of exercises is intended to

More information

MATHEMATICAL METHODS INTERPOLATION

MATHEMATICAL METHODS INTERPOLATION MATHEMATICAL METHODS INTERPOLATION I YEAR BTech By Mr Y Prabhaker Reddy Asst Professor of Mathematics Guru Nanak Engineering College Ibrahimpatnam, Hyderabad SYLLABUS OF MATHEMATICAL METHODS (as per JNTU

More information

Review for Exam 2 Ben Wang and Mark Styczynski

Review for Exam 2 Ben Wang and Mark Styczynski Review for Exam Ben Wang and Mark Styczynski This is a rough approximation of what we went over in the review session. This is actually more detailed in portions than what we went over. Also, please note

More information

TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1. Chapter Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9

TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1. Chapter Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9 TABLE OF CONTENTS INTRODUCTION, APPROXIMATION & ERRORS 1 Chapter 01.01 Introduction to numerical methods 1 Multiple-choice test 7 Problem set 9 Chapter 01.02 Measuring errors 11 True error 11 Relative

More information

Lecture 4: Numerical solution of ordinary differential equations

Lecture 4: Numerical solution of ordinary differential equations Lecture 4: Numerical solution of ordinary differential equations Department of Mathematics, ETH Zürich General explicit one-step method: Consistency; Stability; Convergence. High-order methods: Taylor

More information

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren

SF2822 Applied Nonlinear Optimization. Preparatory question. Lecture 9: Sequential quadratic programming. Anders Forsgren SF2822 Applied Nonlinear Optimization Lecture 9: Sequential quadratic programming Anders Forsgren SF2822 Applied Nonlinear Optimization, KTH / 24 Lecture 9, 207/208 Preparatory question. Try to solve theory

More information

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares

CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares CS 542G: Robustifying Newton, Constraints, Nonlinear Least Squares Robert Bridson October 29, 2008 1 Hessian Problems in Newton Last time we fixed one of plain Newton s problems by introducing line search

More information

Hot-Starting NLP Solvers

Hot-Starting NLP Solvers Hot-Starting NLP Solvers Andreas Wächter Department of Industrial Engineering and Management Sciences Northwestern University waechter@iems.northwestern.edu 204 Mixed Integer Programming Workshop Ohio

More information

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics

Spurious Chaotic Solutions of Dierential. Equations. Sigitas Keras. September Department of Applied Mathematics and Theoretical Physics UNIVERSITY OF CAMBRIDGE Numerical Analysis Reports Spurious Chaotic Solutions of Dierential Equations Sigitas Keras DAMTP 994/NA6 September 994 Department of Applied Mathematics and Theoretical Physics

More information

A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS

A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS A THEORETICAL INTRODUCTION TO NUMERICAL ANALYSIS Victor S. Ryaben'kii Semyon V. Tsynkov Chapman &. Hall/CRC Taylor & Francis Group Boca Raton London New York Chapman & Hall/CRC is an imprint of the Taylor

More information

Lecture Note 3: Polynomial Interpolation. Xiaoqun Zhang Shanghai Jiao Tong University

Lecture Note 3: Polynomial Interpolation. Xiaoqun Zhang Shanghai Jiao Tong University Lecture Note 3: Polynomial Interpolation Xiaoqun Zhang Shanghai Jiao Tong University Last updated: October 24, 2013 1.1 Introduction We first look at some examples. Lookup table for f(x) = 2 π x 0 e x2

More information

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r

1. Introduction Let the least value of an objective function F (x), x2r n, be required, where F (x) can be calculated for any vector of variables x2r DAMTP 2002/NA08 Least Frobenius norm updating of quadratic models that satisfy interpolation conditions 1 M.J.D. Powell Abstract: Quadratic models of objective functions are highly useful in many optimization

More information

Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts 1 and Stephan Matthai 2 3rd Febr

Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts 1 and Stephan Matthai 2 3rd Febr HIGH RESOLUTION POTENTIAL FLOW METHODS IN OIL EXPLORATION Stephen Roberts and Stephan Matthai Mathematics Research Report No. MRR 003{96, Mathematics Research Report No. MRR 003{96, HIGH RESOLUTION POTENTIAL

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

Exam in TMA4215 December 7th 2012

Exam in TMA4215 December 7th 2012 Norwegian University of Science and Technology Department of Mathematical Sciences Page of 9 Contact during the exam: Elena Celledoni, tlf. 7359354, cell phone 48238584 Exam in TMA425 December 7th 22 Allowed

More information

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization

AM 205: lecture 6. Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization AM 205: lecture 6 Last time: finished the data fitting topic Today s lecture: numerical linear algebra, LU factorization Unit II: Numerical Linear Algebra Motivation Almost everything in Scientific Computing

More information

arxiv: v2 [math.na] 21 May 2018

arxiv: v2 [math.na] 21 May 2018 SHORT-MT: Optimal Solution of Linear Ordinary Differential Equations by Conjugate Gradient Method arxiv:1805.01085v2 [math.na] 21 May 2018 Wenqiang Yang 1, Wenyuan Wu 1, and Robert M. Corless 2 1 Chongqing

More information

Time Integration Methods for the Heat Equation

Time Integration Methods for the Heat Equation Time Integration Methods for the Heat Equation Tobias Köppl - JASS March 2008 Heat Equation: t u u = 0 Preface This paper is a short summary of my talk about the topic: Time Integration Methods for the

More information

GENG2140, S2, 2012 Week 7: Curve fitting

GENG2140, S2, 2012 Week 7: Curve fitting GENG2140, S2, 2012 Week 7: Curve fitting Curve fitting is the process of constructing a curve, or mathematical function, f(x) that has the best fit to a series of data points Involves fitting lines and

More information

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation

Outline. 1 Interpolation. 2 Polynomial Interpolation. 3 Piecewise Polynomial Interpolation Outline Interpolation 1 Interpolation 2 3 Michael T. Heath Scientific Computing 2 / 56 Interpolation Motivation Choosing Interpolant Existence and Uniqueness Basic interpolation problem: for given data

More information

Review. Numerical Methods Lecture 22. Prof. Jinbo Bi CSE, UConn

Review. Numerical Methods Lecture 22. Prof. Jinbo Bi CSE, UConn Review Taylor Series and Error Analysis Roots of Equations Linear Algebraic Equations Optimization Numerical Differentiation and Integration Ordinary Differential Equations Partial Differential Equations

More information

August Progress Report

August Progress Report PATH PREDICTION FOR AN EARTH-BASED DEMONSTRATION BALLOON FLIGHT DANIEL BEYLKIN Mentor: Jerrold Marsden Co-Mentors: Claire Newman and Philip Du Toit August Progress Report. Progress.. Discrete Mechanics

More information

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02)

Linear Algebra (part 1) : Matrices and Systems of Linear Equations (by Evan Dummit, 2016, v. 2.02) Linear Algebra (part ) : Matrices and Systems of Linear Equations (by Evan Dummit, 206, v 202) Contents 2 Matrices and Systems of Linear Equations 2 Systems of Linear Equations 2 Elimination, Matrix Formulation

More information

Discontinuous Collocation Methods for DAEs in Mechanics

Discontinuous Collocation Methods for DAEs in Mechanics Discontinuous Collocation Methods for DAEs in Mechanics Scott Small Laurent O. Jay The University of Iowa Iowa City, Iowa, USA July 11, 2011 Outline 1 Introduction of the DAEs 2 SPARK and EMPRK Methods

More information

The Bock iteration for the ODE estimation problem

The Bock iteration for the ODE estimation problem he Bock iteration for the ODE estimation problem M.R.Osborne Contents 1 Introduction 2 2 Introducing the Bock iteration 5 3 he ODE estimation problem 7 4 he Bock iteration for the smoothing problem 12

More information

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY

A MULTIGRID ALGORITHM FOR. Richard E. Ewing and Jian Shen. Institute for Scientic Computation. Texas A&M University. College Station, Texas SUMMARY A MULTIGRID ALGORITHM FOR THE CELL-CENTERED FINITE DIFFERENCE SCHEME Richard E. Ewing and Jian Shen Institute for Scientic Computation Texas A&M University College Station, Texas SUMMARY In this article,

More information

Eigenvalue problems and optimization

Eigenvalue problems and optimization Notes for 2016-04-27 Seeking structure For the past three weeks, we have discussed rather general-purpose optimization methods for nonlinear equation solving and optimization. In practice, of course, we

More information

Lecture V: The game-engine loop & Time Integration

Lecture V: The game-engine loop & Time Integration Lecture V: The game-engine loop & Time Integration The Basic Game-Engine Loop Previous state: " #, %(#) ( #, )(#) Forces -(#) Integrate velocities and positions Resolve Interpenetrations Per-body change

More information

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2.

APPENDIX A. Background Mathematics. A.1 Linear Algebra. Vector algebra. Let x denote the n-dimensional column vector with components x 1 x 2. APPENDIX A Background Mathematics A. Linear Algebra A.. Vector algebra Let x denote the n-dimensional column vector with components 0 x x 2 B C @. A x n Definition 6 (scalar product). The scalar product

More information

A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact. Data. Bob Anderssen and Frank de Hoog,

A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact. Data. Bob Anderssen and Frank de Hoog, A Stable Finite Dierence Ansatz for Higher Order Dierentiation of Non-Exact Data Bob Anderssen and Frank de Hoog, CSIRO Division of Mathematics and Statistics, GPO Box 1965, Canberra, ACT 2601, Australia

More information

Time-Optimal Automobile Test Drives with Gear Shifts

Time-Optimal Automobile Test Drives with Gear Shifts Time-Optimal Control of Automobile Test Drives with Gear Shifts Christian Kirches Interdisciplinary Center for Scientific Computing (IWR) Ruprecht-Karls-University of Heidelberg, Germany joint work with

More information

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s

Chapter 3 Least Squares Solution of y = A x 3.1 Introduction We turn to a problem that is dual to the overconstrained estimation problems considered s Lectures on Dynamic Systems and Control Mohammed Dahleh Munther A. Dahleh George Verghese Department of Electrical Engineering and Computer Science Massachuasetts Institute of Technology 1 1 c Chapter

More information

Department of Applied Mathematics Preliminary Examination in Numerical Analysis August, 2013

Department of Applied Mathematics Preliminary Examination in Numerical Analysis August, 2013 Department of Applied Mathematics Preliminary Examination in Numerical Analysis August, 013 August 8, 013 Solutions: 1 Root Finding (a) Let the root be x = α We subtract α from both sides of x n+1 = x

More information

Congurations of periodic orbits for equations with delayed positive feedback

Congurations of periodic orbits for equations with delayed positive feedback Congurations of periodic orbits for equations with delayed positive feedback Dedicated to Professor Tibor Krisztin on the occasion of his 60th birthday Gabriella Vas 1 MTA-SZTE Analysis and Stochastics

More information

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ).

(0, 0), (1, ), (2, ), (3, ), (4, ), (5, ), (6, ). 1 Interpolation: The method of constructing new data points within the range of a finite set of known data points That is if (x i, y i ), i = 1, N are known, with y i the dependent variable and x i [x

More information

Integration, differentiation, and root finding. Phys 420/580 Lecture 7

Integration, differentiation, and root finding. Phys 420/580 Lecture 7 Integration, differentiation, and root finding Phys 420/580 Lecture 7 Numerical integration Compute an approximation to the definite integral I = b Find area under the curve in the interval Trapezoid Rule:

More information

Merging Optimization and Control

Merging Optimization and Control Merging Optimization and Control Bjarne Foss and Tor Aksel N. Heirung September 4, 2013 Contents 1 Introduction 2 2 Optimization 3 2.1 Classes of optimization problems.................. 3 2.2 Solution

More information

Numerical Analysis of Electromagnetic Fields

Numerical Analysis of Electromagnetic Fields Pei-bai Zhou Numerical Analysis of Electromagnetic Fields With 157 Figures Springer-Verlag Berlin Heidelberg New York London Paris Tokyo Hong Kong Barcelona Budapest Contents Part 1 Universal Concepts

More information

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings

Structural and Multidisciplinary Optimization. P. Duysinx and P. Tossings Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be

More information

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization

E5295/5B5749 Convex optimization with engineering applications. Lecture 8. Smooth convex unconstrained and equality-constrained minimization E5295/5B5749 Convex optimization with engineering applications Lecture 8 Smooth convex unconstrained and equality-constrained minimization A. Forsgren, KTH 1 Lecture 8 Convex optimization 2006/2007 Unconstrained

More information