Stability problems in ODE estimation


Mathematical Sciences Institute, Australian National University. HPSC Hanoi 2006.

Outline
- The estimation problem
- ODE stability
- The embedding method
- The simultaneous method
- In conclusion

Estimation. Given the ODE
$$ \frac{dx}{dt} = f(t, x, \beta), $$
where $x \in \mathbb{R}^m$, $\beta \in \mathbb{R}^p$, and $f : \mathbb{R} \times \mathbb{R}^m \times \mathbb{R}^p \to \mathbb{R}^m$ is smooth enough, together with data
$$ y_i = H x(t_i, \beta^*) + \varepsilon_i, \qquad i = 1, 2, \dots, n, $$
where $H : \mathbb{R}^m \to \mathbb{R}^k$ and $\varepsilon_i \sim N(0, \sigma^2 I)$, estimate $\beta$. The equivalent smoothing problem augments the state:
$$ \frac{d}{dt} \begin{bmatrix} x \\ \beta \end{bmatrix} = \begin{bmatrix} f(t, x, \beta) \\ 0 \end{bmatrix}. $$
Assume the problem has a well determined solution for $n$, the number of observations, large enough.

The problem setting. Mesh selection for integrating the ODE system is conditioned by two important considerations:
- The asymptotic analysis of the effects of noisy data on the parameter estimates shows that this effect gets small no faster than $O(n^{-1/2})$.
- It is not difficult to obtain ODE discretizations that give errors at most $O(n^{-2})$.
This suggests:
- That the trapezoidal rule provides an adequate integration method.
- That it should even be possible to integrate the ODE on a mesh coarser than that provided by the observation points $\{t_i\}$ (here we won't!).
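
For concreteness, here is a minimal sketch (Python with NumPy/SciPy; not from the slides, and the linear test problem is just an illustrative choice) of a single step of the implicit trapezoidal rule, solved with a generic root finder:

```python
# One step of the trapezoidal rule
#   x_{i+1} = x_i + (h/2) (f(t_i, x_i) + f(t_{i+1}, x_{i+1})),
# solved for x_{i+1}.
import numpy as np
from scipy.optimize import fsolve

def trapezoidal_step(f, t, x, h):
    """Advance x(t) to x(t + h) with the implicit trapezoidal rule."""
    fx = f(t, x)
    residual = lambda x_new: x_new - x - 0.5 * h * (fx + f(t + h, x_new))
    return fsolve(residual, x + h * fx)   # explicit Euler predictor as starting guess

A = np.array([[-1.0, 0.0], [0.0, -50.0]])   # a stiff linear test problem dx/dt = A x
f = lambda t, x: A @ x
x, h = np.ones(2), 0.1
for i in range(10):
    x = trapezoidal_step(f, i * h, x, h)
```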

The objective. Estimation principles (least squares, maximum likelihood) consider the objective
$$ F(x_c, \beta) = \sum_{i=1}^{n} \| y_i - H x(t_i, \beta) \|^2 . $$
Methods differ in the manner of generating the comparison function values $x(t_i, \beta)$, $i = 1, 2, \dots, n$.
Embedding: $x(t_i, \beta, b)$ satisfies the BVP
$$ \frac{dx}{dt} = f(t, x, \beta), \qquad B_0 x(0) + B_1 x(1) = b. $$
This introduces extra parameters $b$, needs a method for choosing $B_0$, $B_1$, and must solve a boundary value problem at each step.

Simultaneous: ODE discretization information is added as constraints
$$ c_i(x_c) = x_{i+1} - x_i - \frac{h}{2}\,( f_{i+1} + f_i ), \qquad i = 1, 2, \dots, n-1, $$
with $x_i = x(t_i, \beta)$. Methods typically correct the solution and parameter estimates simultaneously.
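
As an illustration (not from the slides), the trapezoidal constraints $c_i$ can be evaluated over the whole mesh in a few lines; here `xc` stacks the mesh values $x_1, \dots, x_n$ row by row:

```python
import numpy as np

def trapezoidal_constraints(f, t, xc, beta):
    """c_i(x_c) = x_{i+1} - x_i - (h_i/2) (f_{i+1} + f_i),  i = 1, ..., n-1.

    t    : mesh points, shape (n,)
    xc   : mesh values of the state, shape (n, m)
    beta : parameter vector passed through to f(t, x, beta)
    """
    h = np.diff(t)[:, None]                                    # mesh widths h_i
    fvals = np.array([f(ti, xi, beta) for ti, xi in zip(t, xc)])
    return xc[1:] - xc[:-1] - 0.5 * h * (fvals[1:] + fvals[:-1])
```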

Initial value stability (IVS). Here the problem considered is
$$ \frac{dx}{dt} = f(t, x), \qquad x(0) = b. $$
The stability requirement is that solutions with close initial conditions $x_1(0)$, $x_2(0)$ remain close in an appropriate sense:
- $\| x_1(t) - x_2(t) \| \to 0$ as $t \to \infty$: strong IVS.
- $\| x_1(t) - x_2(t) \|$ remains bounded as $t \to \infty$: weak IVS.
Computation introduces the idea of stiff discretizations, which preserve the stability characteristics of the original equation. Computations are not limited by IVS. This is important for multiple shooting: it permits reasonably accurate fundamental matrices to be computed over short enough time intervals in relatively unstable problems by taking $h$ small enough.

Constant coefficient case. Here $f(t, x) = Ax + q$. If $A$ is non-defective then weak IVS requires the eigenvalues $\lambda_i(A)$ to satisfy $\operatorname{Re}\lambda_i \le 0$, while this inequality must be strict for strong IVS. A one-step discretization of the ODE (ignoring the $q$ contribution) can be written
$$ x_{i+1} = T_h(A)\, x_i, $$
where $T_h(A)$ is the amplification matrix. Here a stiff discretization requires the stability inequalities to map into the condition $|\lambda_i(T_h)| \le 1$. For the trapezoidal rule
$$ \lambda_i(T_h) = \frac{1 + h\lambda_i(A)/2}{1 - h\lambda_i(A)/2}, \qquad |\lambda_i(T_h)| \le 1 \ \text{ if }\ \operatorname{Re}\{\lambda_i(A)\} \le 0. $$
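
A small numerical check of this property (illustration only; the matrix $A$ below is an arbitrary left-half-plane example): the trapezoidal-rule amplification matrix keeps all eigenvalue moduli at or below one for every step size $h$.

```python
# T_h(A) = (I - hA/2)^{-1} (I + hA/2): check |lambda_i(T_h)| <= 1 when Re lambda_i(A) <= 0.
import numpy as np

def amplification_matrix(A, h):
    I = np.eye(A.shape[0])
    return np.linalg.solve(I - 0.5 * h * A, I + 0.5 * h * A)

A = np.array([[0.0, 1.0], [-100.0, -1.0]])        # eigenvalues in the left half plane
for h in (0.01, 0.1, 1.0, 10.0):
    moduli = np.abs(np.linalg.eigvals(amplification_matrix(A, h)))
    print(h, moduli.max())                        # remains <= 1 for every h
```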

Boundary value stability (BVS). Here the problem is
$$ \frac{dx}{dt} = f(t, x), \qquad B(x) = B_0 x(0) + B_1 x(1) = b. $$
The behaviour of perturbations about a solution trajectory $x(t)$ is governed to first order by the linearized equation
$$ L(z) = \frac{dz}{dt} - \nabla_x f(t, x(t))\, z = 0. $$
Here (computational) stability is closely related to the existence of a modest bound for the Green's matrix
$$ G(t, s) = \begin{cases} Z(t)\,[B_0 Z(0) + B_1 Z(1)]^{-1} B_0 Z(0)\, Z^{-1}(s), & t > s, \\ -\,Z(t)\,[B_0 Z(0) + B_1 Z(1)]^{-1} B_1 Z(1)\, Z^{-1}(s), & t < s, \end{cases} $$
where $Z(t)$ is a fundamental matrix for the linearised equation. Let $\alpha$ be a bound for $\|G(t, s)\|$.
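
A hedged sketch (constant-coefficient case only, with a diagonal test matrix; not from the slides) of how the bound $\alpha$ can be estimated on a grid. It illustrates that boundary conditions pinning the growing mode at $t = 1$ give a modest bound while pure initial conditions do not:

```python
# Estimate alpha = max_{t,s} ||G(t,s)|| on a grid for dx/dt = A x with boundary
# matrices B0, B1, using Z(t) = expm(t A).
import numpy as np
from scipy.linalg import expm

def greens_bound(A, B0, B1, npts=41):
    Z = lambda t: expm(t * A)                       # fundamental matrix, Z(0) = I
    Q = np.linalg.inv(B0 @ Z(0.0) + B1 @ Z(1.0))
    ts = np.linspace(0.0, 1.0, npts)
    alpha = 0.0
    for t in ts:
        for s in ts:
            if t > s:
                G = Z(t) @ Q @ B0 @ Z(0.0) @ expm(-s * A)
            elif t < s:
                G = -Z(t) @ Q @ B1 @ Z(1.0) @ expm(-s * A)
            else:
                continue
            alpha = max(alpha, np.linalg.norm(G, 2))
    return alpha

A = np.diag([-20.0, 20.0])                          # one decaying and one growing mode
print(greens_bound(A, B0=np.diag([1.0, 0.0]), B1=np.diag([0.0, 1.0])))   # modest bound
print(greens_bound(A, B0=np.eye(2), B1=np.zeros((2, 2))))                # huge (IVP conditions)
```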

Dichotomy. Weak form: there is a projection $P$, depending on the choice of $Z$, such that, given
$$ S_1 = \{ Z P w : w \in \mathbb{R}^m \}, \qquad S_2 = \{ Z (I - P) w : w \in \mathbb{R}^m \}, $$
$$ \phi \in S_1 \;\Rightarrow\; \|\phi(t)\| \le \kappa\, \|\phi(s)\|, \ t \ge s, \qquad \phi \in S_2 \;\Rightarrow\; \|\phi(t)\| \le \kappa\, \|\phi(s)\|, \ t \le s. $$
The computational context requires modest $\kappa$ for $t, s \in [0, 1]$. If $Z$ satisfies $B_0 Z(0) + B_1 Z(1) = I$ then $P = B_0 Z(0)$ is a suitable projection, in the sense that for separated boundary conditions one can take $\kappa = \alpha$. There is a basic equivalence between stability and dichotomy. The key paper is de Hoog and Mattheij.

BVS restricts possible discretizations.
- The dichotomy projection separates increasing and decreasing solutions. Compatible BCs pin down decreasing solutions at 0, growing solutions at 1.
- The discretization needs a similar property so that the given BCs exercise the same control.
- This requires solutions of the ODE which are increasing (decreasing) in magnitude to be mapped into solutions of the discretization which are increasing (decreasing) in magnitude.
- This property is called di-stability by England and Mattheij, who show the trapezoidal rule is di-stable in the constant coefficient case:
$$ \operatorname{Re}\lambda(A) > 0 \;\Longrightarrow\; \left| \frac{1 + h\lambda(A)/2}{1 - h\lambda(A)/2} \right| > 1. $$

Bob Mattheij's example. Consider the differential system defined by
$$ A(t) = \begin{bmatrix} 1 - 19\cos 2t & 0 & 1 + 19\sin 2t \\ 0 & 19 & 0 \\ -1 + 19\sin 2t & 0 & 1 + 19\cos 2t \end{bmatrix}, \qquad q(t) = \begin{bmatrix} e^{t}\,(-1 + 19(\cos 2t - \sin 2t)) \\ -18\, e^{t} \\ e^{t}\,(1 - 19(\cos 2t + \sin 2t)) \end{bmatrix}. $$
Here the right hand side is chosen so that $z(t) = e^{t} e$ satisfies the differential equation. The fundamental matrix displays the fast and slow solutions:
$$ Z(t, 0) = \begin{bmatrix} e^{-18t}\cos t & 0 & e^{20t}\sin t \\ 0 & e^{19t} & 0 \\ -e^{-18t}\sin t & 0 & e^{20t}\cos t \end{bmatrix}. $$
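
The reconstruction of $A(t)$ and $q(t)$ above can be checked numerically; the sketch below (illustration only) verifies that $z(t) = e^{t} e$ satisfies $dz/dt = A(t)z + q(t)$ to rounding error:

```python
import numpy as np

def A(t):
    return np.array([[1 - 19 * np.cos(2 * t), 0.0, 1 + 19 * np.sin(2 * t)],
                     [0.0, 19.0, 0.0],
                     [-1 + 19 * np.sin(2 * t), 0.0, 1 + 19 * np.cos(2 * t)]])

def q(t):
    return np.exp(t) * np.array([-1 + 19 * (np.cos(2 * t) - np.sin(2 * t)),
                                 -18.0,
                                 1 - 19 * (np.cos(2 * t) + np.sin(2 * t))])

e = np.ones(3)
for t in np.linspace(0.0, 1.0, 5):
    z, dz = np.exp(t) * e, np.exp(t) * e           # z and its derivative coincide here
    print(np.max(np.abs(dz - (A(t) @ z + q(t)))))  # ~1e-15: residual of the ODE
```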

Bob Mattheij's example. For boundary data with two terminal conditions and one initial condition, the trapezoidal rule discretization scheme gives the following results (Table: boundary point values $x(0)$, $x(1)$ for $h = .1$ and $h = .01$; stable computation). These computations are apparently satisfactory.

Bob Mattheij's example. For two initial and one terminal condition the results are given in the corresponding Table (boundary point values $x(0)$, $x(1)$ for $h = .1$ and $h = .01$; unstable computation). The effects of instability are seen clearly in the first and third solution components.
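
The two computations can be reproduced in outline by assembling the trapezoidal discretization plus boundary rows as one block linear system. The exact $B_0$, $B_1$, $b$ on the slides did not survive transcription, so the choices below are representative ones consistent with the behaviour described; the sketch reuses the `A(t)` and `q(t)` helpers defined above.

```python
import numpy as np

def solve_bvp_trapezoidal(A, q, B0, B1, b, N=100):
    """Assemble and solve the trapezoidal discretization of dx/dt = A(t)x + q(t),
    B0 x(0) + B1 x(1) = b, on [0, 1] as one block linear system."""
    m = b.size
    t, h = np.linspace(0.0, 1.0, N + 1), 1.0 / N
    M = np.zeros(((N + 1) * m, (N + 1) * m))
    rhs = np.zeros((N + 1) * m)
    for i in range(N):                                   # difference-equation rows
        r = slice(i * m, (i + 1) * m)
        M[r, i * m:(i + 1) * m] = -(np.eye(m) + 0.5 * h * A(t[i]))
        M[r, (i + 1) * m:(i + 2) * m] = np.eye(m) - 0.5 * h * A(t[i + 1])
        rhs[r] = 0.5 * h * (q(t[i]) + q(t[i + 1]))
    M[N * m:, :m], M[N * m:, N * m:] = B0, B1            # boundary rows
    rhs[N * m:] = b
    return np.linalg.solve(M, rhs).reshape(N + 1, m)

z = lambda t: np.exp(t) * np.ones(3)                     # true solution

# one initial, two terminal conditions: the growing modes are pinned at t = 1
B0, B1 = np.diag([1.0, 0.0, 0.0]), np.diag([0.0, 1.0, 1.0])
x = solve_bvp_trapezoidal(A, q, B0, B1, B0 @ z(0.0) + B1 @ z(1.0))
print(abs(x[0] - z(0.0)).max(), abs(x[-1] - z(1.0)).max())    # small errors

# two initial, one terminal condition: the e^{20t} mode is pinned only at t = 0
B0, B1 = np.diag([1.0, 0.0, 1.0]), np.diag([0.0, 1.0, 0.0])
x = solve_bvp_trapezoidal(A, q, B0, B1, B0 @ z(0.0) + B1 @ z(1.0))
print(abs(x[0] - z(0.0)).max(), abs(x[-1] - z(1.0)).max())    # errors blow up
```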

Nonlinear stability (preprint, Hooker et al.). FitzHugh-Nagumo equations with $\alpha = .2$, $\beta = .2$:
$$ \frac{dV}{dt} = \gamma \left( V - \frac{V^3}{3} + R \right), \qquad \frac{dR}{dt} = -\frac{1}{\gamma}\,( V - \alpha + \beta R ). $$
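
A short forward-simulation sketch of these equations (illustration only; the slides do not fix $\gamma$, so the value below is just a common choice in the estimation literature, and the initial state is arbitrary):

```python
import numpy as np
from scipy.integrate import solve_ivp

def fhn(t, y, alpha=0.2, beta=0.2, gamma=3.0):
    """FitzHugh-Nagumo right-hand side with the parameterisation used above."""
    V, R = y
    return [gamma * (V - V**3 / 3.0 + R), -(V - alpha + beta * R) / gamma]

sol = solve_ivp(fhn, (0.0, 20.0), [-1.0, 1.0], max_step=0.05)   # limit-cycle behaviour
print(sol.y[:, -1])
```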

System factorization. The first problem is to set suitable boundary conditions. Expect good boundary conditions to lead to a relatively well conditioned linear system. Assume the ODE discretization is
$$ c_i(x_i, x_{i+1}) = c_{ii}(x_i) + c_{i(i+1)}(x_{i+1}). $$
Consider the orthogonal factorization of the difference equation (gradient) matrix with its first block column permuted to the end:
$$ \begin{bmatrix} C_{12} & & & & C_{11} \\ C_{22} & C_{23} & & & \\ & \ddots & \ddots & & \\ & & C_{(n-1)(n-1)} & C_{(n-1)n} & 0 \end{bmatrix} = Q \begin{bmatrix} U & V \\ 0 & \begin{bmatrix} H & G \end{bmatrix} \end{bmatrix}. $$
The last block row of the reduced factor involves only $x_n$ and $x_1$. This step is independent of the boundary conditions.

Optimal boundary conditions. The boundary conditions can be inserted at this point. This gives the system with matrix
$$ \begin{bmatrix} H & G \\ B_1 & B_0 \end{bmatrix} $$
to solve for $x_1$, $x_n$. Orthogonal factorization again provides a useful strategy:
$$ \begin{bmatrix} H & G \end{bmatrix} = \begin{bmatrix} L & 0 \end{bmatrix} \begin{bmatrix} S_1^T \\ S_2^T \end{bmatrix}. $$
It follows that the system determining $x_1$, $x_n$ is best conditioned by choosing
$$ \begin{bmatrix} B_1 & B_0 \end{bmatrix} = S_2^T. $$
The conditions depend only on the ODE.
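
A sketch of the two factorization stages (illustration only, on a small constant-coefficient test problem rather than the slides' example): reduce the permuted difference matrix by a QR factorization, extract $[H\;G]$ from its last block row, and take $[B_1\;B_0] = S_2^T$ from an orthogonal factorization of $[H\;G]^T$.

```python
import numpy as np

def optimal_boundary_matrices(A, n=11):
    """Optimal [B1 B0] for the trapezoidal discretization of dx/dt = A x on [0, 1]."""
    m = A.shape[0]
    h = 1.0 / (n - 1)
    Cl = -(np.eye(m) + 0.5 * h * A)          # block multiplying x_i       (= C_ii)
    Cr = np.eye(m) - 0.5 * h * A             # block multiplying x_{i+1}   (= C_i(i+1))
    # difference matrix with the first block column permuted to the end
    M = np.zeros(((n - 1) * m, n * m))
    M[:m, :m], M[:m, -m:] = Cr, Cl           # row 1: C_12 ... C_11
    for i in range(1, n - 1):
        M[i * m:(i + 1) * m, (i - 1) * m:i * m] = Cl
        M[i * m:(i + 1) * m, i * m:(i + 1) * m] = Cr
    _, R = np.linalg.qr(M, mode='complete')  # R = [U V; 0 [H G]]
    HG = R[-m:, -2 * m:]                     # last block row, columns (x_n, x_1)
    S, _ = np.linalg.qr(HG.T, mode='complete')
    return S[:, m:].T                        # [B1 B0] = S2^T

A = np.diag([-15.0, 15.0])                   # one decaying and one growing mode
B1B0 = optimal_boundary_matrices(A)
print(np.round(B1B0, 3))                     # weight falls on the growing component of
                                             # x_n (t = 1) and the decaying component of x_1 (t = 0)
```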

BCs for the Mattheij example. The optimal boundary matrices corresponding to $h = .1$ are given in the Table (optimal boundary matrices $B_1$, $B_0$ when $h = .1$). These confirm the importance of weighting the boundary data to reflect the stability requirements of a mix of fast and slow solutions. The solution does not differ from that obtained when the split into fast and slow was correctly anticipated.

Gauss-Newton details. Let $\nabla_{(\beta,b)} x = \left[ \frac{\partial x}{\partial \beta},\; \frac{\partial x}{\partial b} \right]$ and $r_i = y_i - H x(t_i, \beta, b)$. Then the gradient of $F$ is
$$ \nabla_{(\beta,b)} F = -2 \sum_{i=1}^{n} r_i^T H\, \nabla_{(\beta,b)} x_i. $$
The gradient terms with respect to $\beta$ are found by solving the BVPs
$$ \frac{d}{dt}\frac{\partial x}{\partial \beta} = \nabla_x f\, \frac{\partial x}{\partial \beta} + \nabla_\beta f, \qquad B_0 \frac{\partial x}{\partial \beta}(0) + B_1 \frac{\partial x}{\partial \beta}(1) = 0, $$
while the gradient terms with respect to $b$ satisfy the BVPs
$$ \frac{d}{dt}\frac{\partial x}{\partial b} = \nabla_x f\, \frac{\partial x}{\partial b}, \qquad B_0 \frac{\partial x}{\partial b}(0) + B_1 \frac{\partial x}{\partial b}(1) = I. $$
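
The outer iteration itself is a standard Gauss-Newton loop. A generic sketch (illustration only; the residual and Jacobian callables stand in for the BVP and sensitivity solves described above, and the stopping test is a simple gradient criterion rather than the slides' exact rule):

```python
import numpy as np

def gauss_newton(residual, jacobian, theta0, tol=1e-8, max_iter=50):
    """Minimise ||residual(theta)||^2 by Gauss-Newton; theta stacks (beta, b)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_iter):
        r, J = residual(theta), jacobian(theta)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)   # solve min ||J d + r||
        theta = theta + step
        if np.linalg.norm(J.T @ r) < tol:               # gradient-based stopping test
            return theta
    return theta
```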

Embedding: again the Mattheij example. Consider the modification of the Mattheij problem with parameters $\beta_1 = \gamma$ and $\beta_2 = 2$, corresponding to the solution $x(t, \beta^*) = e^{t} e$:
$$ A(t) = \begin{bmatrix} 1 - \beta_1\cos\beta_2 t & 0 & 1 + \beta_1\sin\beta_2 t \\ 0 & \beta_1 & 0 \\ -1 + \beta_1\sin\beta_2 t & 0 & 1 + \beta_1\cos\beta_2 t \end{bmatrix}, \qquad q(t) = \begin{bmatrix} e^{t}\,(-1 + \gamma(\cos 2t - \sin 2t)) \\ (1 - \gamma)\, e^{t} \\ e^{t}\,(1 - \gamma(\cos 2t + \sin 2t)) \end{bmatrix}. $$
In the numerical experiments optimal boundary conditions are set at the first iteration. The aim is to recover estimates of $\beta$, $b$ from simulated data $e^{t_i} H e + \varepsilon_i$, $\varepsilon_i \sim N(0, .01 I)$, using Gauss-Newton and stopping once $\nabla F\, h$ is small enough.

Embedding: again the Mattheij example. Iteration counts for the two choices of observation matrix:

H = [1/3 1/3 1/3]                          H = [ ]
n = 51,  γ = 10, σ = .1 : 14 iterations     n = 51,  γ = 10, σ = .1 : 5 iterations
n = 51,  γ = 20, σ = .1 : 11 iterations     n = 51,  γ = 20, σ = .1 : 9 iterations
n = 251, γ = 10, σ = .1 :  9 iterations     n = 251, γ = 10, σ = .1 : 4 iterations
n = 251, γ = 20, σ = .1 :  8 iterations     n = 251, γ = 20, σ = .1 : 5 iterations

Here $\left\| [B_1\; B_2]_1 [B_1\; B_2]_k^T - I \right\|_F < 10^{-3}$ for $k > 1$.

Lagrangian. Associated with the equality constrained problem is the Lagrangian
$$ L = F(x_c) + \sum_{i=1}^{n-1} \lambda_i^T c_i . $$
The necessary conditions give
$$ \nabla_{x_i} L = 0, \quad i = 1, 2, \dots, n, \qquad c(x_c) = 0. $$
The Newton equations determining the corrections $dx_c$, $d\lambda_c$ are
$$ \nabla^2_{xx} L\, dx_c + \nabla^2_{x\lambda} L\, d\lambda_c = -\nabla_x L^T, \qquad \nabla_x c(x_c)\, dx_c = C\, dx_c = -c(x_c). $$
Note the sparsity: $\nabla^2_{xx} L$ is block diagonal, and $\nabla^2_{x\lambda} L = C^T$ is block bidiagonal.
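
The sparsity can be exploited directly in the assembly; a sketch (illustration only, with dummy blocks) of building the KKT matrix from a block-diagonal Hessian and a block-bidiagonal constraint Jacobian with SciPy sparse tools:

```python
import numpy as np
import scipy.sparse as sp

def kkt_matrix(hess_blocks, Cii_blocks, Cnext_blocks):
    """hess_blocks: n blocks of nabla^2_xx L; Cii/Cnext: the n-1 Jacobian blocks of c_i
    with respect to x_i and x_{i+1}."""
    H = sp.block_diag(hess_blocks, format='csr')
    n, m = len(hess_blocks), hess_blocks[0].shape[0]
    C = sp.lil_matrix(((n - 1) * m, n * m))
    for i, (Ci, Cn) in enumerate(zip(Cii_blocks, Cnext_blocks)):
        C[i * m:(i + 1) * m, i * m:(i + 1) * m] = Ci
        C[i * m:(i + 1) * m, (i + 1) * m:(i + 2) * m] = Cn
    C = C.tocsr()
    return sp.bmat([[H, C.T], [C, None]], format='csr')

n, m = 6, 2
K = kkt_matrix([np.eye(m)] * n, [-np.eye(m)] * (n - 1), [np.eye(m)] * (n - 1))
print(K.nnz, K.shape)     # O(n) nonzeros rather than O(n^2)
```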

SQP formulation. The Newton equations also correspond to the necessary conditions for the QP
$$ \min_{dx_c} \; \nabla_x F\, dx_c + \tfrac{1}{2}\, dx_c^T M\, dx_c \quad \text{subject to} \quad c + C\, dx_c = 0, $$
in the case $M = \nabla^2_{xx} L$, $\lambda_u = \lambda_c + d\lambda_c$. A standard approach is to use the constraint equations to eliminate variables:
$$ dx_i = v_i + V_i\, dx_1 + W_i\, dx_n, \qquad i = 2, 3, \dots, n-1. $$
The reduced constraint equation is $G\, dx_1 + H\, dx_n = w$. Is this variable elimination restricted by BVS considerations?

Null space method. This is the standard SQP approach. Let
$$ C^T = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix} \begin{bmatrix} U \\ 0 \end{bmatrix}, \qquad Q = \begin{bmatrix} Q_1 & Q_2 \end{bmatrix}. $$
The Newton equations can then be written
$$ \begin{bmatrix} Q^T \nabla^2_{xx} L\, Q & \begin{bmatrix} U \\ 0 \end{bmatrix} \\ \begin{bmatrix} U^T & 0 \end{bmatrix} & 0 \end{bmatrix} \begin{bmatrix} Q^T dx_c \\ \lambda_u \end{bmatrix} = -\begin{bmatrix} Q^T \nabla_x F^T \\ c \end{bmatrix}. $$
These can be solved in sequence:
$$ U^T Q_1^T dx_c = -c, $$
$$ Q_2^T \nabla^2_{xx} L\, Q_2\, Q_2^T dx_c = -\,Q_2^T \nabla^2_{xx} L\, Q_1 Q_1^T dx_c - Q_2^T \nabla_x F^T, $$
$$ U \lambda_u = -\,Q_1^T \nabla^2_{xx} L\, dx_c - Q_1^T \nabla_x F^T. $$
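
A dense sketch of this three-stage solve (illustration only; a sparsity preserving version would exploit the block structure of $C$ noted earlier, and the random test data is arbitrary):

```python
import numpy as np

def null_space_step(hessL, C, gradF, c):
    """Solve [hessL C^T; C 0] [dx; lam] = [-gradF; -c] via the null-space method."""
    p, N = C.shape                              # p constraints, N variables
    Q, R = np.linalg.qr(C.T, mode='complete')   # C^T = [Q1 Q2] [U; 0]
    Q1, Q2, U = Q[:, :p], Q[:, p:], R[:p, :]
    y1 = np.linalg.solve(U.T, -c)               # range-space component Q1^T dx
    y2 = np.linalg.solve(Q2.T @ hessL @ Q2,
                         -Q2.T @ (hessL @ (Q1 @ y1)) - Q2.T @ gradF)
    dx = Q1 @ y1 + Q2 @ y2
    lam = np.linalg.solve(U, -Q1.T @ (hessL @ dx) - Q1.T @ gradF)
    return dx, lam

# consistency check against a direct solve of the KKT system
rng = np.random.default_rng(0)
N, p = 8, 3
Hl = np.eye(N) + 0.1 * rng.standard_normal((N, N)); Hl = 0.5 * (Hl + Hl.T)
C, gradF, c = rng.standard_normal((p, N)), rng.standard_normal(N), rng.standard_normal(p)
dx, lam = null_space_step(Hl, C, gradF, c)
K = np.block([[Hl, C.T], [C, np.zeros((p, p))]])
print(np.allclose(K @ np.concatenate([dx, lam]), -np.concatenate([gradF, c])))
```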

Stability test using the Mattheij problem. $Q_1^T dx_c$ estimates $Q_1^T \operatorname{vec}\{ e^{t_i} e \}$ when $x_c = 0$. (Figure: test results for $n = 11$, comparing the particular integral with $Q_1^T x$.)

Conclusion.
- Embedding makes use of carefully constructed, explicit boundary conditions, so the BVS restrictions must apply. The system is special.
- The variable elimination form of the simultaneous method partitions the variables into the sets $\{x_1, x_n\}$ and $\{x_2, \dots, x_{n-1}\}$, which are found sequentially. It relies implicitly on a form of BVS, although the system is special.
- The null space variant partitions the variables into the sets $\{Q_1^T x_c\}$, $\{Q_2^T x_c\}$. It appears at least as stable as the variable elimination procedure. A sparsity preserving implementation is straightforward.
