Derivative Based vs. Derivative Free Optimization Methods for Nonlinear Optimum Experimental Design
Stefan Körkel¹, Huiqin Qu², Gerd Rücker³, and Sebastian Sager¹

¹ Interdisciplinary Center for Scientific Computing, University of Heidelberg, Im Neuenheimer Feld 368, Heidelberg, Germany
² Intelligent Information Processing Laboratory, Fudan University, Shanghai
³ Deutsche Börse AG, Frankfurt am Main

1 Introduction

An important task in the validation of dynamic process models is nonlinear optimum experimental design. It aims at computing experimental layouts, setups and controls in order to optimize the statistical reliability of parameter estimates obtained from the resulting experimental data. The models we consider usually arise from applications in chemistry or chemical engineering and consist of nonlinear systems of differential equations, e.g. ordinary differential equations (ODE) or differential algebraic equations (DAE). In this paper we sketch our numerical approach (implemented in our software package VPLAN), which is based on sequential quadratic programming with a tailored derivative computation, and compare it to the easy-to-implement but much less powerful derivative-free approach.

2 Problem Statement

2.1 Model Validation

In many practical and industrial applications dynamic processes play an important role. To simulate, understand, control and optimize these processes, they are described by dynamic mathematical models, usually systems of ordinary or partial differential equations. In this paper we concentrate on differential algebraic equations (DAE):

    ẏ = f(t, y, z, p, q)
    0 = g(t, y, z, p, q)
where t ∈ [t₀; t_end] is the time and x = (y, z) : [t₀; t_end] → ℝ^{n_x} are the system states. The values of the quantities p ∈ ℝ^{n_p}, the parameters, are known only roughly. The controls q describe the layout, the setup and the processing of experiments; we distinguish between time-independent control variables q₁ ∈ ℝ^{n_{q₁}} and time-dependent control functions q₂ : [t₀; t_end] → ℝ^{n_{q₂}}, q = (q₁, q₂).

To estimate the parameters p from experimental data, we minimize the weighted sum of the squared residuals between measurement values η_i and model responses h_i(t_i, x(t_i), p, q), i = 1, ..., M:

    min_{p,x}  Σ_{i=1}^{M} w_i (η_i − h_i(t_i, x(t_i), p, q))² / σ_i²
    s.t.  ẏ = f(t, y, z, p, q)
          0 = g(t, y, z, p, q)
          0 = d(x(t₀), ..., x(t_f), p, q)        (1)

The quantities σ_i are the variances of the measurement errors. The weight w_i ∈ [0; 1] specifies whether measurement i is actually carried out. In parameter estimation we can assume that all weights are 1; later, in experimental design, we will use the w_i as variables to choose the placement of the measurements. For the numerical solution of this kind of problem we use the boundary value problem optimization approach suggested by Bock [Boc87].

2.2 Optimum Experimental Design

The parameter estimation problem (1) depends on the randomly distributed experimental data; hence the solution p̂ is itself a random variable. Its uncertainty can be described by the variance-covariance matrix [Boc87]

    C = \begin{pmatrix} I & 0 \end{pmatrix}
        \begin{pmatrix} J_1^T J_1 & J_2^T \\ J_2 & 0 \end{pmatrix}^{-1}
        \begin{pmatrix} J_1^T J_1 & 0 \\ 0 & 0 \end{pmatrix}
        \begin{pmatrix} J_1^T J_1 & J_2^T \\ J_2 & 0 \end{pmatrix}^{-T}
        \begin{pmatrix} I \\ 0 \end{pmatrix}

where J₁ resp. J₂ are the derivatives w.r.t. the parameters of the least squares terms resp. the constraints of problem (1), evaluated at the solution point. Our aim is to compute an experimental design, i.e. controls q for layout, setup and processing and weights w for measurement selection, which by parameter estimation from the resulting experimental data yields estimates with minimal statistical uncertainty.
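For intuition, the block formula for C can be evaluated directly with a few lines of linear algebra. The following sketch uses small random matrices as stand-ins for the true Jacobians J₁ (least squares terms) and J₂ (constraints); the dimensions are illustrative only. In the unconstrained case the formula collapses to the familiar (J₁ᵀJ₁)⁻¹:

```python
import numpy as np

# Hedged sketch of the variance-covariance formula above.  J1, J2 are
# illustrative random Jacobians; in practice they come from derivatives
# of the least squares terms resp. constraints w.r.t. the parameters.
rng = np.random.default_rng(0)
n_p, M, n_c = 3, 8, 1                      # parameters, measurements, constraints
J1 = rng.standard_normal((M, n_p))
J2 = rng.standard_normal((n_c, n_p))

K = np.block([[J1.T @ J1, J2.T],
              [J2, np.zeros((n_c, n_c))]])
middle = np.zeros_like(K)
middle[:n_p, :n_p] = J1.T @ J1             # block diag(J1^T J1, 0)
I0 = np.hstack([np.eye(n_p), np.zeros((n_p, n_c))])

Kinv = np.linalg.inv(K)
C = I0 @ Kinv @ middle @ Kinv.T @ I0.T     # variance-covariance matrix

# Without constraints the same formula collapses to (J1^T J1)^{-1}.
C_unconstrained = np.linalg.inv(J1.T @ J1)
print(np.allclose(C, C.T))                 # True: C is symmetric by construction
```

Since the middle block is positive semidefinite, C is symmetric positive semidefinite, which is what any design criterion defined on it relies on.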
For this purpose we minimize functions of the variance-covariance matrix, e.g.

    φ(C) = trace(C),  det(C),  max{λ : λ is an eigenvalue of C},  or  max_i C_ii.

Let ξ := (q, w) denote the experimental design variables. Then we can formulate the experimental design optimization problem
    min_{q,w,x}  φ(C)

where C is the variance-covariance matrix at the solution point of the problem

    min_{p,x}  Σ_{i=1}^{M} w_i (η_i − h_i(t_i, x(t_i), p, q))² / σ_i²
    s.t.  ẏ = f(t, y, z, p, q)
          0 = g(t, y, z, p, q)
          0 = d(x(t₀), ..., x(t_f), p, q)

subject to the constraints

    ẏ = f(t, y, z, p, q)
    0 = g(t, y, z, p, q)
    lo ≤ ψ(t, x, p, q, w) ≤ up
    0 = χ(t, x, p, q, w)
    w ∈ {0, 1}^M.        (2)

Remark 1. The objective function of problem (2) is defined on the Jacobian J = (J₁; J₂) of the parameter estimation problem (1), which depends on derivatives of the solution of the dynamic system w.r.t. the parameters p, see [Kör02].

The experimental design problem (2) is a nonlinear, inequality-constrained optimal control problem. The objective function is implicitly defined via derivatives of the solution of the dynamic model equations and is not separable. For the numerical solution we apply the direct approach of optimal control, which consists of parameterization of the control functions q₂, discretization of the state constraints, relaxation of the 0-1 variables w, and a finite-dimensional parameterization of the solution of the dynamic system. For details we refer to [Kör02]. We obtain a finite-dimensional nonlinear constrained optimization problem:

    min_ξ  φ(ξ)   s.t.   0 = χ(ξ),   0 ≤ ψ(ξ).        (3)

3 Derivative Based Optimization

To solve the experimental design optimization problem (3) we choose the Newton-type method of sequential quadratic programming (SQP); for details on this method we refer e.g. to the textbook [NW99]. We employ the SQP implementation SNOPT [GMS02]. For the optimization, first derivatives of objective and constraints with respect to the optimization variables ξ are required. We consider directional derivatives for directions δξ and apply the chain rule
    δφ := (dφ/dξ) δξ = lim_{h→0} [φ(ξ + h δξ) − φ(ξ)] / h = (dφ/dC) δC,
    δC := (dC/dJ) δJ,    δJ := (dJ/dξ) δξ.        (4)

The steps in (4) require intricate derivative computations. (dφ/dC) δC and (dC/dJ) δJ involve the differentiation of functions on matrices w.r.t. matrices, see [Kör02]. (dJ/dξ) δξ is the derivative w.r.t. ξ of the derivative w.r.t. p of the parameter estimation problem, e.g.

    (dJ₁/dq) δq = diag(√w_i / σ_i) ( ∂²h_i/∂x∂q (∂x_i/∂p) δq
                  + ∂²h_i/∂x² (∂x_i/∂p)(∂x_i/∂q) δq
                  + (∂h_i/∂x) ∂²x_i/∂p∂q δq
                  + ∂²h_i/∂p∂x (∂x_i/∂q) δq
                  + ∂²h_i/∂p∂q δq )_{i=1,...,M}        (5)

where x_i := x(t_i, p, q). Note that for the computation of (5) we not only need the solution x of the DAE, but also its first and mixed second derivatives:

    ∂x/∂p (t_i, p, q),   ∂x/∂q (t_i, p, q),   ∂²x/∂p∂q (t_i, p, q).

We apply the backward differentiation formulae (BDF), a multistep integration method implemented in the code DAESOL [BBKS99], to solve the DAE systems. The derivatives of x are solutions of variational DAEs (VDAE). BDF schemes for these VDAEs can also be considered as derivatives of the BDF scheme for the DAE if the same stepsize and order control is used. Hence we can compute the exact derivatives of the numerical approximation of x. Moreover, all these BDF schemes have the same structure, so the matrix decompositions for the solution of the implicit problems can be applied simultaneously to all required first and second derivatives. For details on this approach of internal numerical differentiation see [BBKS99] or [Kör02]. The various VDAEs for the first and second derivatives contain first and second derivatives of the model functions f and g of the right-hand side of the DAE. Further, first and second derivatives of the measurement model response functions h_i and of the nonlinear constraints are required. Usually, these functions are given as user-defined subroutines.
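The principle behind the variational equations can be illustrated on a toy scalar ODE. This is only a hedged sketch: a fixed-step Runge-Kutta integrator stands in for the adaptive BDF code DAESOL, and the example is an ODE rather than a DAE. Differentiating ẏ = −p y w.r.t. p gives the variational equation ṡ = −y − p s for the sensitivity s = ∂y/∂p, which is integrated with the same scheme and stepsize as the nominal state:

```python
import numpy as np

# Toy scalar ODE y' = -p*y.  Differentiating the ODE w.r.t. p yields the
# variational equation s' = -y - p*s for the sensitivity s = dy/dp.
# Both are integrated with the SAME scheme and stepsize, mirroring the
# principle of internal numerical differentiation.
def rhs(ys, p):
    y, s = ys
    return np.array([-p * y, -y - p * s])

def rk4(ys, p, h, n):
    # Classical fixed-step Runge-Kutta 4 applied to the augmented system.
    for _ in range(n):
        k1 = rhs(ys, p)
        k2 = rhs(ys + h / 2 * k1, p)
        k3 = rhs(ys + h / 2 * k2, p)
        k4 = rhs(ys + h * k3, p)
        ys = ys + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return ys

p, y0, T, n = 0.7, 2.0, 1.0, 100
y_T, s_T = rk4(np.array([y0, 0.0]), p, T / n, n)

# Analytic check: y(T) = y0*exp(-p*T), hence dy/dp(T) = -T*y0*exp(-p*T).
print(abs(y_T - y0 * np.exp(-p * T)))      # small integration error, O(h^4)
print(abs(s_T + T * y0 * np.exp(-p * T)))  # small integration error, O(h^4)
```

Because the sensitivity is obtained by differentiating the integration scheme itself, it is the exact derivative of the numerical approximation, not a finite-difference estimate.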
In our software VPLAN we apply techniques of automatic differentiation, based on the package ADIFOR [BCKM94], to compute the needed derivatives automatically.

4 Derivative Free Optimization

We use the derivative-free multidirectional search method developed by Torczon [Tor89], which is based on the iterative change of a simplex with k + 1
points (where k is the number of arguments of the function) so that the procedure converges to a minimum point. Compared with the widely used simplex method [NM65, PTVF92], this method searches k distinct directions in parallel, i.e. only the best point is kept in each iteration, which makes it converge faster and more reliably than the simplex method when the function has many arguments. The multidirectional search method uses only values of the objective function and can only treat simple bound constraints.

5 Numerical Results

5.1 The Test Problem

We compare the two optimization approaches on an experimental design optimization problem for the Diels-Alder reaction [MB83], a chemical reaction with a catalytic and a non-catalytic reaction channel. The aim is to determine the reaction velocities of both reaction channels. The model of this process can be formulated as an ordinary differential equation system whose state variables model the molar numbers of the species. The model contains 5 parameters: the steric factors and activation energies of the reaction velocities and the catalyst deactivation rate. Experimental design variables are the initial molar numbers, the concentration of the catalyst, the temperature, and the weights for the placement of 10 measurements. We want to plan two experiments for the most significant estimation of the model parameters. This leads to an optimization problem with simple bound constraints on the experimental design variables as the only constraints; thus it can also be treated with the derivative-free optimization method. For each experiment we have 17 degrees of freedom, altogether 34 optimization variables for the two experiments. We start both optimization procedures from the same initial experimental layout and measure progress by the objective value of the A-criterion (trace(C)).
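The mechanics of multidirectional search from Section 4 can be sketched in a few lines. This is an illustrative, unconstrained toy implementation with a fixed iteration budget, not the bound-constrained variant used in our comparison; it shows the simultaneous reflection, expansion and contraction of all non-best vertices:

```python
import numpy as np

def multidirectional_search(f, x0, step=1.0, iters=200):
    """Minimal sketch of Torczon's multidirectional search (unconstrained).
    A simplex of k+1 points is maintained; reflection, expansion and
    contraction act on all k non-best vertices simultaneously, and only
    steps that improve the best point are accepted."""
    k = len(x0)
    V = np.vstack([x0] + [x0 + step * e for e in np.eye(k)])
    F = np.array([f(v) for v in V])
    for _ in range(iters):
        best = np.argmin(F)                    # move best vertex to index 0
        V[[0, best]] = V[[best, 0]]
        F[[0, best]] = F[[best, 0]]
        R = 2 * V[0] - V[1:]                   # reflect the k other vertices
        FR = np.array([f(r) for r in R])
        if FR.min() < F[0]:
            E = 3 * V[0] - 2 * V[1:]           # try an expansion step
            FE = np.array([f(e) for e in E])
            if FE.min() < FR.min():
                V[1:], F[1:] = E, FE
            else:
                V[1:], F[1:] = R, FR
        else:
            Cn = (V[0] + V[1:]) / 2            # contract toward the best vertex
            V[1:], F[1:] = Cn, np.array([f(c) for c in Cn])
    best = np.argmin(F)
    return V[best], F[best]

# Toy usage on a quadratic with minimizer (1, 1, 1):
x, fx = multidirectional_search(lambda v: np.sum((v - 1.0) ** 2), np.zeros(3))
print(fx)  # close to 0
```

Note that every operation rescales the whole simplex, so its shape never degenerates; this is one reason the method is more robust than the Nelder-Mead simplex in higher dimensions.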
5.2 Results of the Derivative Based Optimization

The computation with VPLAN using the SNOPT SQP optimizer needs 167 SQP iterations, requiring 168 function calls and 384 derivative evaluations of the objective function, to achieve convergence to the optimal solution. The computation runs in 8.1 seconds of user CPU time on a Pentium 4 at 2.5 GHz under Linux.

5.3 Comparison to Derivative Free Optimization

The multidirectional search method also terminates with a feasible design; to achieve its final A-criterion value, substantially more function evaluations are necessary, at a user CPU time of 5 minutes on the same computer as above.
The objective value of the derivative-based result is slightly better than the objective value of the derivative-free result. The computational time of the derivative-based method is 8 seconds, compared to 5 minutes for the derivative-free method.

6 Conclusion

The task of experimental design for dynamic processes yields complicated nonlinear optimization problems. Newton-type optimization methods such as sequential quadratic programming require derivatives of the objective function, which are especially complicated for experimental design; intricate computations are needed to provide these derivatives efficiently. Using this derivative-based approach we have developed the software package VPLAN, which can solve generally formulated problems of this class. It is tempting to use easy-to-implement derivative-free optimization methods instead. In this paper we have shown that, besides the drawback that such methods are restricted to problems with only simple bound constraints, they require tremendously more computational time to achieve comparable results for nonlinear experimental design problems.

References

[BBKS99] I. Bauer, H. G. Bock, S. Körkel, and J. P. Schlöder. Numerical methods for initial value problems and derivative generation for DAE models with application to optimum experimental design of chemical processes. In F. Keil, W. Mackens, H. Voss, and J. Werther, editors, Scientific Computing in Chemical Engineering II, volume 2, Springer-Verlag, Berlin, Heidelberg, 1999.

[BCKM94] C. Bischof, A. Carle, P. Khademi, and A. Mauer. The ADIFOR 2.0 system for the automatic differentiation of Fortran 77 programs. Technical Report CRPC-TR94491, Center for Research on Parallel Computation, Rice University, Houston, TX, 1994.

[Boc87] H. G. Bock. Randwertproblemmethoden zur Parameteridentifizierung in Systemen nichtlinearer Differentialgleichungen.
Bonner Mathematische Schriften 183, 1987.

[GMS02] P. E. Gill, W. Murray, and M. A. Saunders. SNOPT: An SQP algorithm for large-scale constrained optimization. SIAM J. Opt., 12, 2002.

[Kör02] S. Körkel. Numerische Methoden für Optimale Versuchsplanungsprobleme bei nichtlinearen DAE-Modellen. PhD thesis, Universität Heidelberg, 2002.

[MB83] R. T. Morrison and R. N. Boyd. Organic Chemistry. Allyn and Bacon, Inc., 4th edition, 1983.
[NM65] J. A. Nelder and R. Mead. A simplex method for function minimization. Comput. J., 7:308-313, 1965.

[NW99] J. Nocedal and S. J. Wright. Numerical Optimization. Springer Series in Operations Research. Springer-Verlag, New York, 1999.

[PTVF92] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C: The Art of Scientific Computing. Cambridge University Press, 1992.

[Tor89] V. J. Torczon. Multi-Directional Search: A Direct Search Algorithm for Parallel Machines. PhD thesis, Rice University, Houston, TX, 1989.
More informationEvaluation, transformation, and parameterization of epipolar conics
Evaluation, transformation, and parameterization of epipolar conics Tomáš Svoboda svoboda@cmp.felk.cvut.cz N - CTU CMP 2000 11 July 31, 2000 Available at ftp://cmp.felk.cvut.cz/pub/cmp/articles/svoboda/svoboda-tr-2000-11.pdf
More informationMachine Learning Applied to 3-D Reservoir Simulation
Machine Learning Applied to 3-D Reservoir Simulation Marco A. Cardoso 1 Introduction The optimization of subsurface flow processes is important for many applications including oil field operations and
More informationCSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization
CSCI 1951-G Optimization Methods in Finance Part 10: Conic Optimization April 6, 2018 1 / 34 This material is covered in the textbook, Chapters 9 and 10. Some of the materials are taken from it. Some of
More informationAlgorithms for nonlinear programming problems II
Algorithms for nonlinear programming problems II Martin Branda Charles University Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects of Optimization
More informationA Gauss Lobatto quadrature method for solving optimal control problems
ANZIAM J. 47 (EMAC2005) pp.c101 C115, 2006 C101 A Gauss Lobatto quadrature method for solving optimal control problems P. Williams (Received 29 August 2005; revised 13 July 2006) Abstract This paper proposes
More informationA New Block Method and Their Application to Numerical Solution of Ordinary Differential Equations
A New Block Method and Their Application to Numerical Solution of Ordinary Differential Equations Rei-Wei Song and Ming-Gong Lee* d09440@chu.edu.tw, mglee@chu.edu.tw * Department of Applied Mathematics/
More informationVariational assimilation Practical considerations. Amos S. Lawless
Variational assimilation Practical considerations Amos S. Lawless a.s.lawless@reading.ac.uk 4D-Var problem ] [ ] [ 2 2 min i i i n i i T i i i b T b h h J y R y B,,, n i i i i f subject to Minimization
More informationOptimization Tutorial 1. Basic Gradient Descent
E0 270 Machine Learning Jan 16, 2015 Optimization Tutorial 1 Basic Gradient Descent Lecture by Harikrishna Narasimhan Note: This tutorial shall assume background in elementary calculus and linear algebra.
More informationj=1 r 1 x 1 x n. r m r j (x) r j r j (x) r j (x). r j x k
Maria Cameron Nonlinear Least Squares Problem The nonlinear least squares problem arises when one needs to find optimal set of parameters for a nonlinear model given a large set of data The variables x,,
More informationAn Introduction to Optimal Control of Partial Differential Equations with Real-life Applications
An Introduction to Optimal Control of Partial Differential Equations with Real-life Applications Hans Josef Pesch Chair of Mathematics in Engineering Sciences University of Bayreuth, Bayreuth, Germany
More informationCOMPUTATION OF THE EMPIRICAL LIKELIHOOD RATIO FROM CENSORED DATA. Kun Chen and Mai Zhou 1 Bayer Pharmaceuticals and University of Kentucky
COMPUTATION OF THE EMPIRICAL LIKELIHOOD RATIO FROM CENSORED DATA Kun Chen and Mai Zhou 1 Bayer Pharmaceuticals and University of Kentucky Summary The empirical likelihood ratio method is a general nonparametric
More informationA semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint
Iranian Journal of Operations Research Vol. 2, No. 2, 20, pp. 29-34 A semidefinite relaxation scheme for quadratically constrained quadratic problems with an additional linear constraint M. Salahi Semidefinite
More informationOptimization and Root Finding. Kurt Hornik
Optimization and Root Finding Kurt Hornik Basics Root finding and unconstrained smooth optimization are closely related: Solving ƒ () = 0 can be accomplished via minimizing ƒ () 2 Slide 2 Basics Root finding
More informationOn Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming
On Generalized Primal-Dual Interior-Point Methods with Non-uniform Complementarity Perturbations for Quadratic Programming Altuğ Bitlislioğlu and Colin N. Jones Abstract This technical note discusses convergence
More informationNumerical Recipes. in Fortran 77. The Art of Scientific Computing Second Edition. Volume 1 of Fortran Numerical Recipes. William H.
Numerical Recipes in Fortran 77 The Art of Scientific Computing Second Edition Volume 1 of Fortran Numerical Recipes William H. Press Harvard-Smithsonian Center for Astrophysics Saul A. Teukolsky Department
More informationImproving the Verification and Validation Process
Improving the Verification and Validation Process Mike Fagan Rice University Dave Higdon Los Alamos National Laboratory Notes to Audience I will use the much shorter VnV abbreviation, rather than repeat
More informationMaria Cameron. f(x) = 1 n
Maria Cameron 1. Local algorithms for solving nonlinear equations Here we discuss local methods for nonlinear equations r(x) =. These methods are Newton, inexact Newton and quasi-newton. We will show that
More informationAn Implicit Runge Kutta Solver adapted to Flexible Multibody System Simulation
An Implicit Runge Kutta Solver adapted to Flexible Multibody System Simulation Johannes Gerstmayr 7. WORKSHOP ÜBER DESKRIPTORSYSTEME 15. - 18. March 2005, Liborianum, Paderborn, Germany Austrian Academy
More informationDISCRETE MECHANICS AND OPTIMAL CONTROL
DISCRETE MECHANICS AND OPTIMAL CONTROL Oliver Junge, Jerrold E. Marsden, Sina Ober-Blöbaum Institute for Mathematics, University of Paderborn, Warburger Str. 1, 3398 Paderborn, Germany Control and Dynamical
More informationTwo-Stage Stochastic and Deterministic Optimization
Two-Stage Stochastic and Deterministic Optimization Tim Rzesnitzek, Dr. Heiner Müllerschön, Dr. Frank C. Günther, Michal Wozniak Abstract The purpose of this paper is to explore some interesting aspects
More informationAM 205: lecture 19. Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods
AM 205: lecture 19 Last time: Conditions for optimality Today: Newton s method for optimization, survey of optimization methods Optimality Conditions: Equality Constrained Case As another example of equality
More informationAlgorithms for nonlinear programming problems II
Algorithms for nonlinear programming problems II Martin Branda Charles University in Prague Faculty of Mathematics and Physics Department of Probability and Mathematical Statistics Computational Aspects
More informationThe use of second-order information in structural topology optimization. Susana Rojas Labanda, PhD student Mathias Stolpe, Senior researcher
The use of second-order information in structural topology optimization Susana Rojas Labanda, PhD student Mathias Stolpe, Senior researcher What is Topology Optimization? Optimize the design of a structure
More informationA Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices
A Fast Augmented Lagrangian Algorithm for Learning Low-Rank Matrices Ryota Tomioka 1, Taiji Suzuki 1, Masashi Sugiyama 2, Hisashi Kashima 1 1 The University of Tokyo 2 Tokyo Institute of Technology 2010-06-22
More informationIntroduction to Numerical Analysis
J. Stoer R. Bulirsch Introduction to Numerical Analysis Translated by R. Bartels, W. Gautschi, and C. Witzgall Springer Science+Business Media, LLC J. Stoer R. Bulirsch Institut fiir Angewandte Mathematik
More informationFast Linear Iterations for Distributed Averaging 1
Fast Linear Iterations for Distributed Averaging 1 Lin Xiao Stephen Boyd Information Systems Laboratory, Stanford University Stanford, CA 943-91 lxiao@stanford.edu, boyd@stanford.edu Abstract We consider
More informationOn construction of constrained optimum designs
On construction of constrained optimum designs Institute of Control and Computation Engineering University of Zielona Góra, Poland DEMA2008, Cambridge, 15 August 2008 Numerical algorithms to construct
More informationNumerical Data Fitting in Dynamical Systems
Numerical Data Fitting in Dynamical Systems A Practical Introduction with Applications and Software by Klaus Schittkowski Department of Mathematics, University of Bayreuth, Bayreuth, Germany * * KLUWER
More informationApplication demonstration. BifTools. Maple Package for Bifurcation Analysis in Dynamical Systems
Application demonstration BifTools Maple Package for Bifurcation Analysis in Dynamical Systems Introduction Milen Borisov, Neli Dimitrova Department of Biomathematics Institute of Mathematics and Informatics
More information