Numerical Optimization of Partial Differential Equations
1 Numerical Optimization of Partial Differential Equations. Part I: basic optimization concepts in $\mathbb{R}^n$. Bartosz Protas, Department of Mathematics & Statistics, McMaster University, Hamilton, Ontario, Canada. URL: http://www.math.mcmaster.ca/bprotas. Rencontres Normandes sur les aspects théoriques et numériques des EDP, 5-9 November 2018, Rouen.
2 Outline:
- Unconstrained Optimization Problems
- Optimality Conditions
- Gradient Flows
- Lagrange Multipliers
- Projected Gradients
3 A good reference for standard approaches: J. Nocedal & S. J. Wright, Numerical Optimization, Springer (1999).
4 Suppose $f : \mathbb{R}^N \to \mathbb{R}$, $N \geq 1$, is a twice continuously differentiable objective function.
Unconstrained optimization problem: $\min_{x \in \mathbb{R}^N} f(x)$ (for maximization problems, we can consider $\min[-f(x)]$).
A point $\bar{x}$ is a global minimizer if $f(\bar{x}) \leq f(x)$ for all $x$.
A point $\bar{x}$ is a local minimizer if there exists a neighborhood $\mathcal{N}$ of $\bar{x}$ such that $f(\bar{x}) \leq f(x)$ for all $x \in \mathcal{N}$.
A local minimizer is strict (or strong) if it is unique in $\mathcal{N}$.
5 Gradient of the objective function: $\nabla f(x) := \left[\frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_N}\right]^T$.
Hessian of the objective function: $\left[\nabla^2 f(x)\right]_{i,j} := \frac{\partial^2 f}{\partial x_j \partial x_i}$, $i, j = 1, \dots, N$.
6 Theorem (First-Order Necessary Condition): If $\bar{x}$ is a local minimizer, then $\nabla f(\bar{x}) = 0$.
Theorem (Second-Order Sufficient Conditions): Suppose that $\nabla f(\bar{x}) = 0$ and $\nabla^2 f(\bar{x})$ is positive definite. Then $\bar{x}$ is a strict local minimizer of $f$.
Unfortunately, an analogous characterization of global minimizers is not possible.
7 How to find a local minimizer $\bar{x}$? Consider the following initial-value problem in $\mathbb{R}^N$, known as the gradient flow:
$$\frac{dx(\tau)}{d\tau} = -\nabla f(x(\tau)), \quad \tau > 0, \qquad x(0) = x_0, \qquad \text{(GF)}$$
where $\tau$ is a pseudo-time (a parametrization) and $x_0$ is a suitable initial guess. Then $\lim_{\tau \to \infty} x(\tau) = \bar{x}$.
In principle, the gradient flow may converge to a saddle point $x_s$, where $\nabla f(x_s) = 0$ and the Hessian $\nabla^2 f(x_s)$ is not positive definite, but in actual computations this is very unlikely.
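As an illustration (not part of the slides), the gradient flow can be integrated with any off-the-shelf ODE solver; a minimal Python sketch using scipy's solve_ivp on the Rosenbrock test function introduced later in this lecture, with an illustrative initial guess and integration horizon:

```python
import numpy as np
from scipy.integrate import solve_ivp

def grad_f(x):
    # gradient of Rosenbrock's function f(x1,x2) = 100(x2 - x1^2)^2 + (1 - x1)^2
    return np.array([-400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1.0 - x[0]),
                     200.0*(x[1] - x[0]**2)])

# integrate dx/dtau = -grad f(x) from a suitable initial guess x0
sol = solve_ivp(lambda tau, x: -grad_f(x), (0.0, 50.0), [-1.2, 1.0],
                rtol=1e-10, atol=1e-12)
print(sol.y[:, -1])  # approaches the global minimizer (1, 1) as tau grows
```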
8 Discretize the gradient flow (GF) with Euler's explicit method:
$$x^{(n+1)} = x^{(n)} - \Delta\tau\, \nabla f(x^{(n)}), \quad n = 0, 1, 2, \dots, \qquad x^{(0)} = x_0, \qquad \text{(SD)}$$
where $x^{(n)} := x(n\,\Delta\tau)$, such that $\lim_{n \to \infty} x^{(n)} = \bar{x}$; $\Delta\tau$ is a fixed step size (since Euler's explicit scheme is only conditionally stable, $\Delta\tau$ must be sufficiently small).
In principle, the gradient flow (GF) can be discretized with higher-order schemes, including implicit approaches, but they are not easy to apply to PDE optimization problems, hence will not be considered here.
9 Algorithm 1: Steepest Descent (SD)
1: $x^{(0)} \leftarrow x_0$ (initial guess)
2: $n \leftarrow 0$
3: repeat
4: compute the gradient $\nabla f(x^{(n)})$
5: update $x^{(n+1)} = x^{(n)} - \Delta\tau\, \nabla f(x^{(n)})$
6: $n \leftarrow n + 1$
7: until $|f(x^{(n)}) - f(x^{(n-1)})| < \varepsilon_f\, |f(x^{(n-1)})|$
Input: $x_0$ (initial guess), $\Delta\tau$ (fixed step size), $\varepsilon_f$ (tolerance in the termination condition)
Output: an approximation of the minimizer $\bar{x}$
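A direct transcription of Algorithm 1 into Python might look as follows (a minimal sketch; the function names, default tolerance, and iteration cap are illustrative choices, not prescribed by the slides):

```python
import numpy as np

def steepest_descent(f, grad_f, x0, dtau=1e-3, eps_f=1e-10, max_iter=10**6):
    """Fixed-step steepest descent (Algorithm 1), a minimal sketch."""
    x = np.asarray(x0, dtype=float)
    f_old = f(x)
    for n in range(max_iter):
        x = x - dtau * grad_f(x)       # x^(n+1) = x^(n) - dtau * grad f(x^(n))
        f_new = f(x)
        # relative-decrease termination test from Algorithm 1
        if abs(f_new - f_old) < eps_f * abs(f_old):
            break
        f_old = f_new
    return x
```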
10 Computational Tests: Rosenbrock's "banana" function $f(x_1, x_2) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$. Global minimizer: $x_1 = x_2 = 1$, $f(1,1) = 0$. The function is known for its poor conditioning: the eigenvalues of the Hessian $\nabla^2 f$ at the minimum are $\lambda_1 \approx 0.4$ and $\lambda_2 \approx 1001.6$.
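For reference, a sketch of the test function, its gradient, and a numerical check of the Hessian eigenvalues quoted above (at the minimizer $(1,1)$ the Hessian works out to $\begin{bmatrix} 802 & -400 \\ -400 & 200 \end{bmatrix}$):

```python
import numpy as np

def rosenbrock(x):
    return 100.0*(x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def rosenbrock_grad(x):
    return np.array([-400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1.0 - x[0]),
                     200.0*(x[1] - x[0]**2)])

def rosenbrock_hess(x):
    return np.array([[1200.0*x[0]**2 - 400.0*x[1] + 2.0, -400.0*x[0]],
                     [-400.0*x[0],                        200.0]])

# eigenvalues at the minimizer: approximately 0.4 and 1001.6
print(np.linalg.eigvalsh(rosenbrock_hess(np.array([1.0, 1.0]))))
```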
11 Choice of the step size $\tau$: steepest descent is not meant to approximate the gradient flow (GF) accurately, but to minimize $f(x)$ rapidly.
Sufficient decrease (Armijo's condition):
$$f(x^{(n)} + \tau p^{(n)}) \leq f(x^{(n)}) + C\, \tau\, \nabla f(x^{(n)})^T p^{(n)},$$
where $p^{(n)}$ is a search direction and $C \in (0,1)$.
Figure credit: Nocedal & Wright (1999).
Wolfe's conditions: sufficient decrease and curvature.
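A backtracking line search that enforces Armijo's condition can be sketched as below; the initial step tau0, the constant C, and the shrink factor rho are common illustrative defaults, not values prescribed by the slides:

```python
import numpy as np

def armijo_backtracking(f, grad_fx, x, p, tau0=1.0, C=1e-4, rho=0.5):
    """Shrink tau until f(x + tau p) <= f(x) + C tau grad f(x)^T p."""
    fx = f(x)
    slope = float(grad_fx @ p)   # must be negative for a descent direction p
    tau = tau0
    while f(x + tau * p) > fx + C * tau * slope:
        tau *= rho               # reject the step and backtrack
    return tau
```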
12 Optimize the step size at every iteration by solving the line-minimization (line-search) problem
$$\tau_n := \operatorname{argmin}_{\tau > 0} f\!\left(x^{(n)} - \tau \nabla f(x^{(n)})\right).$$
Brent's method for line minimization: a combination of the golden-section search with parabolic interpolation (derivative-free).
Figure credit: Numerical Recipes in C (1992).
A robust implementation of Brent's method is available in Numerical Recipes in C (1992); see also the function fminbnd in MATLAB.
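In Python, scipy's minimize_scalar with method='bounded' plays the role of MATLAB's fminbnd (Brent-type golden section with parabolic interpolation); a sketch of the line-search subproblem, where the upper bound tau_max is an illustrative choice that may need tuning per problem:

```python
from scipy.optimize import minimize_scalar

def line_search(f, x, p, tau_max=1.0):
    """tau_n = argmin_{tau>0} f(x + tau p), via derivative-free bounded minimization."""
    res = minimize_scalar(lambda tau: f(x + tau * p),
                          bounds=(0.0, tau_max), method='bounded')
    return res.x
```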
13 Algorithm 2: Steepest Descent with Line Search (SDLS)
1: $x^{(0)} \leftarrow x_0$ (initial guess)
2: $n \leftarrow 0$
3: repeat
4: compute the gradient $\nabla f(x^{(n)})$
5: determine the optimal step size $\tau_n = \operatorname{argmin}_{\tau > 0} f(x^{(n)} - \tau \nabla f(x^{(n)}))$
6: update $x^{(n+1)} = x^{(n)} - \tau_n \nabla f(x^{(n)})$
7: $n \leftarrow n + 1$
8: until $|f(x^{(n)}) - f(x^{(n-1)})| < \varepsilon_f\, |f(x^{(n-1)})|$
Input: $x_0$ (initial guess), $\varepsilon_\tau$ (tolerance in line search), $\varepsilon_f$ (tolerance in the termination condition)
Output: an approximation of the minimizer $\bar{x}$
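Combining the pieces, Algorithm 2 becomes the following sketch (it reuses line_search from the previous snippet; tolerances are illustrative):

```python
import numpy as np

def steepest_descent_ls(f, grad_f, x0, eps_f=1e-12, max_iter=10000):
    """Algorithm 2 (SDLS), a minimal sketch."""
    x = np.asarray(x0, dtype=float)
    f_old = f(x)
    for n in range(max_iter):
        g = grad_f(x)
        tau = line_search(f, x, -g)     # step 5: optimal step along -grad f
        x = x - tau * g                 # step 6
        f_new = f(x)
        if abs(f_new - f_old) < eps_f * abs(f_old):   # termination test
            break
        f_old = f_new
    return x

# usage on the test problem, e.g.:
# x_min = steepest_descent_ls(rosenbrock, rosenbrock_grad, [-1.2, 1.0])
```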
14 Consider, for now, minimization of a quadratic form $f(x) = \frac{1}{2} x^T A x - b^T x$, where $A \in \mathbb{R}^{N \times N}$ is a symmetric, positive-definite matrix and $b \in \mathbb{R}^N$. Then $\nabla f(x) = Ax - b =: r$, such that minimizing $f(x)$ is equivalent to solving $Ax = b$.
A set of nonzero vectors $\{p^{(0)}, p^{(1)}, \dots, p^{(k)}\}$ is said to be conjugate with respect to the matrix $A$ if $(p^{(i)})^T A\, p^{(j)} = 0$ for $i, j = 0, \dots, k$, $i \neq j$ (conjugacy implies linear independence).
15 Conjugate Gradient (CG) method:
$$x^{(n+1)} = x^{(n)} + \tau_n p^{(n)}, \quad n = 0, 1, 2, \dots,$$
$$p^{(n)} = -r^{(n)} + \beta^{(n)} p^{(n-1)}, \qquad r^{(n)} = \nabla f(x^{(n)}) = A x^{(n)} - b,$$
$$\beta^{(n)} = \frac{(r^{(n)})^T A\, p^{(n-1)}}{(p^{(n-1)})^T A\, p^{(n-1)}} \quad (\text{"momentum"}),$$
$$\tau_n = -\frac{(r^{(n)})^T p^{(n)}}{(p^{(n)})^T A\, p^{(n)}} \quad (\text{exact formula for the optimal step size}),$$
$$x^{(0)} = x_0, \qquad p^{(0)} = -r^{(0)}.$$
The directions $p^{(0)}, p^{(1)}, \dots, p^{(n)}$ generated by the CG method are conjugate with respect to the matrix $A$; this gives rise to a number of interesting and useful properties.
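A minimal implementation of the recursion above, mirroring the slide's formulas rather than the optimized textbook variant (A is assumed symmetric positive-definite; the tolerance is illustrative):

```python
import numpy as np

def linear_cg(A, b, x0, tol=1e-12):
    x = np.asarray(x0, dtype=float)
    r = A @ x - b                    # r^(0) = grad f(x^(0))
    p = -r                           # p^(0) = -r^(0)
    for n in range(len(b)):          # at most N iterations in exact arithmetic
        Ap = A @ p
        tau = -(r @ p) / (p @ Ap)    # exact optimal step size
        x = x + tau * p
        r = A @ x - b
        if np.linalg.norm(r) < tol:
            break
        beta = (r @ Ap) / (p @ Ap)   # "momentum" coefficient
        p = -r + beta * p
    return x

# sanity check: linear_cg(A, b, np.zeros(len(b))) should agree with np.linalg.solve(A, b)
```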
16 Theorem (properties of CG iterations): The iterates generated by the CG method have the following properties:
$$\operatorname{span}\{p^{(0)}, p^{(1)}, \dots, p^{(n)}\} = \operatorname{span}\{r^{(0)}, r^{(1)}, \dots, r^{(n)}\} = \operatorname{span}\{r^{(0)}, A r^{(0)}, \dots, A^n r^{(0)}\}$$
(the expanding subspace property);
$$(r^{(n)})^T r^{(k)} = (r^{(n)})^T p^{(k)} = 0, \quad k = 0, \dots, n-1;$$
$x^{(n)}$ is the minimizer of $f(x) = \frac{1}{2} x^T A x - b^T x$ over the set $x_0 + \operatorname{span}\{p^{(0)}, p^{(1)}, \dots, p^{(n-1)}\}$.
17 Thus, in the CG method, minimization of $f(x) = \frac{1}{2} x^T A x - b^T x$ is performed by solving (exactly) $N = \dim(x)$ line-minimization problems along the conjugate directions $p^{(0)}, p^{(1)}, \dots, p^{(N-1)}$. As a result, convergence to $\bar{x}$ is achieved in at most $N$ iterations. What happens when $f(x)$ is a general convex function?
18 The (linear) CG method admits a generalization to the nonlinear setting by:
- replacing the residual $r^{(n)}$ with the gradient $\nabla f(x^{(n)})$,
- computing the step size via line search, $\tau_n = \operatorname{argmin}_{\tau > 0} f(x^{(n)} + \tau p^{(n)})$,
- using more general expressions for the momentum term $\beta^{(n)}$ (such that the descent directions $p^{(0)}, p^{(1)}, \dots, p^{(n)}$ will only be approximately conjugate).
19 Nonlinear Conjugate Gradient (NCG) method:
$$x^{(n+1)} = x^{(n)} + \tau_n p^{(n)}, \quad n = 0, 1, 2, \dots,$$
$$p^{(n)} = -\nabla f(x^{(n)}) + \beta^{(n)} p^{(n-1)},$$
$$\beta^{(n)} = \frac{\nabla f(x^{(n)})^T \nabla f(x^{(n)})}{\nabla f(x^{(n-1)})^T \nabla f(x^{(n-1)})} \quad \text{(Fletcher-Reeves)},$$
$$\beta^{(n)} = \frac{\nabla f(x^{(n)})^T \left[\nabla f(x^{(n)}) - \nabla f(x^{(n-1)})\right]}{\nabla f(x^{(n-1)})^T \nabla f(x^{(n-1)})} \quad \text{(Polak-Ribière)},$$
$$\tau_n = \operatorname{argmin}_{\tau > 0} f(x^{(n)} + \tau p^{(n)}), \qquad x^{(0)} = x_0, \quad p^{(0)} = -\nabla f(x^{(0)}).$$
For quadratic functions $f(x)$, both the Fletcher-Reeves (FR) and the Polak-Ribière (PR) variants coincide with the linear CG. In general, the descent directions $p^{(0)}, p^{(1)}, \dots, p^{(n)}$ are now only approximately conjugate.
20 Algorithm 3: Polak-Ribière version of Conjugate Gradient (CG-PR)
1: $x^{(0)} \leftarrow x_0$ (initial guess)
2: $n \leftarrow 0$
3: repeat
4: compute the gradient $\nabla f(x^{(n)})$
5: calculate $\beta^{(n)} = \frac{\nabla f(x^{(n)})^T [\nabla f(x^{(n)}) - \nabla f(x^{(n-1)})]}{\nabla f(x^{(n-1)})^T \nabla f(x^{(n-1)})}$
6: determine the descent direction $p^{(n)} = -\nabla f(x^{(n)}) + \beta^{(n)} p^{(n-1)}$
7: determine the optimal step size $\tau_n = \operatorname{argmin}_{\tau > 0} f(x^{(n)} + \tau p^{(n)})$
8: update $x^{(n+1)} = x^{(n)} + \tau_n p^{(n)}$
9: $n \leftarrow n + 1$
10: until $|f(x^{(n)}) - f(x^{(n-1)})| < \varepsilon_f\, |f(x^{(n-1)})|$
Input: $x_0$ (initial guess), $\varepsilon_\tau$ (tolerance in line search), $\varepsilon_f$ (tolerance in the termination condition)
Output: an approximation of the minimizer $\bar{x}$
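A sketch of Algorithm 3 in Python, reusing line_search from the earlier snippet; the max(0, ...) safeguard (often called PR+) is an implementation choice that realizes the periodic reset of $\beta^{(n)}$ to zero mentioned two slides below, not something prescribed by the algorithm itself:

```python
import numpy as np

def ncg_pr(f, grad_f, x0, eps_f=1e-12, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    g = grad_f(x)
    p = -g                                                # p^(0) = -grad f(x^(0))
    f_old = f(x)
    for n in range(max_iter):
        tau = line_search(f, x, p)                        # step 7
        x = x + tau * p                                   # step 8
        g_new = grad_f(x)
        beta = max(0.0, (g_new @ (g_new - g)) / (g @ g))  # Polak-Ribiere (PR+)
        p = -g_new + beta * p                             # step 6
        g = g_new
        f_new = f(x)
        if abs(f_new - f_old) < eps_f * abs(f_old):       # termination test
            break
        f_old = f_new
    return x
```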
21 Convergence theory: quadratic case (I)
Let $f(x) = \frac{1}{2} x^T A x - b^T x$, where the matrix $A$ has eigenvalues $0 < \lambda_1 \leq \dots \leq \lambda_N$, and $\|x\|_A^2 := x^T A x$.
Theorem (Linear Convergence of Steepest Descent): For the steepest-descent approach we have the following estimate:
$$\|x^{(n+1)} - \bar{x}\|_A^2 \leq \left(\frac{\lambda_N - \lambda_1}{\lambda_N + \lambda_1}\right)^2 \|x^{(n)} - \bar{x}\|_A^2.$$
The rate of convergence is controlled by the spread of the eigenvalues of $A$.
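To make the estimate concrete, take the Hessian eigenvalues of the Rosenbrock function at its minimum as a stand-in (only indicative, since the estimate is for the quadratic case): with $\lambda_1 \approx 0.4$ and $\lambda_N \approx 1001.6$, the contraction factor is $\left(\frac{1001.6 - 0.4}{1001.6 + 0.4}\right)^2 \approx 0.9984$, so steepest descent gains one decimal digit of accuracy in $\|x^{(n)} - \bar{x}\|_A^2$ only per roughly $1/|\log_{10} 0.9984| \approx 1.4 \times 10^3$ iterations, which is consistent with its slow convergence observed on this test problem.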
22 Convergence theory: quadratic case (II)
Theorem (Convergence of Linear CG): For the linear CG approach we have the following estimate:
$$\|x^{(n+1)} - \bar{x}\|_A^2 \leq \left(\frac{\lambda_{N-n} - \lambda_1}{\lambda_{N-n} + \lambda_1}\right)^2 \|x^{(0)} - \bar{x}\|_A^2.$$
The CG iterates "take out" one eigenvalue at a time, so clustering of the eigenvalues matters.
In the nonlinear setting, it is advantageous to periodically reset $\beta^{(n)}$ to zero (helpful in practice, and it simplifies some convergence proofs).
23 What about problems with equality constraints? Suppose $c : \mathbb{R}^N \to \mathbb{R}^M$, where $1 \leq M < N$. Then we have an equality-constrained optimization problem:
$$\min_{x \in \mathbb{R}^N} f(x) \quad \text{subject to} \quad c(x) = 0.$$
If the constraint equation can be solved and we can partition $x = [y, z]$, where $y \in \mathbb{R}^{N-M}$ and $z = g(y) \in \mathbb{R}^M$, then the problem is reduced to an unconstrained one with a reduced objective function:
$$\min_{y \in \mathbb{R}^{N-M}} f(y, g(y)).$$
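A simple worked example of this reduction (for illustration): minimize $f(x_1, x_2) = x_1^2 + x_2^2$ subject to $c(x_1, x_2) = x_1 + x_2 - 1 = 0$. Solving the constraint for $x_2 = g(x_1) = 1 - x_1$ gives the reduced problem $\min_{x_1} \left[x_1^2 + (1 - x_1)^2\right]$, whose minimizer is $x_1 = 1/2$, hence $\bar{x} = (1/2, 1/2)$.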
24 Consider the augmented objective function $L : \mathbb{R}^N \times \mathbb{R}^M \to \mathbb{R}$,
$$L(x, \lambda) := f(x) - \lambda^T c(x),$$
where $\lambda \in \mathbb{R}^M$ is the Lagrange multiplier. Differentiating the augmented objective function with respect to $x$:
$$\nabla_x L(x, \lambda) = \nabla f(x) - (\nabla c(x))^T \lambda.$$
Theorem (First-Order Necessary Condition): If $\bar{x}$ is a local minimizer of an equality-constrained optimization problem, then there exists $\bar{\lambda} \in \mathbb{R}^M$ such that the following equations are satisfied:
$$\nabla f(\bar{x}) - (\nabla c(\bar{x}))^T \bar{\lambda} = 0, \qquad c(\bar{x}) = 0.$$
For inequality-constrained problems, the first-order necessary conditions become more complicated: the Karush-Kuhn-Tucker (KKT) conditions.
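Revisiting the worked example above via Lagrange multipliers: with $f(x_1, x_2) = x_1^2 + x_2^2$ and $c(x) = x_1 + x_2 - 1$, the necessary conditions read $2\bar{x}_1 = \bar{\lambda}$, $2\bar{x}_2 = \bar{\lambda}$, and $\bar{x}_1 + \bar{x}_2 = 1$, giving $\bar{x} = (1/2, 1/2)$ and $\bar{\lambda} = 1$, in agreement with the reduced formulation.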
25 How to compute equality-constrained minimizers with a gradient method? At each $x \in \mathbb{R}^N$, the linearized constraint $\nabla c(x) \in \mathbb{R}^{M \times N}$ defines a (kernel) subspace with dimension $N - \operatorname{rank}[\nabla c(x)]$:
$$S_x := \{x' \in \mathbb{R}^N : \nabla c(x)\, x' = 0\};$$
this is the subspace tangent to the constraint manifold at $x$, and we need to project the gradient $\nabla f(x)$ onto $S_x$.
Assuming that $\operatorname{rank}[\nabla c(x)] = M$, the projection operator $P_{S_x} : \mathbb{R}^N \to S_x$ is given by
$$P_{S_x} := I - (\nabla c(x))^T \left[\nabla c(x)\, (\nabla c(x))^T\right]^{-1} \nabla c(x).$$
Replace $\nabla f(x)$ with $P_{S_x} \nabla f(x)$ in the gradient method (SD or SDLS); the nonlinear constraints are then satisfied with an error $O((\Delta\tau)^2)$ or $O(\tau_n^2)$.
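A sketch of this projection applied to a gradient vector; here Jc stands for the $M \times N$ Jacobian $\nabla c(x)$ (assumed to have full rank $M$), and the small $M \times M$ system is solved directly rather than forming the inverse explicitly:

```python
import numpy as np

def projected_gradient(grad_fx, Jc):
    """Return P_{S_x} grad f(x) = grad_fx - Jc^T (Jc Jc^T)^{-1} Jc grad_fx."""
    lam = np.linalg.solve(Jc @ Jc.T, Jc @ grad_fx)  # M x M solve, typically M << N
    return grad_fx - Jc.T @ lam
```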
26 Algorithm 4: Projected Steepest Descent (PSD)
1: $x^{(0)} \leftarrow x_0$ (initial guess)
2: $n \leftarrow 0$
3: repeat
4: compute the gradient $\nabla f(x^{(n)})$
5: compute the linearization of the constraint, $\nabla c(x^{(n)})$
6: determine the projector $P_{S_{x^{(n)}}}$
7: determine the projected gradient $P_{S_{x^{(n)}}} \nabla f(x^{(n)})$
8: update $x^{(n+1)} = x^{(n)} - \Delta\tau\, P_{S_{x^{(n)}}} \nabla f(x^{(n)})$
9: $n \leftarrow n + 1$
10: until $|f(x^{(n)}) - f(x^{(n-1)})| < \varepsilon_f\, |f(x^{(n-1)})|$
Input: $x_0$ (initial guess), $\Delta\tau$ (fixed step size), $\varepsilon_f$ (tolerance in the termination condition)
Output: an approximation of the minimizer $\bar{x}$
27 Computational Tests: Rosenbrock's "banana" function $f(x_1, x_2) = 100(x_2 - x_1^2)^2 + (1 - x_1)^2$. Global minimizer of the unconstrained problem: $x_1 = x_2 = 1$, $f(1,1) = 0$. The function is known for its poor conditioning: the eigenvalues of the Hessian $\nabla^2 f$ at the minimum are $\lambda_1 \approx 0.4$ and $\lambda_2 \approx 1001.6$. Constraint: $c(x_1, x_2) = 0.05 - x_1^4 - x_2^4 = 0$.
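Putting Algorithm 4 together on this test problem (a sketch: the quartic constraint below is the reconstruction given above, and the step size, starting point, and tolerances are illustrative choices):

```python
import numpy as np

def c(x):                       # constraint c(x1, x2) = 0.05 - x1^4 - x2^4
    return 0.05 - x[0]**4 - x[1]**4

def Jc(x):                      # its 1 x 2 linearization, grad c(x)
    return np.array([[-4.0*x[0]**3, -4.0*x[1]**3]])

def grad_f(x):                  # gradient of Rosenbrock's function
    return np.array([-400.0*x[0]*(x[1] - x[0]**2) - 2.0*(1.0 - x[0]),
                     200.0*(x[1] - x[0]**2)])

x = np.array([0.05**0.25, 0.0])         # a feasible initial guess, c(x) = 0
dtau = 1.0e-4                           # fixed step size
for n in range(10**6):
    g, J = grad_f(x), Jc(x)
    lam = np.linalg.solve(J @ J.T, J @ g)
    pg = g - J.T @ lam                  # projected gradient P_{S_x} grad f(x)
    if np.linalg.norm(pg) < 1e-8:
        break
    x = x - dtau * pg                   # projected steepest-descent step
print(x, c(x))  # c(x) is satisfied only approximately, cf. the O(dtau^2) error noted above
```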