Optimization
Escuela de Ingeniería Informática de Oviedo
(Dpto. de Matemáticas-UniOvi), Numerical Computation

Outline
1. Unconstrained optimization
2. Constrained optimization

Introduction

General problem of optimization (minimization): given $f : \Omega \subset \mathbb{R}^n \to \mathbb{R}$, find $x^* \in \Omega$ such that
$$f(x^*) \le f(x) \quad \text{for all } x \in \Omega.$$
Here $f$ is called the objective function and $\Omega$ the set of feasible solutions. Main cases:
- Unconstrained optimization: $\Omega = \mathbb{R}^n$.
- Constrained optimization: $\Omega \subsetneq \mathbb{R}^n$, usually determined by a set of equality or inequality constraints, $h(x) = 0$, $g(x) \le 0$, etc.

Fact: there are no general techniques for solving the global optimization problem. Therefore, one usually solves it in a weaker sense.

Local optimization: find $x^* \in \Omega$ such that $f(x^*) \le f(x)$ for all $x$ with $\|x - x^*\| \le R$.

Exception: if $f$ is a strictly convex function and $\Omega$ is a convex set, then $f$ has a unique global minimum in $\Omega$.

Recall: theory of local optimization

One variable, $f : \mathbb{R} \to \mathbb{R}$: find the set of critical points $x^*$, i.e. those with $f'(x^*) = 0$. If $f''(x^*) > 0$, then $x^*$ is a local minimum.

$n$ variables, $f : \mathbb{R}^n \to \mathbb{R}$: find the critical points $x^*$, which satisfy $\nabla f(x^*) = 0$, i.e.
$$\partial_{x_1} f(x^*) = 0, \quad \partial_{x_2} f(x^*) = 0, \quad \ldots, \quad \partial_{x_n} f(x^*) = 0.$$
Then compute the Hessian at $x^*$,
$$H(f)(x^*) = \left( \partial_{x_i x_j} f(x^*) \right)_{i,j=1}^n.$$
If it is positive definite, then $x^*$ is a local minimum.

Newton's method

Consider the second-order Taylor expansion of $f$, with $\Delta x = x - x_k$:
$$f(x_k + \Delta x) \approx f(x_k) + \nabla f(x_k)^T \Delta x + \tfrac{1}{2} (\Delta x)^T H(x_k)\, \Delta x.$$
The extremum of this model is attained when its differential with respect to $\Delta x$ equals zero, i.e. when
$$\nabla f(x_k) + H(x_k)\, \Delta x = 0 \;\Longrightarrow\; \Delta x = -H(x_k)^{-1} \nabla f(x_k).$$
Newton's method is defined by
$$x_{k+1} = x_k - H(x_k)^{-1} \nabla f(x_k).$$

Remark:
- Exact for quadratic objective functions, where $H(x)$ is constant.
- Identical to using Newton's method for solving $\nabla f(x) = 0$.
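As a concrete illustration, here is a minimal sketch of the iteration above in Python; the function names, tolerance and stopping rule are our own illustrative choices, not part of the slides.

```python
import numpy as np

def newton_minimize(grad, hess, x0, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k - H(x_k)^{-1} grad f(x_k) until the gradient is small."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)  # solve H dx = -g rather than inverting H
    return x

# On a quadratic, f(x, y) = x^2 + 3y^2, one step reaches the minimum (0, 0):
grad = lambda x: np.array([2 * x[0], 6 * x[1]])
hess = lambda x: np.array([[2.0, 0.0], [0.0, 6.0]])
print(newton_minimize(grad, hess, [1.0, -2.0]))
```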

Example

Let $f(x, y) = \frac{1}{m}\left(x^m + \eta y^m\right)$, where $m > 1$ and $\eta > 0$, so that $(0, 0)$ is a global minimum. Then
$$\nabla f(x, y) = (x^{m-1}, \eta y^{m-1}), \qquad H_f(x, y) = (m-1) \begin{pmatrix} x^{m-2} & 0 \\ 0 & \eta y^{m-2} \end{pmatrix},$$
and hence
$$\left( H_f(x, y) \right)^{-1} \nabla f(x, y) = \frac{1}{m-1} \begin{pmatrix} x^{2-m} & 0 \\ 0 & \frac{1}{\eta} y^{2-m} \end{pmatrix} \begin{pmatrix} x^{m-1} \\ \eta y^{m-1} \end{pmatrix} = \frac{1}{m-1} \begin{pmatrix} x \\ y \end{pmatrix}.$$
Writing $\mathbf{x} = (x, y)$, Newton's method gives
$$\mathbf{x}_{k+1} = \mathbf{x}_k - \frac{1}{m-1}\,\mathbf{x}_k = \frac{m-2}{m-1}\,\mathbf{x}_k.$$

$$\mathbf{x}_{k+1} = \frac{m-2}{m-1}\,\mathbf{x}_k.$$
- If $m = 2$ ($f$ is a paraboloid), Newton's method converges in the first step for any initial guess.
- If $m \ne 2$, the iterative formula gives
$$\mathbf{x}_{k+1} = \left( \frac{m-2}{m-1} \right)^{k+1} \mathbf{x}_0 \to 0 \quad \text{as } k \to \infty,$$
for any $\mathbf{x}_0 \in \mathbb{R}^2$, since $|(m-2)/(m-1)| < 1$. The method converges for any $m > 1$ and for any initial guess.
- If $m$ is very large, $(m-2)/(m-1) \approx 1$, and the convergence is slow.
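The dependence on $m$ is easy to see numerically; the following small check (ours, not from the slides) counts how many steps the contraction needs for a fixed error reduction.

```python
import math

# Steps of x_{k+1} = r x_k, with r = (m-2)/(m-1), until the error shrinks by 1e6.
for m in (2, 3, 10, 100):
    r = (m - 2) / (m - 1)
    steps = 1 if r == 0 else math.ceil(math.log(1e-6) / math.log(abs(r)))
    print(f"m = {m:3d}: factor = {r:.3f}, steps for a 1e6 error reduction: {steps}")
```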

Convergence

Theorem. Assume the following conditions:
- $f$ is three times continuously differentiable,
- $x^*$ is a critical point of $f$,
- $H_f(x^*)$ is positive definite.

Then, if $x_0$ is close enough to $x^*$, the iterates of Newton's method converge quadratically to $x^*$, i.e., for some constant $\lambda > 0$,
$$\|x_{k+1} - x^*\| \le \lambda\, \|x_k - x^*\|^2.$$

Problems with Newton's method

- For nonlinear functions, Newton's method requires solving a linear system at every step: expensive.
- It may not converge if the initial guess is not good, or it may converge to a saddle point or a maximum: unreliable.

These difficulties are addressed by using variants or quasi-Newton methods (a sketch follows below):
$$x_{k+1} = x_k - \alpha_k H_k^{-1} \nabla f(x_k),$$
where $0 < \alpha_k < 1$ and $H_k$ is an approximation to the exact Hessian.
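A minimal sketch of one such damped step, with a simple backtracking choice of $\alpha_k$ of our own (the slides do not prescribe one); here $H_k$ is the exact Hessian, but any positive definite approximation could be substituted.

```python
import numpy as np

def damped_newton_step(f, grad, hess, x, alpha0=1.0, shrink=0.5, max_tries=20):
    """One damped Newton step: x + alpha * d, with d = -H^{-1} grad f."""
    d = -np.linalg.solve(hess(x), grad(x))   # Newton direction
    alpha = alpha0
    for _ in range(max_tries):
        if f(x + alpha * d) < f(x):          # accept only if f actually decreases
            return x + alpha * d
        alpha *= shrink                      # otherwise damp the step further
    return x                                 # no decrease found: keep current point
```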

Descent methods

Remark: finding a local minimum is generally easier than the general problem of solving the nonlinear equations $\nabla f(x) = 0$, because:
- we can evaluate (and use) $f$, in addition to $\nabla f$;
- the Hessian is positive definite near the solution, which gives algebraic advantages.

If we have a current guess for the solution, $x_k$, and know a descent (downhill) direction $d_k$, i.e. a direction in which
$$f(x_k + \alpha d_k) < f(x_k) \quad \text{for all } 0 < \alpha \le \alpha_{\max},$$
then we can move downhill and get a point closer to the minimum:
$$x_{k+1} = x_k + \alpha_k d_k,$$
where $\alpha_k$ is a step length.

Gradient descent

Using Taylor's expansion,
$$f(x_k + \alpha_k d_k) \approx f(x_k) + \alpha_k (\nabla f(x_k))^T d_k.$$
The fastest local decrease is achieved by moving opposite to the gradient:
$$d_k = -\nabla f(x_k).$$
For choosing $\alpha_k$ we minimize $\varphi(\alpha) = f(x_k + \alpha d_k)$; this must be done approximately.

Choosing the step size $\alpha_k$

Let $\varphi(\alpha) = f(x_k + \alpha d_k)$. We minimize a quadratic interpolant of $\varphi$. Since we have the data $\varphi(0) = f(x_k)$, $\varphi(1) = f(x_k + d_k)$ and $\varphi'(0) = \nabla f(x_k)^T d_k = -\langle d_k, d_k \rangle < 0$, we take, for $\alpha \in [0, 1]$, the quadratic polynomial
$$q(\alpha) = \varphi(0) + \varphi'(0)\,\alpha + \left( \varphi(1) - \varphi(0) - \varphi'(0) \right) \alpha^2.$$
If $\varphi(1) - \varphi(0) - \varphi'(0) < 0$, the minimum of $q$ is on the border of $[0, 1]$, and we take $\alpha = 1$ ($\alpha = 0$ would stop the iterations). Otherwise $q$ has an interior minimum at
$$\alpha_L = \frac{-\varphi'(0)}{2\left( \varphi(1) - \varphi(0) - \varphi'(0) \right)} > 0.$$
Thus, we take $\alpha = \min\{1, \alpha_L\}$. A sketch of this rule inside a descent loop follows below.
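Here is a minimal sketch (ours; names and tolerance are illustrative) of gradient descent with the quadratic-interpolation step-size rule just described.

```python
import numpy as np

def gradient_descent(f, grad, x0, tol=1e-8, max_iter=10_000):
    """Steepest descent with the quadratic-interpolation step-size rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        d = -grad(x)                    # steepest-descent direction d_k
        if np.linalg.norm(d) < tol:
            break
        phi0, phi1 = f(x), f(x + d)     # phi(0) and phi(1)
        dphi0 = -d @ d                  # phi'(0) = grad f^T d = -<d, d> < 0
        c = phi1 - phi0 - dphi0         # coefficient of alpha^2 in q
        alpha = 1.0 if c <= 0 else min(1.0, -dphi0 / (2 * c))
        x = x + alpha * d
    return x

# Usage on f(x, y) = x^2 + 2y^2, whose minimum is the origin:
f = lambda x: x[0]**2 + 2 * x[1]**2
grad = lambda x: np.array([2 * x[0], 4 * x[1]])
print(gradient_descent(f, grad, [1.0, 1.0]))
```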

Some properties

If $\alpha_k$ is the exact minimizer of $\varphi(\alpha)$ then, using the chain rule, we obtain
$$0 = \varphi'(\alpha_k) = \langle \nabla f(x_k + \alpha_k d_k), d_k \rangle = -\langle \nabla f(x_{k+1}), \nabla f(x_k) \rangle.$$
Thus $\nabla f(x_k)$ and $\nabla f(x_{k+1})$ are orthogonal: steepest descent takes a zig-zag path down to the minimum.

Error: steepest descent has linear convergence, i.e., for some constant $\lambda \in (0, 1)$,
$$\|x_{k+1} - x^*\| \le \lambda\, \|x_k - x^*\|.$$
It can be very slow for ill-conditioned Hessians.

Example

Consider the function $f(x) = \frac{a}{2} x^2$, with $a \in (0, 1)$, which has its unique critical point at $x^* = 0$. An easy computation of the step $\alpha = \min\{1, \alpha_L\}$ shows that $\alpha_L = 1/a > 1$, so we must take $\alpha = 1$. Then
$$x_{k+1} = x_k - f'(x_k) = x_k - a x_k,$$
so we can expect only linear convergence:
$$|x_{k+1} - x_k| = a\,|x_k| = a\,|x_k - x^*|.$$
Moreover, we obtain by recursion that $x_k = (1 - a)^k x_0$; therefore, if $a$ is close to zero, the convergence is extremely slow.
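This slow-down is easy to reproduce; the loop below (an illustration of ours, not part of the slides) iterates $x_{k+1} = (1 - a) x_k$ and counts iterations.

```python
# Iterations of x_{k+1} = (1 - a) x_k until |x_k| < 1e-6, starting from x_0 = 1.
for a in (0.5, 0.1, 0.01):
    x, k = 1.0, 0
    while abs(x) > 1e-6:
        x, k = (1 - a) * x, k + 1
    print(f"a = {a:5.2f}: {k} iterations")
```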

Figure: descent trajectories of $x_k$ and of $f(x_k)$.

Outline
1. Unconstrained optimization
2. Constrained optimization

General formulation

General constrained optimization problem: given $f : \mathbb{R}^n \to \mathbb{R}$, find $x \in \mathbb{R}^n$ solving
$$\min_{x \in \mathbb{R}^n} f(x) \quad \text{subject to} \quad \varphi(x) = 0 \ \ \text{(equality constraints)}, \qquad \psi(x) \le 0 \ \ \text{(inequality constraints)}.$$
We assume that the functions $f$, $\varphi$ and $\psi$ are differentiable.

Theorem (Necessary conditions for constrained problems). Suppose that $x^*$ is a point of the set
$$U = \{x \in \Omega : \varphi_i(x) = 0,\ 1 \le i \le m\} \subset \Omega$$
such that the $m$ vectors $\nabla \varphi_i(x^*) \in \mathbb{R}^n$, $i = 1, \ldots, m$, are linearly independent. Then, if $f$ has a local minimum at $x^*$ relative to the set $U$, there exist $m$ numbers $\lambda_i(x^*)$ such that
$$\nabla f(x^*) + \lambda_1(x^*) \nabla \varphi_1(x^*) + \cdots + \lambda_m(x^*) \nabla \varphi_m(x^*) = 0.$$
The numbers $\lambda_i(x^*)$ are called Lagrange multipliers.

Example

If $\varphi : \mathbb{R}^2 \to \mathbb{R}$, the set $\{x \in \mathbb{R}^2 : \varphi(x) = 0\}$ is a curve with normal vector $\nabla \varphi$. At the minimum we have $\nabla f \parallel \nabla \varphi$, and then $\nabla f + \lambda \nabla \varphi = 0$ for some $\lambda$.

Figure: left, surface and constraint curve; right, contour map. The red line shows the constraint $g(x, y) = c$ ($g$ plays the role of our $\varphi$). The blue lines are contours of $f(x, y)$. The point where the red line touches a blue contour tangentially is the solution. Since $d_1 > d_2$, the point shown maximizes $f(x, y)$.

The Lagrangian function is $L : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}$ given by
$$L(x, \lambda) = f(x) + \sum_{i=1}^m \lambda_i \varphi_i(x), \qquad \lambda = (\lambda_1, \ldots, \lambda_m).$$
If $(x^*, \lambda^*)$ is a minimum of $L$ (without constraints), then $\nabla_{(x, \lambda)} L(x^*, \lambda^*) = 0$. Therefore the optimality conditions with respect to $x$,
$$\nabla f(x^*) + \sum_{i=1}^m \lambda_i^* \nabla \varphi_i(x^*) = 0,$$
and with respect to $\lambda$,
$$\varphi_i(x^*) = 0, \quad i = 1, \ldots, m,$$
both hold. We deduce that any $x^*$ such that $(x^*, \lambda^*)$ is a critical point of $L(x, \lambda)$ is a candidate to be a minimum of the constrained problem.

Example

Let $f(x_1, x_2) = x_2$ and $\varphi(x_1, x_2) = x_1^2 + x_2^2 - 1$ ($n = 2$, $m = 1$). The set of constraints is then the circumference
$$U = \{(x_1, x_2) \in \mathbb{R}^2 : x_1^2 + x_2^2 = 1\}.$$
The Lagrangian function is given by
$$L(x_1, x_2, \lambda) = x_2 + \lambda (x_1^2 + x_2^2 - 1).$$
The critical points are determined by
$$0 = \partial_{x_1} L(x^*, \lambda^*) = 2 \lambda^* x_1^*, \qquad 0 = \partial_{x_2} L(x^*, \lambda^*) = 1 + 2 \lambda^* x_2^*, \qquad 0 = \partial_{\lambda} L(x^*, \lambda^*) = (x_1^*)^2 + (x_2^*)^2 - 1.$$
Solving, we get $x_1^* = 0$, $x_2^* = \pm 1$ and $\lambda^* = -1/(2 x_2^*)$.
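The same critical points can be found numerically by applying Newton's method for nonlinear systems to $\nabla L = 0$; the following sketch (ours, with illustrative names) converges to the minimum $(0, -1)$ with $\lambda^* = 1/2$.

```python
import numpy as np

def F(z):                      # gradient of L with respect to (x1, x2, lambda)
    x1, x2, lam = z
    return np.array([2 * lam * x1, 1 + 2 * lam * x2, x1**2 + x2**2 - 1])

def J(z):                      # Jacobian of F
    x1, x2, lam = z
    return np.array([[2 * lam, 0.0,     2 * x1],
                     [0.0,     2 * lam, 2 * x2],
                     [2 * x1,  2 * x2,  0.0   ]])

z = np.array([0.1, -0.9, 0.4])             # initial guess near the minimum
for _ in range(20):
    z = z - np.linalg.solve(J(z), F(z))
print(z)   # ~ [0, -1, 0.5]: x* = (0, -1) with lambda* = 1/2
```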

Theorem (Sufficient conditions for constrained problems). Let $x^* \in U$ (with $U$ given by the equality constraints) and $\lambda^* \in \mathbb{R}^m$ be such that
$$\nabla f(x^*) + \sum_{i=1}^m \lambda_i^* \nabla \varphi_i(x^*) = 0.$$
Suppose that the Hessian matrix of $L$ with respect to $x$, given by
$$H(x^*) = H_f(x^*) + (\lambda^*)^T H_{\varphi}(x^*),$$
is positive definite on $M = \{y \in \mathbb{R}^n : \nabla \varphi(x^*)^T y = 0\}$. Then $x^*$ is a constrained minimum of $f$ in $U$.

In the previous example,
$$H(x^*) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} + \lambda^* \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, \qquad M = \{(y_1, y_2) \in \mathbb{R}^2 : x_2^* y_2 = 0\}.$$
Therefore $H(x^*)$ is positive definite on $M$ only for $x^* = (0, -1)$, where $\lambda^* = 1/2$. The other critical point of the Lagrangian, $(0, 1)$, corresponds to a constrained maximum.
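The sufficient condition can be checked mechanically: take a basis $Z$ of the tangent space $M$ and test the eigenvalues of $Z^T H Z$. A small sketch for the example above (our illustration, not from the slides):

```python
import numpy as np

# The two critical points of the example: x* = (0, x2), with lambda* = -1/(2 x2).
for x2 in (-1.0, 1.0):
    lam = -1.0 / (2 * x2)
    H = lam * np.array([[2.0, 0.0], [0.0, 2.0]])   # H_f = 0 for f(x1, x2) = x2
    Z = np.array([[1.0], [0.0]])                   # basis of M = {y : x2 * y2 = 0}
    eig = np.linalg.eigvalsh(Z.T @ H @ Z)          # reduced Hessian eigenvalues
    kind = "constrained minimum" if np.all(eig > 0) else "not a minimum (here, a maximum)"
    print(f"x* = (0, {x2:+.0f}): eigenvalues {eig} -> {kind}")
```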

Penalty method

The problem is stated as
$$\min_{x \in S} f(x).$$
Idea: replace $f(x)$ by $f(x) + c\,P(x)$ and solve an unconstrained problem. To do this, we take $c > 0$ and $P$ satisfying:
1. $P$ is continuous in $\Omega$,
2. $P(x) \ge 0$ for all $x \in \Omega$, and
3. $P(x) = 0$ if and only if $x \in S$.

Example: suppose that $S$ is given by
$$S = \{x \in \mathbb{R}^n : \varphi_i(x) \le 0,\ i = 1, \ldots, m\}.$$
An example of penalty function is
$$P(x) = \frac{1}{2} \sum_{i=1}^m \max(0, \varphi_i(x))^2.$$
In the next figure we show $c\,P(x)$ for $\varphi_1(x) = x - 2$ and $\varphi_2(x) = 1 - x$, i.e. $S = [1, 2]$. For $c$ large, the minimum of $f(x) + c\,P(x)$ lies in a region where $P$ is small. When $c \to \infty$, the solution of the penalty problem converges to the solution of the constrained problem.
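The penalty wall is easy to tabulate numerically; the snippet below (ours, using the reconstructed constraints $\varphi_1(x) = x - 2$, $\varphi_2(x) = 1 - x$) evaluates $c\,P(x)$ on a grid and shows it growing with $c$ outside $S = [1, 2]$.

```python
import numpy as np

def P(x):
    """Quadratic penalty for phi_1(x) = x - 2 and phi_2(x) = 1 - x, so S = [1, 2]."""
    return 0.5 * (np.maximum(0.0, x - 2)**2 + np.maximum(0.0, 1 - x)**2)

x = np.linspace(0.0, 3.0, 7)
for c in (1, 10, 100):
    print(f"c = {c:3d}: cP(x) =", np.round(c * P(x), 2))  # zero inside [1, 2]
```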

Figure: the function $c\,P(x)$ for several values of $c$.

The procedure to solve the constrained problem by the penalty method is:
- Let $\{c_k\}$ be a sequence such that $c_k \ge 0$, $c_{k+1} > c_k$ and $\lim_{k \to \infty} c_k = \infty$.
- Define the functions $q(c, x) = f(x) + c\,P(x)$.
- For each $k$, assume that the problem $\min_x q(c_k, x)$ has a solution $x_k$.

A sketch of this loop follows below.

Theorem. Let $\{x_k\}$ be a sequence generated by the penalty method. Then any limit point of the sequence is a solution of the constrained minimization problem.
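A minimal sketch of the outer loop (our illustration: SciPy's general-purpose unconstrained minimizer stands in for "a numerical method such as the gradient method"; the function names are ours, and $c_k = k$ follows the example below).

```python
import numpy as np
from scipy.optimize import minimize  # any unconstrained solver can be used here

def penalty_method(f, P, x0, n_outer=50):
    """Approximately minimize f on S by solving min q(c_k, x) for c_k = k."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, n_outer + 1):
        q = lambda z, c=float(k): f(z) + c * P(z)
        x = minimize(q, x).x   # warm-start each subproblem at the previous x_k
    return x
```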

Example: minimize $f(x, y) = x^2 + 2y^2$ in the set $S = \{(x, y) \in \mathbb{R}^2 : x + y \ge 1\}$. We define the differentiable penalty function
$$P(x, y) = \begin{cases} 0 & \text{if } (x, y) \in S, \\ (x + y - 1)^2 & \text{if } (x, y) \in \mathbb{R}^2 \setminus S, \end{cases}$$
take $c_k = k$, and set $q_k(x, y) = f(x, y) + c_k P(x, y)$. $P$ and $c_k$ satisfy the required conditions.

In practice, we use a numerical method (such as the gradient method) to solve the unconstrained minimization of $q_k$. In this (simple) example, we compute the exact solution.

We start by computing the critical points.
- If $(x, y) \in S$ is a critical point of $q_k$, then $\nabla q_k(x, y) = (2x, 4y) = (0, 0)$. The solution is $(0, 0) \notin S$, so we disregard this point.
- If $(x, y) \in \mathbb{R}^2 \setminus S$ is a critical point of $q_k$, then
$$\nabla q_k(x, y) = \left( 2(1 + k)x + 2ky - 2k,\; 2kx + 2(2 + k)y - 2k \right) = (0, 0),$$
with solution
$$(x_k, y_k) = \left( \frac{2k}{3k + 2}, \frac{k}{3k + 2} \right).$$
Since $x_k + y_k = 3k/(3k + 2) < 1$, we have $(x_k, y_k) \in \mathbb{R}^2 \setminus S$ for all $k$. The exact constrained minimum of $f$ is obtained by taking the limit $k \to \infty$, which gives $(x^*, y^*) = (2/3, 1/3) \in S$.
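The numerical penalty iterates match the exact formula; the self-contained check below (our illustration) compares the two.

```python
import numpy as np
from scipy.optimize import minimize

f = lambda z: z[0]**2 + 2 * z[1]**2
P = lambda z: max(0.0, 1 - z[0] - z[1])**2   # vanishes exactly on S = {x + y >= 1}

x = np.zeros(2)
for k in (1, 10, 100):
    x = minimize(lambda z, c=k: f(z) + c * P(z), x).x
    exact = np.array([2 * k, k]) / (3 * k + 2)
    print(f"c = {k:3d}: numerical x_k = {np.round(x, 4)}, exact = {np.round(exact, 4)}")
# As k grows, both approach (2/3, 1/3).
```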
