Lecture 3. Optimization Problems and Iterative Algorithms
January 13, 2016. This material was jointly developed with Angelia Nedić at UIUC for IE 598ns.
Outline
- Special functions: linear, quadratic, convex
- Criteria for convexity of a function
- Operations preserving convexity
- Unconstrained optimization: first-order necessary optimality conditions
- Constrained optimization: first-order necessary optimality conditions; KKT conditions
- Iterative algorithms

Stochastic Optimization 1
Convex Function
f is convex when dom(f) is a convex set and there holds
  f(αx + (1 − α)y) ≤ αf(x) + (1 − α)f(y) for all x, y ∈ dom(f) and α ∈ [0, 1];
f is strictly convex if the inequality is strict for all x ≠ y in dom(f) and α ∈ (0, 1).
Note that dom(f) is defined as dom(f) = {x : f(x) < +∞}.
[Figure: the chord joining (x, f(x)) and (y, f(y)) lies above the graph of a convex f.]
f is concave when −f is convex.
f is strictly concave when −f is strictly convex.
Examples of Convex/Concave Functions
Examples on R.
Convex:
- Affine: ax + b over R, for any a, b ∈ R
- Exponential: e^{ax} over R, for any a ∈ R
- Power: x^p over (0, +∞), for p ≥ 1 or p ≤ 0
- Powers of absolute value: |x|^p over R, for p ≥ 1
- Negative entropy: x ln x over (0, +∞)
Concave:
- Affine: ax + b over R, for any a, b ∈ R
- Powers: x^p over (0, +∞), for 0 ≤ p ≤ 1
- Logarithm: ln x over (0, +∞)
Examples on Rⁿ:
- Affine functions are both convex and concave
- Norms ‖x‖, ‖x‖₁, ‖x‖∞ are convex
Second-Order Conditions for Convexity
Let f be twice differentiable and let dom(f) be the domain of f. [In general, when differentiability is considered, it is required that dom(f) is open.]
The Hessian ∇²f(x) is a symmetric n × n matrix whose entries are the second-order partial derivatives of f at x:
  [∇²f(x)]ᵢⱼ = ∂²f(x) / ∂xᵢ∂xⱼ for i, j = 1, ..., n.
Second-order condition: f is convex if and only if dom(f) is a convex set and ∇²f(x) ⪰ 0 for all x ∈ dom(f).
[Recall positive semidefiniteness: M ∈ Rⁿˣⁿ satisfies M ⪰ 0 if xᵀMx ≥ 0 for all x ∈ Rⁿ.]
f is strictly convex if dom(f) is a convex set and ∇²f(x) ≻ 0 for all x ∈ dom(f).
[Recall positive definiteness: M ∈ Rⁿˣⁿ satisfies M ≻ 0 if xᵀMx > 0 for all x ≠ 0.]
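The second-order test can be checked numerically: a symmetric matrix is positive semidefinite exactly when all of its eigenvalues are nonnegative. A small sketch (assuming NumPy is available; the example matrices are illustrative, not from the slides):

```python
import numpy as np

def is_psd(M, tol=1e-10):
    """Check M >= 0 (PSD) by testing that all eigenvalues of the
    symmetrized matrix are nonnegative, up to a tolerance."""
    M = 0.5 * (M + M.T)  # symmetrize to guard against round-off
    return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

# Hessian of f(x) = x1^2 + x1*x2 + x2^2 (constant in x):
H_convex = np.array([[2.0, 1.0],
                     [1.0, 2.0]])   # eigenvalues 1 and 3 -> PSD, f is convex
# Hessian of f(x) = x1^2 - x2^2 (a saddle):
H_saddle = np.array([[2.0, 0.0],
                     [0.0, -2.0]])  # eigenvalue -2 -> not PSD

print(is_psd(H_convex), is_psd(H_saddle))  # True False
```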
Examples
Quadratic function: f(x) = (1/2)xᵀQx + qᵀx + r with a symmetric n × n matrix Q:
  ∇f(x) = Qx + q,  ∇²f(x) = Q.
Convex for Q ⪰ 0.
Least-squares objective: f(x) = ‖Ax − b‖² with an m × n matrix A:
  ∇f(x) = 2Aᵀ(Ax − b),  ∇²f(x) = 2AᵀA.
Convex for any A.
Quadratic-over-linear: f(x, y) = x²/y, convex for y > 0:
  ∇²f(x, y) = (2/y³) (y, −x)(y, −x)ᵀ ⪰ 0.
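The least-squares formulas are easy to sanity-check. A sketch (assuming NumPy; the random A, b, x are illustrative) comparing the analytic gradient ∇f(x) = 2Aᵀ(Ax − b) against central finite differences, and confirming that the Hessian 2AᵀA is positive semidefinite:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
x = rng.standard_normal(3)

f = lambda v: np.sum((A @ v - b) ** 2)   # f(x) = ||Ax - b||^2
grad = 2 * A.T @ (A @ x - b)             # analytic gradient
hess = 2 * A.T @ A                       # analytic Hessian (constant in x)

# Central finite differences for the gradient
eps = 1e-6
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
               for e in np.eye(3)])

grad_ok = bool(np.allclose(grad, fd, atol=1e-5))
hess_psd = bool(np.all(np.linalg.eigvalsh(hess) >= -1e-10))  # PSD for any A
print(grad_ok, hess_psd)
```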
First-Order Condition for Convexity
Let f be differentiable and let dom(f) be its domain. Then the gradient
  ∇f(x) = (∂f(x)/∂x₁, ∂f(x)/∂x₂, ..., ∂f(x)/∂xₙ)ᵀ
exists at each x ∈ dom(f).
First-order condition: f is convex if and only if dom(f) is convex and
  f(x) + ∇f(x)ᵀ(z − x) ≤ f(z) for all x, z ∈ dom(f).
Note: the first-order approximation is a global underestimate of f.
This is a very important property, used in convex optimization for algorithm design and performance analysis.
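The underestimate property can be checked pointwise. A minimal sketch for f(x) = eˣ, whose derivative is also eˣ (the grid of test points is an illustrative choice):

```python
import math

f = math.exp  # f(x) = e^x is convex on R, and f'(x) = e^x

# Check f(z) >= f(x) + f'(x)(z - x) on a grid: the tangent line at any x
# is a global underestimator of f.
xs = [-2.0 + 0.1 * i for i in range(41)]
ok = all(f(z) >= f(x) + f(x) * (z - x) - 1e-12 for x in xs for z in xs)
print(ok)  # True
```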
Operations Preserving Convexity
Let f and g be convex functions over Rⁿ.
- Positive scaling: λf is convex for λ > 0, where (λf)(x) = λf(x) for all x.
- Sum: f + g is convex, where (f + g)(x) = f(x) + g(x) for all x.
- Composition with an affine function: for g affine [i.e., g(x) = Ax + b], the composition f ∘ g is convex, where (f ∘ g)(x) = f(Ax + b) for all x.
- Pointwise maximum: for convex functions f₁, ..., f_m, the pointwise-max function h(x) = max{f₁(x), ..., f_m(x)} is convex.
  Polyhedral function: f(x) = max_{i=1,...,m} (aᵢᵀx + bᵢ) is convex.
- Pointwise supremum: let Y ⊆ Rᵐ and f : Rⁿ × Rᵐ → R, with f(x, y) convex in x for each y ∈ Y. Then the supremum function over the set Y, h(x) = sup_{y∈Y} f(x, y), is convex.
Optimization Terminology
Let C ⊆ Rⁿ and f : C → R. Consider the following optimization problem:
  minimize f(x) subject to x ∈ C.
Example: C = {x ∈ Rⁿ : g(x) ≤ 0, x ∈ X}.
Terminology:
- The set C is referred to as the feasible set.
- We say that the problem is feasible when C is nonempty.
- The problem is unconstrained when C = Rⁿ, and constrained otherwise.
- We say that a vector x* is an optimal solution, or a global minimum, when x* is feasible and the value f(x*) is not exceeded at any x ∈ C, i.e.,
  x* ∈ C and f(x*) ≤ f(x) for all x ∈ C.
Local Minimum
  minimize f(x) subject to x ∈ C.
A vector x̂ is a local minimum for the problem if x̂ ∈ C and there is a ball B(x̂, r) such that
  f(x̂) ≤ f(x) for all x ∈ C with ‖x − x̂‖ ≤ r.
Every global minimum is also a local minimum.
When the set C is convex and the function f is convex, a local minimum is also global.
First-Order Necessary Optimality Condition: Unconstrained Problem
Let f be a differentiable function with dom(f) = Rⁿ and let C = Rⁿ. If x̂ is a local minimum of f over Rⁿ, then the following holds:
  ∇f(x̂) = 0.
The gradient relation can be equivalently given as
  (y − x̂)ᵀ∇f(x̂) ≥ 0 for all y ∈ Rⁿ.
This is a variational inequality VI(K, F) with the set K and the mapping F given by
  K = Rⁿ, F(x) = ∇f(x).
Solving a minimization problem can be reduced to solving a corresponding variational inequality.
First-Order Necessary Optimality Condition: Constrained Problem
Let f be a differentiable function with dom(f) = Rⁿ and let C ⊆ Rⁿ be a closed convex set. If x̂ is a local minimum of f over C, then the following holds:
  (y − x̂)ᵀ∇f(x̂) ≥ 0 for all y ∈ C.  (1)
Again, this is a variational inequality VI(K, F) with the set K and the mapping F given by
  K = C, F(x) = ∇f(x).
Recall that when f is convex, a local minimum is also global.
When f is convex, the preceding relation is also sufficient for x̂ to be a global minimum: if x̂ satisfies relation (1), then x̂ is a (global) minimum.
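Condition (1) can be illustrated numerically. A sketch (assuming NumPy; the box constraint C = [0, 1]³ and the objective ½‖x − a‖² are illustrative choices, not from the slides): the minimizer over the box is the componentwise projection clip(a, 0, 1), and the variational inequality holds at it for sampled feasible points y.

```python
import numpy as np

rng = np.random.default_rng(1)
a = np.array([1.7, -0.4, 0.3])       # point to project
xhat = np.clip(a, 0.0, 1.0)          # minimizer of f(x) = 0.5*||x - a||^2 over [0,1]^3
grad = xhat - a                      # gradient of f at xhat

# Sample feasible points y in C and check (y - xhat)^T grad >= 0
ys = rng.uniform(0.0, 1.0, size=(1000, 3))
vi_holds = bool(np.all((ys - xhat) @ grad >= -1e-12))
print(xhat, vi_holds)
```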
Equality and Inequality Constrained Problem
Consider the following problem:
  minimize f(x)
  subject to h₁(x) = 0, ..., h_p(x) = 0,
             g₁(x) ≤ 0, ..., g_m(x) ≤ 0,
where f, hᵢ and gⱼ are continuously differentiable over Rⁿ.
Def. For a feasible vector x, the active set of (inequality) constraints is the set given by
  A(x) = {j : gⱼ(x) = 0}.
If j ∉ A(x), we say that the j-th constraint is inactive at x.
Def. We say that a vector x is regular if the gradients ∇h₁(x), ..., ∇h_p(x), and ∇gⱼ(x) for j ∈ A(x) are linearly independent.
NOTE: x is regular when there are no equality constraints and all the inequality constraints are inactive [p = 0 and A(x) = ∅].
Lagrangian Function
With the problem
  minimize f(x)
  subject to h₁(x) = 0, ..., h_p(x) = 0,
             g₁(x) ≤ 0, ..., g_m(x) ≤ 0,   (2)
we associate the Lagrangian function L(x, λ, µ) defined by
  L(x, λ, µ) = f(x) + Σᵢ₌₁ᵖ λᵢ hᵢ(x) + Σⱼ₌₁ᵐ µⱼ gⱼ(x),
where λᵢ ∈ R for all i, and µⱼ ∈ R₊ for all j.
First-Order Karush-Kuhn-Tucker (KKT) Necessary Conditions
Th. Let x̂ be a local minimum of the equality/inequality constrained problem (2). Also, assume that x̂ is regular. Then there exist unique multipliers λ̂ and µ̂ such that
  ∇ₓL(x̂, λ̂, µ̂) = 0  [L is the Lagrangian function],
  µ̂ⱼ ≥ 0 for all j,
  µ̂ⱼ = 0 for all j ∉ A(x̂).
The last condition is referred to as the complementarity condition. Together with the sign conditions, it can be written compactly as
  0 ≤ µ̂ ⊥ −g(x̂) ≥ 0.
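A hand-checkable illustration of the KKT conditions (a sketch; the specific problem is an illustrative choice, not from the slides): for minimizing (x₁ − 1)² + (x₂ − 1)² subject to x₁ + x₂ − 1 ≤ 0, the candidate x̂ = (1/2, 1/2) with multiplier µ̂ = 1 satisfies stationarity, primal and dual feasibility, and complementarity.

```python
import numpy as np

# minimize (x1-1)^2 + (x2-1)^2   subject to   g(x) = x1 + x2 - 1 <= 0
grad_f = lambda x: 2.0 * (x - 1.0)
g      = lambda x: x[0] + x[1] - 1.0
grad_g = np.array([1.0, 1.0])

xhat = np.array([0.5, 0.5])   # candidate minimizer
mu   = 1.0                    # candidate multiplier

stationarity    = bool(np.allclose(grad_f(xhat) + mu * grad_g, 0.0))  # grad_x L = 0
feasibility     = g(xhat) <= 1e-12                                    # g(xhat) <= 0
dual_feasible   = mu >= 0.0                                           # mu >= 0
complementarity = abs(mu * g(xhat)) <= 1e-12                          # mu * g(xhat) = 0
print(stationarity, feasibility, dual_feasible, complementarity)
```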
In fact, the complementarity-based formulation can be used to write the first-order optimality conditions more compactly. Consider the following constrained optimization problem:
  minimize f(x)
  subject to c₁(x) ≥ 0, ..., c_m(x) ≥ 0,
             x ≥ 0.
Then, if x̂ is regular, there exist multipliers λ̂ such that
  0 ≤ x̂ ⊥ ∇ₓf(x̂) − ∇ₓc(x̂)ᵀλ̂ ≥ 0,  (3)
  0 ≤ λ̂ ⊥ c(x̂) ≥ 0.  (4)
More succinctly, this is a nonlinear complementarity problem, denoted by CP(R^{m+n}, F), a problem that requires a z that satisfies
  0 ≤ z ⊥ F(z) ≥ 0,
where
  z = (x, λ) and F(z) = (∇ₓf(x) − ∇ₓc(x)ᵀλ, c(x)).
Second-Order KKT Necessary Conditions
Th. Let x̂ be a local minimum of the equality/inequality constrained problem (2). Also, assume that x̂ is regular and that f, hᵢ, gⱼ are twice continuously differentiable. Then there exist unique multipliers λ̂ and µ̂ such that
  ∇ₓL(x̂, λ̂, µ̂) = 0,
  µ̂ⱼ ≥ 0 for all j,
  µ̂ⱼ = 0 for all j ∉ A(x̂),
and, for any vector y such that ∇hᵢ(x̂)ᵀy = 0 for all i and ∇gⱼ(x̂)ᵀy = 0 for all j ∈ A(x̂), the following relation holds:
  yᵀ∇²ₓₓL(x̂, λ̂, µ̂)y ≥ 0.
Solution Procedures: Iterative Algorithms
For solving problems, we will consider iterative algorithms:
- given an initial iterate x₀,
- we generate a new iterate x_{k+1} = G_k(x_k),
where G_k is a mapping that depends on the optimization problem.
Objectives:
- Provide necessary conditions on the mappings G_k that yield a sequence {x_k} converging to a solution of the problem of interest.
- Study how fast the sequence {x_k} converges:
  - global convergence rate (when far from optimal points),
  - local convergence rate (when near an optimal point).
Gradient Descent Method
Consider a continuously differentiable function f. We want to minimize f(x) over x ∈ Rⁿ.
Gradient descent method:
  x_{k+1} = x_k − α_k ∇f(x_k).
The scalar α_k > 0 is a stepsize. Stepsize choices: a constant α_k = α, a line search, or another stepsize rule ensuring f(x_{k+1}) < f(x_k).
Convergence rate: we look at the tail of an error sequence e(x_k) = dist(x_k, X*), where dist(x, A) = inf{‖x − a‖ : a ∈ A}.
Local convergence is at best linear:
  limsup_{k→∞} e(x_{k+1})/e(x_k) ≤ q for some q ∈ (0, 1).
Global convergence is also at best linear.
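A minimal sketch of the method (assuming NumPy; the quadratic objective and constant stepsize are illustrative choices, not from the slides). On f(x) = ½xᵀQx with Q = diag(2, 10) and α = 0.05, the error shrinks by the constant factor max|1 − αλᵢ| = 0.9 per step, illustrating the linear rate:

```python
import numpy as np

# f(x) = 0.5 * x^T Q x with Q = diag(2, 10); minimizer x* = 0
Q = np.diag([2.0, 10.0])
grad = lambda x: Q @ x

alpha = 0.05                       # constant stepsize
x = np.array([1.0, 1.0])
errs = [np.linalg.norm(x)]
for _ in range(100):
    x = x - alpha * grad(x)        # x_{k+1} = x_k - alpha * grad f(x_k)
    errs.append(np.linalg.norm(x))

print(errs[-1])            # near 0
print(errs[-1] / errs[-2]) # about 0.9: a linear rate with q = max|1 - alpha*lambda_i|
```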
Newton's Method
Consider a twice continuously differentiable function f with Hessian ∇²f(x) ≻ 0 for all x. We want to solve the following problem:
  minimize {f(x) : x ∈ Rⁿ}.
Newton's method:
  x_{k+1} = x_k − α_k [∇²f(x_k)]⁻¹ ∇f(x_k).
Local convergence rate (near x*): ‖∇f(x)‖ converges to zero quadratically:
  ‖∇f(x_k)‖ ≤ C q^{2ᵏ} for all large enough k,
where C > 0 and q ∈ (0, 1).
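A sketch of the pure Newton iteration (α_k = 1; assuming NumPy, and using the illustrative strictly convex objective f(x) = Σᵢ(e^{xᵢ} − xᵢ), minimized at x = 0): the gradient norm roughly squares at every step, showing the local quadratic rate.

```python
import numpy as np

def newton(grad, hess, x0, iters):
    """Pure Newton step: x_{k+1} = x_k - [hess f(x_k)]^{-1} grad f(x_k)."""
    x = np.asarray(x0, dtype=float)
    errs = []
    for _ in range(iters):
        x = x - np.linalg.solve(hess(x), grad(x))
        errs.append(np.linalg.norm(grad(x)))
    return x, errs

# f(x) = sum_i (exp(x_i) - x_i): strictly convex, minimizer x* = 0
grad = lambda x: np.exp(x) - 1.0
hess = lambda x: np.diag(np.exp(x))

x, errs = newton(grad, hess, [1.0, -0.5], iters=6)
print(errs)  # the gradient norm roughly squares each iteration
```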
Penalty Methods
For solving inequality constrained problems:
  minimize f(x) subject to gⱼ(x) ≤ 0, j = 1, ..., m.
Penalty approach: remove the constraints but penalize their violation:
  P_c: minimize F(x, c) = f(x) + c P(g₁(x), ..., g_m(x)) over x ∈ Rⁿ,
where c > 0 is a penalty parameter and P is some penalty function.
Penalty methods operate in two stages, for c and x respectively. Choose an initial value c₀, then:
(1) Having c_k, solve the problem P_{c_k} to obtain its optimal x*(c_k).
(2) Using x*(c_k), update c_k to obtain c_{k+1} and go to step (1).
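A one-dimensional sketch of the two-stage scheme (the quadratic penalty P(u) = max(u, 0)² is one common choice, and the problem, grid solver, and update rule c ← 10c are illustrative assumptions): minimize f(x) = x subject to g(x) = 1 − x ≤ 0, whose solution is x* = 1. The penalty minimizers x*(c) = 1 − 1/(2c) approach x* from the infeasible side as c grows.

```python
import numpy as np

# minimize f(x) = x   subject to   g(x) = 1 - x <= 0   (solution x* = 1)
f = lambda x: x
g = lambda x: 1.0 - x
P = lambda u: np.maximum(u, 0.0) ** 2        # quadratic penalty on violation

xs = np.linspace(-1.0, 3.0, 400001)          # dense grid for the 1-D subproblem
sol = []
c = 1.0
for _ in range(6):
    F = f(xs) + c * P(g(xs))                 # stage (1): solve P_c approximately
    sol.append(xs[np.argmin(F)])
    c *= 10.0                                # stage (2): increase the penalty
print(sol)  # 0.5, 0.95, 0.995, ... -> approaches x* = 1
```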
Q-Rates of Convergence
Let {x_k} be a sequence in Rⁿ that converges to x*. Convergence is said to be:
1. Q-linear if there exist r ∈ (0, 1) and K such that
   ‖x_{k+1} − x*‖ / ‖x_k − x*‖ ≤ r for all k > K.
   Example: x_k = 1 + (1/2)ᵏ converges Q-linearly to 1.
2. Q-quadratic if there exist M and K such that
   ‖x_{k+1} − x*‖ / ‖x_k − x*‖² ≤ M for all k > K.
   Example: x_k = 1 + (1/2)^{2ᵏ} converges Q-quadratically to 1.
3. Q-superlinear if
   lim_{k→∞} ‖x_{k+1} − x*‖ / ‖x_k − x*‖ = 0.
   Example: x_k = 1 + k^{−k} converges Q-superlinearly to 1.
Q-quadratic ⇒ Q-superlinear ⇒ Q-linear.
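The standard example sequences above can be verified by computing successive error ratios directly (a sketch in plain Python; the sequences shown are those commonly used to illustrate Q-rates):

```python
# x_k = 1 + 0.5**k converges Q-linearly to 1: the error ratio is exactly 1/2
lin = [1 + 0.5 ** k for k in range(10)]
ratios = [(lin[k + 1] - 1) / (lin[k] - 1) for k in range(9)]
print(ratios)   # all equal to 0.5

# x_k = 1 + 0.5**(2**k) converges Q-quadratically to 1:
# e_{k+1} / e_k^2 stays bounded (here it equals 1, since e_{k+1} = e_k**2)
quad = [1 + 0.5 ** (2 ** k) for k in range(5)]
qratios = [(quad[k + 1] - 1) / (quad[k] - 1) ** 2 for k in range(4)]
print(qratios)  # all equal to 1.0
```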
Example 1: f(x, y) = x² + y²
1. Steepest descent from (1, 1)
2. Newton from (1, 1)
3. Newton from (−1, 1)
Uday V. Shanbhag
Figure 1: Well-Conditioned Function: Steepest, Newton, Newton
Example 2: f(x, y) = 0.1x² + y²
1. Steepest descent from (1, 1)
2. Newton from (1, 1)
3. Newton from (−1, 1)
Figure 2: Ill-Conditioned Function: Steepest, Newton, Newton
Interior-Point Methods
Solve the inequality (and more generally) constrained problem:
  minimize f(x) subject to gⱼ(x) ≤ 0, j = 1, ..., m.
The IPM solves a sequence of problems parametrized by t > 0:
  minimize f(x) − (1/t) Σⱼ₌₁ᵐ ln(−gⱼ(x)) over x ∈ Rⁿ.
This can be viewed as a penalty method with
- penalty parameter c = 1/t, and
- penalty function P(u₁, ..., u_m) = −Σⱼ₌₁ᵐ ln(−uⱼ).
This function is known as the logarithmic barrier, or log-barrier, function.
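A one-dimensional sketch of the barrier scheme (the problem and the values of t are illustrative assumptions, not from the slides): minimize f(x) = x² subject to g(x) = 1 − x ≤ 0, with solution x* = 1. The barrier subproblem is B_t(x) = x² − (1/t)ln(x − 1), and setting B_t′(x) = 2x − 1/(t(x − 1)) = 0 gives the closed-form central-path point x*(t) = (1 + √(1 + 2/t))/2, which tends to x* = 1 as t → ∞.

```python
import numpy as np

# minimize f(x) = x^2  subject to  g(x) = 1 - x <= 0   (solution x* = 1)
# Barrier subproblem: B_t(x) = x^2 - (1/t) * ln(x - 1), defined for x > 1.
# Stationarity 2x - 1/(t*(x - 1)) = 0 gives the central-path point:
xstar = lambda t: 0.5 * (1.0 + np.sqrt(1.0 + 2.0 / t))

for t in [1.0, 10.0, 100.0, 1000.0]:
    print(t, xstar(t))   # decreases toward 1 as t grows
```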
References for this lecture:
- (B) Bertsekas, D.P., Nonlinear Programming: Chapters 1 and 3 (descent and Newton's methods, KKT conditions).
- (FP) Facchinei and Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Vol. I: Chapter 1 (normal cone, dual cone, and tangent cone; the part on complementarity problems).
- (BNO) Bertsekas, Nedić, Ozdaglar, Convex Analysis and Optimization: Chapter 1 (convex functions).
Lecture 11. 26 September 2006 Review of Lecture #10: Second order optimality conditions necessary condition, sufficient condition. If the necessary condition is violated the point cannot be a local minimum
More informationminimize x subject to (x 2)(x 4) u,
Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for
More information1 Computing with constraints
Notes for 2017-04-26 1 Computing with constraints Recall that our basic problem is minimize φ(x) s.t. x Ω where the feasible set Ω is defined by equality and inequality conditions Ω = {x R n : c i (x)
More informationScientific Computing: Optimization
Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture
More informationConvex Optimization M2
Convex Optimization M2 Lecture 3 A. d Aspremont. Convex Optimization M2. 1/49 Duality A. d Aspremont. Convex Optimization M2. 2/49 DMs DM par email: dm.daspremont@gmail.com A. d Aspremont. Convex Optimization
More informationLecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem
Lecture 3: Lagrangian duality and algorithms for the Lagrangian dual problem Michael Patriksson 0-0 The Relaxation Theorem 1 Problem: find f := infimum f(x), x subject to x S, (1a) (1b) where f : R n R
More informationDate: July 5, Contents
2 Lagrange Multipliers Date: July 5, 2001 Contents 2.1. Introduction to Lagrange Multipliers......... p. 2 2.2. Enhanced Fritz John Optimality Conditions...... p. 14 2.3. Informative Lagrange Multipliers...........
More informationDuality. Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities
Duality Lagrange dual problem weak and strong duality optimality conditions perturbation and sensitivity analysis generalized inequalities Lagrangian Consider the optimization problem in standard form
More information8 Numerical methods for unconstrained problems
8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields
More informationComputational Optimization. Augmented Lagrangian NW 17.3
Computational Optimization Augmented Lagrangian NW 17.3 Upcoming Schedule No class April 18 Friday, April 25, in class presentations. Projects due unless you present April 25 (free extension until Monday
More informationAM 205: lecture 19. Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods
AM 205: lecture 19 Last time: Conditions for optimality, Newton s method for optimization Today: survey of optimization methods Quasi-Newton Methods General form of quasi-newton methods: x k+1 = x k α
More informationINTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE
INTERIOR-POINT METHODS FOR NONCONVEX NONLINEAR PROGRAMMING: CONVERGENCE ANALYSIS AND COMPUTATIONAL PERFORMANCE HANDE Y. BENSON, ARUN SEN, AND DAVID F. SHANNO Abstract. In this paper, we present global
More informationCSCI : Optimization and Control of Networks. Review on Convex Optimization
CSCI7000-016: Optimization and Control of Networks Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one
More informationConvex Optimization Boyd & Vandenberghe. 5. Duality
5. Duality Convex Optimization Boyd & Vandenberghe Lagrange dual problem weak and strong duality geometric interpretation optimality conditions perturbation and sensitivity analysis examples generalized
More informationPrimal Solutions and Rate Analysis for Subgradient Methods
Primal Solutions and Rate Analysis for Subgradient Methods Asu Ozdaglar Joint work with Angelia Nedić, UIUC Conference on Information Sciences and Systems (CISS) March, 2008 Department of Electrical Engineering
More informationOutline. Scientific Computing: An Introductory Survey. Optimization. Optimization Problems. Examples: Optimization Problems
Outline Scientific Computing: An Introductory Survey Chapter 6 Optimization 1 Prof. Michael. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationSubgradients. subgradients and quasigradients. subgradient calculus. optimality conditions via subgradients. directional derivatives
Subgradients subgradients and quasigradients subgradient calculus optimality conditions via subgradients directional derivatives Prof. S. Boyd, EE392o, Stanford University Basic inequality recall basic
More informationLecture 1: Introduction. Outline. B9824 Foundations of Optimization. Fall Administrative matters. 2. Introduction. 3. Existence of optima
B9824 Foundations of Optimization Lecture 1: Introduction Fall 2010 Copyright 2010 Ciamac Moallemi Outline 1. Administrative matters 2. Introduction 3. Existence of optima 4. Local theory of unconstrained
More informationA Brief Review on Convex Optimization
A Brief Review on Convex Optimization 1 Convex set S R n is convex if x,y S, λ,µ 0, λ+µ = 1 λx+µy S geometrically: x,y S line segment through x,y S examples (one convex, two nonconvex sets): A Brief Review
More informationConic Linear Programming. Yinyu Ye
Conic Linear Programming Yinyu Ye December 2004, revised January 2015 i ii Preface This monograph is developed for MS&E 314, Conic Linear Programming, which I am teaching at Stanford. Information, lecture
More informationIOE 511/Math 652: Continuous Optimization Methods, Section 1
IOE 511/Math 652: Continuous Optimization Methods, Section 1 Marina A. Epelman Fall 2007 These notes can be freely reproduced for any non-commercial purpose; please acknowledge the author if you do so.
More informationLecture 8 Plus properties, merit functions and gap functions. September 28, 2008
Lecture 8 Plus properties, merit functions and gap functions September 28, 2008 Outline Plus-properties and F-uniqueness Equation reformulations of VI/CPs Merit functions Gap merit functions FP-I book:
More informationLecture 13 Newton-type Methods A Newton Method for VIs. October 20, 2008
Lecture 13 Newton-type Methods A Newton Method for VIs October 20, 2008 Outline Quick recap of Newton methods for composite functions Josephy-Newton methods for VIs A special case: mixed complementarity
More informationConvex Optimization and Modeling
Convex Optimization and Modeling Duality Theory and Optimality Conditions 5th lecture, 12.05.2010 Jun.-Prof. Matthias Hein Program of today/next lecture Lagrangian and duality: the Lagrangian the dual
More information