Part 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)
minimize_x f(x)  subject to  Ax >= b

Part C course on continuous optimization

LINEARLY CONSTRAINED MINIMIZATION

  minimize_{x in IR^n} f(x)  subject to  Ax >= b (and/or Ax = b)

where the objective function f : IR^n -> IR

- assume that f is C^1 (sometimes C^2) and Lipschitz
- often in practice this assumption is violated, but it is not necessary
- important special cases:
  - linear programming: f(x) = g^T x
  - quadratic programming: f(x) = g^T x + 1/2 x^T H x
- concentrate here on quadratic programming
QUADRATIC PROGRAMMING

  QP: minimize q(x) = g^T x + 1/2 x^T H x  subject to  Ax >= b

- H is n by n, real symmetric; g in IR^n
- A = ( a_1^T ; ... ; a_m^T ) is m by n, real; b = ( [b]_1 ; ... ; [b]_m )
- in general, constraints may
  - have upper bounds: b_l <= Ax <= b_u
  - include equalities: A_e x = b_e
  - involve simple bounds: x_l <= x <= x_u
  - include network constraints ...

PROBLEM TYPES

Convex problems
- H is positive semi-definite: x^T H x >= 0 for all x
- any local minimizer is global
- important special case: H = 0 gives linear programming

Strictly convex problems
- H is positive definite: x^T H x > 0 for all x /= 0
- unique minimizer (if any)
CONVEX EXAMPLE

  minimize (x_1 - 1)^2 + (x_2 - 0.5)^2
  subject to x_1 + x_2 <= 1, 3 x_1 + x_2 <= 1.5, x_1 >= 0, x_2 >= 0

Contours of the objective function over the feasible region.

PROBLEM TYPES II

General non-convex problems
- H may be indefinite: x^T H x < 0 for some x
- may be many local minimizers
- may have to be content with a local minimizer
- problem may be unbounded from below
NON-CONVEX EXAMPLE

  minimize -2 (x_1 - 1)^2 + (x_2 - 0.5)^2
  subject to x_1 + x_2 <= 1, 3 x_1 + x_2 <= 1.5, x_1 >= 0, x_2 >= 0

Contours of the objective function over the feasible region.

PROBLEM TYPES III

- Small: values/structure of the matrix data H and A irrelevant; currently min(m, n) = O(10^2)
- Large: values/structure of the matrix data H and A important; currently min(m, n) = O(10^3)
- Huge: factorizations involving H and A are unrealistic; currently min(m, n) = O(10^5)
WHY IS QP SO IMPORTANT?

- many applications: portfolio analysis, structural analysis, VLSI design, discrete-time stabilization, optimal and fuzzy control, finite impulse response design, optimal power flow, economic dispatch, ... (a large literature of application papers)
- prototypical nonlinear programming problem
- basic subproblem in constrained optimization:

    minimize f(x) subject to c(x) >= 0
    --> minimize f + g^T x + 1/2 x^T H x subject to Ax + c >= 0

  (SQP methods: Course Part 7)

OPTIMALITY CONDITIONS

Recall: the importance of optimality conditions is
- to be able to recognise a solution if found by accident or design
- to guide the development of algorithms
FIRST-ORDER OPTIMALITY

  QP: minimize q(x) = g^T x + 1/2 x^T H x  subject to  Ax >= b

Any point x* that satisfies the conditions

  Ax* >= b                              (primal feasibility)
  Hx* + g = A^T y*  and  y* >= 0        (dual feasibility)
  [Ax* - b]_i [y*]_i = 0 for all i      (complementary slackness)

for some vector of Lagrange multipliers y* is a first-order critical (or Karush-Kuhn-Tucker) point.
If [Ax* - b]_i + [y*]_i > 0 for all i, the solution is strictly complementary.

SECOND-ORDER OPTIMALITY

Let

  N+ = { s | a_i^T s = 0 for all i such that a_i^T x* = [b]_i and [y*]_i > 0, and
             a_i^T s >= 0 for all i such that a_i^T x* = [b]_i and [y*]_i = 0 }

Any first-order critical point x* for which additionally

  s^T H s >= 0 (resp. > 0) for all nonzero s in N+

is a second-order (resp. strong second-order) critical point.

Theorem 4.1: x* is an isolated local minimizer of QP <=> x* is strong second-order critical.
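The first-order (KKT) conditions above are easy to verify numerically. The following is a hedged sketch: the helper name and the toy QP data are invented for illustration, not from the lecture.

```python
import numpy as np

# Check the three first-order conditions at a candidate point (x, y).
def is_kkt_point(H, g, A, b, x, y, tol=1e-8):
    primal = np.all(A @ x - b >= -tol)                  # Ax >= b
    dual = (np.allclose(H @ x + g, A.T @ y, atol=tol)   # Hx + g = A^T y
            and np.all(y >= -tol))                      # y >= 0
    comp = np.allclose((A @ x - b) * y, 0.0, atol=tol)  # [Ax - b]_i [y]_i = 0
    return bool(primal and dual and comp)

# toy QP: minimize -x1 - x2 + 1/2 (x1^2 + x2^2)  subject to  x1 + x2 >= 1
H = np.eye(2)
g = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

x_star = np.array([1.0, 1.0])   # unconstrained minimizer, feasible here
y_star = np.array([0.0])        # constraint inactive, multiplier zero

print(is_kkt_point(H, g, A, b, x_star, y_star))       # True
print(is_kkt_point(H, g, A, b, np.zeros(2), y_star))  # False: infeasible
```

Note that such a check recognises a solution once found; it does not by itself find one.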
WEAK SECOND-ORDER OPTIMALITY

Let

  N = { s | a_i^T s = 0 for all i such that a_i^T x* = [b]_i }

Any first-order critical point x* for which additionally s^T H s >= 0 for all s in N is a weak second-order critical point.
Note that a weak second-order critical point may be a maximizer!
- checking for weak second-order criticality is easy; checking for strong is hard

NON-CONVEX EXAMPLE

  minimize x_1^2 + x_2^2 - 6 x_1 x_2
  subject to x_1 + x_2 <= 1, 3 x_1 + x_2 <= 1.5, x_1 >= 0, x_2 >= 0

Contours of the objective function: note that escaping from the origin may be difficult!
[ DUALITY

If QP is convex, any first-order critical point is a global minimizer.
If H is positive definite (the strictly convex case), the problem

  maximize_{y in IR^m, y >= 0}  -1/2 g^T H^{-1} g + (A H^{-1} g + b)^T y - 1/2 y^T A H^{-1} A^T y

is known as the dual of QP (QP itself is the primal)
- primal and dual have the same KKT conditions
- if the primal is feasible, optimal value of primal = optimal value of dual
- can be generalized for the simply convex case ]

ALGORITHMS

Essentially two classes of methods (a slight simplification):

- active-set methods:
  - primal active-set methods: aim for dual feasibility while maintaining primal feasibility and complementary slackness
  - dual active-set methods: aim for primal feasibility while maintaining dual feasibility and complementary slackness
- interior-point methods: aim for complementary slackness while maintaining primal and dual feasibility (Course Part 6)
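The dual construction above can be sanity-checked numerically. The sketch below uses an invented strictly convex toy QP: it evaluates the dual objective at feasible y >= 0 and confirms weak duality (the dual value never exceeds the primal optimum) and that the optimal values coincide.

```python
import numpy as np

H = np.eye(2)
g = np.array([-1.0, -1.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
Hinv = np.linalg.inv(H)

def q(x):                     # primal objective
    return g @ x + 0.5 * x @ H @ x

def dual(y):                  # dual objective from the slide
    return (-0.5 * g @ Hinv @ g
            + (A @ Hinv @ g + b) @ y
            - 0.5 * y @ (A @ Hinv @ A.T) @ y)

# unconstrained minimizer (1, 1) is feasible here, so it solves the primal
x_star = np.array([1.0, 1.0])
y_star = np.array([0.0])

print(np.isclose(dual(y_star), q(x_star)))   # True: optimal values coincide

ys = np.random.default_rng(0).uniform(0.0, 5.0, size=(100, 1))
print(all(dual(y) <= q(x_star) + 1e-9 for y in ys))   # True: weak duality
```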
EQUALITY CONSTRAINED QP

The basic subproblem in all of the methods we will consider is

  EQP: minimize q(x) = g^T x + 1/2 x^T H x  subject to  Ax = 0

N.B. Assume A is m by n and of full rank (preprocess if necessary).

First-order optimality (with Lagrange multipliers y):

  ( H  A^T ) (  x )   ( -g )
  ( A   0  ) ( -y ) = (  0 )

Second-order necessary optimality: s^T H s >= 0 for all s for which As = 0
Second-order sufficient optimality: s^T H s > 0 for all s /= 0 for which As = 0

EQUALITY CONSTRAINED QP II

Four possibilities:
(i) the first-order system holds and H is second-order sufficient => unique minimizer x*
(ii) the system holds and H is second-order necessary, but there is an s /= 0 such that Hs = 0 and As = 0 => family of weak minimizers x* + alpha s for any alpha in IR
(iii) there is an s for which As = 0, Hs = 0 and g^T s < 0 => q unbounded along the direction of linear infinite descent s
(iv) there is an s for which As = 0 and s^T H s < 0 => q unbounded along the direction of negative curvature s
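In case (i), the EQP can be solved by assembling and solving the KKT system directly. A minimal sketch, with invented toy data:

```python
import numpy as np

# solve  ( H A^T ; A 0 ) ( x ; -y ) = ( -g ; 0 )  for a tiny EQP
H = 2.0 * np.eye(2)
g = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])     # single constraint x1 + x2 = 0
m, n = A.shape

K = np.block([[H, A.T], [A, np.zeros((m, m))]])
sol = np.linalg.solve(K, np.concatenate([-g, np.zeros(m)]))
x, y = sol[:n], -sol[n:]       # the second block of unknowns is -y

print(x)          # the constrained minimizer; here (-0.5, 0.5)
print(A @ x)      # ~ 0, so the constraint holds
```

This is the full-space approach of the next slide; the range- and null-space approaches solve the same system by elimination.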
CLASSIFICATION OF EQP METHODS

Aim to solve

  ( H  A^T ) (  x )   ( -g )
  ( A   0  ) ( -y ) = (  0 )

Three basic approaches:
- full-space approach
- range-space approach
- null-space approach

For each of these one can use a
- direct (factorization) method
- iterative (conjugate-gradient) method

FULL-SPACE/KKT/AUGMENTED SYSTEM APPROACH

The KKT matrix

  K = ( H  A^T )
      ( A   0  )

is symmetric and indefinite => use a Bunch-Parlett type factorization

  K = P L B L^T P^T

- P permutation, L unit lower triangular, B block diagonal with 1x1 and 2x2 blocks
- LAPACK for small problems, MA27/MA57 for large ones

Theorem 4.2: H is second-order sufficient <=> K is non-singular and has precisely m negative eigenvalues.
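Theorem 4.2 is easy to illustrate numerically on invented data: take H positive definite (hence second-order sufficient) and A of full rank m, and count the negative eigenvalues of K. Eigenvalues are used here only for clarity; in practice the inertia is read off the P L B L^T P^T factorization instead.

```python
import numpy as np

H = np.diag([4.0, 2.0, 1.0])             # positive definite
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])          # m = 2 independent constraints
m, n = A.shape

K = np.block([[H, A.T], [A, np.zeros((m, m))]])
eigs = np.linalg.eigvalsh(K)
n_neg = int(np.sum(eigs < 0.0))

print(n_neg)    # 2, i.e. precisely m negative eigenvalues
```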
RANGE-SPACE APPROACH

For non-singular H, eliminate x using the first block:

  A H^{-1} A^T y = A H^{-1} g,  followed by  Hx = -g + A^T y

- strictly convex case: H and A H^{-1} A^T positive definite => Cholesky factorization

Theorem 4.3: H is second-order sufficient <=> H and A H^{-1} A^T have the same number of negative eigenvalues.

- A H^{-1} A^T is usually dense => factorization only for small m

NULL-SPACE APPROACH

Let the n by (n - m) matrix S be a basis for the null-space of A, i.e., AS = 0.
The second block gives x = S x_N; premultiplying the first block by S^T gives

  S^T H S x_N = -S^T g

Theorem 4.4: H is second-order sufficient <=> S^T H S is positive definite => Cholesky factorization.

- S^T H S is usually dense => factorization only for small n - m
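The direct range-space elimination above (Cholesky-solve A H^{-1} A^T y = A H^{-1} g, then recover x from Hx = -g + A^T y) can be sketched as follows; the toy data is invented.

```python
import numpy as np

H = np.diag([2.0, 3.0, 4.0])
g = np.array([1.0, -1.0, 2.0])
A = np.array([[1.0, 1.0, 1.0]])

Hinv = np.linalg.inv(H)
M = A @ Hinv @ A.T                  # m x m, positive definite here
L = np.linalg.cholesky(M)
z = np.linalg.solve(L, A @ Hinv @ g)    # forward substitution
y = np.linalg.solve(L.T, z)             # backward substitution
x = Hinv @ (-g + A.T @ y)

print(np.max(np.abs(A @ x)))        # ~ 0: constraint Ax = 0 satisfied
```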
NULL-SPACE BASIS

Require an n by (n - m) null-space basis S for A: AS = 0.

Non-orthogonal basis: let A = ( A_1  A_2 ) P, P a permutation, A_1 non-singular

  S = P^T ( -A_1^{-1} A_2 )
          (       I       )

- generally suitable for large problems. Best A_1?

Orthogonal basis: let A = ( L  0 ) Q, L non-singular (e.g., triangular), Q = ( Q_1 ; Q_2 ) orthonormal

  S = Q_2^T

- more stable, but ... generally unsuitable for large problems

[ ITERATIVE METHODS FOR SYMMETRIC LINEAR SYSTEMS Bx = b

Best methods are based on finding solutions from the Krylov space

  K = { r, Br, B(Br), ... },  r = b - Bx_0

- B indefinite: use the MINRES method
- B positive definite: use the conjugate gradient method
- usually satisfactory to find an approximation rather than the exact solution
- usually try to precondition the system, i.e., solve C^{-1} B x = C^{-1} b where C^{-1} B ~ I ]
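The orthogonal null-space basis above can be computed with a dense QR factorization: factoring A^T = Q R, the last n - m columns of Q span the null space of A (the transposed form of the A = ( L 0 ) Q construction). A small sketch on invented data:

```python
import numpy as np

A = np.array([[1.0, 2.0, 0.0, 1.0],
              [0.0, 1.0, 1.0, 0.0]])       # m = 2, n = 4, full rank
m, n = A.shape

Q, _ = np.linalg.qr(A.T, mode='complete')  # full n x n orthogonal factor
S = Q[:, m:]                               # n x (n - m) orthonormal basis

print(np.max(np.abs(A @ S)))               # ~ 0: A S = 0
```

The columns of S are orthonormal, so S^T H S inherits symmetry and conditioning from H restricted to the null space.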
[ ITERATIVE RANGE-SPACE APPROACH

  A H^{-1} A^T y = A H^{-1} g,  followed by  Hx = -g + A^T y

For the strictly convex case (H and A H^{-1} A^T positive definite):
- H^{-1} available (directly or via factors): use conjugate gradients to solve A H^{-1} A^T y = A H^{-1} g
  - matrix-vector product: A H^{-1} A^T v = A ( H^{-1} ( A^T v ) )
  - preconditioning? need to approximate the likely-dense A H^{-1} A^T
- H^{-1} not available: use a composite conjugate gradient method (Uzawa's method), iterating on the solutions of A H^{-1} A^T y = A H^{-1} g and Hx = -g + A^T y at the same time; may not converge ]

[ ITERATIVE NULL-SPACE APPROACH

  S^T H S x_N = -S^T g,  followed by  x = S x_N

- use the conjugate gradient method
- matrix-vector product: S^T H S v_N = S^T ( H ( S v_N ) )
- preconditioning? need to approximate the likely-dense S^T H S
- if we encounter s_N such that s_N^T S^T H S s_N < 0, then s = S s_N is a direction of negative curvature, since As = 0 and s^T H s < 0

Advantage: Ax = 0 is satisfied by all approximations ]
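The iterative null-space approach above can be sketched with a hand-rolled conjugate gradient loop that never forms the (likely dense) reduced Hessian: each product is computed as S^T ( H ( S v ) ). Toy data only; a library CG would be used in practice.

```python
import numpy as np

H = np.diag([2.0, 3.0, 4.0])
g = np.array([1.0, -1.0, 2.0])
A = np.array([[1.0, 1.0, 1.0]])
m, n = A.shape

Q, _ = np.linalg.qr(A.T, mode='complete')
S = Q[:, m:]                          # orthonormal null-space basis, A S = 0

def reduced_hessian_product(v):
    return S.T @ (H @ (S @ v))        # S^T H S v, one product at a time

# plain conjugate gradients on  S^T H S x_N = -S^T g
xN = np.zeros(n - m)
r = -S.T @ g - reduced_hessian_product(xN)   # initial residual
p = r.copy()
for _ in range(n - m):                # CG needs at most n - m steps here
    Bp = reduced_hessian_product(p)
    alpha = (r @ r) / (p @ Bp)
    xN += alpha * p
    r_new = r - alpha * Bp
    if np.linalg.norm(r_new) < 1e-12:
        break
    p = r_new + ((r_new @ r_new) / (r @ r)) * p
    r = r_new

x = S @ xN                            # recover the full-space solution
print(np.max(np.abs(A @ x)))          # ~ 0: Ax = 0 by construction
```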
[ ITERATIVE FULL-SPACE APPROACH

  ( H  A^T ) (  x )   ( -g )
  ( A   0  ) ( -y ) = (  0 )

- use MINRES with the block-diagonal preconditioner

    ( M       0        )
    ( 0  A N^{-1} A^T  )

  where M ~ H and N ~ H. Disadvantage: Ax = 0 only holds approximately
- use conjugate gradients with the constraint preconditioner

    ( M  A^T )
    ( A   0  )

  where M ~ H. Advantage: Ax = 0 holds ]

ACTIVE SET ALGORITHMS

  QP: minimize q(x) = g^T x + 1/2 x^T H x  subject to  Ax >= b

The active set A(x) at x is

  A(x) = { i | a_i^T x = [b]_i }

If x* solves QP, we have

  x* = arg min q(x) subject to Ax >= b
     = arg min q(x) subject to a_i^T x = [b]_i for all i in A(x*)

A working set W(x) at x is a subset of the active set for which the vectors { a_i }, i in W(x), are linearly independent.
BASICS OF ACTIVE SET ALGORITHMS

Basic idea:
- pick a subset W_k of {1, ..., m} and find

    x_{k+1} = arg min q(x) subject to a_i^T x = [b]_i for all i in W_k

- if x_{k+1} does not solve QP, adjust W_k to form W_{k+1} and repeat

Important issues:
- how do we know if x_{k+1} solves QP?
- if x_{k+1} does not solve QP, how do we pick the next working set W_{k+1}?

Notation: the rows of A_k are those of A indexed by W_k; the components of b_k are those of b indexed by W_k.

PRIMAL ACTIVE SET ALGORITHMS

Important feature: ensure all iterates are feasible, i.e., Ax_k >= b.
If W_k is a subset of A(x_k), then A_k x_k = b_k and A_k x_{k+1} = b_k =>

  x_{k+1} = x_k + s_k,  where
  s_k = arg min q(x_k + s) subject to A_k s = 0    (EQP_k, an equality constrained problem)

Need an initial feasible point x_0.
PRIMAL ACTIVE SET ALGORITHMS: ADDING CONSTRAINTS

  s_k = arg min q(x_k + s) subject to A_k s = 0

What if x_k + s_k is not feasible?
- a currently inactive constraint j must become active at x_k + alpha_k s_k for some alpha_k < 1
- pick the smallest such alpha_k
- move instead to x_{k+1} = x_k + alpha_k s_k and set W_{k+1} = W_k + {j}

PRIMAL ACTIVE SET ALGORITHMS: DELETING CONSTRAINTS

What if x_{k+1} = x_k + s_k is feasible? Then

  x_{k+1} = arg min q(x) subject to a_i^T x = [b]_i for all i in W_k

=> there are Lagrange multipliers y_{k+1} such that

  ( H   A_k^T ) (  x_{k+1} )   ( -g  )
  ( A_k   0   ) ( -y_{k+1} ) = ( b_k )

Three possibilities:
- EQP_k has no minimizer, i.e. q is unbounded below on the working set (possible in the not-strictly-convex case only)
- y_{k+1} >= 0 => x_{k+1} is a first-order critical point of QP
- [y_{k+1}]_i < 0 for some i => q(x) may be improved by considering W_{k+1} = W_k \ {j}, where j is the i-th member of W_k
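The two rules above combine into a complete primal active-set loop. The sketch below is simplified teaching code (no degeneracy or unboundedness handling, dense linear algebra, names invented); the data is a classic small strictly convex QP test problem whose solution is (1.4, 1.7).

```python
import numpy as np

def solve_eqp(H, gk, Ak):
    # minimize gk^T s + 1/2 s^T H s  subject to  Ak s = 0, via the KKT system
    n, m = H.shape[0], Ak.shape[0]
    K = np.block([[H, Ak.T], [Ak, np.zeros((m, m))]])
    sol = np.linalg.solve(K, np.concatenate([-gk, np.zeros(m)]))
    return sol[:n], -sol[n:]                 # step s and multipliers y

def active_set_qp(H, g, A, b, x, max_iter=50):
    W = [i for i in range(len(b)) if abs(A[i] @ x - b[i]) < 1e-10]
    for _ in range(max_iter):
        s, y = solve_eqp(H, g + H @ x, A[W])
        if np.linalg.norm(s) < 1e-10:        # minimizer on current working set
            if np.all(y >= -1e-10):
                return x, W                  # first-order critical point of QP
            W.pop(int(np.argmin(y)))         # delete most negative multiplier
        else:                                # ratio test for blocking constraints
            alpha, blocking = 1.0, None
            for i in range(len(b)):
                if i not in W and A[i] @ s < -1e-12:
                    step = (A[i] @ x - b[i]) / -(A[i] @ s)
                    if step < alpha:
                        alpha, blocking = step, i
            x = x + alpha * s
            if blocking is not None:
                W.append(blocking)
    return x, W

# minimize (x1 - 1)^2 + (x2 - 2.5)^2 as a QP: H = 2I, g = (-2, -5)
H = 2.0 * np.eye(2)
g = np.array([-2.0, -5.0])
A = np.array([[1.0, -2.0], [-1.0, -2.0], [-1.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([-2.0, -6.0, -2.0, 0.0, 0.0])

x_opt, W_opt = active_set_qp(H, g, A, b, np.array([2.0, 0.0]))
print(x_opt)   # ~ [1.4 1.7], with constraint 0 active at the solution
```

Real implementations update factorizations between working sets instead of refactorizing, as the next slides describe.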
ACTIVE-SET APPROACH

Illustration (iterates of a primal active-set method with constraints A and B):
0. Starting point
1. Encounter constraint A
2. Minimizer on constraint A
3. Encounter constraint B, move off constraint A
4. Minimizer on constraint B: the required solution

LINEAR ALGEBRA

Need to solve a sequence of EQP_k's in which either

  W_{k+1} = W_k + {j}  =>  A_{k+1} = (  A_k  )
                                     ( a_j^T )

or

  W_{k+1} = W_k \ {j}  =>  A_k = ( A_{k+1} )
                                 (  a_j^T  )

Since working sets change gradually, aim to update factorizations rather than compute them afresh.
RANGE-SPACE APPROACH: MATRIX UPDATES

Need factors L_{k+1} L_{k+1}^T = A_{k+1} H^{-1} A_{k+1}^T given L_k L_k^T = A_k H^{-1} A_k^T.

When A_{k+1} = ( A_k ; a_j^T ),

  A_{k+1} H^{-1} A_{k+1}^T = ( A_k H^{-1} A_k^T     A_k H^{-1} a_j   )
                             ( a_j^T H^{-1} A_k^T   a_j^T H^{-1} a_j )

and

  L_{k+1} = ( L_k   0      )  where  L_k l = A_k H^{-1} a_j  and  lambda^2 = a_j^T H^{-1} a_j - l^T l
            ( l^T   lambda )

Essentially reverse this to remove a constraint.

NULL-SPACE APPROACH: MATRIX UPDATES

Need factors A_{k+1} = ( L_{k+1}  0 ) Q_{k+1} given A_k = ( L_k  0 ) Q_k, with Q_k = ( Q_{1k} ; Q_{2k} ).

To add a constraint (to remove one is similar):

  A_{k+1} = (  A_k  ) = (       L_k              0       ) Q_k
            ( a_j^T )   ( a_j^T Q_{1k}^T  a_j^T Q_{2k}^T )

          = (       L_k            0        ) ( I  0 ) Q_k = ( L_{k+1}  0 ) Q_{k+1}
            ( a_j^T Q_{1k}^T  sigma e_1^T   ) ( 0  U )

where the Householder matrix U reduces Q_{2k} a_j to sigma e_1.
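The range-space factor update above costs one forward substitution per added constraint. A minimal sketch on invented data (the helper name is illustrative):

```python
import numpy as np

def add_row_to_factor(Lk, Ak_Hinv_aj, aj_Hinv_aj):
    l = np.linalg.solve(Lk, Ak_Hinv_aj)       # L_k l = A_k H^{-1} a_j
    lam = np.sqrt(aj_Hinv_aj - l @ l)         # lambda^2 = a_j^T H^{-1} a_j - l^T l
    k = Lk.shape[0]
    L = np.zeros((k + 1, k + 1))
    L[:k, :k], L[k, :k], L[k, k] = Lk, l, lam
    return L

H = np.diag([1.0, 2.0, 4.0])
Hinv = np.linalg.inv(H)
Ak = np.array([[1.0, 1.0, 0.0]])
aj = np.array([0.0, 1.0, 1.0])

Lk = np.linalg.cholesky(Ak @ Hinv @ Ak.T)
Lnew = add_row_to_factor(Lk, Ak @ Hinv @ aj, aj @ Hinv @ aj)

Anew = np.vstack([Ak, aj])
print(np.allclose(Lnew @ Lnew.T, Anew @ Hinv @ Anew.T))   # True
```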
[ FULL-SPACE APPROACH: MATRIX UPDATES

Suppose the working set W_k becomes W_l: constraints D are deleted and constraints A are added, so that A_k = ( A_C ; A_D ) becomes A_l = ( A_C ; A_A ). Rather than refactorizing

  K_l = ( H    A_l^T )
        ( A_l    0   )

border the old matrix K_k: the added constraints contribute new rows and columns ( A_A  0 ), while each deleted constraint contributes a unit row and column that frees the corresponding multiplier. Collecting the unknowns s_l, y_C, y_D, y_A and the auxiliary variables u_l gives a bordered system

  ( K_k  V ) ( v_l )   ( -g_l )
  ( V^T  0 ) ( u_l ) = (   0  )  ...  ]

[ FULL-SPACE APPROACH: MATRIX UPDATES (CONT.)

... which can be solved using the existing factors of

  K_k = ( H    A_k^T )
        ( A_k    0   )

together with those of the (small, dense) Schur complement

  S_l = -V^T K_k^{-1} V  ]
[ SCHUR COMPLEMENT UPDATING

A major iteration starts with a factorization of

  K_k = ( H    A_k^T )
        ( A_k    0   )

As W_k changes to W_l, the factorization of the Schur complement S_l = -V^T K_k^{-1} V, where V borders K_k with the added constraints and with unit columns freeing the multipliers of the deleted constraints, is updated, not recomputed.

Once dim S_l exceeds a given threshold, or it becomes cheaper to factorize/use K_l than to maintain/use K_k and S_l, start the next major iteration. ]

PHASE-1

To find an initial feasible point x_0 such that Ax_0 >= b:
- use a traditional simplex phase-1, or
- let r = max(b - A x_guess, 0) and solve

    minimize_{x, xi in IR} xi  subject to  Ax + xi r >= b and xi >= 0,

  starting from [x, xi] = [x_guess, 1]

Alternatively, use a single-phase method:
- Big-M: for some sufficiently large M,

    minimize_{x, xi} q(x) + M xi  subject to  Ax + xi r >= b and xi >= 0

- l_1 QP: for some penalty parameter rho > 0,

    minimize q(x) + rho || max(b - Ax, 0) ||_1

  (may be reformulated as a QP)
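The phase-1 construction can be sketched as follows; the sign convention r = max(b - A x_guess, 0) is a reconstruction of the slide's garbled formula, chosen so that the guess is feasible for the auxiliary problem, and the data is invented.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [3.0, 1.0]])
b = np.array([1.0, 1.5])
x_guess = np.zeros(2)                  # violates both constraints

r = np.maximum(b - A @ x_guess, 0.0)   # shift just enough to cover violations
xi = 1.0

# (x_guess, 1) satisfies  A x + xi r >= b, xi >= 0, so the auxiliary
# problem has an obvious starting point; any point with xi = 0 is
# feasible for the original constraints Ax >= b.
feasible = bool(np.all(A @ x_guess + xi * r >= b - 1e-12) and xi >= 0.0)
print(feasible)   # True
```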
CONVEX EXAMPLE

  minimize (x_1 - 1)^2 + (x_2 - 0.5)^2
  subject to x_1 + x_2 <= 1, 3 x_1 + x_2 <= 1.5, x_1 >= 0, x_2 >= 0

Contours of the penalty function q(x) + rho ||max(b - Ax, 0)||_1 with rho = 2.

NON-CONVEX EXAMPLE

  minimize -2 (x_1 - 1)^2 + (x_2 - 0.5)^2
  subject to x_1 + x_2 <= 1, 3 x_1 + x_2 <= 1.5, x_1 >= 0, x_2 >= 0

Contours of the penalty function q(x) + rho ||max(b - Ax, 0)||_1 with rho = 3.
TERMINATION, DEGENERACY & ANTI-CYCLING

So long as alpha_k > 0, these methods are finite:
- finite number of steps to find an EQP with a feasible solution
- finite number of EQPs with feasible solutions

If x_k is degenerate (its active constraints are dependent), it is possible that alpha_k = 0. If this happens infinitely often,
- the method may make no progress (a cycle)
- the algorithm may stall

Various anti-cycling rules:
- Wolfe's and lexicographic perturbations
- least-index (Bland's) rule
- Fletcher's robust method

NON-CONVEXITY

causes little extra difficulty so long as suitable factorizations are possible.

Inertia-controlling methods tolerate at most one negative eigenvalue in the reduced Hessian. The idea is:
1. start from a working set on which the problem is strictly convex (e.g., a vertex)
2. if a negative eigenvalue appears, do not drop any further constraints until 1. is restored
3. a direction of negative curvature is easy to obtain in 2.

The latest methods are not inertia controlling (more flexible).
COMPLEXITY

When the problem is convex, there are algorithms that will solve QP in a polynomial number of iterations
- some interior-point algorithms are polynomial
- there is no known polynomial active-set algorithm

When the problem is non-convex, it is unlikely that there are polynomial algorithms
- the problem is NP complete
- even verifying that a proposed solution is locally optimal is NP hard

NON-QUADRATIC OBJECTIVE

When f(x) is non-quadratic, H = H_k changes => active-set subproblem

  x_{k+1} = arg min f(x) subject to a_i^T x = [b]_i for all i in W_k

- iteration is now required, but each step satisfies A_k s = 0 => linear algebra as before
- usually solve the subproblem inaccurately
  - when to stop? which Lagrange multipliers in this case?
- need to avoid zig-zagging, in which working sets repeat
More informationQuadratic Programming
Quadratic Programming Outline Linearly constrained minimization Linear equality constraints Linear inequality constraints Quadratic objective function 2 SideBar: Matrix Spaces Four fundamental subspaces
More informationKey words. Nonconvex quadratic programming, active-set methods, Schur complement, Karush- Kuhn-Tucker system, primal-feasible methods.
INERIA-CONROLLING MEHODS FOR GENERAL QUADRAIC PROGRAMMING PHILIP E. GILL, WALER MURRAY, MICHAEL A. SAUNDERS, AND MARGARE H. WRIGH Abstract. Active-set quadratic programming (QP) methods use a working set
More informationNonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control
Nonlinear Programming (Hillier, Lieberman Chapter 13) CHEM-E7155 Production Planning and Control 19/4/2012 Lecture content Problem formulation and sample examples (ch 13.1) Theoretical background Graphical
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More information1 Review Session. 1.1 Lecture 2
1 Review Session Note: The following lists give an overview of the material that was covered in the lectures and sections. Your TF will go through these lists. If anything is unclear or you have questions
More informationI.3. LMI DUALITY. Didier HENRION EECI Graduate School on Control Supélec - Spring 2010
I.3. LMI DUALITY Didier HENRION henrion@laas.fr EECI Graduate School on Control Supélec - Spring 2010 Primal and dual For primal problem p = inf x g 0 (x) s.t. g i (x) 0 define Lagrangian L(x, z) = g 0
More informationInterior Point Methods. We ll discuss linear programming first, followed by three nonlinear problems. Algorithms for Linear Programming Problems
AMSC 607 / CMSC 764 Advanced Numerical Optimization Fall 2008 UNIT 3: Constrained Optimization PART 4: Introduction to Interior Point Methods Dianne P. O Leary c 2008 Interior Point Methods We ll discuss
More informationCSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming
CSCI 1951-G Optimization Methods in Finance Part 01: Linear Programming January 26, 2018 1 / 38 Liability/asset cash-flow matching problem Recall the formulation of the problem: max w c 1 + p 1 e 1 = 150
More informationInterior Point Methods in Mathematical Programming
Interior Point Methods in Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Brazil Journées en l honneur de Pierre Huard Paris, novembre 2008 01 00 11 00 000 000 000 000
More informationAn Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization
An Inexact Sequential Quadratic Optimization Method for Nonlinear Optimization Frank E. Curtis, Lehigh University involving joint work with Travis Johnson, Northwestern University Daniel P. Robinson, Johns
More informationMATH Dr. Pedro Vásquez UPRM. P. Vásquez (UPRM) Conferencia 1 / 17
MATH 6026 Dr. Pedro Vásquez UPRM P. Vásquez (UPRM) Conferencia 1 / 17 Quadratic programming uemath 6026 Equality constraints A general formulation of these problems is: min x 2R nq (x) = 1 2 x T Qx + x
More information5.6 Penalty method and augmented Lagrangian method
5.6 Penalty method and augmented Lagrangian method Consider a generic NLP problem min f (x) s.t. c i (x) 0 i I c i (x) = 0 i E (1) x R n where f and the c i s are of class C 1 or C 2, and I and E are the
More informationSupport Vector Machine (SVM) and Kernel Methods
Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2015 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin
More informationLecture: Algorithms for LP, SOCP and SDP
1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationLinear programming II
Linear programming II Review: LP problem 1/33 The standard form of LP problem is (primal problem): max z = cx s.t. Ax b, x 0 The corresponding dual problem is: min b T y s.t. A T y c T, y 0 Strong Duality
More informationSupport Vector Machine (SVM) and Kernel Methods
Support Vector Machine (SVM) and Kernel Methods CE-717: Machine Learning Sharif University of Technology Fall 2014 Soleymani Outline Margin concept Hard-Margin SVM Soft-Margin SVM Dual Problems of Hard-Margin
More informationLinear algebra issues in Interior Point methods for bound-constrained least-squares problems
Linear algebra issues in Interior Point methods for bound-constrained least-squares problems Stefania Bellavia Dipartimento di Energetica S. Stecco Università degli Studi di Firenze Joint work with Jacek
More informationInexact Newton Methods and Nonlinear Constrained Optimization
Inexact Newton Methods and Nonlinear Constrained Optimization Frank E. Curtis EPSRC Symposium Capstone Conference Warwick Mathematics Institute July 2, 2009 Outline PDE-Constrained Optimization Newton
More informationThe simplex algorithm
The simplex algorithm The simplex algorithm is the classical method for solving linear programs. Its running time is not polynomial in the worst case. It does yield insight into linear programs, however,
More informationA STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE
A STABILIZED SQP METHOD: SUPERLINEAR CONVERGENCE Philip E. Gill Vyacheslav Kungurtsev Daniel P. Robinson UCSD Center for Computational Mathematics Technical Report CCoM-14-1 June 30, 2014 Abstract Regularized
More information8 Numerical methods for unconstrained problems
8 Numerical methods for unconstrained problems Optimization is one of the important fields in numerical computation, beside solving differential equations and linear systems. We can see that these fields
More informationSupport Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology. M. Soleymani Fall 2012
Support Vector Machine (SVM) & Kernel CE-717: Machine Learning Sharif University of Technology M. Soleymani Fall 2012 Linear classifier Which classifier? x 2 x 1 2 Linear classifier Margin concept x 2
More informationFollowing The Central Trajectory Using The Monomial Method Rather Than Newton's Method
Following The Central Trajectory Using The Monomial Method Rather Than Newton's Method Yi-Chih Hsieh and Dennis L. Bricer Department of Industrial Engineering The University of Iowa Iowa City, IA 52242
More informationMethods for Unconstrained Optimization Numerical Optimization Lectures 1-2
Methods for Unconstrained Optimization Numerical Optimization Lectures 1-2 Coralia Cartis, University of Oxford INFOMM CDT: Modelling, Analysis and Computation of Continuous Real-World Problems Methods
More informationMachine Learning A Geometric Approach
Machine Learning A Geometric Approach CIML book Chap 7.7 Linear Classification: Support Vector Machines (SVM) Professor Liang Huang some slides from Alex Smola (CMU) Linear Separator Ham Spam From Perceptron
More informationA REGULARIZED ACTIVE-SET METHOD FOR SPARSE CONVEX QUADRATIC PROGRAMMING
A REGULARIZED ACTIVE-SET METHOD FOR SPARSE CONVEX QUADRATIC PROGRAMMING A DISSERTATION SUBMITTED TO THE INSTITUTE FOR COMPUTATIONAL AND MATHEMATICAL ENGINEERING AND THE COMMITTEE ON GRADUATE STUDIES OF
More informationOn fast trust region methods for quadratic models with linear constraints. M.J.D. Powell
DAMTP 2014/NA02 On fast trust region methods for quadratic models with linear constraints M.J.D. Powell Abstract: Quadratic models Q k (x), x R n, of the objective function F (x), x R n, are used by many
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 6 Optimization Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction permitted
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationPattern Classification, and Quadratic Problems
Pattern Classification, and Quadratic Problems (Robert M. Freund) March 3, 24 c 24 Massachusetts Institute of Technology. 1 1 Overview Pattern Classification, Linear Classifiers, and Quadratic Optimization
More information