1 Convexification of Mixed-Integer Quadratically Constrained Quadratic Programs
Laura Galli (1), Adam N. Letchford (2)
(1) DEIS, University of Bologna, Italy
(2) Department of Management Science, Lancaster University, United Kingdom
Lancaster, April

2 Outline
- Literature Review: Motivation; Relaxations of QCQPs; Quadratic Convex Reformulation (QCR)
- Convexification of 0-1 QCQP: Reformulation with equations only; Reformulation with inequalities
- Convexification of MIQCQP: General integer variables; Continuous variables
- Conclusions

3 Motivation
It is well known that Semidefinite Programming (SDP) can be used to form useful relaxations for a variety of NP-hard optimisation problems, such as:
- zero-one linear programs (0-1 LP)
- zero-one quadratic programs (0-1 QP)
- non-convex quadratically constrained quadratic programs (QCQP)
- general polynomial programs

4 Motivation (cont.) Recently, Billionnet et al. (2009, 2010) proposed to use SDP to reformulate 0-1 QPs, not just to relax them. The idea is to perturb the objective function in such a way that it is convex and the continuous relaxation of the 0-1 QP is tighter. We examine to what extent this approach can be applied to the (much) more general case of mixed-integer quadratically constrained quadratic programs (MIQCQP).

5 Semidefinite relaxation of QCQP
We start with Shor's (1987) relaxation of non-convex QCQP (which is an NP-hard global optimisation problem). An instance takes the form:

inf x^T Q_0 x + c_0^T x
s.t. x^T Q_j x + c_j^T x ≤ b_j (j = 1, ..., m)
x ∈ R^n,

where the Q_j are symmetric, square and rational matrices of order n, the c_j are rational n-vectors, and the b_j are rational scalars.

6 Semidefinite relaxation of QCQP (cont.)
The basic SDP relaxation of the QCQP is derived as follows:
- define the n × n symmetric matrix X = xx^T
- replace the terms x^T Q_j x with Q_j • X
- define the augmented symmetric matrix

Y = (1; x)(1; x)^T = [ 1  x^T ]
                     [ x  X   ]

- note that Y is positive semidefinite (psd).
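As a sanity check, the lifting above can be reproduced numerically. This is a minimal numpy sketch (all data are random and purely illustrative): it builds X = xx^T and the augmented matrix Y, then confirms that x^T Q x equals the inner product Q • X and that Y is psd.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = rng.standard_normal(n)
Q = rng.standard_normal((n, n))
Q = (Q + Q.T) / 2          # symmetric, as the QCQP data require

X = np.outer(x, x)         # the lifted matrix X = x x^T
Y = np.block([[np.ones((1, 1)), x[None, :]],
              [x[:, None],      X]])

# x^T Q x equals the Frobenius inner product Q • X ...
assert np.isclose(x @ Q @ x, np.sum(Q * X))
# ... and Y = (1; x)(1; x)^T is rank one and psd, so "Y psd" is a valid relaxation.
assert np.linalg.eigvalsh(Y).min() >= -1e-9
```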

7 Semidefinite relaxation of QCQP (cont.)
The resulting SDP is:

inf Q_0 • X + c_0^T x
s.t. Q_j • X + c_j^T x ≤ b_j (j = 1, ..., m)
Y ⪰ 0.

(The last constraint imposes psd-ness on Y.)

8 Lagrangian relaxation of QCQP
As noted by Fujie & Kojima (1997), Lemaréchal & Oustry (2001) and others, there is a connection between semidefinite relaxation and Lagrangian relaxation. Suppose we relax all of the constraints in Lagrangian fashion, and let λ denote the vector of Lagrangian multipliers. The Lagrangian objective becomes:

x^T (Q_0 + Σ_{j=1}^m λ_j Q_j) x + (c_0 + Σ_{j=1}^m λ_j c_j)^T x − Σ_{j=1}^m λ_j b_j.
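The Lagrangian objective above is simply the original objective plus the multiplier-weighted constraint violations. A small numpy sketch (random, illustrative data) confirms the identity f(x, λ) = objective + Σ_j λ_j (g_j(x) − b_j), which is the weak-duality argument behind the Lagrangian bound.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2
# random symmetric Q_0, ..., Q_m, linear terms c_0, ..., c_m, right-hand sides b
Q = [(lambda A: (A + A.T) / 2)(rng.standard_normal((n, n))) for _ in range(m + 1)]
c = [rng.standard_normal(n) for _ in range(m + 1)]
b = rng.standard_normal(m)
lam = np.array([0.5, 2.0])                 # any nonnegative multipliers

def f(z):
    """Lagrangian objective f(z, lam), assembled exactly as on the slide."""
    Qlam = Q[0] + sum(lam[j] * Q[j + 1] for j in range(m))
    clam = c[0] + sum(lam[j] * c[j + 1] for j in range(m))
    return z @ Qlam @ z + clam @ z - lam @ b

x = rng.standard_normal(n)
obj  = x @ Q[0] @ x + c[0] @ x
gaps = np.array([x @ Q[j + 1] @ x + c[j + 1] @ x - b[j] for j in range(m)])
assert np.isclose(f(x), obj + lam @ gaps)  # identity holds for every x
```

For feasible x the gaps are nonpositive, so f(x, λ) never exceeds the true objective value; minimising f over all of R^n therefore gives a lower bound.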

9 Lagrangian relaxation of QCQP (cont.)
Let the Lagrangian objective be denoted by f(x, λ). The relaxed problem is:

inf { f(x, λ) : x ∈ R^n },

which is an unconstrained quadratic minimisation problem. The Lagrangian dual is:

sup_{λ ∈ R^m_+} inf_{x ∈ R^n} f(x, λ).

10 Lagrangian relaxation of QCQP (cont.) Suppose that strong duality holds for the SDP. Then one can show that: the value of the Lagrangian dual equals the SDP bound, and the optimal Lagrangian multiplier vector λ is nothing but the optimal dual vector for the SDP.

11 QCQP summary
We have:

opt ≥ SDP bound ≥ dual SDP bound = Lagrangian bound,

and the second inequality is an equality under suitable constraint qualifications.

12 0-1 QP
Now consider a 0-1 QP of the form:

min x^T Q x + c^T x
s.t. Ax = b
Dx ≤ f
x ∈ {0,1}^n.

As observed by many authors, the condition that x be binary is equivalent to requiring x_i − x_i^2 = 0 (i = 1, ..., n).

13 SDP relaxation of 0-1 QP
This leads immediately to the following semidefinite relaxation:

min Q • X + c^T x
s.t. Ax = b
Dx ≤ f
x = diag(X)
Y ⪰ 0.

14 Convex reformulation of 0-1 QP
Reformulating a 0-1 QP instance means perturbing the objective function in such a way that:
- the objective function is convex
- the cost of each feasible solution is unchanged
- the constraints remain unchanged, so that the feasible set is unchanged.

15 Minimum eigenvalue reformulation of 0-1 QP
A simple reformulation technique was given by Hammer & Rubin (1970). They add Σ_{1≤i≤n} M(x_i^2 − x_i) to the objective function, where M > 0 is large enough to ensure that the matrix Q + MI of quadratic coefficients is psd. In this way, they ensure that the reformulated instance is convex. (A suitable value for M is the minimum eigenvalue of Q, multiplied by −1.)
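The Hammer-Rubin recipe is easy to sketch numerically. The minimal numpy illustration below (random data) computes M from the minimum eigenvalue, checks that the perturbed quadratic matrix is psd, and verifies that the cost of binary points is unchanged, since x_i^2 = x_i there.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5
Q = rng.standard_normal((n, n)); Q = (Q + Q.T) / 2
c = rng.standard_normal(n)

# M = -lambda_min(Q) (clipped at 0 in case Q is already psd)
M = max(0.0, -np.linalg.eigvalsh(Q).min())
Q_conv = Q + M * np.eye(n)        # quadratic part of M * sum_i (x_i^2 - x_i)
c_conv = c - M * np.ones(n)       # the -M * x_i terms join the linear part

assert np.linalg.eigvalsh(Q_conv).min() >= -1e-9   # reformulation is convex
for _ in range(20):                                # cost unchanged on {0,1}^n
    x = rng.integers(0, 2, n).astype(float)
    assert np.isclose(x @ Q @ x + c @ x, x @ Q_conv @ x + c_conv @ x)
```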

16 Quadratic Convex Reformulation (QCR)
Billionnet et al. (2009) develop this idea further, in two ways:
- they perturb the objective function by adding general terms of the form λ_i (x_i − x_i^2) and μ_jk (a_j^T x − b_j) x_k;
- instead of merely ensuring that the reformulated instance is convex, they also ensure that its continuous relaxation is as tight as possible.

17 Quadratic Convex Reformulation (QCR) (cont.)
Billionnet et al. (2009) show that the optimal multipliers λ* and μ* can be found by solving an SDP. They call this approach quadratic convex reformulation, or QCR. Once QCR has been applied, one can feed the reformulated instance into any convex MIQP solver. (An extension of QCR to general MIQP was given recently by Billionnet et al., 2010.)

18 0-1 QCQP
Our first goal is to extend the QCR method to 0-1 QCQP. We suppose the instance is written as:

min x^T Q_0 x + c_0^T x
s.t. x^T Q_j x + c_j^T x = b_j (j ∈ N_1)
x^T Q_j x + c_j^T x ≤ b_j (j ∈ N_2)
x ∈ {0,1}^n.

(We assume for simplicity of notation that any linear constraints, along with any additional valid inequalities, are subsumed in the above system.)

19 SDP relaxation of 0-1 QCQP
The semidefinite relaxation is immediate:

min Q_0 • X + c_0^T x
s.t. Q_j • X + c_j^T x = b_j (j ∈ N_1)
Q_j • X + c_j^T x ≤ b_j (j ∈ N_2)
x = diag(X)
Y ⪰ 0.

The Lagrangian relaxation is easy to write as well.

21 Convex reformulation of 0-1 QCQP
By analogy with the original QCR method, we seek to transform a 0-1 QCQP instance into another 0-1 QCQP instance that has the following four properties:
- the objective and constraint functions are convex
- the set of feasible solutions is identical to that of the original instance
- the cost of each feasible solution is the same as it was in the original instance
- the lower bound from the continuous relaxation is equal to the semidefinite/Lagrangian bound for the original instance.

22 0-1 QCQP with Quadratic Equations
For the case in which all constraints are equations (i.e., N_2 = ∅), we propose the following scheme:
1. Solve the SDP, obtaining optimal dual vectors λ* ∈ R^{N_1} and ν* ∈ R^n;
2. Perturb the objective function by adding terms of the form x^T Diag(ν*) x − (ν*)^T x and λ*_j (x^T Q_j x + c_j^T x − b_j) for j ∈ N_1;
3. Replace each quadratic equation with two quadratic inequalities;
4. Convexify the quadratic inequalities arbitrarily (e.g., using the minimum eigenvalue method).
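Step 2 of the scheme can be sketched as follows. The dual values λ* and ν* below are made up purely for illustration (in practice they come from an SDP solver), and the single quadratic equation is constructed so that a chosen binary point is feasible; the check confirms that the perturbation changes nothing on feasible binary points.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
Q0 = rng.standard_normal((n, n)); Q0 = (Q0 + Q0.T) / 2
c0 = rng.standard_normal(n)
# one quadratic equation x^T Q1 x + c1^T x = b1  (i.e. |N_1| = 1)
Q1 = rng.standard_normal((n, n)); Q1 = (Q1 + Q1.T) / 2
c1 = rng.standard_normal(n)
x  = rng.integers(0, 2, n).astype(float)   # a binary point ...
b1 = x @ Q1 @ x + c1 @ x                   # ... made feasible by construction

lam_star = 1.7                             # illustrative dual values, not
nu_star  = rng.standard_normal(n)          # obtained from a real SDP solve

def original(z):
    return z @ Q0 @ z + c0 @ z

def perturbed(z):
    return (original(z)
            + z @ np.diag(nu_star) @ z - nu_star @ z      # vanishes on {0,1}^n
            + lam_star * (z @ Q1 @ z + c1 @ z - b1))      # vanishes when feasible

# On every feasible binary point the perturbation adds nothing:
assert np.isclose(original(x), perturbed(x))
```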

23 0-1 QCQP with Quadratic Equations (cont.)
Theorem: When the above scheme is applied to a 0-1 QCQP instance with N_2 = ∅, the reformulated instance is convex. Moreover, if the primal SDP satisfies the Slater condition, then the lower bound from the continuous relaxation of the reformulated instance will equal the SDP bound.

24 Handling Quadratic Inequalities: two difficulties
The case of quadratic inequalities is more tricky because of two serious difficulties! The first can be shown by considering the following simple instance:

max x_1 + ... + x_n
s.t. x_i x_j ≤ 0 (1 ≤ i < j ≤ n)
x_i^2 − x_i = 0 (i = 1, ..., n).

The optimum is 1, and one can check that the SDP bound is also 1.

25 Handling Quadratic Inequalities: two difficulties (cont.)
The quadratic inequalities are non-convex, so we must use the equations x_i^2 − x_i = 0 to convexify them. It turns out that the best we can do is to replace the inequalities with:

(1/2)(x_i^2 − x_i) + (1/2)(x_j^2 − x_j) + x_i x_j ≤ 0 (1 ≤ i < j ≤ n).

But the optimal solution to the continuous relaxation of the reformulated instance is then (1/2, ..., 1/2)^T, yielding a very poor upper bound of n/2.
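Both claims are quick to verify numerically. The sketch below checks that the reformulated constraint for one pair (i, j) is convex, since its quadratic part (1/2)x_i^2 + (1/2)x_j^2 + x_i x_j equals (1/2)(x_i + x_j)^2, and that the all-halves point satisfies it with equality.

```python
import numpy as np

# coefficient matrix of the quadratic part, restricted to variables x_i, x_j
H = np.array([[0.5, 0.5],
              [0.5, 0.5]])
assert np.linalg.eigvalsh(H).min() >= -1e-12   # psd, so the constraint is convex

def g(xi, xj):
    """Left-hand side of the convexified constraint for the pair (i, j)."""
    return 0.5 * (xi**2 - xi) + 0.5 * (xj**2 - xj) + xi * xj

assert abs(g(0.5, 0.5)) < 1e-12    # (1/2, ..., 1/2) is feasible (g = 0 exactly)
assert g(1.0, 1.0) == 1.0          # setting both variables to 1 is still cut off
```

At the all-halves point the objective x_1 + ... + x_n takes the value n/2, which is the weak bound described above.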

26 Handling Quadratic Inequalities: two difficulties (cont.)
In other words, when quadratic inequalities are present:

opt ≥ SDP bound = dual SDP bound = Lagrangian bound ≥ bound from reformulated instance,

and the last inequality is usually strict! An intuitive explanation for this phenomenon is that quadratic inequalities cannot be used to perturb the objective function.

27 Handling Quadratic Inequalities: two difficulties (cont.)
The second difficulty is that finding an optimal reformulation is itself a rather complex optimisation problem. It is possible to formulate the problem of finding the best such reformulation as one huge SDP. Unfortunately, the size and complexity of this SDP grow rapidly as the number of inequalities increases.

28 Handling Quadratic Inequalities: possible remedies If we are willing to convert the 0-1 QCQP into a mixed 0-1 QCQP, then we can do better. We can add continuous slack variables to convert every quadratic inequality into a quadratic equation. Once this is done, it is always possible to reformulate the mixed 0-1 QCQP instance in such a way that the bound from the continuous relaxation is equal to the Lagrangian bound. As before, if the primal SDP satisfies the Slater condition, this bound will be equal to the SDP bound.

29 Handling Quadratic Inequalities: possible remedies (cont.)
The above scheme has one disadvantage: we have to add |N_2| continuous variables. This is unattractive when |N_2| is large. We have another variant in which only one continuous variable needs to be added. (The basic idea is to use the dual SDP solution to construct a surrogate quadratic inequality that contains all of the information we need to get a good bound, and then assign a slack variable only to the surrogate constraint.)

30 General-Integer Variables
Next, we move on to the case of general-integer variables. A first observation is that integer variables present no problem for the semidefinite and Lagrangian relaxations. As for reformulation, note that we cannot use terms of the form x_i^2 − x_i to perturb the objective or constraint coefficients of integer variables. This can prevent the existence of a convex reformulation.

31 General-Integer Variables (cont.)
To get around this, we can use binary expansion. In fact, if such a variable is nonnegative and bounded from above by 2^p − 1 for some positive integer p, then it can be replaced by p binary variables. Then, if all variables are bounded in this way, we can reduce the problem to 0-1 QCQP. We show that this transformation cannot make the SDP bound any worse, and in fact in some cases it can make it better!

32 General-Integer Variables (cont.)
Consider the following instance with just one integer variable:

min { x_1^2 − 3x_1 : x_1 ≤ 3, x_1 ∈ Z_+ }.

There are two optimal solutions of cost −2, obtained by setting x_1 to either 1 or 2. The basic SDP relaxation is:

min { X_11 − 3x_1 : 0 ≤ x_1 ≤ 3, Y ⪰ 0 }.

The optimal solution to this relaxation is to set x_1 to 1.5 and X_11 to 2.25, giving a lower bound of −2.25.
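These numbers can be confirmed without an SDP solver: for this one-variable instance, psd-ness of the 2 × 2 augmented matrix Y is equivalent to X_11 ≥ x_1^2, so the relaxation reduces to minimising x_1^2 − 3x_1 over the interval [0, 3].

```python
import numpy as np

# continuous relaxation: minimise x1^2 - 3*x1 over [0, 3] on a fine grid
grid = np.array([i / 1000 for i in range(3001)])
relax_vals = grid**2 - 3 * grid
assert np.isclose(relax_vals.min(), -2.25)      # at x1 = 1.5, X11 = 2.25

# integer problem: feasible points are 0, 1, 2, 3
ints = np.arange(4.0)
assert (ints**2 - 3 * ints).min() == -2.0       # optimum -2 at x1 in {1, 2}
```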

33 General-Integer Variables (cont.)
Now, applying binary expansion to this instance, we use the substitution x_1 = x_1^1 + 2 x_1^2, where x_1^1 and x_1^2 are new binary variables. The SDP relaxation for the transformed instance is:

min X̃_11 + 4 X̃_12 + 4 X̃_22 − 3 x_1^1 − 6 x_1^2
s.t. x_1^1 = X̃_11
x_1^2 = X̃_22
Ỹ ⪰ 0.

One can check with an SDP solver that all optimal solutions to this SDP have cost −2. So, the transformed SDP yields an improved lower bound.
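Since the problem is tiny, the claim can also be verified by brute force rather than an SDP solver. The sketch below (variable names illustrative) scans a grid of candidate moment matrices Ỹ, keeps those that pass a numerical psd test, and minimises the transformed objective; substituting the equations X̃_11 = y_1 and X̃_22 = y_2 leaves only three free scalars.

```python
import numpy as np

best = np.inf
grid = [i / 10 for i in range(11)]                    # y1, y2 range over [0, 1]
for y1 in grid:
    for y2 in grid:
        for x12 in (k / 20 for k in range(-20, 21)):  # candidate values of Xt12
            Yt = np.array([[1.0, y1,  y2],
                           [y1,  y1,  x12],
                           [y2,  x12, y2]])
            if np.linalg.eigvalsh(Yt).min() >= -1e-9:  # psd test
                best = min(best, y1 + 4 * x12 + 4 * y2 - 3 * y1 - 6 * y2)

# Minimum is -2 (attained e.g. at y1 = y2 = 1/2, x12 = 0),
# strictly better than the -2.25 bound before binary expansion.
assert abs(best + 2.0) < 1e-4
```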

34 General-Integer Variables (cont.)
If, on the other hand, the general-integer variables are not bounded from above a priori, then we run into a serious problem:

Theorem (Jeroslow, 1973): Testing feasibility of an integer program with quadratic constraints is undecidable.

So we cannot expect an efficient convexification procedure in general. (Of course, in practical applications, variables can usually be bounded quite easily.)

35 Continuous Variables Finally, we consider the case in which continuous variables are permitted. In the context of the semidefinite and Lagrangian approaches, continuous variables can be handled in exactly the same way as integer variables, and therefore they present no problems. When it comes to reformulation, on the other hand, continuous variables cause serious difficulties. (Indeed, a convex reformulation may not exist in general, and we cannot get around this issue by binarisation, as we do for integer variables.)

36 Continuous Variables (cont.)
Here, three outcomes are possible:
- The instance can be convexified, and the bound from the continuous relaxation is as good as the SDP bound.
- The instance can be convexified, but the bound from the continuous relaxation is worse than the SDP bound.
- The instance cannot be convexified at all.
We have some necessary and sufficient conditions for these three cases to hold.

37 Conclusions
Basically, we have three possible situations:
- When there are no quadratic inequalities and no continuous variables, and all general-integer variables are bounded, we can always obtain a convex reformulation that gives a strong bound.
- When quadratic inequalities are present, this may not be possible, unless we are prepared to add slack variables.
- When continuous variables are present, a convex reformulation is not even guaranteed to exist.


Lecture #21. c T x Ax b. maximize subject to

COMPSCI 330: Design and Analysis of Algorithms 11/11/2014 Lecture #21 Lecturer: Debmalya Panigrahi Scribe: Samuel Haney 1 Overview In this lecture, we discuss linear programming. We first show that the

Part IB Optimisation

Part IB Optimisation Theorems Based on lectures by F. A. Fischer Notes taken by Dexter Chua Easter 2015 These notes are not endorsed by the lecturers, and I have modified them (often significantly) after

CSC Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method

CSC2411 - Linear Programming and Combinatorial Optimization Lecture 12: The Lift and Project Method Notes taken by Stefan Mathe April 28, 2007 Summary: Throughout the course, we have seen the importance

IE 521 Convex Optimization

Lecture 14: and Applications 11th March 2019 Outline LP SOCP SDP LP SOCP SDP 1 / 21 Conic LP SOCP SDP Primal Conic Program: min c T x s.t. Ax K b (CP) : b T y s.t. A T y = c (CD) y K 0 Theorem. (Strong

CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,

c 2000 Society for Industrial and Applied Mathematics

SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.

Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming

Relations between Semidefinite, Copositive, Semi-infinite and Integer Programming Author: Faizan Ahmed Supervisor: Dr. Georg Still Master Thesis University of Twente the Netherlands May 2010 Relations

Nonlinear Optimization: What s important?

Nonlinear Optimization: What s important? Julian Hall 10th May 2012 Convexity: convex problems A local minimizer is a global minimizer A solution of f (x) = 0 (stationary point) is a minimizer A global

Solving large Semidefinite Programs - Part 1 and 2

Solving large Semidefinite Programs - Part 1 and 2 Franz Rendl http://www.math.uni-klu.ac.at Alpen-Adria-Universität Klagenfurt Austria F. Rendl, Singapore workshop 2006 p.1/34 Overview Limits of Interior

Network Flows. 6. Lagrangian Relaxation. Programming. Fall 2010 Instructor: Dr. Masoud Yaghini

In the name of God Network Flows 6. Lagrangian Relaxation 6.3 Lagrangian Relaxation and Integer Programming Fall 2010 Instructor: Dr. Masoud Yaghini Integer Programming Outline Branch-and-Bound Technique

Homework Set #6 - Solutions

EE 15 - Applications of Convex Optimization in Signal Processing and Communications Dr Andre Tkacenko JPL Third Term 11-1 Homework Set #6 - Solutions 1 a The feasible set is the interval [ 4] The unique

minimize x subject to (x 2)(x 4) u,

Math 6366/6367: Optimization and Variational Methods Sample Preliminary Exam Questions 1. Suppose that f : [, L] R is a C 2 -function with f () on (, L) and that you have explicit formulae for

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems

Lagrangian-Conic Relaxations, Part I: A Unified Framework and Its Applications to Quadratic Optimization Problems Naohiko Arima, Sunyoung Kim, Masakazu Kojima, and Kim-Chuan Toh Abstract. In Part I of

An introductory example

CS1 Lecture 9 An introductory example Suppose that a company that produces three products wishes to decide the level of production of each so as to maximize profits. Let x 1 be the amount of Product 1

Constrained Optimization and Lagrangian Duality

CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

Interior Point Algorithms for Constrained Convex Optimization

Interior Point Algorithms for Constrained Convex Optimization Chee Wei Tan CS 8292 : Advanced Topics in Convex Optimization and its Applications Fall 2010 Outline Inequality constrained minimization problems

CSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming

CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming

Convex relaxation. In example below, we have N = 6, and the cut we are considering

Convex relaxation The art and science of convex relaxation revolves around taking a non-convex problem that you want to solve, and replacing it with a convex problem which you can actually solve the solution

12. Interior-point methods

12. Interior-point methods Convex Optimization Boyd & Vandenberghe inequality constrained minimization logarithmic barrier function and central path barrier method feasibility and phase I methods complexity