Advances in Convex Optimization: Theory, Algorithms, and Applications


Advances in Convex Optimization: Theory, Algorithms, and Applications
Stephen Boyd, Electrical Engineering Department, Stanford University
(joint work with Lieven Vandenberghe, UCLA)
ISIT 02, Lausanne, 7/3/02
Two problems

polytope P described by linear inequalities, a_i^T x <= b_i, i = 1, ..., L

Problem 1a: find the minimum volume ellipsoid containing P
Problem 1b: find the maximum volume ellipsoid inside P

are these (computationally) difficult? or easy?
problem 1a is very difficult:
in practice
in theory (NP-hard)

problem 1b is very easy:
in practice (readily solved on small computer)
in theory (polynomial complexity)
Two more problems

find capacity of a discrete memoryless channel, subject to constraints on the input distribution

Problem 2a: find channel capacity, subject to: no more than 30% of the probability is concentrated on any 10% of the input symbols

Problem 2b: find channel capacity, subject to: at least 30% of the probability is concentrated on 10% of the input symbols

are problems 2a and 2b (computationally) difficult? or easy?
problem 2a is very easy, in practice & theory

problem 2b is very difficult [1]

[1] I'm almost sure
Moral

very difficult and very easy problems can look quite similar

... unless you're trained to recognize the difference
Outline

what's new in convex optimization:
some new standard problem classes
generalized inequalities and semidefinite programming
interior-point algorithms and complexity analysis
Convex optimization problems

minimize f_0(x)
subject to f_1(x) <= 0, ..., f_L(x) <= 0, Ax = b

x ∈ R^n is the optimization variable
f_i: R^n -> R are convex, i.e., for all x, y and 0 <= λ <= 1,
  f_i(λx + (1 - λ)y) <= λ f_i(x) + (1 - λ) f_i(y)

examples: linear & (convex) quadratic programs; problems 1b & 2a (if formulated properly)
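The defining inequality can be checked numerically for a candidate function; a small sketch using the convex quadratic f(x) = ||Ax − b||² (A, b and the dimensions are made-up illustration data, not from the slides):

```python
import numpy as np

# f(x) = ||Ax - b||^2 is convex; check the defining inequality
# f(lam x + (1 - lam) y) <= lam f(x) + (1 - lam) f(y) at random points.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
b = rng.standard_normal(5)
f = lambda x: float(np.sum((A @ x - b) ** 2))

ok = True
for _ in range(1000):
    x, y, lam = rng.standard_normal(3), rng.standard_normal(3), rng.uniform()
    ok = ok and f(lam * x + (1 - lam) * y) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
print(ok)
```

Sampling, of course, only falsifies convexity; it cannot prove it — but it is a handy sanity check on a proposed formulation.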
Convex analysis & optimization

nice properties of convex optimization problems known since 1960s:
local solutions are global
duality theory, optimality conditions
simple solution methods like alternating projections

convex analysis well developed by 1970s (Rockafellar):
separating & supporting hyperplanes
subgradient calculus
What's new (since 1990 or so)

powerful primal-dual interior-point methods: extremely efficient, handle nonlinear, large-scale problems

polynomial-time complexity results for interior-point methods, based on self-concordance analysis of Newton's method

extension to generalized inequalities: semidefinite & maxdet programming

new standard problem classes: generalizations of LP, with theory, algorithms, software

lots of applications: control, combinatorial optimization, signal processing, circuit design, ...
Recent history

(1984) interior-point methods for LP: Karmarkar's interior-point LP method
  theory (Ye, Renegar, Kojima, Todd, Monteiro, Roos, ...)
  practice (Wright, Mehrotra, Vanderbei, Shanno, Lustig, ...)
(1988) Nesterov & Nemirovsky's self-concordance analysis
(1989-) semidefinite programming in control (Boyd, El Ghaoui, Balakrishnan, Feron, Scherer, ...)
(1990-) semidefinite programming in combinatorial optimization (Alizadeh, Goemans, Williamson, Lovasz & Schrijver, Parrilo, ...)
(1994) interior-point methods for nonlinear convex problems (Nesterov & Nemirovsky, Overton, Todd, Ye, Sturm, ...)
(1997-) robust optimization (Ben Tal, Nemirovsky, El Ghaoui, ...)
Some new standard (convex) problem classes

second-order cone programming (SOCP)
semidefinite programming (SDP), maxdet programming
(convex form) geometric programming (GP)

for these new problem classes we have:
complete duality theory, similar to LP
good algorithms, and robust, reliable software
wide variety of new applications
Second-order cone programming

second-order cone program (SOCP) has form

minimize c_0^T x
subject to ||A_i x + b_i||_2 <= c_i^T x + d_i, i = 1, ..., m
           Fx = g

variable is x ∈ R^n
includes LP as special case (A_i = 0, b_i = 0), QP (c_i = 0)
nondifferentiable when A_i x + b_i = 0
new IP methods can solve (almost) as fast as LPs
Robust linear programming

robust linear program:

minimize c^T x
subject to a_i^T x <= b_i for all a_i ∈ E_i

ellipsoid E_i = { ā_i + F_i p : ||p||_2 <= 1 } describes uncertainty in the constraint vectors a_i
x must satisfy the constraints for all possible values of a_i
can extend to uncertain c & b_i, correlated uncertainties, ...
Robust LP as SOCP

robust LP is

minimize c^T x
subject to ā_i^T x + sup{ (F_i p)^T x : ||p||_2 <= 1 } <= b_i

which is the same as

minimize c^T x
subject to ā_i^T x + ||F_i^T x||_2 <= b_i

an SOCP (hence, readily solved)
term ||F_i^T x||_2 is the extra margin required to accommodate uncertainty in a_i
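The step from the first form to the second uses the identity sup{ p^T v : ||p||_2 <= 1 } = ||v||_2 with v = F_i^T x (Cauchy-Schwarz, with equality at p = v/||v||_2). A quick numerical check of this identity, with arbitrary made-up sizes for F and x:

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.standard_normal((6, 4))   # hypothetical F_i (here x in R^6, p in R^4)
x = rng.standard_normal(6)
v = F.T @ x

# sample many p with ||p||_2 <= 1: none exceeds ||v||_2 (Cauchy-Schwarz) ...
samples = rng.standard_normal((50000, 4))
samples /= np.maximum(np.linalg.norm(samples, axis=1, keepdims=True), 1.0)
worst = (samples @ v).max()

# ... and the bound is attained at p* = v / ||v||_2
exact = np.linalg.norm(v)
p_star = v / exact
print(worst <= exact + 1e-9, abs(p_star @ v - exact) < 1e-9)
```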
Stochastic robust linear programming

minimize c^T x
subject to Prob(a_i^T x <= b_i) >= η, i = 1, ..., m

where a_i ~ N(ā_i, Σ_i), η >= 1/2 (c and b_i are fixed)
i.e., each constraint must hold with probability at least η

equivalent to SOCP

minimize c^T x
subject to ā_i^T x + Φ^{-1}(η) ||Σ_i^{1/2} x||_2 <= b_i, i = 1, ..., m

where Φ is the CDF of an N(0, 1) random variable
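The equivalence rests on a_i^T x being N(ā_i^T x, x^T Σ_i x), so a constraint that holds with equality in the SOCP form holds with probability exactly η. A Monte Carlo sanity check of one constraint, with made-up data (ā, Σ, x are arbitrary; Φ^{-1} comes from the stdlib NormalDist):

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(2)
eta = 0.9
phi_inv = NormalDist().inv_cdf(eta)            # Phi^{-1}(eta)

a_bar = np.array([1.0, 2.0])
Sigma = np.array([[0.5, 0.1], [0.1, 0.3]])     # positive definite
x = np.array([0.7, -0.2])

# choose b so the SOCP constraint holds with equality:
# b = a_bar^T x + Phi^{-1}(eta) ||Sigma^{1/2} x||_2
L = np.linalg.cholesky(Sigma)                  # Sigma = L L^T, so ||L^T x|| = ||Sigma^{1/2} x||
b = a_bar @ x + phi_inv * np.linalg.norm(L.T @ x)

# Monte Carlo estimate of Prob(a^T x <= b) for a ~ N(a_bar, Sigma)
a = rng.multivariate_normal(a_bar, Sigma, size=200000)
prob = np.mean(a @ x <= b)
print(round(prob, 2))  # close to eta = 0.9
```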
Geometric programming

log-sum-exp function:

lse(x) = log(e^{x_1} + ... + e^{x_n})

... a smooth convex approximation of the max function

geometric program (GP), with variable x ∈ R^n:

minimize lse(A_0 x + b_0)
subject to lse(A_i x + b_i) <= 0, i = 1, ..., m

where A_i ∈ R^{m_i × n}, b_i ∈ R^{m_i}

new IP methods can solve large-scale GPs (almost) as fast as LPs
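The sense in which lse approximates the max is the standard sandwich max(x) <= lse(x) <= max(x) + log n (a known fact, not stated on the slide); a quick numerical check, using the usual max-shift trick for stable evaluation:

```python
import numpy as np

def lse(x):
    # numerically stable log-sum-exp: log(e^{x_1} + ... + e^{x_n})
    m = np.max(x)
    return m + np.log(np.sum(np.exp(x - m)))

rng = np.random.default_rng(3)
ok = True
for _ in range(1000):
    x = 10 * rng.standard_normal(7)
    ok = ok and np.max(x) <= lse(x) <= np.max(x) + np.log(7)
print(ok)
```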
Dual geometric program

dual of geometric program is an unnormalized entropy problem

maximize sum_{i=0}^m ( b_i^T ν_i + entr(ν_i) )
subject to ν_i >= 0, i = 0, ..., m, 1^T ν_0 = 1, sum_{i=0}^m A_i^T ν_i = 0

dual variables are ν_i ∈ R^{m_i}, i = 0, ..., m
(unnormalized) entropy is entr(ν) = - sum_{i=1}^n ν_i log( ν_i / (1^T ν) )

GP is closely related to problems involving entropy, KL divergence
Example: DMC capacity problem

x ∈ R^n is the distribution of the input; y ∈ R^m is the distribution of the output
P ∈ R^{m×n} gives the conditional probabilities: y = Px

primal channel capacity problem:

maximize c^T x + entr(y)
subject to x >= 0, 1^T x = 1, y = Px

where c_j = sum_{i=1}^m p_{ij} log p_{ij}

dual channel capacity problem is a simple GP:

minimize lse(u)
subject to c - P^T u <= 0
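As a concrete check on the capacity problem, the classical Blahut-Arimoto iteration (not shown on the slides, but the standard algorithm for this problem) solves it for, e.g., a binary symmetric channel, where the answer is known in closed form: C = log 2 − H(ε) (in nats here):

```python
import numpy as np

def capacity(P, iters=500):
    """Blahut-Arimoto for a channel matrix P (m x n), column j = p(y | x = j).
    Returns the capacity in nats."""
    n = P.shape[1]
    x = np.ones(n) / n                       # input distribution, uniform start
    for _ in range(iters):
        y = P @ x                            # output distribution
        d = np.sum(P * np.log(P / y[:, None]), axis=0)   # D(p(.|j) || y)
        x = x * np.exp(d)                    # multiplicative update
        x /= x.sum()
    y = P @ x
    d = np.sum(P * np.log(P / y[:, None]), axis=0)
    return float(np.max(d))                  # at the optimum, C = max_j D(p(.|j) || y*)

eps = 0.1                                    # BSC crossover probability
P = np.array([[1 - eps, eps], [eps, 1 - eps]])
C = capacity(P)
C_exact = np.log(2) + eps * np.log(eps) + (1 - eps) * np.log(1 - eps)
print(abs(C - C_exact) < 1e-6)
```

(The sketch assumes all channel matrix entries are strictly positive, so the logs are finite.)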
Generalized inequalities

with a proper convex cone K ⊆ R^k we associate the generalized inequality

x <=_K y  <=>  y - x ∈ K

convex optimization problem with generalized inequalities:

minimize f_0(x)
subject to f_1(x) <=_{K_1} 0, ..., f_L(x) <=_{K_L} 0, Ax = b

f_i: R^n -> R^{k_i} are K_i-convex: for all x, y and 0 <= λ <= 1,
  f_i(λx + (1 - λ)y) <=_{K_i} λ f_i(x) + (1 - λ) f_i(y)
Semidefinite program

semidefinite program (SDP):

minimize c^T x
subject to A_0 + x_1 A_1 + ... + x_n A_n >= 0, Cx = d

A_i = A_i^T ∈ R^{m×m}
inequality is a matrix inequality, i.e., K is the positive semidefinite cone
single constraint, which is affine (hence, matrix convex)
Maxdet problem

extension of SDP: maxdet problem

minimize c^T x - log det+ (G_0 + x_1 G_1 + ... + x_n G_n)
subject to A_0 + x_1 A_1 + ... + x_n A_n >= 0, Cx = d

x ∈ R^n is the variable
A_i = A_i^T ∈ R^{m×m}, G_i = G_i^T ∈ R^{p×p}
det+(Z) = det Z if Z > 0 (positive definite), 0 otherwise
Semidefinite & maxdet programming

nearly complete duality theory, similar to LP
interior-point algorithms that are efficient in theory & practice
applications in many areas:
control theory
combinatorial optimization & graph theory
structural optimization
statistics
signal processing
circuit design
geometrical problems
algebraic geometry
Chebyshev bounds

generalized Chebyshev inequalities: lower bounds on Prob(X ∈ C)

X ∈ R^n is a random variable with E X = a, E XX^T = S
C is an open polyhedron C = { x : a_i^T x < b_i, i = 1, ..., m }

cf. classical Chebyshev inequality on R: if E X = 0, E X² = σ², then
  Prob(X < 1) >= 1 / (1 + σ²)
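The classical bound is sharp: a two-point distribution taking value 1 with probability σ²/(1 + σ²) and value −σ² with probability 1/(1 + σ²) has the required moments and attains Prob(X < 1) = 1/(1 + σ²). An exact-arithmetic check (the value σ² = 3/2 is arbitrary):

```python
from fractions import Fraction

sigma2 = Fraction(3, 2)                # sigma^2 (any positive value works)
p = sigma2 / (1 + sigma2)              # Prob(X = 1)
q = 1 / (1 + sigma2)                   # Prob(X = -sigma^2)

mean = p * 1 + q * (-sigma2)           # should be 0
second_moment = p * 1 + q * sigma2**2  # should be sigma^2
prob_lt_1 = q                          # X < 1 only on the second atom

print(mean == 0, second_moment == sigma2, prob_lt_1 == 1 / (1 + sigma2))
```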
Chebyshev bounds via SDP

minimize 1 - sum_{i=1}^m λ_i
subject to a_i^T z_i >= b_i λ_i, i = 1, ..., m
           sum_{i=1}^m [ Z_i  z_i ; z_i^T  λ_i ] <= [ S  a ; a^T  1 ]   (matrix inequality)
           [ Z_i  z_i ; z_i^T  λ_i ] >= 0, i = 1, ..., m

an SDP with variables Z_i = Z_i^T ∈ R^{n×n}, z_i ∈ R^n, and λ_i ∈ R
optimal value is a (sharp) lower bound on Prob(X ∈ C)
can construct a distribution with E X = a, E XX^T = S that attains the lower bound
Detection example

x = s + v

x ∈ R^n: received signal
s: transmitted signal, s ∈ {s_1, s_2, ..., s_N} (one of N possible symbols)
v: noise with E v = 0, E vv^T = I (but otherwise unknown distribution)

detection problem: given observed value of x, estimate s
example (n = 2, N = 7)

[figure: constellation of symbols s_1, ..., s_7 in the plane]

detector selects the symbol s_k closest to the received signal x
correct detection if s_k + v lies in the Voronoi region around s_k
example: bound on probability of correct detection of s_1

[figure: constellation s_1, ..., s_7; solid circles show a distribution attaining the bound on the probability of correct detection]
Boolean least-squares

x ∈ {-1, 1}^n is transmitted; we receive y = Ax + v, v ~ N(0, I)

ML estimate of x found by solving the boolean least-squares problem

minimize ||Ax - y||²
subject to x_i² = 1, i = 1, ..., n

could check all 2^n possible values of x ... an NP-hard problem
many heuristics for approximate solution
Boolean least-squares as matrix problem

||Ax - y||² = x^T A^T A x - 2 y^T A x + y^T y
            = Tr(A^T A X) - 2 y^T A x + y^T y

where X = xx^T

hence can express BLS as

minimize Tr(A^T A X) - 2 y^T A x + y^T y
subject to X_ii = 1, X >= xx^T, rank(X) = 1

... still a very hard problem
SDP relaxation for BLS

ignore the rank-one constraint, and use

X >= xx^T  <=>  [ X  x ; x^T  1 ] >= 0

to obtain the SDP relaxation (with variables X, x)

minimize Tr(A^T A X) - 2 y^T A x + y^T y
subject to X_ii = 1, [ X  x ; x^T  1 ] >= 0

optimal value of SDP gives a lower bound for BLS
if the optimal matrix is rank one, we're done
Stochastic interpretation and heuristic

suppose X, x are optimal for the SDP relaxation
generate z from the normal distribution N(x, X - xx^T)
take x̂ = sgn(z) as an approximate solution of BLS
(can repeat many times and take the best one)
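For small n the exhaustive search mentioned two slides back is feasible, which makes it easy to compare against a heuristic. The sketch below uses the crudest rounding heuristic, x̂ = sgn of the unconstrained least-squares solution (a stand-in for the SDP-based rounding above, which needs an SDP solver); the problem data is randomly generated:

```python
import itertools
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 16
x_true = rng.choice([-1.0, 1.0], size=n)
A = rng.standard_normal((m, n))
y = A @ x_true + 0.3 * rng.standard_normal(m)

obj = lambda x: float(np.sum((A @ x - y) ** 2))

# exhaustive search over all 2^n sign vectors (the NP-hard route)
candidates = [np.array(s) for s in itertools.product([-1.0, 1.0], repeat=n)]
x_best = min(candidates, key=obj)

# crude heuristic: sign of the unconstrained least-squares solution
x_ls = np.linalg.lstsq(A, y, rcond=None)[0]
x_heur = np.sign(x_ls)

print(obj(x_best) <= obj(x_heur) + 1e-9)  # brute force is never worse
```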
Interior-point methods

handle linear and nonlinear convex problems (Nesterov & Nemirovsky)
based on Newton's method applied to barrier functions that trap x in the interior of the feasible region (hence the name IP)
worst-case complexity theory: # Newton steps grows like sqrt(problem size)
in practice: # Newton steps between 5 & 50 (!)
1000s of variables, 10000s of constraints feasible on a PC; far larger if structure is exploited
Log barrier

for convex problem

minimize f_0(x)
subject to f_i(x) <= 0, i = 1, ..., m

we define the logarithmic barrier as

φ(x) = - sum_{i=1}^m log(-f_i(x))

φ is convex, smooth on the interior of the feasible set
φ -> ∞ as x approaches the boundary of the feasible set
Central path

central path is the curve

x*(t) = argmin_x ( t f_0(x) + φ(x) ), t >= 0

x*(t) is strictly feasible, i.e., f_i(x*(t)) < 0
x*(t) can be computed by, e.g., Newton's method
intuition suggests x*(t) converges to optimal as t -> ∞
using duality can prove x*(t) is m/t-suboptimal
Example: central path for LP

x*(t) = argmin_x ( t c^T x - sum_{i=1}^6 log(b_i - a_i^T x) )

[figure: polygon with central path from the analytic center x*(0) through x*(10) to x_opt, with objective direction c]
Barrier method

a.k.a. path-following method

given strictly feasible x, t > 0, µ > 1
repeat:
1. compute x := x*(t) (using Newton's method, starting from x)
2. exit if m/t < tol
3. t := µt

duality gap reduced by factor µ each outer iteration
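The outer loop above, with a damped-Newton inner loop on t c^T x + φ(x), fits in a few dozen lines. A minimal illustrative sketch (not production code) for an inequality-form LP, tried on a box-constrained toy instance whose optimum is obvious:

```python
import numpy as np

def barrier_lp(c, A, b, x, t=1.0, mu=10.0, tol=1e-8):
    """Minimize c^T x subject to A x <= b by the barrier method.
    x must be strictly feasible; phi(x) = -sum_i log(b_i - a_i^T x)."""
    m = len(b)

    def val(x, t):
        r = b - A @ x
        return np.inf if r.min() <= 0 else t * (c @ x) - np.log(r).sum()

    while m / t >= tol:                          # outer loop: duality gap <= m/t
        for _ in range(100):                     # inner loop: Newton centering
            d = 1.0 / (b - A @ x)
            grad = t * c + A.T @ d               # gradient of t c^T x + phi
            hess = A.T @ (d[:, None] ** 2 * A)   # Hessian of phi
            step = np.linalg.solve(hess, -grad)
            lam2 = -grad @ step                  # Newton decrement squared
            if lam2 / 2 < 1e-10:
                break
            s = 1.0                              # backtracking line search
            while s > 1e-12 and val(x + s * step, t) > val(x, t) - 0.25 * s * lam2:
                s *= 0.5
            x = x + s * step
        t *= mu                                  # t := mu t
    return x

# toy instance: minimize x1 - x2 over the box 0 <= x <= 1; optimum is (0, 1)
c = np.array([1.0, -1.0])
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([1.0, 1.0, 0.0, 0.0])
x_opt = barrier_lp(c, A, b, x=np.array([0.5, 0.5]))  # start at the center
print(x_opt)  # numerically close to (0, 1)
```

The stopping rule is exactly the slide's m/t < tol, using the m/t-suboptimality of the central point from the previous slide.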
Tradeoff in choice of µ

large µ means:
fast duality gap reduction (fewer outer iterations), but
many Newton steps to compute x*(t⁺) (more Newton steps per outer iteration)

total effort measured by total number of Newton steps
Typical example

GP with n = 50 variables, m = 100 constraints, m_i = 5
wide range of µ works well
very typical behavior (even for large m, n)

[figure: duality gap versus Newton iterations, for several values of µ including µ = 50 and µ = 150]
Effect of µ

[figure: number of Newton iterations versus µ]

barrier method works well for µ in a large range
Typical effort versus problem dimensions

LPs with n = 2m variables, m constraints
100 instances for each of 20 problem sizes
avg & std dev shown

[figure: Newton iterations versus m]
Other interior-point methods

more sophisticated IP algorithms: primal-dual, incomplete centering, infeasible start
use same ideas, e.g., central path, log barrier
readily available (commercial and noncommercial packages)
typical performance: a few tens of Newton steps (!), over a wide range of problem dimensions, problem type, and problem data
Complexity analysis of Newton's method

classical result: once x is near the solution (and f is smooth enough), Newton's method converges fast
classical analysis is local, and coordinate-dependent
need an analysis that is global, and, like Newton's method, coordinate-invariant
Self-concordance

self-concordant (SC) function f (Nesterov & Nemirovsky, 1988): when restricted to any line,

|f'''(t)| <= 2 f''(t)^{3/2}

f SC => f~(z) = f(Tz) SC, for T nonsingular (i.e., SC is coordinate-invariant)

a large number of common convex functions are SC:
x log x - log x, log det X^{-1}, -log(y² - x^T x), ...
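For f(t) = −log t the defining inequality holds with equality: f''(t) = 1/t², f'''(t) = −2/t³, so |f'''| = 2(f'')^{3/2}. A small check of the inequality at sample points, using hand-coded derivatives, for −log t and for the slide's example x log x − log x:

```python
def check_self_concordance(f2, f3, ts):
    # |f'''(t)| <= 2 f''(t)^{3/2} at each sample point (small relative slack
    # to absorb floating-point rounding in the equality case)
    return all(abs(f3(t)) <= 2 * f2(t) ** 1.5 * (1 + 1e-9) for t in ts)

ts = [0.1 * k for k in range(1, 50)]

# f(t) = -log t:  f'' = 1/t^2,  f''' = -2/t^3  (the equality case)
ok_log = check_self_concordance(lambda t: t ** -2, lambda t: -2 * t ** -3, ts)

# f(t) = t log t - log t (t > 0):  f'' = 1/t + 1/t^2,  f''' = -1/t^2 - 2/t^3
ok_ent = check_self_concordance(lambda t: 1 / t + t ** -2,
                                lambda t: -(t ** -2) - 2 * t ** -3, ts)
print(ok_log, ok_ent)
```

This only checks a grid of points, of course; the general claims follow from the calculus of self-concordant functions.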
Complexity analysis of Newton's method for self-concordant functions

for a self-concordant function f, with minimum value f*:

theorem: # Newton steps to minimize f, starting from x:
  # steps <= 11 (f(x) - f*) + 5
empirically:
  # steps ≈ 0.6 (f(x) - f*) + 5

note absence of unknown constants, problem dimension, etc.
[figure: # Newton iterations versus f(x^(0)) - f*, showing the linear empirical fit]
Complexity of path-following algorithm

to compute x*(µt) starting from x*(t):

# steps <= 11 m (µ - 1 - log µ) + 5

using N&N's self-concordance theory, duality to bound f(x) - f*

number of outer steps to reduce the duality gap by factor α: log α / log µ

total number of Newton steps bounded by the product

(log α / log µ) (11 m (µ - 1 - log µ) + 5)

... captures tradeoff in choice of µ
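The tradeoff can be seen by tabulating the bound as a function of µ; a small sketch with m = 100 and gap-reduction factor α = 10⁸ (both values picked arbitrarily for illustration):

```python
import math

m, alpha = 100, 1e8

def total_newton_bound(mu):
    # (log alpha / log mu) * (11 m (mu - 1 - log mu) + 5)
    return (math.log(alpha) / math.log(mu)) * (11 * m * (mu - 1 - math.log(mu)) + 5)

mus = [1 + k / 1000 for k in range(1, 3000)]   # grid over (1, 4)
best_mu = min(mus, key=total_newton_bound)

# small mu: cheap outer steps but many of them; large mu: the reverse.
# the bound is minimized near mu = 1 + 1/sqrt(m), consistent with the
# O(sqrt(m) log(1/eps)) worst-case result on the next slide
print(best_mu)
```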
Complexity analysis conclusions

for any choice of µ, # steps is O(m log(1/ε)), where ε is the final accuracy
to optimize the complexity bound, can take µ = 1 + 1/sqrt(m), which yields # steps O(sqrt(m) log(1/ε))
in any case, IP methods work extremely well in practice
Conclusions

since 1985, lots of advances in theory & practice of convex optimization:
complexity analysis
semidefinite programming, other new problem classes
efficient interior-point methods & software
lots of applications
Some references

Semidefinite Programming, SIAM Review, 1996
Determinant Maximization with Linear Matrix Inequality Constraints, SIMAX, 1998
Applications of Second-order Cone Programming, LAA, 1999
Linear Matrix Inequalities in System and Control Theory, SIAM, 1994
Interior-point Polynomial Algorithms in Convex Programming, SIAM, 1994, Nesterov & Nemirovsky
Lectures on Modern Convex Optimization, SIAM, 2001, Ben Tal & Nemirovsky
Shameless promotion

Convex Optimization, Boyd & Vandenberghe, to be published 2003
pretty good draft available at the Stanford EE364 (UCLA EE236B) class web site as a course reader
More informationFast Algorithms for SDPs derived from the KalmanYakubovichPopov Lemma
Fast Algorithms for SDPs derived from the KalmanYakubovichPopov Lemma Venkataramanan (Ragu) Balakrishnan School of ECE, Purdue University 8 September 2003 European Union RTN Summer School on MultiAgent
More informationLargest dual ellipsoids inscribed in dual cones
Largest dual ellipsoids inscribed in dual cones M. J. Todd June 23, 2005 Abstract Suppose x and s lie in the interiors of a cone K and its dual K respectively. We seek dual ellipsoidal norms such that
More informationSemidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization
Semidefinite and Second Order Cone Programming Seminar Fall 2012 Project: Robust Optimization and its Application of Robust Portfolio Optimization Instructor: Farid Alizadeh Author: Ai Kagawa 12/12/2012
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationSparse Optimization Lecture: Basic Sparse Optimization Models
Sparse Optimization Lecture: Basic Sparse Optimization Models Instructor: Wotao Yin July 2013 online discussions on piazza.com Those who complete this lecture will know basic l 1, l 2,1, and nuclearnorm
More information6. Approximation and fitting
6. Approximation and fitting Convex Optimization Boyd & Vandenberghe norm approximation leastnorm problems regularized approximation robust approximation 6 Norm approximation minimize Ax b (A R m n with
More informationA Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs
A Hierarchy of Polyhedral Approximations of Robust Semidefinite Programs Raphael Louca Eilyan Bitar Abstract Robust semidefinite programs are NPhard in general In contrast, robust linear programs admit
More informationEE 227A: Convex Optimization and Applications October 14, 2008
EE 227A: Convex Optimization and Applications October 14, 2008 Lecture 13: SDP Duality Lecturer: Laurent El Ghaoui Reading assignment: Chapter 5 of BV. 13.1 Direct approach 13.1.1 Primal problem Consider
More informationSecondOrder Cone Program (SOCP) Detection and Transformation Algorithms for Optimization Software
and SecondOrder Cone Program () and Algorithms for Optimization Software Jared Erickson JaredErickson2012@u.northwestern.edu Robert 4er@northwestern.edu Northwestern University INFORMS Annual Meeting,
More informationIntroduction to Semidefinite Programming I: Basic properties a
Introduction to Semidefinite Programming I: Basic properties and variations on the GoemansWilliamson approximation algorithm for maxcut MFO seminar on Semidefinite Programming May 30, 2010 Semidefinite
More informationAnalytic Center CuttingPlane Method
Analytic Center CuttingPlane Method S. Boyd, L. Vandenberghe, and J. Skaf April 14, 2011 Contents 1 Analytic center cuttingplane method 2 2 Computing the analytic center 3 3 Pruning constraints 5 4 Lower
More informationNewton s Method. Ryan Tibshirani Convex Optimization /36725
Newton s Method Ryan Tibshirani Convex Optimization 10725/36725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, Properties and examples: f (y) = max x
More informationA new primaldual pathfollowing method for convex quadratic programming
Volume 5, N., pp. 97 0, 006 Copyright 006 SBMAC ISSN 00805 www.scielo.br/cam A new primaldual pathfollowing method for convex quadratic programming MOHAMED ACHACHE Département de Mathématiques, Faculté
More informationA direct formulation for sparse PCA using semidefinite programming
A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley A. d Aspremont, INFORMS, Denver,
More informationJune Engineering Department, Stanford University. System Analysis and Synthesis. Linear Matrix Inequalities. Stephen Boyd (E.
Stephen Boyd (E. Feron :::) System Analysis and Synthesis Control Linear Matrix Inequalities via Engineering Department, Stanford University Electrical June 1993 ACC, 1 linear matrix inequalities (LMIs)
More informationAgenda. 1 Cone programming. 2 Convex cones. 3 Generalized inequalities. 4 Linear programming (LP) 5 Secondorder cone programming (SOCP)
Agenda 1 Cone programming 2 Convex cones 3 Generalized inequalities 4 Linear programming (LP) 5 Secondorder cone programming (SOCP) 6 Semidefinite programming (SDP) 7 Examples Optimization problem in
More informationGeometric problems. Chapter Projection on a set. The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as
Chapter 8 Geometric problems 8.1 Projection on a set The distance of a point x 0 R n to a closed set C R n, in the norm, is defined as dist(x 0,C) = inf{ x 0 x x C}. The infimum here is always achieved.
More informationApplications of Linear Programming
Applications of Linear Programming lecturer: András London University of Szeged Institute of Informatics Department of Computational Optimization Lecture 9 Nonlinear programming In case of LP, the goal
More informationExample: feasibility. Interpretation as formal proof. Example: linear inequalities and Farkas lemma
41 Algebra and Duality P. Parrilo and S. Lall 2006.06.07.01 4. Algebra and Duality Example: nonconvex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone of valid
More informationSparse Covariance Selection using Semidefinite Programming
Sparse Covariance Selection using Semidefinite Programming A. d Aspremont ORFE, Princeton University Joint work with O. Banerjee, L. El Ghaoui & G. Natsoulis, U.C. Berkeley & Iconix Pharmaceuticals Support
More informationELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications
ELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications Professor M. Chiang Electrical Engineering Department, Princeton University February
More informationPrimalDual Geometry of Level Sets and their Explanatory Value of the Practical Performance of InteriorPoint Methods for Conic Optimization
PrimalDual Geometry of Level Sets and their Explanatory Value of the Practical Performance of InteriorPoint Methods for Conic Optimization Robert M. Freund M.I.T. June, 2010 from papers in SIOPT, Mathematics
More informationInterior Point Methods for Mathematical Programming
Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO  2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained
More informationA direct formulation for sparse PCA using semidefinite programming
A direct formulation for sparse PCA using semidefinite programming A. d Aspremont, L. El Ghaoui, M. Jordan, G. Lanckriet ORFE, Princeton University & EECS, U.C. Berkeley Available online at www.princeton.edu/~aspremon
More informationConvex Optimization. Lieven Vandenberghe Electrical Engineering Department, UCLA. Joint work with Stephen Boyd, Stanford University
Convex Optimization Lieven Vandenberghe Electrical Engineering Department, UCLA Joint work with Stephen Boyd, Stanford University Ph.D. School in Optimization in Computer Vision DTU, May 19, 2008 Introduction
More informationc 2000 Society for Industrial and Applied Mathematics
SIAM J. OPIM. Vol. 10, No. 3, pp. 750 778 c 2000 Society for Industrial and Applied Mathematics CONES OF MARICES AND SUCCESSIVE CONVEX RELAXAIONS OF NONCONVEX SES MASAKAZU KOJIMA AND LEVEN UNÇEL Abstract.
More information4. Algebra and Duality
41 Algebra and Duality P. Parrilo and S. Lall, CDC 2003 2003.12.07.01 4. Algebra and Duality Example: nonconvex polynomial optimization Weak duality and duality gap The dual is not intrinsic The cone
More informationNewton s Method. Javier Peña Convex Optimization /36725
Newton s Method Javier Peña Convex Optimization 10725/36725 1 Last time: dual correspondences Given a function f : R n R, we define its conjugate f : R n R, f ( (y) = max y T x f(x) ) x Properties and
More information