Introduction to Convex Optimization

Daniel P. Palomar
Hong Kong University of Science and Technology (HKUST)
ELEC5470 - Convex Optimization, Fall 2018-19, HKUST, Hong Kong

Outline of Lecture

- Optimization problems
- Examples
- Solving optimization problems
- More examples
- Course goals
- References

(Acknowledgement to Stephen Boyd for material for this lecture.)

Optimization Problem

General optimization problem in standard form:
\[
\begin{array}{ll}
\underset{x}{\text{minimize}} & f_0(x) \\
\text{subject to} & f_i(x) \le 0, \quad i = 1, \dots, m \\
& h_i(x) = 0, \quad i = 1, \dots, p
\end{array}
\]
where
- $x = (x_1, \dots, x_n)$ is the optimization variable
- $f_0 : \mathbb{R}^n \to \mathbb{R}$ is the objective function
- $f_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, m$, are the inequality constraint functions
- $h_i : \mathbb{R}^n \to \mathbb{R}$, $i = 1, \dots, p$, are the equality constraint functions.

Goal: find an optimal solution $x^\star$ that minimizes $f_0$ while satisfying all the constraints.
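As a concrete illustration (not part of the original slides), the sketch below writes a small instance of this standard form in CVXPY; the data, dimensions, and the specific objective are placeholders chosen only to make the example run.

```python
import cvxpy as cp
import numpy as np

# Placeholder data for a tiny instance of the standard form above
rng = np.random.default_rng(0)
n, m = 5, 3
A = rng.standard_normal((m, n))
x0 = np.ones(n) / n                         # used only to build feasible data
b = A @ x0 + 1.0                            # guarantees the constraints can be met

x = cp.Variable(n)                          # optimization variable x
objective = cp.Minimize(cp.sum_squares(x))  # f_0(x): a convex objective
constraints = [A @ x - b <= 0,              # f_i(x) <= 0, i = 1, ..., m
               cp.sum(x) - 1 == 0]          # h_1(x) = 0
prob = cp.Problem(objective, constraints)
prob.solve()
print(prob.value, x.value)
```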

Examples

Convex optimization is currently used in many different areas:
- circuit design (a start-up named Barcelona in Silicon Valley)
- signal processing (e.g., filter design)
- communication systems (e.g., transceiver design, beamforming design, ML detection, power control in wireless)
- financial engineering (e.g., portfolio design, index tracking)
- image processing (e.g., deblurring, compressive sensing, blind separation)
- machine learning
- biomedical applications (e.g., analysis of DNA)

Examples: Elements in the Formulation

An optimization problem has three basic elements: 1) variables, 2) constraints, and 3) objective.

Example: device sizing in electronic circuits:
- variables: device widths and lengths
- constraints: manufacturing limits, timing requirements, maximum area
- objective: power consumption

Example: portfolio optimization:
- variables: amounts invested in different assets
- constraints: budget, maximum investment per asset, minimum return
- objective: overall risk or return variance.

Example: Power Control in Wireless Networks

Consider a wireless network with $n$ logical transmitter/receiver pairs.

Goal: design the power allocation so that each receiver receives minimum interference from the other links.

The signal-to-interference-plus-noise ratio (SINR) at the $i$th receiver is
\[
\mathsf{sinr}_i = \frac{p_i G_{ii}}{\sum_{j \ne i} p_j G_{ij} + \sigma_i^2}
\]
where
- $p_i$ is the power used by the $i$th transmitter
- $G_{ij}$ is the path gain from transmitter $j$ to receiver $i$
- $\sigma_i^2$ is the noise power at the $i$th receiver.

Problem: maximize the weakest SINR subject to the power constraints $0 \le p_i \le p_i^{\max}$:
\[
\begin{array}{ll}
\underset{p}{\text{maximize}} & \displaystyle \min_{i=1,\dots,n} \ \frac{p_i G_{ii}}{\sum_{j \ne i} p_j G_{ij} + \sigma_i^2} \\
\text{subject to} & 0 \le p_i \le p_i^{\max}, \quad i = 1, \dots, n.
\end{array}
\]
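This max-min problem is not convex as written, but for a fixed SINR target $\gamma$ the constraints $p_i G_{ii} \ge \gamma (\sum_{j \ne i} p_j G_{ij} + \sigma_i^2)$ are linear in $p$, so the largest achievable worst-case SINR can be found by bisection over LP feasibility problems. The sketch below (not from the slides) illustrates this; the gains, noise powers, power limits, bracket, and tolerance are all made up for the example.

```python
import cvxpy as cp
import numpy as np

# Made-up network data for illustration only
rng = np.random.default_rng(0)
n = 4
G = 0.1 * rng.uniform(size=(n, n)) + np.eye(n)   # path gains with a strong diagonal
sigma2 = 0.1 * np.ones(n)                        # noise powers sigma_i^2
p_max = np.ones(n)                               # per-link power limits

def feasible(gamma):
    """LP feasibility check: can every link reach SINR >= gamma?"""
    p = cp.Variable(n, nonneg=True)
    cons = [p <= p_max]
    for i in range(n):
        interference = sum(G[i, j] * p[j] for j in range(n) if j != i)
        cons.append(G[i, i] * p[i] >= gamma * (interference + sigma2[i]))
    prob = cp.Problem(cp.Minimize(0), cons)
    prob.solve()
    return prob.status == cp.OPTIMAL

lo, hi = 0.0, 100.0                              # bracket for the optimal worst-case SINR
while hi - lo > 1e-3:
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)
print("max worst-case SINR is approximately", lo)
```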

Solving Optimization Problems

General optimization problems are very difficult to solve (either long computation time or not finding the best solution). Exceptions: least-squares problems, linear programming problems, and convex optimization problems.

Least-squares (LS):
\[
\underset{x}{\text{minimize}} \quad \| Ax - b \|_2^2
\]
- Solving LS problems: closed-form solution $x^\star = (A^T A)^{-1} A^T b$, for which there are reliable and efficient algorithms; a mature technology.
- Using LS: easy to recognize.
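A minimal numerical sketch of the LS closed-form solution (not from the slides; the data are random placeholders). In practice a dedicated solver such as np.linalg.lstsq is preferred over forming the normal equations explicitly.

```python
import numpy as np

# Made-up overdetermined system for illustration
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
b = rng.standard_normal(20)

# Closed-form solution via the normal equations: x = (A^T A)^{-1} A^T b
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Numerically preferable: dedicated least-squares solver
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))   # True (up to numerical tolerance)
```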

Linear Programming (LP):
\[
\begin{array}{ll}
\underset{x}{\text{minimize}} & c^T x \\
\text{subject to} & a_i^T x \le b_i, \quad i = 1, \dots, m
\end{array}
\]
- Solving LP problems: no closed-form solution, but reliable and efficient algorithms and software; a mature technology.
- Using LP: not as easy to recognize as LS problems; a few standard tricks are used to convert problems into LPs.

Convex optimization:
\[
\begin{array}{ll}
\underset{x}{\text{minimize}} & f_0(x) \\
\text{subject to} & f_i(x) \le b_i, \quad i = 1, \dots, m
\end{array}
\]
- Solving convex problems: no closed-form solution, but still reliable and efficient algorithms and software; almost a technology.
- Using convex optimization: often difficult to recognize; many tricks for transforming problems into convex form.
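For illustration only, the CVXPY sketch below solves a small LP of the form above; the data c, a_i, b_i and the added box constraints are placeholders (the box simply keeps this random instance bounded and feasible).

```python
import cvxpy as cp
import numpy as np

# Placeholder data for a small LP
rng = np.random.default_rng(1)
m, n = 8, 3
A = rng.standard_normal((m, n))      # rows are the a_i^T
b = np.ones(m)                       # right-hand sides b_i (x = 0 is feasible)
c = rng.standard_normal(n)

x = cp.Variable(n)
constraints = [A @ x <= b,
               x >= -1, x <= 1]      # box constraints keep this random instance bounded
prob = cp.Problem(cp.Minimize(c @ x), constraints)
prob.solve()
print(prob.status, prob.value, x.value)
```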

Nonconvex Optimization

Nonconvex optimization problems are generally very difficult to solve, although there are some rare exceptions. In general, they require either a long computation time or the compromise of not always finding the optimal solution:
- local optimization: fast algorithms, but no guarantee of global optimality, only a local solution around the initial point
- global optimization: worst-case complexity grows exponentially with the problem size, but finds the global solution.

Example: Lamp Illumination Problem

Consider $m$ lamps illuminating $n$ small flat patches.

Goal: achieve a desired illumination $I_{\text{des}}$ on all patches with bounded lamp powers.

The intensity $I_k$ at patch $k$ depends linearly on the lamp powers $p_j$:
\[
I_k = \sum_{j=1}^{m} a_{kj} p_j
\]
where the coefficients are given by $a_{kj} = \cos\theta_{kj} / r_{kj}^2$.

Problem formulation: since the illumination is perceived logarithmically by the eye, a good formulation of the problem is
\[
\begin{array}{ll}
\underset{I_1,\dots,I_n,\,p_1,\dots,p_m}{\text{minimize}} & \max_k \left| \log I_k - \log I_{\text{des}} \right| \\
\text{subject to} & 0 \le p_j \le p_{\max}, \quad j = 1, \dots, m \\
& I_k = \sum_{j=1}^{m} a_{kj} p_j, \quad k = 1, \dots, n.
\end{array}
\]

How to solve the problem? The answer is: it depends on how much you know about optimization.

Solving the problem:

1. If you don't know anything, then you just take a heuristic guess like using a uniform power $p_j = p$, perhaps trying different values of $p$.

2. If you know about least-squares, then approximate the problem as
\[
\underset{I_1,\dots,I_n,\,p_1,\dots,p_m}{\text{minimize}} \quad \sum_{k=1}^{n} \left( I_k - I_{\text{des}} \right)^2
\]
and then clip $p_j$ if $p_j > p_{\max}$ or $p_j < 0$.

3. If you know about linear programming, then approximate the problem as
\[
\begin{array}{ll}
\underset{I_1,\dots,I_n,\,p_1,\dots,p_m}{\text{minimize}} & \max_k \left| I_k - I_{\text{des}} \right| \\
\text{subject to} & 0 \le p_j \le p_{\max}, \quad j = 1, \dots, m.
\end{array}
\]

4. If you know about convex optimization, after staring at the problem long enough, you may realize that you can actually reformulate the original problem in convex form and then find the global solution:
\[
\begin{array}{ll}
\underset{I_1,\dots,I_n,\,p_1,\dots,p_m}{\text{minimize}} & \max_k h\!\left( I_k / I_{\text{des}} \right) \\
\text{subject to} & 0 \le p_j \le p_{\max}, \quad j = 1, \dots, m
\end{array}
\]
where $h(u) = \max\{u, 1/u\}$. (A code sketch of this reformulation follows after this list.)
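A CVXPY sketch of approach 4 (not from the slides; the illumination coefficients a_kj, I_des, and p_max are made-up placeholders). The convex objective max_k h(I_k/I_des) is expressed as the elementwise maximum of I_k/I_des and I_des/I_k, the latter via the convex function inv_pos.

```python
import cvxpy as cp
import numpy as np

# Made-up illumination coefficients a_kj (n patches, m lamps) for illustration
rng = np.random.default_rng(2)
n, m = 10, 5
A = rng.uniform(0.05, 1.0, size=(n, m))      # stands in for cos(theta_kj) / r_kj^2
I_des, p_max = 1.0, 1.0

p = cp.Variable(m, nonneg=True)
I = A @ p                                    # I_k = sum_j a_kj p_j
# h(I_k / I_des) = max{I_k / I_des, I_des / I_k}; both terms are convex in p
objective = cp.Minimize(cp.max(cp.maximum(I / I_des, I_des * cp.inv_pos(I))))
prob = cp.Problem(objective, [p <= p_max])
prob.solve()
print("optimal powers:", p.value)
print("worst-case |log I_k - log I_des|:", np.log(prob.value))
```

Approaches 2 and 3 amount to replacing this objective with a sum of squares or with max_k |I_k - I_des| plus the box constraints, respectively.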

Additional constraints: does adding the constraints below complicate the problem?
(a) no more than half of the total power is in any 10 lamps
(b) no more than half of the lamps are on ($p_j > 0$).

Answer: adding (a) does not complicate the problem, whereas adding (b) makes the problem extremely difficult.

Moral: untrained intuition doesn't always work; one needs to obtain the proper background and develop the right intuition to discern between difficult and easy problems.
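As an aside not in the slides, the tractability of (a) can be seen in CVXPY terms: the power carried by the 10 most powerful lamps is the convex function sum_largest, so (a) is a convex constraint, whereas (b) limits the number of nonzero powers and has no such convex representation. A hypothetical fragment (assuming at least 10 lamps):

```python
import cvxpy as cp

m = 20                                  # assume at least 10 lamps
p = cp.Variable(m, nonneg=True)
# (a) no 10 lamps carry more than half of the total power: convex,
#     since sum_largest(p, 10) is a convex function of p
con_a = cp.sum_largest(p, 10) <= 0.5 * cp.sum(p)
# (b) "at most half of the lamps are on" counts the nonzero entries of p;
#     such cardinality constraints are combinatorial, not convex
```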

History Snapshot of Convex Optimization

Theory (convex analysis): ca. 1900-1970 (e.g., Rockafellar).

Algorithms:
- 1947: simplex algorithm for linear programming (Dantzig)
- 1960s: early interior-point methods (Fiacco & McCormick, Dikin)
- 1970s: ellipsoid method and other subgradient methods
- 1980s: polynomial-time interior-point methods for linear programming (Karmarkar 1984)
- late 1980s to now: polynomial-time interior-point methods for nonlinear convex optimization (Nesterov & Nemirovski 1994)

Applications:
- before the 1990s: mostly in operations research; few in engineering
- since 1990: many new applications in engineering and new problem classes (SDP, SOCP, robust optimization)

Course Goals and Topics

Goal: to introduce convex optimization theory and to illustrate its use with many recent applications, with emphasis on i) the art of unveiling the hidden convexity of problems, and ii) a proper characterization of the solution, either analytically or algorithmically.

The course follows a case-study approach by considering applications such as filter/beamforming design, circuit design, robust designs under uncertainty, portfolio optimization in finance, transceiver design for MIMO channels, image processing, blind separation, design of multiple-access channels, multiuser detection, duality in information theory, network optimization, distributed algorithms, wireless network power control, machine learning, etc.

References

- Stephen Boyd and Lieven Vandenberghe, Convex Optimization. Cambridge, U.K.: Cambridge University Press, 2004. http://www.stanford.edu/~boyd/cvxbook/
- Daniel P. Palomar and Yonina C. Eldar, Eds., Convex Optimization in Signal Processing and Communications. Cambridge University Press, 2009.
- A. Ben-Tal and A. Nemirovski, Lectures on Modern Convex Optimization. SIAM, 2001.
- Y. Nesterov and A. Nemirovskii, Interior-Point Polynomial Algorithms in Convex Programming. SIAM, 1994.