ELE539A: Optimization of Communication Systems Lecture 6: Quadratic Programming, Geometric Programming, and Applications

Professor M. Chiang, Electrical Engineering Department, Princeton University
February 17, 2006

Lecture Outline

- Quadratically constrained quadratic programming (QCQP)
- Least-squares
- Second-order cone programming (SOCP)
- Dual quadratic programming
- Geometric programming (GP)
- Dual geometric programming
- Wireless network power control

Thanks: Stephen Boyd (some materials and graphs from Boyd and Vandenberghe)

Convex QCQP

(Convex) QP (with linear constraints) in x:

    minimize   (1/2) x^T P x + q^T x + r
    subject to G x ≼ h
               A x = b

where P ∈ S^n_+, G ∈ R^{m×n}, A ∈ R^{p×n}.

(Convex) QCQP in x:

    minimize   (1/2) x^T P_0 x + q_0^T x + r_0
    subject to (1/2) x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, 2, ..., m
               A x = b

where P_i ∈ S^n_+, i = 0, ..., m.
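As a quick illustration (not part of the original slides), here is a minimal convex QCQP in cvxpy with synthetic data; writing each P_i as M_i^T M_i keeps the quadratics manifestly convex, and choosing r_i = −1 keeps x = 0 strictly feasible:

```python
# Minimal convex QCQP sketch with synthetic data (illustrative only).
# Each P_i = M_i^T M_i is PSD, so (1/2) x^T P_i x = (1/2) ||M_i x||_2^2.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 2
M0 = rng.standard_normal((n, n)); q0 = rng.standard_normal(n)
Ms = [rng.standard_normal((n, n)) for _ in range(m)]
qs = [rng.standard_normal(n) for _ in range(m)]

x = cp.Variable(n)
constraints = [0.5 * cp.sum_squares(Ms[i] @ x) + qs[i] @ x - 1 <= 0
               for i in range(m)]        # r_i = -1 keeps x = 0 feasible
objective = cp.Minimize(0.5 * cp.sum_squares(M0 @ x) + q0 @ x)
cp.Problem(objective, constraints).solve()
print(x.value)
```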

[Figure: level curves of the quadratic objective f_0 over the polyhedral feasible set P, with optimizer x* on the boundary.]

Least-squares

Minimize ||Ax − b||_2^2 = x^T A^T A x − 2 b^T A x + b^T b over x.

An unconstrained QP; used in regression analysis and least-squares approximation.

Analytic solution: x* = A† b, where, for A ∈ R^{m×n}, A† = (A^T A)^{-1} A^T if A has rank n, and A† = A^T (A A^T)^{-1} if A has rank m. If A is not full rank, A† is obtained from the singular value decomposition.

Constrained least-squares has no general analytic solution. For example:

    minimize   ||Ax − b||_2^2
    subject to l_i ≤ x_i ≤ u_i,  i = 1, ..., n
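A small numeric sketch of the analytic solution (illustrative, with random data): numpy's pinv computes A† via the SVD, which also covers the rank-deficient case.

```python
# Least-squares via the pseudo-inverse; pinv uses the SVD internally.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))        # tall, full-column-rank A
b = rng.standard_normal(20)

x_star = np.linalg.pinv(A) @ b          # x* = A^+ b
# Same answer as the normal-equations form (A^T A)^{-1} A^T b when rank(A) = n:
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
assert np.allclose(x_star, x_ne)
```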

LP with Random Cost

    minimize   c^T x
    subject to G x ≼ h
               A x = b

The cost vector c ∈ R^n is random, with mean c̄ and covariance Ω. Expected cost: c̄^T x. Cost variance: x^T Ω x.

Minimize both expected cost and cost variance (with a weight γ ≥ 0):

    minimize   c̄^T x + γ x^T Ω x
    subject to G x ≼ h
               A x = b
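A hedged sketch of this risk-adjusted LP in cvxpy, with made-up data; writing Ω = S^T S makes x^T Ω x = ||S x||_2^2, so the objective is convex by construction (the simplex constraints below are stand-ins for Gx ≼ h, Ax = b):

```python
# Risk-adjusted LP: trade off expected cost against cost variance.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n = 5
c_bar = rng.standard_normal(n)           # mean cost
S = rng.standard_normal((n, n))          # Omega = S^T S (synthetic PSD)
gamma = 0.5                              # risk-aversion weight

x = cp.Variable(n)
prob = cp.Problem(
    cp.Minimize(c_bar @ x + gamma * cp.sum_squares(S @ x)),
    [cp.sum(x) == 1, x >= 0])            # simple stand-in constraints
prob.solve()
print(x.value)
```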

SOCP

Second-order cone programming:

    minimize   f^T x
    subject to ||A_i x + b_i||_2 ≤ c_i^T x + d_i,  i = 1, ..., m
               F x = g

Variables: x ∈ R^n, with A_i ∈ R^{n_i×n} and F ∈ R^{p×n}.

If c_i = 0 for all i, the SOCP is equivalent to a QCQP. If A_i = 0 for all i, the SOCP is equivalent to an LP.

Robust LP

Consider the inequality-constrained LP:

    minimize   c^T x
    subject to a_i^T x ≤ b_i,  i = 1, ..., m

The parameters a_i are not known exactly; each is only known to lie in a given ellipsoid, described by its center ā_i and P_i ∈ R^{n×n}:

    a_i ∈ E_i = { ā_i + P_i u : ||u||_2 ≤ 1 }

Since sup{ a_i^T x : a_i ∈ E_i } = ā_i^T x + ||P_i^T x||_2, the robust LP (satisfy the constraints for all possible a_i) is formulated as an SOCP:

    minimize   c^T x
    subject to ā_i^T x + ||P_i^T x||_2 ≤ b_i,  i = 1, ..., m
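A minimal cvxpy sketch of this robust LP, assuming synthetic data for c, ā_i, P_i, and b (b is chosen so that x = 1 is strictly feasible, and a box constraint keeps the problem bounded):

```python
# Robust LP as an SOCP: each constraint holds for every a_i in its ellipsoid.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 6
c = rng.standard_normal(n)
a_bar = rng.standard_normal((m, n))      # ellipsoid centers
Ps = [0.1 * rng.standard_normal((n, n)) for _ in range(m)]  # ellipsoid shapes
b = a_bar @ np.ones(n) + 2.0             # chosen so x = 1 is strictly feasible

x = cp.Variable(n)
robust = [a_bar[i] @ x + cp.norm(Ps[i].T @ x, 2) <= b[i] for i in range(m)]
prob = cp.Problem(cp.Minimize(c @ x), robust + [x >= -1, x <= 1])
prob.solve()
print(prob.value, x.value)
```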

Dual QCQP

Primal (convex) QCQP (inequality constraints only):

    minimize   (1/2) x^T P_0 x + q_0^T x + r_0
    subject to (1/2) x^T P_i x + q_i^T x + r_i ≤ 0,  i = 1, 2, ..., m

Lagrangian: L(x, λ) = (1/2) x^T P(λ) x + q(λ)^T x + r(λ), where

    P(λ) = P_0 + Σ_{i=1}^m λ_i P_i,   q(λ) = q_0 + Σ_{i=1}^m λ_i q_i,   r(λ) = r_0 + Σ_{i=1}^m λ_i r_i

Since λ ⪰ 0, we have P(λ) ≻ 0 if P_0 ≻ 0, and

    g(λ) = inf_x L(x, λ) = −(1/2) q(λ)^T P(λ)^{-1} q(λ) + r(λ)

Lagrange dual problem:

    maximize   −(1/2) q(λ)^T P(λ)^{-1} q(λ) + r(λ)
    subject to λ ⪰ 0
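As a numeric illustration (not from the slides), the dual function can be evaluated at any λ ⪰ 0 to get a lower bound on the primal optimum; the data below are tiny hand-picked matrices:

```python
# Numeric check that g(lambda) lower-bounds the primal optimal value.
import numpy as np

P0, q0, r0 = 2.0 * np.eye(2), np.array([1.0, -1.0]), 0.0
P1, q1, r1 = np.eye(2), np.zeros(2), -1.0    # constraint: 0.5||x||^2 <= 1
lam = 0.7                                    # any lambda >= 0 gives a bound

Pl = P0 + lam * P1
ql = q0 + lam * q1
rl = r0 + lam * r1
g = -0.5 * ql @ np.linalg.solve(Pl, ql) + rl # dual function value g(lambda)

x_unc = -np.linalg.solve(P0, q0)             # unconstrained minimizer;
p_star = 0.5 * x_unc @ P0 @ x_unc + q0 @ x_unc + r0  # feasible here, so p*
print(g, "<=", p_star)                       # weak duality: g(lambda) <= p*
```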

KKT Conditions for QP

Primal (convex) QP with linear equality constraints:

    minimize   (1/2) x^T P x + q^T x + r
    subject to A x = b

KKT conditions: Ax = b and Px + q + A^T ν = 0, which can be written in matrix form:

    [ P   A^T ] [ x ]   [ −q ]
    [ A   0   ] [ ν ] = [  b ]

Solving this system of linear equations is therefore equivalent to solving the equality-constrained convex quadratic minimization.
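A small numeric sketch (synthetic data) that solves the QP by assembling and solving the KKT system directly:

```python
# Solve an equality-constrained QP via its KKT linear system.
import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 2
S = rng.standard_normal((n, n))
P = S.T @ S + np.eye(n)                 # positive definite Hessian
q = rng.standard_normal(n)
A = rng.standard_normal((p, n))
b = rng.standard_normal(p)

# [ P  A^T ] [ x  ]   [ -q ]
# [ A  0   ] [ nu ] = [  b ]
KKT = np.block([[P, A.T], [A, np.zeros((p, p))]])
rhs = np.concatenate([-q, b])
sol = np.linalg.solve(KKT, rhs)
x, nu = sol[:n], sol[n:]
assert np.allclose(A @ x, b)                 # primal feasibility
assert np.allclose(P @ x + q + A.T @ nu, 0)  # stationarity
```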

Monomials and Posynomials

A monomial is a function f : R^n_+ → R of the form

    f(x) = d x_1^{a^{(1)}} x_2^{a^{(2)}} ··· x_n^{a^{(n)}}

where the multiplicative constant d ≥ 0 and the exponential constants a^{(j)} ∈ R, j = 1, 2, ..., n.

A sum of monomials is called a posynomial:

    f(x) = Σ_{k=1}^K d_k x_1^{a_k^{(1)}} x_2^{a_k^{(2)}} ··· x_n^{a_k^{(n)}}

where d_k ≥ 0, k = 1, 2, ..., K, and a_k^{(j)} ∈ R, j = 1, 2, ..., n, k = 1, 2, ..., K.

Example: 2 x^{0.5} y^π z is a monomial; x − y is not a posynomial.

GP

GP standard form with variables x:

    minimize   f_0(x)
    subject to f_i(x) ≤ 1,  i = 1, 2, ..., m
               h_l(x) = 1,  l = 1, 2, ..., M

where f_i, i = 0, 1, ..., m, are posynomials and h_l, l = 1, ..., M, are monomials.

Log transformation: y_j = log x_j, b_ik = log d_ik, b_l = log d_l.

GP convex form with variables y:

    minimize   p_0(y) = log Σ_{k=1}^{K_0} exp(a_{0k}^T y + b_{0k})
    subject to p_i(y) = log Σ_{k=1}^{K_i} exp(a_{ik}^T y + b_{ik}) ≤ 0,  i = 1, 2, ..., m
               q_l(y) = a_l^T y + b_l = 0,  l = 1, 2, ..., M

In convex form, a GP with only monomials reduces to an LP.
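To make the log transformation concrete, here is a small sketch (with made-up exponent data A_0, b_0, A_1, b_1) that solves the convex form directly with SciPy and recovers x as exp(y):

```python
# Solve a tiny GP in its convex (log-sum-exp) form with SLSQP.
# Objective: x1/x2 + 3 x1^0.5 x2; constraint: 2/x1 <= 1 (made-up data).
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

A0 = np.array([[1.0, -1.0], [0.5, 1.0]])   # exponent rows a_{0k}
b0 = np.log(np.array([1.0, 3.0]))          # b_{0k} = log d_{0k}
A1 = np.array([[-1.0, 0.0]])               # one posynomial constraint
b1 = np.log(np.array([2.0]))               # i.e., 2 x1^{-1} <= 1

p0 = lambda y: logsumexp(A0 @ y + b0)      # convex objective p_0(y)
con = {"type": "ineq",                     # SLSQP convention: fun(y) >= 0
       "fun": lambda y: -logsumexp(A1 @ y + b1)}  # enforces p_1(y) <= 0

res = minimize(p0, x0=np.zeros(2), constraints=[con], method="SLSQP")
x_opt = np.exp(res.x)                      # undo y = log x
print(x_opt, np.exp(res.fun))              # optimal x and posynomial value
```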

Convexity of LogSumExp

Log sum inequality (readily proved using the convexity of f(t) = t log t, t ≥ 0):

    Σ_{i=1}^n a_i log(a_i / b_i) ≥ (Σ_{i=1}^n a_i) log( (Σ_{i=1}^n a_i) / (Σ_{i=1}^n b_i) )

where a_i, b_i ∈ R_+, i = 1, 2, ..., n.

Let b̂_i = log b_i and Σ_{i=1}^n a_i = 1:

    log( Σ_{i=1}^n e^{b̂_i} ) ≥ a^T b̂ − Σ_{i=1}^n a_i log a_i

So LogSumExp is the conjugate function of negative entropy. Since all conjugate functions are convex, LogSumExp is convex.
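A quick numeric sanity check of the log sum inequality on random positive vectors (purely illustrative):

```python
# Verify the log sum inequality on random positive data.
import numpy as np

rng = np.random.default_rng(1)
a = rng.random(5) + 0.1                 # strictly positive a_i
b = rng.random(5) + 0.1                 # strictly positive b_i
lhs = np.sum(a * np.log(a / b))
rhs = a.sum() * np.log(a.sum() / b.sum())
assert lhs >= rhs - 1e-12               # holds for all positive a, b
```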

GP and Convexity

Convexity is not invariant under a nonlinear change of coordinates.

[Figure: four surface plots of the same function, in the original variables (X, Y) and in the log-transformed variables (A, B); the surface is nonconvex in one coordinate system and convex in the other.]

Extensions of GP

The following problem can be turned into an equivalent GP:

    maximize   x/y
    subject to 2 ≤ x ≤ 3
               x^2 + 3y/z ≤ √y
               x/y = z^2

becomes

    minimize   x^{−1} y
    subject to 2 x^{−1} ≤ 1,  (1/3) x ≤ 1
               x^2 y^{−1/2} + 3 y^{1/2} z^{−1} ≤ 1
               x y^{−1} z^{−2} = 1

Let p, q be posynomials and r a monomial:

    minimize   p(x) / (r(x) − q(x))
    subject to r(x) > q(x)

This is equivalent to

    minimize   t
    subject to p(x) ≤ t (r(x) − q(x))
               q(x)/r(x) < 1

which is in turn equivalent to

    minimize   t
    subject to ( p(x)/t + q(x) ) / r(x) ≤ 1
               q(x)/r(x) < 1

Generalized posynomials: compositions of posynomials. A generalized GP minimizes a generalized posynomial subject to upper-bound inequality constraints on other generalized posynomials. A generalized GP can be turned into an equivalent GP.
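The first example above can be checked directly with cvxpy's geometric-programming mode (a sketch; gp=True tells cvxpy to apply the log-log transformation internally):

```python
# Solve the x/y maximization example as a GP with cvxpy.
import cvxpy as cp

x = cp.Variable(pos=True)
y = cp.Variable(pos=True)
z = cp.Variable(pos=True)

prob = cp.Problem(
    cp.Maximize(x / y),                  # monomial objective
    [2 <= x, x <= 3,                     # 2 <= x <= 3
     x**2 + 3 * y / z <= cp.sqrt(y),     # posynomial <= monomial
     x / y == z**2])                     # monomial equality
prob.solve(gp=True)
print(prob.value, x.value, y.value, z.value)
```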

Dual GP

Primal problem: unconstrained GP in variables y:

    minimize   log Σ_{i=1}^N exp(a_i^T y + b_i)

Lagrange dual in variables ν:

    maximize   b^T ν − Σ_{i=1}^N ν_i log ν_i
    subject to 1^T ν = 1,  ν ⪰ 0
               A^T ν = 0

Dual GP

Primal problem: general GP in variables y:

    minimize   log Σ_{j=1}^{K_0} exp(a_{0j}^T y + b_{0j})
    subject to log Σ_{j=1}^{K_i} exp(a_{ij}^T y + b_{ij}) ≤ 0,  i = 1, ..., m

Lagrange dual problem:

    maximize   b_0^T ν_0 − Σ_{j=1}^{K_0} ν_{0j} log ν_{0j} + Σ_{i=1}^m ( b_i^T ν_i − Σ_{j=1}^{K_i} ν_{ij} log( ν_{ij} / (1^T ν_i) ) )
    subject to ν_i ⪰ 0,  i = 0, ..., m
               1^T ν_0 = 1
               Σ_{i=0}^m A_i^T ν_i = 0

where the variables are ν_i, i = 0, 1, ..., m. Here A_0 is the matrix of the exponential constants in the objective function, and A_i, i = 1, 2, ..., m, are the matrices of the exponential constants in the constraint functions.

Wireless Network Power Control

Wireless CDMA cellular or multihop networks. [Figure: cell phones communicating with a base station.]

Competing users all want:

- O1: high data rate
- O2: low queuing delay
- O3: low packet-drop probability due to channel outage

Optimize over transmit powers P such that:

- O1, O2, or O3 is optimized for the premium QoS class (or for max-min fairness)
- minimal QoS requirements on O1, O2, and O3 are met for all users

A Sample of Power Control Problems

Two classes of traffic traverse a multihop wireless network:

    maximize   Total System Throughput
    subject to Data Rate 1 ≥ Rate Requirement 1
               Data Rate 2 ≥ Rate Requirement 2
               Queuing Delay 1 ≤ Delay Requirement 1
               Queuing Delay 2 ≤ Delay Requirement 2
               Drop Prob 1 ≤ Drop Requirement 1
               Drop Prob 2 ≤ Drop Requirement 2
    variables  Powers

Wireless Channel Models

Signal-to-interference ratio:

    SIR_i(P) = P_i G_ii / ( Σ_{j≠i} P_j G_ij + n_i )

Attainable data rate at high SIR:

    c_i(P) = (1/T) log_2( K · SIR_i(P) )

Outage probability on a wireless link:

    P_{o,i}(P) = Prob{ SIR_i(P) ≤ SIR_th }

Average (Markovian) queuing delay with Poisson(Λ_i) arrivals:

    D_i(P) = 1 / ( Γ c_i(P) − Λ_i )
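A small numeric sketch of these models (function names and default parameters are made up for illustration):

```python
# Helpers implementing the channel models above for a power vector P.
import numpy as np

def sir(P, G, noise):
    """SIR_i = P_i G_ii / (sum_{j != i} P_j G_ij + noise_i)."""
    signal = np.diag(G) * P
    interference = G @ P - signal + noise   # subtract own-signal term
    return signal / interference

def rate(P, G, noise, T=1.0, K=1.0):
    """High-SIR data rate c_i = (1/T) log2(K * SIR_i)."""
    return np.log2(K * sir(P, G, noise)) / T

def delay(P, G, noise, lam, Gamma=1.0):
    """M/M/1-style delay D_i = 1/(Gamma c_i - Lambda_i); valid when
    Gamma * c_i > Lambda_i (queue is stable)."""
    return 1.0 / (Gamma * rate(P, G, noise) - lam)
```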

Conversion to GP

Maximizing the system throughput is equivalent to minimizing the posynomial Π_i ISR_i, where ISR_i = 1/SIR_i. Extension: maximize a weighted sum of data rates, Σ_i w_i R_i.

Outage probability bound:

    P_{o,i} = Prob{ SIR_i ≤ SIR_th } = 1 − Π_{k≠i} 1 / ( 1 + SIR_th G_ik P_k / (G_ii P_i) )

Delay bound:

    1 / ( (Γ/T) log_2(SIR_i) − Λ_i ) ≤ D_{i,max}  ⟺  ISR_i(P) ≤ 2^{ −(T/Γ)( 1/D_{i,max} + Λ_i ) }

Cellular Case

    maximize   SIR_i
    subject to SIR_k ≥ SIR_k,min,  ∀k
               Σ_{j∈I_k} P_j d_j^{−γ_j} α_j < c_k,  ∀k
               P_{k1} G_{k1} = P_{k2} G_{k2}
               P_k ≤ P_k,max,  ∀k
               P_k ≥ 0,  ∀k

Recovers the classical near-far problem in CDMA. Max-min fairness objective function: maximize min_k SIR_k.

GP Formulations

Any combination below is a GP in the high-SIR regime; any combination without (C), (D), or (c) is a GP at any SIR.

Objective functions:
(A) maximize R_i
(B) maximize min_i R_i
(C) maximize Σ_i R_i
(D) maximize Σ_i w_i R_i
(E) minimize Σ_i P_i

Constraints:
(a) R_i ≥ R_i,min
(b) P_{i1} G_{i1} = P_{i2} G_{i2}
(c) Σ_i R_i ≥ R_system,min
(d) P_{o,i} ≤ P_{o,i,max}
(e) 0 ≤ P_i ≤ P_i,max

Power Control by GP

This suite of nonlinear, nonconvex power control problems can be solved by GP (in standard form):

- Global optimality obtained efficiently
- For many combinations of objectives and constraints
- Multi-rate, multi-class, multi-hop
- Feasibility: admission control
- Reduction in objective: admission pricing

Key advantage: allows nonlinear objectives and seemingly nonconvex constraints.
Key idea: these functions of SIR can be written as posynomials in P.
New results: low-SIR regime and distributed algorithms.
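For instance, combination (E) with constraints (a) and (e) — minimize total power subject to SIR targets — is a GP at any SIR. A minimal sketch with a made-up 3-link gain matrix, solved via cvxpy's gp=True mode:

```python
# Total-power minimization under SIR constraints, posed as a GP.
import cvxpy as cp
import numpy as np

n = 3                                   # number of links
G = np.array([[1.0, 0.1, 0.2],          # G[i, j]: gain from transmitter j
              [0.1, 1.0, 0.1],          # to receiver i (hypothetical values)
              [0.3, 0.1, 1.0]])
noise = 1e-2
sir_min = 2.0                           # common SIR target
P = cp.Variable(n, pos=True)            # positive powers, as DGP requires

constraints = [P <= 1.0]                # P_k <= P_max
for i in range(n):
    interference = sum(G[i, j] * P[j] for j in range(n) if j != i) + noise
    # SIR_i >= sir_min  <=>  sir_min * interference / (G_ii P_i) <= 1,
    # a posynomial inequality in P
    constraints.append(sir_min * interference / (G[i, i] * P[i]) <= 1)

prob = cp.Problem(cp.Minimize(cp.sum(P)), constraints)
prob.solve(gp=True)                     # solve in GP (log-log convex) mode
print(P.value)
```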

Numerical Example: Optimal Throughput-Delay Tradeoff

6 nodes, 8 links, 5 multihop flows, direct-sequence CDMA; maximum power 1 mW, target BER 10^{−3}, path loss = distance^{−4}.

The maximized throughput of the network increases as the delay bound is relaxed.

[Figure: network topology with nodes A-F and links 1-8, and a plot of optimized system throughput (kbps, roughly 220-290) versus delay bound (0.03-0.1 s).]

Heuristics suffer either delay-bound violation or a reduction in system throughput.

Lecture Summary

- First type of nonlinearity: quadratic. Second type of nonlinearity: posynomial, or LogSumExp after the log transformation.
- Nonlinear problems that are, or can be converted into, convex optimization: QCQP (SOCP) and GP. Both cover LP as a special case.
- These significantly extend the scope of optimization-theoretic modeling, analysis, and design.

Readings: Sections 4.4, 4.5, 5.5, and 5.7 in Boyd and Vandenberghe; M. Chiang, "Geometric Programming for Communication Systems," Foundations and Trends in Communications and Information Theory, vol. 2, no. 1, pp. 1-156, 2005.