ELE539A: Optimization of Communication Systems Lecture 15: Semidefinite Programming, Detection and Estimation Applications


ELE539A: Optimization of Communication Systems
Lecture 15: Semidefinite Programming, Detection and Estimation Applications
Professor M. Chiang, Electrical Engineering Department, Princeton University
March 14, 2007

Lecture Outline
- Cones and dual cones
- Generalized inequality and convexity
- Conic programming
- Semidefinite programming (SDP)
- Application: ML estimation
- Application: Covariance estimation
- Application: Multi-user detection
Thanks: Stephen Boyd (some materials from Boyd and Vandenberghe)

Cones and Convex Cones
C is a cone if for every x ∈ C and θ ≥ 0, we have θx ∈ C.
C is a convex cone if it is convex and a cone: for any x_1, x_2 ∈ C and θ_1, θ_2 ≥ 0,
    θ_1 x_1 + θ_2 x_2 ∈ C
[Figure: the convex cone generated by two points x_1 and x_2, with apex at 0.]

Norm Cones
Given a norm ‖·‖, the norm cone is a convex cone:
    C = {(x, t) ∈ R^{n+1} : ‖x‖ ≤ t}
Example: second-order cone:
    C = {(x, t) ∈ R^{n+1} : ‖x‖_2 ≤ t}
      = {(x, t) : [x; t]^T [I 0; 0 −1] [x; t] ≤ 0, t ≥ 0}
[Figure: boundary of the second-order cone in R^3, plotted over (x_1, x_2).]
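As a quick numerical sanity check of the two descriptions of the second-order cone above, here is a small sketch (the helper names `in_soc` and `in_soc_quadratic` are ours, not from the lecture):

```python
import numpy as np

def in_soc(x, t, tol=1e-9):
    """Membership in the second-order cone {(x, t) : ||x||_2 <= t}."""
    return np.linalg.norm(x) <= t + tol

def in_soc_quadratic(x, t, tol=1e-9):
    """The same set via the quadratic form on the slide:
    [x; t]^T diag(I, -1) [x; t] <= 0 together with t >= 0."""
    J = np.diag(np.concatenate([np.ones(len(x)), [-1.0]]))
    z = np.concatenate([x, [t]])
    return (z @ J @ z <= tol) and (t >= -tol)

x = np.array([3.0, 4.0])             # ||x||_2 = 5
assert in_soc(x, 5.0) and in_soc_quadratic(x, 5.0)
assert not in_soc(x, 4.9) and not in_soc_quadratic(x, 4.9)
```

Both tests agree on every (x, t), which is exactly the equivalence ‖x‖_2 ≤ t ⟺ x^T x − t^2 ≤ 0, t ≥ 0.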

Positive Semidefinite Cone
Matrix A ∈ R^{n×n} is positive semidefinite (A ⪰ 0) if x^T A x ≥ 0 for all x ∈ R^n.
Matrix A ∈ R^{n×n} is positive definite (A ≻ 0) if x^T A x > 0 for all x ≠ 0.
Set of symmetric positive semidefinite matrices:
    S^n_+ = {X ∈ R^{n×n} : X = X^T, X ⪰ 0}
S^n_+ is a convex cone: if θ_1, θ_2 ≥ 0 and A, B ∈ S^n_+, then θ_1 A + θ_2 B ∈ S^n_+, since for all x ∈ R^n:
    x^T (θ_1 A + θ_2 B) x = θ_1 x^T A x + θ_2 x^T B x ≥ 0
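The cone property can be verified numerically. A minimal sketch (the eigenvalue-based PSD test and the random test matrices are our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def is_psd(X, tol=1e-9):
    """A symmetric matrix is PSD iff all its eigenvalues are >= 0."""
    return np.all(np.linalg.eigvalsh(X) >= -tol)

# A = M M^T is always PSD, since x^T M M^T x = ||M^T x||^2 >= 0.
M = rng.standard_normal((4, 4))
N = rng.standard_normal((4, 4))
A, B = M @ M.T, N @ N.T

# S^n_+ is a convex cone: nonnegative combinations stay inside.
assert is_psd(A) and is_psd(B)
assert is_psd(2.0 * A + 0.5 * B)
```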

Proper Cones and Generalized Inequalities
A cone K is a proper cone if:
- K is convex
- K is closed
- K has nonempty interior
- K contains no lines (x ∈ K and −x ∈ K imply x = 0)
A proper cone K induces a generalized inequality (a partial ordering on R^n):
    x ⪯_K y  ⟺  y − x ∈ K
    x ≺_K y  ⟺  y − x ∈ int K

Examples
Nonnegative orthant and componentwise inequality: K = R^n_+ is a proper cone.
    x ⪯_K y means x_i ≤ y_i, i = 1, ..., n
    x ≺_K y means x_i < y_i, i = 1, ..., n
Positive semidefinite cone and matrix inequality: K = S^n_+ is a proper cone in the set of symmetric matrices S^n.
    X ⪯_K Y means Y − X is positive semidefinite
    X ≺_K Y means Y − X is positive definite

Properties of Generalized Inequalities
- If x ⪯_K y and u ⪯_K v, then x + u ⪯_K y + v
- If x ⪯_K y and y ⪯_K z, then x ⪯_K z
- If x ⪯_K y and α ≥ 0, then αx ⪯_K αy
- If x ⪯_K y and y ⪯_K x, then x = y
- If x_i ⪯_K y_i for i = 1, 2, ..., and x_i → x, y_i → y as i → ∞, then x ⪯_K y

Dual Cones
Given a cone K, the dual cone of K is:
    K* = {y : x^T y ≥ 0 for all x ∈ K}
K* is always a convex cone. If K is a proper cone, then K* is a proper cone and K** = K.
[Figure: a cone K and its dual K*; each y ∈ K* makes a nonobtuse angle with every x ∈ K.]
The nonnegative orthant is self-dual.

Dual PSD Cone
Consider the inner product ⟨X, Y⟩ = tr(XY) = ∑_{i,j} X_ij Y_ij on S^n. The PSD cone S^n_+ is self-dual:
    tr(XY) ≥ 0 for all X ⪰ 0  ⟺  Y ⪰ 0
Proof:
Forward direction: suppose Y is not PSD. Then there exists q ∈ R^n such that q^T Y q = tr(qq^T Y) < 0. Therefore X = qq^T ⪰ 0 satisfies tr(XY) < 0.
Reverse direction: suppose X, Y ⪰ 0. Express X by its eigenvalue decomposition X = ∑_{i=1}^n λ_i q_i q_i^T, where λ_i ≥ 0, i = 1, ..., n. Then
    tr(XY) = tr(Y ∑_{i=1}^n λ_i q_i q_i^T) = ∑_{i=1}^n λ_i q_i^T Y q_i ≥ 0
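Both directions of this proof can be exercised numerically. A sketch (the helper `rand_psd` and the specific indefinite Y are our own test fixtures):

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_psd(n):
    """A random PSD matrix M M^T."""
    M = rng.standard_normal((n, n))
    return M @ M.T

# Reverse direction: X, Y PSD  =>  tr(XY) >= 0.
for _ in range(100):
    assert np.trace(rand_psd(5) @ rand_psd(5)) >= -1e-9

# Forward direction (contrapositive): if Y is not PSD, the rank-one
# X = q q^T built from a negative-eigenvalue eigenvector gives tr(XY) < 0.
Y = np.diag([1.0, -0.5, 2.0])
w, V = np.linalg.eigh(Y)
q = V[:, np.argmin(w)]
X = np.outer(q, q)
assert np.trace(X @ Y) < 0
```

Here tr(XY) = q^T Y q, so picking q as the eigenvector of the negative eigenvalue reproduces the witness used in the proof.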

Dual Generalized Inequality
Given a proper cone K and its generalized inequality ⪯_K, the dual cone K* is also proper and induces the dual generalized inequality ⪯_{K*}.
Relationship between ⪯_K and ⪯_{K*}:
1. x ⪯_K y iff λ^T x ≤ λ^T y for all λ ⪰_{K*} 0
2. x ≺_K y iff λ^T x < λ^T y for all λ ⪰_{K*} 0, λ ≠ 0
Since K** = K, these properties hold with K and K* interchanged.

Generalized Inequality Induced Monotonicity
f : R^n → R is K-nondecreasing if x ⪯_K y implies f(x) ≤ f(y), and K-increasing if x ⪯_K y, x ≠ y imply f(x) < f(y).
First-order condition for differentiable f with convex domain:
    f is K-nondecreasing iff ∇f(x) ⪰_{K*} 0
    f is K-increasing if ∇f(x) ≻_{K*} 0
Example: for the PSD cone, tr(X^{−1}) is matrix decreasing on S^n_{++} and det X is matrix increasing on S^n_+.

Generalized Inequality Induced Convexity
f : R^n → R^m is K-convex if for all x, y and θ ∈ [0, 1],
    f(θx + (1−θ)y) ⪯_K θf(x) + (1−θ)f(y)
f : R^n → R^m is strictly K-convex if for all x ≠ y and θ ∈ (0, 1),
    f(θx + (1−θ)y) ≺_K θf(x) + (1−θ)f(y)
First-order condition: for differentiable f with convex domain, f is K-convex iff for all x, y ∈ dom f,
    f(y) ⪰_K f(x) + Df(x)(y − x)

Matrix Convexity
Let f be a symmetric-matrix-valued function, f : R^{n×m} → S^m. f is convex with respect to the matrix inequality if for any x, y ∈ dom f and θ ∈ [0, 1],
    f(θx + (1−θ)y) ⪯ θf(x) + (1−θ)f(y)
Equivalently, f is matrix convex iff the scalar-valued function z^T f(x) z is convex for all z.
- f(X) = XX^T is matrix convex
- f(X) = X^p is matrix convex for p ∈ [1, 2] or p ∈ [−1, 0], and matrix concave for p ∈ [0, 1]
- f(X) = e^X is not matrix convex (unless X is a scalar)

Generalized Inequality Constraints
Convex optimization with generalized inequality constraints on vector-valued functions:
    minimize f_0(x)
    subject to f_i(x) ⪯_{K_i} 0, i = 1, 2, ..., m
where f_0 : R^n → R and f_i : R^n → R^{k_i} are K_i-convex for proper cones K_i.
- The feasible set is convex
- Local optimality implies global optimality

Conic Programming
Linear programming with a linear generalized inequality constraint:
    minimize c^T x
    subject to Fx + G ⪯_K 0
               Ax = b
When K is the nonnegative orthant, the conic program reduces to an LP.
When K is the PSD cone, the inequality constraint is written as a Linear Matrix Inequality (LMI):
    x_1 F_1 + ... + x_n F_n + G ⪯ 0
where F_i, G ∈ S^k. When the F_i and G are diagonal, the LMI reduces to a set of linear inequalities.

SDP
SDP: minimize a linear objective subject to linear equalities and an LMI, over variables x ∈ R^n:
    minimize c^T x
    subject to x_1 F_1 + ... + x_n F_n + G ⪯ 0
               Ax = b
SDP in standard form: minimize a matrix inner product subject to equality constraints on inner products, over variables X ∈ S^n:
    minimize tr(CX)
    subject to tr(A_i X) = b_i, i = 1, 2, ..., p
               X ⪰ 0

LP and SOCP as SDP
LP as SDP:
    minimize c^T x
    subject to diag(Gx − h) ⪯ 0
               Ax = b
SOCP:
    minimize c^T x
    subject to ‖A_i x + b_i‖_2 ≤ c_i^T x + d_i, i = 1, ..., N
               Fx = g
SOCP as SDP:
    minimize c^T x
    subject to [ (c_i^T x + d_i) I    A_i x + b_i   ]
               [ (A_i x + b_i)^T      c_i^T x + d_i ] ⪰ 0, i = 1, ..., N
               Fx = g

Matrix Norm Minimization
Let A(x) = A_0 + x_1 A_1 + ... + x_n A_n, where A_i ∈ R^{p×q}. Consider unconstrained spectral norm (maximum singular value) minimization over x:
    minimize ‖A(x)‖_2
which is equivalent to convex optimization with an LMI in (x, t):
    minimize t
    subject to A(x)^T A(x) ⪯ t^2 I
which is equivalent to the SDP:
    minimize t
    subject to [ tI      A(x)^T ]
               [ A(x)    tI     ] ⪰ 0
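The equivalence between the block LMI and the spectral-norm bound (via the Schur complement) can be checked numerically for a fixed matrix. A sketch, using a random stand-in for A(x):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 4))      # a fixed A(x), with p = 3, q = 4
smax = np.linalg.norm(A, 2)          # spectral norm = largest singular value

def lmi_block(A, t):
    """The slide's block matrix [[t I, A^T], [A, t I]]; by the Schur
    complement it is PSD exactly when ||A||_2 <= t."""
    p_dim, q_dim = A.shape
    return np.block([[t * np.eye(q_dim), A.T],
                     [A, t * np.eye(p_dim)]])

def is_psd(X, tol=1e-9):
    return np.all(np.linalg.eigvalsh(X) >= -tol)

assert is_psd(lmi_block(A, smax + 1e-6))      # feasible just above ||A||_2
assert not is_psd(lmi_block(A, smax - 1e-3))  # infeasible just below
```

The eigenvalues of the block matrix are t ± σ_i(A) (plus copies of t), so feasibility flips exactly at t = ‖A‖_2, which is why minimizing t over this LMI recovers the spectral norm.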

Related Problems
SDP problems:
- Minimize the largest eigenvalue
- Minimize the sum of the r largest eigenvalues
- Maximize the log determinant
SDP approximations:
- Combinatorial optimization
- Rank minimization

ML Estimation
Estimate x ∈ R^n from a set of measurements y ∈ R^m:
    y_i = a_i^T x + v_i, i = 1, ..., m
where the v_i are i.i.d. noise with density p on R. By independence, the likelihood function is
    ∏_{i=1}^m p(y_i − a_i^T x)
The maximum likelihood estimate is any optimal point of the problem in x:
    maximize ∑_{i=1}^m log p(y_i − a_i^T x)
Since many distributions are log-concave, this is often a convex optimization problem.

Examples
Gaussian noise: v_i are Gaussian with zero mean and variance σ^2:
    p(z) = (2πσ^2)^{−1/2} exp(−z^2 / 2σ^2)
Log-likelihood function:
    −(m/2) log(2πσ^2) − (1/2σ^2) ‖Ax − y‖_2^2
where the rows of A are the a_i^T. Hence
    x̂ = argmin_x ‖Ax − y‖_2
the solution of a least-squares problem.
Laplacian noise: v_i are Laplacian:
    p(z) = (1/2a) exp(−|z|/a), a > 0
    x̂ = argmin_x ‖Ax − y‖_1
the solution of an ℓ_1-norm minimization.
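The Gaussian case can be demonstrated end to end on synthetic data (the sizes, `x_true`, and `sigma` below are made-up fixtures for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n, sigma = 50, 3, 0.1
A = rng.standard_normal((m, n))
x_true = np.array([1.0, -2.0, 0.5])
y = A @ x_true + sigma * rng.standard_normal(m)

# Under i.i.d. Gaussian noise the log-likelihood is
#   -(m/2) log(2*pi*sigma^2) - (1/(2 sigma^2)) * ||Ax - y||_2^2,
# so maximizing it over x is exactly least squares.
x_ml, *_ = np.linalg.lstsq(A, y, rcond=None)

# Same point from the normal equations (A^T A) x = A^T y.
x_ne = np.linalg.solve(A.T @ A, A.T @ y)
assert np.allclose(x_ml, x_ne)
```

The Laplacian case has no closed form; the ℓ_1 problem is typically solved as an LP.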

Covariance Estimation for Gaussian Distribution
y ∈ R^n is multivariate Gaussian with zero mean and covariance matrix R = E[yy^T] ∈ S^n_{++}, with density
    p_R(y) = (2π)^{−n/2} det(R)^{−1/2} exp(−y^T R^{−1} y / 2)
Goal: estimate the covariance R from N i.i.d. samples y_1, ..., y_N ∈ R^n.
Sample covariance matrix:
    Y = (1/N) ∑_{k=1}^N y_k y_k^T
The log-likelihood function log p_R(y_1, ..., y_N) is not concave in R:
    l(R) = −(Nn/2) log(2π) − (N/2) log det R − (N/2) tr(R^{−1} Y)
In terms of the information matrix S = R^{−1}, the log-likelihood function is concave in S:
    l(S) = −(Nn/2) log(2π) + (N/2) log det S − (N/2) tr(SY)
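Concavity of l(S) can be seen numerically: with no prior constraints, setting the gradient (N/2)(S^{−1} − Y) to zero gives the maximizer S = Y^{−1}, and every nearby positive definite S does worse. A sketch on synthetic data (the true covariance `R_true` and sample size are our own fixtures):

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 3, 2000
R_true = np.diag([2.0, 1.0, 0.5])
samples = rng.multivariate_normal(np.zeros(n), R_true, size=N)
Y = samples.T @ samples / N                   # sample covariance

def loglik(S):
    """l(S) up to the constant -(Nn/2) log(2 pi)."""
    sign, logdet = np.linalg.slogdet(S)
    assert sign > 0                           # require S positive definite
    return (N / 2) * (logdet - np.trace(S @ Y))

# Unconstrained maximizer of the concave l(S): S = Y^{-1}.
S_star = np.linalg.inv(Y)
for _ in range(20):
    P = 0.05 * rng.standard_normal((n, n))
    S_pert = S_star + (P + P.T) / 2           # symmetric perturbation
    if np.all(np.linalg.eigvalsh(S_pert) > 0):
        assert loglik(S_star) >= loglik(S_pert) - 1e-6
```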

Covariance Estimation for Gaussian Distribution
The ML estimate of S is found by solving the convex optimization:
    maximize log det S − tr(SY)
    subject to S ∈ 𝒮, S ≻ 0
where 𝒮 is a convex constraint set for S based on prior information about R, e.g.:
- bounds on R: L ⪯ R ⪯ U (equivalently, U^{−1} ⪯ S ⪯ L^{−1})
- a condition number constraint on R: λ_max(R) ≤ κ_max λ_min(R) (equivalently, uI ⪯ S ⪯ κ_max uI, with variables (S, u))

Multi-user Detection
Received signal of a K-user basic synchronous CDMA channel:
    y(t) = ∑_{k=1}^K A_k b_k s_k(t) + n(t), t ∈ [0, T]
with amplitudes A_k, signature waveforms s_k(t), information bits b_k ∈ {−1, +1}, noise n(t), and symbol period T.
ML detection: find the b that minimizes
    ∫_0^T [ y(t) − ∑_{k=1}^K A_k b_k s_k(t) ]^2 dt
Dropping the constant term ∫_0^T y(t)^2 dt, this is equivalent to minimizing
    ∫_0^T [ ∑_{k=1}^K A_k b_k s_k(t) ]^2 dt − 2 ∫_0^T [ ∑_{k=1}^K A_k b_k s_k(t) ] y(t) dt = b^T H b − 2 b^T A y
with H, A, y as defined on the next slide.

Boolean Constrained QP Formulation
- A = diag(A_1, ..., A_K): amplitude matrix
- R_ij = ∫_0^T s_i(t) s_j(t) dt: cross-correlation matrix
- y = RAb + n: sampled matched-filter output
With the notation H = ARA and p = −2Ay, ML detection is the boolean-constrained QP:
    minimize x^T H x + p^T x
    subject to x_i ∈ {−1, +1}, i = 1, ..., K
Equivalent form: let X = xx^T, so that x^T H x = tr(HX); the constraints become
    X_ii = 1, i = 1, ..., K, X ⪰ 0, rank(X) = 1

SDP Relaxation
Neglecting the rank constraint leaves the constraints
    X_ii = 1, i = 1, ..., K, X ⪰ 0
Let
    X̂ = [ xx^T  x ]        C = [ H       p/2 ]
         [ x^T   1 ],           [ p^T/2   1   ]
and reformulate as the SDP:
    minimize tr(C X̂)
    subject to X̂_ii = 1, i = 1, ..., K + 1
               X̂ ⪰ 0
Factorization and randomization methods recover a solution of the original boolean-constrained QP from the SDP relaxation solution.
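The randomization step can be sketched without an SDP solver: given a feasible X̂ of the relaxation, draw Gaussian samples with covariance X̂ and round them to signs. This is a sketch of the general idea, not the specific procedure of the reference; `randomized_rounding` and the tiny test instance are our own:

```python
import numpy as np

rng = np.random.default_rng(5)

def randomized_rounding(X_hat, H, p, trials=100):
    """Gaussian randomization: draw z ~ N(0, X_hat), round to signs,
    and keep the best boolean vector found.

    X_hat is a feasible point of the SDP relaxation, of size (K+1)x(K+1);
    the last coordinate plays the role of the homogenizing constant 1.
    """
    K = X_hat.shape[0] - 1
    L = np.linalg.cholesky(X_hat + 1e-9 * np.eye(K + 1))  # jitter: X_hat may be singular
    best_x, best_val = None, np.inf
    for _ in range(trials):
        z = L @ rng.standard_normal(K + 1)
        x = np.sign(z[:K]) * np.sign(z[K])   # undo the sign of the homogenizer
        val = x @ H @ x + p @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Sanity check on a rank-one feasible point: rounding recovers the
# planted objective value b^T H b + p^T b.
H = np.array([[2.0, 0.3, 0.0], [0.3, 1.0, 0.0], [0.0, 0.0, 1.0]])
p = np.array([0.5, -1.0, 0.0])
b = np.array([-1.0, 1.0, 1.0])
v = np.append(b, 1.0)
x_hat, val = randomized_rounding(np.outer(v, v), H, p)
```

When the SDP solution is not rank one, the same rounding is applied to its Cholesky (or eigen) factor, and the best sample over many trials is returned.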

Dual Relaxation
Relax x^T x = K to x^T x ≤ K:
    minimize x^T H x + p^T x
    subject to x^T x ≤ K
Lagrange dual problem:
    maximize −(1/4) p^T (H + λI)^{−1} p − λK
    subject to λ ≥ 0
A gradient method solves this convex optimization in the single variable λ.
Recover the primal optimal solution: x* = (H + λ* I)^{−1} A y
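The one-variable dual is easy to explore numerically. A sketch on random data (a coarse grid search stands in for the gradient method, and `H`, `p` are random stand-ins for ARA and −2Ay):

```python
import numpy as np

rng = np.random.default_rng(6)
K = 4
M = rng.standard_normal((K, K))
H = M @ M.T                       # stands in for H = ARA (PSD)
p = rng.standard_normal(K)

def g(lam):
    """Dual function of: minimize x^T H x + p^T x  s.t.  x^T x <= K."""
    return -0.25 * p @ np.linalg.solve(H + lam * np.eye(K), p) - lam * K

# g is concave in the single variable lam >= 0.
lams = np.linspace(0.0, 10.0, 2001)
lam_star = lams[np.argmax([g(l) for l in lams])]
x_star = -0.5 * np.linalg.solve(H + lam_star * np.eye(K), p)  # candidate primal point

# Weak duality: g(lam_star) lower-bounds the objective at every feasible x.
for _ in range(50):
    x = rng.standard_normal(K)
    nrm = np.linalg.norm(x)
    if nrm > np.sqrt(K):
        x = x * np.sqrt(K) / nrm  # pull back into the feasible ball
    assert g(lam_star) <= x @ H @ x + p @ x + 1e-9
```

Minimizing the Lagrangian over x gives x(λ) = −(1/2)(H + λI)^{−1} p, which with p = −2Ay is exactly the recovery formula x* = (H + λ* I)^{−1} A y on the slide.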

Complexity-Performance Tradeoff
- Exhaustive search: NP-hard, O(2^K)
- Successive interference cancellation
- Relaxation methods:
  - SDP relaxation: polynomial time, O(K^3) (the SDP solution dominates the randomization step in computational load)
  - Unconstrained relaxation: analytic solution (related to the decorrelator and linear MMSE detectors)
  - Bound relaxation
  - Dual relaxation
Is a better complexity-performance tradeoff possible through duality, randomization, and other relaxation methods?

Lecture Summary
- Proper cones induce generalized inequalities on R^n and S^n, which in turn induce generalized convex inequality constraints
- Convex optimization with generalized inequality constraints: conic programming
- SDP is conic programming over the PSD cone with LMI constraints; it includes LP, QP, QCQP, and SOCP as special cases
- Relaxations for multi-user detection problems
Reading: Sections 2.1, 2.6, 3.6, 4.6, 5.9 in Boyd and Vandenberghe.
L. Vandenberghe and S. Boyd, "Semidefinite programming," SIAM Review, March 1996.
W. K. Ma, T. N. Davidson, K. M. Wong, Z. Q. Luo, and P. C. Ching, "Quasi-maximum-likelihood multiuser detection using semidefinite relaxation with applications to synchronous CDMA," IEEE Trans. Signal Processing, vol. 50, no. 4, pp. 912-922, April 2002.