EE 150 - Applications of Convex Optimization in Signal Processing and Communications, Dr. Andre Tkacenko, JPL, Third Term 2011-2012


EE 150 - Applications of Convex Optimization in Signal Processing and Communications
Dr. Andre Tkacenko
JPL Third Term 2011-2012

Homework Set #4
Due on Thursday, May 3 in class.

1. (10 points) (Adapted from CO-BV Exercise 3.49) (Log-concave functions:) Show that the following functions are log-concave.

(a) Logistic function: f(x) = e^x / (1 + e^x), with dom(f) = R.

(b) Harmonic mean: f(x) = 1 / (1/x_1 + ... + 1/x_n), with dom(f) = R^n_{++}.

(c) Product over sum: f(x) = (x_1 x_2 ... x_n) / (x_1 + x_2 + ... + x_n), with dom(f) = R^n_{++}.

(d) Determinant over trace: f(X) = det(X) / tr(X), with dom(f) = S^n_{++}.

Hint: The Cauchy-Schwarz inequality may be useful here. It states that for any inner product ⟨·,·⟩ we have |⟨x, y⟩|^2 ≤ ⟨x, x⟩ ⟨y, y⟩, with equality if and only if x and y are linearly dependent, i.e., either y = Cx or x = Cy for some scalar C ∈ F.

2. (10 points) (Adapted from CO-BV Exercise 4.7) (Convex-affine/convex-concave fractional functions:)

(a) Convex-affine fractional functions: Consider a problem of the following form:

    minimize    f_0(x) / (c^T x + d)
    subject to  f_k(x) ≤ 0,  k = 1, ..., m,
                Ax = b,

where f_0(x), f_1(x), ..., f_m(x) are convex and the domain of the objective function is defined as {x ∈ dom(f_0) : c^T x + d > 0}.

i. Show that this is a quasiconvex optimization problem.
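Before attempting the proofs in Problem 1, the claims can be spot-checked numerically: a positive function is log-concave exactly when its logarithm is concave, so random midpoint tests on log f would expose any counterexample. A minimal sketch for parts (a) and (b) (a sanity check under arbitrarily chosen sample ranges, not a proof):

```python
import math
import random

def logistic(x):
    # f(x) = e^x / (1 + e^x); log f(x) = x - log(1 + e^x)
    return math.exp(x) / (1.0 + math.exp(x))

def harmonic_mean(x):
    # f(x) = 1 / (1/x_1 + ... + 1/x_n), on the positive orthant
    return 1.0 / sum(1.0 / xk for xk in x)

random.seed(0)
for _ in range(1000):
    # Midpoint concavity of log f: log f((u+v)/2) >= (log f(u) + log f(v)) / 2.
    u, v = random.uniform(-5, 5), random.uniform(-5, 5)
    mid = 0.5 * (u + v)
    assert math.log(logistic(mid)) >= \
        0.5 * (math.log(logistic(u)) + math.log(logistic(v))) - 1e-9

    # The same midpoint test for the harmonic mean on R^3_{++}.
    p = [random.uniform(0.1, 10) for _ in range(3)]
    q = [random.uniform(0.1, 10) for _ in range(3)]
    m = [0.5 * (pk + qk) for pk, qk in zip(p, q)]
    assert math.log(harmonic_mean(m)) >= \
        0.5 * (math.log(harmonic_mean(p)) + math.log(harmonic_mean(q))) - 1e-9

print("no midpoint log-concavity violations found")
```

A violation would trip an assertion; finding none is consistent with (though of course does not establish) log-concavity.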

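As a numerical illustration of part i, consider the hypothetical one-dimensional instance f_0(x) = x^2 + 1 with c = 1, d = 2, so the objective is (x^2 + 1)/(x + 2) on x > -2 (this instance is invented for illustration, not part of the problem data). Sampling its sublevel sets on a grid shows each one is an interval, as quasiconvexity requires; a sketch, not a proof:

```python
# Hypothetical 1-D instance: f0(x) = x^2 + 1 (convex), c^T x + d = x + 2 > 0.
def ratio(x):
    return (x * x + 1.0) / (x + 2.0)

n = 4001
xs = [-1.9 + 11.9 * k / (n - 1) for k in range(n)]  # grid on [-1.9, 10]

for alpha in [0.5, 1.0, 2.0, 5.0]:
    inside = [k for k, x in enumerate(xs) if ratio(x) <= alpha]
    # Sublevel sets of a quasiconvex function are convex; on a 1-D grid that
    # means the selected indices should form one contiguous run.
    if inside:
        assert inside == list(range(inside[0], inside[-1] + 1))

print("sampled sublevel sets are intervals")
```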
ii. Show that the problem is equivalent to

    minimize    g_0(y, t)
    subject to  g_k(y, t) ≤ 0,  k = 1, ..., m,
                Ay = bt,
                c^T y + t d = 1,

where g_k(y, t) is the perspective of f_k(x), i.e., g_k(y, t) = t f_k(y/t). The variables are y ∈ R^n and t ∈ R. Show that this problem is convex.

(b) Convex-concave fractional functions: Analogous to the problem analyzed above, consider a problem of the form:

    minimize    f_0(x) / h(x)
    subject to  f_k(x) ≤ 0,  k = 1, ..., m,
                Ax = b,

where f_0(x), f_1(x), ..., f_m(x) are convex, h(x) is concave, the domain of the objective function is defined as {x ∈ dom(f_0) ∩ dom(h) : h(x) > 0}, and f_0(x) ≥ 0 everywhere.

i. Show that this is a quasiconvex optimization problem.

ii. Show that the problem is equivalent to

    minimize    g_0(y, t)
    subject to  g_k(y, t) ≤ 0,  k = 1, ..., m,
                Ay = bt,
                h̃(y, t) ≥ 1,

where g_k(y, t) is the perspective of f_k(x) and h̃(y, t) is the perspective of h(x). Show that this problem is convex.

iii. As an example, apply the technique derived in the previous part to the unconstrained problem with

    f_0(x) = (1/m) tr(F(x)),    h(x) = (det(F(x)))^{1/m},

with dom(f_0/h) = {x : F(x) ≻ 0}, where F(x) = F_0 + x_1 F_1 + ... + x_n F_n for given F_k ∈ S^m. In this problem, we minimize the ratio of the arithmetic mean to the geometric mean of the eigenvalues of the affine matrix function F(x).

3. (10 points) (Adapted from CO-BV Exercise 4.24) (Complex l_1-, l_2-, and l_∞-norm approximation:) Consider the problem

    minimize    ||Ax - b||_p,

where A ∈ C^{m×n}, b ∈ C^m, and the variable is x ∈ C^n. Recall that the (real or complex) l_p-norm is defined by

    ||y||_p = ( |y_1|^p + ... + |y_m|^p )^{1/p}

for p ≥ 1, and ||y||_∞ = max_{k=1,...,m} |y_k|. For p = 1, 2, and ∞, express the complex l_p-norm approximation problem as a second-order cone program (SOCP) or quadratically constrained quadratic program (QCQP) with real variables and data.

4. (10 points) (Adapted from CO-AE Exercise 3.11) (Using Schur complements to express matrix-based optimization problems as semidefinite programs:) Formulate each of the following optimization problems as a semidefinite program (SDP). The variable is x ∈ R^n and the function F(x) is defined as F(x) = F_0 + x_1 F_1 + ... + x_n F_n with F_k ∈ S^m. The domain of f(x) in each subproblem is dom(f) = {x ∈ R^n : F(x) ≻ 0}.

(a) Minimize c^T (F(x))^{-1} c, where c ∈ R^m.

(b) Minimize max_{k=1,...,K} c_k^T (F(x))^{-1} c_k, where c_k ∈ R^m for k = 1, ..., K.

(c) Minimize sup_{||c||_2 ≤ 1} c^T (F(x))^{-1} c.

(d) Minimize E[c^T (F(x))^{-1} c], where c is a random vector with mean c̄ = E[c] and covariance S = E[(c - c̄)(c - c̄)^T]. Assume here that the covariance matrix has the representation S = s_1 s_1^T + ... + s_m s_m^T, where s_k ∈ R^m for k = 1, ..., m.

*5. (30 points) (Adapted from CO-AE Exercise 2.19) (Majorization and symmetric functions of eigenvalues:) Here, we use x_[k] to denote the k-th largest element of a vector x ∈ R^n. Thus, x_[1] ≥ x_[2] ≥ ... ≥ x_[n] are the elements of x sorted in decreasing order. We say that a vector y ∈ R^n majorizes a vector x ∈ R^n if

    y_[1] ≥ x_[1],
    y_[1] + y_[2] ≥ x_[1] + x_[2],
    y_[1] + y_[2] + y_[3] ≥ x_[1] + x_[2] + x_[3],
    ...
    y_[1] + y_[2] + ... + y_[n-1] ≥ x_[1] + x_[2] + ... + x_[n-1],
    y_[1] + y_[2] + ... + y_[n] = x_[1] + x_[2] + ... + x_[n].

In other words, the descending-ordered partial sums of y are greater than or equal to those of x, and the sums of the components of y and x are equal.

(a) It can be shown that y majorizes x if and only if there exists a doubly stochastic matrix P such that x = Py. A doubly stochastic matrix is one with nonnegative entries whose rows and columns each add up to unity:

    P_kl ≥ 0,  k, l = 1, ..., n,    P1 = 1,    1^T P = 1^T.
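The "if" direction of this characterization is easy to probe numerically: for any doubly stochastic P, the vector x = Py should be majorized by y. A sketch with an arbitrarily chosen 3×3 matrix (a convex combination of permutation matrices, hence doubly stochastic by construction; not data from the problem):

```python
import random

def majorizes(y, x, tol=1e-9):
    # Check the definition: descending partial sums of y dominate those of x,
    # with equality of the total sums.
    ys, xs = sorted(y, reverse=True), sorted(x, reverse=True)
    partial_y = partial_x = 0.0
    for k in range(len(y) - 1):
        partial_y += ys[k]
        partial_x += xs[k]
        if partial_y < partial_x - tol:
            return False
    return abs(sum(y) - sum(x)) <= tol

# P = 0.5*I + 0.3*C + 0.2*C^2, where C is the 3-cycle permutation matrix;
# a convex combination of permutation matrices is doubly stochastic.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]

random.seed(1)
for _ in range(100):
    y = [random.uniform(-5, 5) for _ in range(3)]
    x = [sum(P[k][l] * y[l] for l in range(3)) for k in range(3)]  # x = P y
    assert majorizes(y, x)

print("x = P y is majorized by y in all trials")
```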
Use this characterization to show the following: if f : R → R is a convex function and y majorizes x, then

    f(y_1) + f(y_2) + ... + f(y_n) ≥ f(x_1) + f(x_2) + ... + f(x_n).
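This inequality can also be checked empirically before proving it: take any doubly stochastic P, set x = Py (so that y majorizes x), and compare the two sums for a convex test function such as f(u) = u^2. A self-contained sketch (the matrix P below is an arbitrary doubly stochastic example, not data from the problem):

```python
import random

# Arbitrary doubly stochastic example: a convex combination of the identity
# and the two 3-cycles, so each row and each column sums to one.
P = [[0.6, 0.1, 0.3],
     [0.3, 0.6, 0.1],
     [0.1, 0.3, 0.6]]

f = lambda u: u * u  # a convex test function

random.seed(2)
for _ in range(200):
    y = [random.uniform(-4, 4) for _ in range(3)]
    x = [sum(P[k][l] * y[l] for l in range(3)) for k in range(3)]  # x = P y
    # Since y majorizes x, sum f(y_k) should dominate sum f(x_k).
    assert sum(f(yk) for yk in y) >= sum(f(xk) for xk in x) - 1e-9

print("sum f(y_k) >= sum f(x_k) in all trials")
```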

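The variational characterization (1) asked for in part (b) below can be sanity-checked ahead of the proof: for a 2×2 symmetric matrix and r = 1, the feasible point Z = vv^T built from a unit top eigenvector v (it satisfies Z ∈ S^2, 0 ⪯ Z ⪯ I, and tr(Z) = 1) should already attain λ_1(X). A sketch with hand-rolled 2×2 eigenvalue formulas:

```python
import math
import random

def top_eig_2x2(a, b, c):
    """Largest eigenvalue and a unit eigenvector of [[a, b], [b, c]]."""
    mean, radius = 0.5 * (a + c), math.hypot(0.5 * (a - c), b)
    lam = mean + radius
    if abs(b) < 1e-15:
        # Diagonal case: eigenvector is a standard basis vector.
        v = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        # From the second row of (X - lam*I) v = 0: v is parallel to (lam - c, b).
        nrm = math.hypot(lam - c, b)
        v = ((lam - c) / nrm, b / nrm)
    return lam, v

random.seed(3)
for _ in range(100):
    a, b, c = (random.uniform(-3, 3) for _ in range(3))
    lam, v = top_eig_2x2(a, b, c)
    # Z = v v^T is feasible for (1) with r = 1.
    Z = [[v[0] * v[0], v[0] * v[1]], [v[0] * v[1], v[1] * v[1]]]
    X = [[a, b], [b, c]]
    trXZ = sum(X[i][j] * Z[j][i] for i in range(2) for j in range(2))
    assert abs(trXZ - lam) < 1e-8  # tr(XZ) attains lambda_1(X)

print("tr(XZ) attains lambda_1 at Z = v v^T")
```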
(b) We use the notation λ_k(X) to denote the k-th largest eigenvalue of a matrix X ∈ S^n, so that λ_1(X) ≥ ... ≥ λ_n(X) are the eigenvalues of X sorted in decreasing order. Let r be any integer in the set {1, ..., n}. Show that

    λ_1(X) + ... + λ_r(X) = sup { tr(XZ) : Z ∈ S^n, 0 ⪯ Z ⪯ I, tr(Z) = r }.    (1)

What does this tell us about the convexity properties of the function g_r(X) = λ_1(X) + ... + λ_r(X) (i.e., the sum of the r largest eigenvalues of X)?

Hint: Show that the right-hand side of (1) is equal to the left-hand side by using an eigenvalue decomposition of X to reduce the maximization in (1) to a simple linear program whose solution is the sum of the r largest eigenvalues of X.

(c) Let X = θU + (1 - θ)V be a convex combination of two matrices U, V ∈ S^n. Use the results of the previous part to show that the vector

    a = θ [λ_1(U), ..., λ_n(U)]^T + (1 - θ) [λ_1(V), ..., λ_n(V)]^T

majorizes the vector

    b = [λ_1(X), ..., λ_n(X)]^T.

(d) Combine the results of parts (a) and (c) to show that if f : R → R is convex, then the function h : S^n → R defined as

    h(X) = f(λ_1(X)) + f(λ_2(X)) + ... + f(λ_n(X))

is convex. For example, by taking f(x) = x log(x), we can conclude that the function

    h(X) = λ_1(X) log(λ_1(X)) + ... + λ_n(X) log(λ_n(X))

is convex on S^n_{++}. This function arises in quantum information theory, where it is known as the (negative) von Neumann entropy. When X is diagonal, i.e., X = diag(x), it reduces to the negative Shannon entropy x_1 log(x_1) + ... + x_n log(x_n).

Reading assignments:

1. Read through Chapter 3 and begin Chapter 4 of CO-BV.

Reminders:

Late homework policy for EE 150: Late homeworks will not be accepted. There will be no exceptions to this, other than institute-established emergency reasons, in which case a signed letter is required from an authorized official.

NCT Problems: Remember that problems with an asterisk, such as *7, are no collaboration type

(NCT) problems.

Texts: The abbreviation CO-BV refers to the textbook Convex Optimization by Stephen Boyd and Lieven Vandenberghe. In addition, CO-AE refers to the Additional Exercises for Convex Optimization, also by Boyd and Vandenberghe. Finally, CVX refers to the CVX Users' Guide by Michael Grant and Stephen Boyd.