
CSCI 1951-G Optimization Methods in Finance
Part 10: Conic Optimization
April 6, 2018

This material is covered in the textbook, Chapters 9 and 10; some of it is taken from there. Some of the figures are from S. Boyd and L. Vandenberghe's book Convex Optimization, https://web.stanford.edu/~boyd/cvxbook/.

Outline
1. Cones and conic optimization
2. Converting quadratic constraints into cone constraints
3. Benchmark-relative portfolio optimization
4. Semidefinite programming
5. Approximating covariance matrices
6. SDP and approximation algorithms

Cones
A set $C$ is a cone if for every $x \in C$ and $\theta \ge 0$, $\theta x \in C$.
Example: $\{(x, |x|) : x \in \mathbb{R}\} \subseteq \mathbb{R}^2$.
Is this set convex? (No: it is the union of two rays, and, e.g., the segment between $(1,1)$ and $(-1,1)$ leaves the set.)

Convex Cones
A set $C$ is a convex cone if, for every $x_1, x_2 \in C$ and $\theta_1, \theta_2 \ge 0$, $\theta_1 x_1 + \theta_2 x_2 \in C$.
Example: [Figure 2.4, Boyd and Vandenberghe: the pie slice shows all points of the form $\theta_1 x_1 + \theta_2 x_2$, where $\theta_1, \theta_2 \ge 0$. The apex of the slice (which corresponds to $\theta_1 = \theta_2 = 0$) is at $0$; its edges (which correspond to $\theta_1 = 0$ or $\theta_2 = 0$) pass through the points $x_1$ and $x_2$.]

Conic optimization
Conic optimization problem in standard form:
  min   $c^T x$
  s.t.  $Ax = b$, $x \in C$
where $C$ is a convex cone in a finite-dimensional vector space $X$. Note: linear objective function, linear constraints.
If $X = \mathbb{R}^n$ and $C = \mathbb{R}^n_+$, then... we get an LP!
Conic optimization is a unifying framework for linear programming (LP), second-order cone programming (SOCP), and semidefinite programming (SDP).
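
To make the unification concrete, here is a minimal sketch, assuming the cvxpy and numpy packages and toy data of my choosing: the same template min $c^T x$ s.t. $Ax = b$, $x \in C$ is solved twice, changing only the cone $C$.

```python
import cvxpy as cp
import numpy as np

c = np.array([3.0, 1.0, 1.0])
A = np.array([[1.0, 1.0, 1.0]])   # one linear constraint: x1 + x2 + x3 = 1
b = np.array([1.0])

x = cp.Variable(3)

# C = R^3_+ (nonnegative orthant): the conic program is a plain LP.
lp = cp.Problem(cp.Minimize(c @ x), [A @ x == b, x >= 0])
lp.solve()
print("LP optimum:  ", lp.value)     # 1.0, attained e.g. at (0, 1, 0)

# C = the second-order cone {x : x_1 >= ||(x_2, x_3)||_2}: an SOCP.
socp = cp.Problem(cp.Minimize(c @ x),
                  [A @ x == b, cp.norm(x[1:], 2) <= x[0]])
socp.solve()
print("SOCP optimum:", socp.value)   # 2*sqrt(2) - 1, about 1.828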

Norm cones
Let $\|\cdot\|$ be any norm on $\mathbb{R}^{n-1}$. The norm cone associated to $\|\cdot\|$ is the set
  $C = \{x = (x_1, \ldots, x_n) : x_1 \ge \|(x_2, \ldots, x_n)\|\}$
It is a convex set.

Second-order cone in $\mathbb{R}^3$
The second-order cone is the norm cone for the Euclidean norm $\|\cdot\|_2$.
[Figure 2.10, Boyd and Vandenberghe: boundary of the second-order cone in $\mathbb{R}^3$, $\{(x_1, x_2, t) : (x_1^2 + x_2^2)^{1/2} \le t\}$.]
What happens when we slice the second-order cone? I.e., when we take the intersection with a hyperplane? We obtain ellipsoidal sets.
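
As a quick worked check of that claim (my own computation, not from the slides), slice the cone with the tilted hyperplane $t = 1 + x_1/2$:

```latex
% Intersecting \{(x_1, x_2, t) : \sqrt{x_1^2 + x_2^2} \le t\} with t = 1 + x_1/2:
\sqrt{x_1^2 + x_2^2} \le 1 + \tfrac{x_1}{2}
\iff x_1^2 + x_2^2 \le 1 + x_1 + \tfrac{x_1^2}{4}
\iff \tfrac{3}{4}\left(x_1 - \tfrac{2}{3}\right)^2 + x_2^2 \le \tfrac{4}{3}
```

an ellipse; the horizontal slice $t = 1$ gives the unit disk $x_1^2 + x_2^2 \le 1$.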

Rewriting constraints
Let's rewrite $C = \{x = (x_1, \ldots, x_n) : x_1 \ge \|(x_2, \ldots, x_n)\|_2\}$ as
  $x_1 \ge 0$,  $x_1^2 - x_2^2 - \cdots - x_n^2 \ge 0$
This is a combination of a linear and a quadratic constraint.
Also: convex quadratic constraints can be expressed as second-order cone membership constraints.

Rewriting constraints
Quadratic constraint:
  $x^T P x + 2 q^T x + \gamma \le 0$
Assume $P$ w.l.o.g. positive definite, so the constraint is... convex. Also assume, for technical reasons, that $q^T P^{-1} q - \gamma \ge 0$.
Goal: rewrite the above constraint as a combination of linear and second-order cone membership constraints.

Rewriting constraints
Because $P$ is positive definite, it has a Cholesky decomposition: there is an invertible $R$ s.t. $P = R R^T$. Rewrite the constraint as:
  $(R^T x)^T (R^T x) + 2 q^T x + \gamma \le 0$
Let $y = (y_1, \ldots, y_n)^T = R^T x + R^{-1} q$.
The above is a bijection between $x$ and $y$. We are going to rewrite the constraint as a constraint on $y$.

Rewriting constraints
The constraint:
  $(R^T x)^T (R^T x) + 2 q^T x + \gamma \le 0$
It holds that
  $y^T y = (R^T x)^T (R^T x) + 2 q^T x + q^T P^{-1} q$
Since there is a bijection between $y$ and $x$, the constraint can be satisfied if and only if
  $\exists y$ s.t. $y = R^T x + R^{-1} q$, $y^T y \le q^T P^{-1} q - \gamma$

Rewriting constraints
The constraint is equivalent to:
  $\exists y$ s.t. $y = R^T x + R^{-1} q$, $y^T y \le q^T P^{-1} q - \gamma$
Let's denote by $y_0$ the square root of the r.h.s. of the right inequality:
  $y_0 = \sqrt{q^T P^{-1} q - \gamma} \in \mathbb{R}_+$
Consider the vector $(y_0, y_1, \ldots, y_n)$. The right inequality then is
  $y_0^2 \ge y^T y = \sum_{i=1}^n y_i^2$
Taking the square root on both sides:
  $y_0 \ge \sqrt{\sum_{i=1}^n y_i^2} = \|y\|_2$
This is the membership constraint for the second-order cone in $\mathbb{R}^{n+1}$.

Rewriting constraints
We rewrite the convex quadratic constraint
  $x^T P x + 2 q^T x + \gamma \le 0$
as
  $(y_1, \ldots, y_n)^T = R^T x + R^{-1} q$
  $y_0 = \sqrt{q^T P^{-1} q - \gamma} \in \mathbb{R}_+$
  $(y_0, y_1, \ldots, y_n) \in C$
which is a combination of linear and second-order cone membership constraints.
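
A hedged numerical sketch of this rewriting (my own construction, assuming cvxpy and numpy; the data is random): we minimize a linear function over the quadratic constraint, but hand the constraint to the solver in its second-order cone form, then check the original inequality at the optimum.

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n = 4
M = rng.standard_normal((n, n))
P = M @ M.T + n * np.eye(n)                      # positive definite
q = rng.standard_normal(n)
gamma = float(q @ np.linalg.solve(P, q)) - 1.0   # so q^T P^{-1} q - gamma = 1 >= 0

R = np.linalg.cholesky(P)                        # P = R R^T, R invertible
y0 = np.sqrt(float(q @ np.linalg.solve(P, q)) - gamma)

x = cp.Variable(n)
y = R.T @ x + np.linalg.solve(R, q)              # y = R^T x + R^{-1} q, affine in x
prob = cp.Problem(cp.Minimize(cp.sum(x)),
                  [cp.norm(y, 2) <= y0])         # (y0, y) in the second-order cone
prob.solve()

# Sanity check: the optimum satisfies the original quadratic constraint.
xv = x.value
print(float(xv @ P @ xv + 2 * q @ xv + gamma) <= 1e-6)  # True, up to solver tolerance
```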

Outline
1. Cones and conic optimization
2. Converting quadratic constraints into cone constraints
3. Benchmark-relative portfolio optimization
4. Semidefinite programming
5. Approximating covariance matrices
6. SDP and approximation algorithms

Benchmark-relative portfolio optimization
Given a benchmark strategy $x_B$ (e.g., an index), develop a portfolio $x$ that tracks $x_B$ but adds value by beating it. I.e., we want a portfolio $x$ with positive expected excess return:
  $\mu^T (x - x_B) > 0$
and specifically we want to maximize the expected excess return.
Challenge: balance expected excess return with its variance.

Tracking error and volatility constraints
The (predicted) tracking error of the portfolio $x$ is
  $TE(x) = \sqrt{(x - x_B)^T \Sigma (x - x_B)}$
It measures the variability of excess returns.
In benchmark-relative portfolio optimization, we solve mean-variance optimization w.r.t. the expected excess return and the tracking error:
  max   $\mu^T (x - x_B)$
  s.t.  $(x - x_B)^T \Sigma (x - x_B) \le T^2$
        $Ax = b$

Comparison with mean-variance optimization
We have seen MVO as:
  min   $\frac{1}{2} x^T \Sigma x$
  s.t.  $\mu^T x \ge R$, $Ax = b$
or
  max   $\mu^T x - \frac{\delta}{2} x^T \Sigma x$
  s.t.  $Ax = b$
How do they differ from
  max   $\mu^T (x - x_B)$
  s.t.  $(x - x_B)^T \Sigma (x - x_B) \le T^2$, $Ax = b$ ?
The latter is not a standard quadratic program: it has a nonlinear constraint.

  max   $\mu^T (x - x_B)$
  s.t.  $(x - x_B)^T \Sigma (x - x_B) \le T^2$, $Ax = b$
The nonlinear constraint is... convex quadratic!
We can rewrite it as a combination of linear and second-order cone membership constraints, and solve the resulting convex conic problem.
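
A hedged end-to-end sketch (toy data of my choosing, assuming cvxpy and numpy; the solver performs the cone rewriting internally), with a single budget constraint playing the role of $Ax = b$:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(2)
n = 6
mu = rng.normal(0.05, 0.02, n)            # toy expected returns
F = rng.standard_normal((n, n))
Sigma = F @ F.T / n + 0.01 * np.eye(n)    # positive definite toy covariance
x_B = np.ones(n) / n                      # benchmark: equal weights
T = 0.05                                  # tracking-error budget

x = cp.Variable(n)
excess = x - x_B
prob = cp.Problem(
    cp.Maximize(mu @ excess),
    [cp.sum(x) == 1,                        # fully invested (the Ax = b here)
     cp.quad_form(excess, Sigma) <= T**2])  # (x - x_B)^T Sigma (x - x_B) <= T^2
prob.solve()
print("expected excess return:", prob.value)
```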

Outline
1. Cones and conic optimization
2. Converting quadratic constraints into cone constraints
3. Benchmark-relative portfolio optimization
4. Semidefinite programming
5. Approximating covariance matrices
6. SDP and approximation algorithms

SemiDefinite Programming (SDP)
The variables are the entries of a symmetric matrix, constrained to lie in the cone of positive semidefinite matrices.
[Figure 2.12, Boyd and Vandenberghe: boundary of the positive semidefinite cone in $S^2$.]

Application: approximating covariance matrices
Portfolio optimization almost always requires covariance matrices. These are not directly available, but must be estimated.
Estimating covariance matrices is a very challenging task, mathematically and computationally, because the matrices must satisfy various properties (e.g., symmetry, positive semidefiniteness). To be efficient, many estimation methods do not impose such problem-dependent constraints.
Typically, one is then interested in finding the smallest distortion of the original estimate that satisfies the desired constraints.

Application: approximating covariance matrices
Let $\hat\Sigma \in S^n$ be an estimate of a covariance matrix: $\hat\Sigma$ is symmetric ($\in S^n$) but not positive semidefinite.
Goal: find the positive semidefinite matrix that is closest to $\hat\Sigma$ w.r.t. the Frobenius norm:
  $d_F(\Sigma, \hat\Sigma) = \sqrt{\sum_{i,j} (\Sigma_{ij} - \hat\Sigma_{ij})^2}$
Formally, the nearest covariance matrix problem:
  $\min_\Sigma$  $d_F(\Sigma, \hat\Sigma)$
  s.t.  $\Sigma \in C^n_s$
where $C^n_s$ is the cone of $n \times n$ symmetric and positive semidefinite matrices.

Application: approximating covariance matrices
  $\min_\Sigma$  $d_F(\Sigma, \hat\Sigma)$
  s.t.  $\Sigma \in C^n_s$
Introduce a dummy variable $t$ and rewrite the problem as
  min   $t$
  s.t.  $d_F(\Sigma, \hat\Sigma) \le t$, $\Sigma \in C^n_s$
The first constraint can be written as a second-order cone constraint, so the problem is transformed into a conic optimization problem.

Application: approximating covariance matrices
Variation of the problem with additional linear constraints:
Let $E \subseteq \{(i, j) : 1 \le i, j \le n\}$, and let $(l_{ij}, u_{ij})$, for $(i, j) \in E$, be lower/upper bounds to impose on the entries. We want to solve:
  $\min_\Sigma$  $d_F(\Sigma, \hat\Sigma)$
  s.t.  $l_{ij} \le \Sigma_{ij} \le u_{ij}$, $(i, j) \in E$
        $\Sigma \in C^n_s$

Application: approximating covariance matrices
For example, let $\hat\Sigma$ be an estimate of a correlation matrix. Correlation matrices have all diagonal entries equal to 1. We want to solve the nearest correlation matrix problem: we choose
  $E = \{(i, i) : 1 \le i \le n\}$,  $l_{ii} = u_{ii} = 1$, $1 \le i \le n$
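
A hedged sketch of this problem (assuming cvxpy and numpy; the indefinite estimate below is a toy example of my choosing, not from the slides):

```python
import cvxpy as cp
import numpy as np

# A symmetric "estimate" with unit diagonal but a negative eigenvalue.
Sigma_hat = np.array([[1.0, 0.9, 0.0],
                      [0.9, 1.0, 0.9],
                      [0.0, 0.9, 1.0]])
print(np.linalg.eigvalsh(Sigma_hat).min())   # about -0.27: not PSD

Sigma = cp.Variable((3, 3), symmetric=True)
prob = cp.Problem(
    cp.Minimize(cp.norm(Sigma - Sigma_hat, "fro")),  # Frobenius distance
    [Sigma >> 0,                # membership in the PSD cone C^n_s
     cp.diag(Sigma) == 1])      # E = the diagonal, l_ii = u_ii = 1
prob.solve()
print(np.round(Sigma.value, 3))  # the nearest correlation matrix
```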

Application: approximating covariance matrices
Many other variants are possible:
- force some entries of $\hat\Sigma$ to remain the same in $\Sigma$;
- weight the changes to different entries differently, because we trust some more than others;
- impose a lower bound on the minimum eigenvalue of $\Sigma$, to reduce instability.
All of these can be easily solved with SDP software.

Outline
1. Cones and conic optimization
2. Converting quadratic constraints into cone constraints
3. Benchmark-relative portfolio optimization
4. Semidefinite programming
5. Approximating covariance matrices
6. SDP and approximation algorithms

A different point of view on SDP
An $n \times n$ matrix $A$ is positive semidefinite if and only if there are vectors $x_1, \ldots, x_n$ such that $A_{ij} = x_i^T x_j$.
We can then write a semidefinite program as a program involving only linear combinations of the inner products of the vectors $x_1, \ldots, x_n$:
  min   $\sum_{i,j \in [n]} c_{ij}\, x_i^T x_j$
  s.t.  $\sum_{i,j \in [n]} a_{ijk}\, x_i^T x_j \le b_k$, for each $k$
This form is particularly useful to develop approximation algorithms.

The MaxCut problem
Given a graph $G = (V, E)$, output a 2-partition of $V$ so as to maximize the number of edges crossing from one side to the other.
Integer quadratic program:
  max   $\sum_{(i,j) \in E} \frac{1 - v_i v_j}{2}$
  s.t.  $v_i \in \{-1, 1\}$, $1 \le i \le n$
The decision version of the problem is NP-complete.

The MaxCut problem
Steps for an approximation algorithm for MaxCut:
1. Relax the original problem to an SDP;
2. Solve the SDP;
3. Round the SDP solution to obtain an integer solution to the original problem.

The MaxCut problem
Integer quadratic program:
  max   $\sum_{(i,j) \in E} \frac{1 - v_i v_j}{2}$
  s.t.  $v_i \in \{-1, 1\}$, $1 \le i \le n$
SDP relaxation:
  max   $\sum_{(i,j) \in E} \frac{1 - v_i^T v_j}{2}$
  s.t.  $\|v_i\|_2^2 = 1$, $1 \le i \le n$, $v_i \in \mathbb{R}^n$
It is a relaxation: the optimal objective value will be at least as large as the one for the original problem.

The MaxCut problem
  max   $\sum_{(i,j) \in E} \frac{1 - v_i^T v_j}{2}$
  s.t.  $\|v_i\|_2^2 = 1$, $1 \le i \le n$, $v_i \in \mathbb{R}^n$
The optimal solution is a set of unit vectors in $\mathbb{R}^n$. To obtain a solution for the original problem, we need to round this solution and assign each vector to one value in $\{-1, 1\}$.
Goemans and Williamson, 1995: choose a random hyperplane through the origin, and split the vectors depending on which side of the hyperplane they fall.
Approximation ratio: $0.87856 - \epsilon$ (essentially optimal).
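
A hedged sketch of the whole pipeline (my own implementation outline, assuming cvxpy and numpy, on a toy graph): solve the relaxation in its Gram-matrix form, factor the solution to recover the unit vectors, and round with one random hyperplane. In practice one would take the best of several random hyperplanes.

```python
import cvxpy as cp
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy graph on n = 4 vertices
n = 4

# SDP relaxation in Gram-matrix form: X_ij = v_i^T v_j, X PSD, X_ii = 1.
X = cp.Variable((n, n), symmetric=True)
objective = sum((1 - X[i, j]) / 2 for (i, j) in edges)
prob = cp.Problem(cp.Maximize(objective), [X >> 0, cp.diag(X) == 1])
prob.solve()

# Recover unit vectors v_i (columns of V) with v_i^T v_j ~ X_ij.
w, U = np.linalg.eigh(X.value)
V = np.diag(np.sqrt(np.maximum(w, 0))) @ U.T

# Hyperplane rounding: each vertex goes to the side of a random hyperplane.
rng = np.random.default_rng(3)
r = rng.standard_normal(n)                  # normal vector of the hyperplane
side = np.sign(r @ V)                       # +1 / -1 assignment per vertex
cut = sum(1 for (i, j) in edges if side[i] != side[j])
print("SDP bound:", prob.value, "| rounded cut value:", cut)
```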

Outline
1. Cones and conic optimization
2. Converting quadratic constraints into cone constraints
3. Benchmark-relative portfolio optimization
4. Semidefinite programming
5. Approximating covariance matrices
6. SDP and approximation algorithms