IE 521 Convex Optimization Homework #1 Solution

your NAME here (your NetID here)

February 13, 2019

Instructions. Homework is due Wednesday, February 6, at 1:00 pm; no late homework is accepted. Please use the provided LaTeX file as a template. You can discuss with others, but please write your own solutions.

Problem 1: Norms and Dual Norms

A real-valued function on $\mathbb{R}^n$ is called a norm, denoted $\|\cdot\|$, if it satisfies the three properties:

(positivity): $\forall x \in \mathbb{R}^n$, $\|x\| \ge 0$; $\|x\| = 0$ if and only if $x = 0$;

(homogeneity): $\forall x \in \mathbb{R}^n, \alpha \in \mathbb{R}$: $\|\alpha x\| = |\alpha| \, \|x\|$;

(triangle inequality): $\forall x, y \in \mathbb{R}^n$: $\|x + y\| \le \|x\| + \|y\|$.

The standard norms on $\mathbb{R}^n$ are the $\ell_p$-norms ($p \ge 1$):

$$\|x\|_p := \Big( \sum_{i=1}^n |x_i|^p \Big)^{1/p}$$

[Euclidean norm]: $p = 2$, $\|x\|_2 := \sqrt{\sum_{i=1}^n x_i^2}$;

[Manhattan norm]: $p = 1$, $\|x\|_1 := \sum_{i=1}^n |x_i|$;

[Maximum norm]: $p = \infty$, $\|x\|_\infty := \max_{1 \le i \le n} |x_i|$.

The dual norm of $\|\cdot\|$ on $\mathbb{R}^n$ is defined as

$$\|x\|_* = \max_{y \in \mathbb{R}^n : \|y\| \le 1} x^T y.$$

Exercise 1.1 (Definition) Prove that $\|\cdot\|_*$ is a valid norm.

(a) The positivity property holds because

$$\|x\|_* = \max_{y \in \mathbb{R}^n : \|y\| \le 1} x^T y \ge x^T 0 = 0.$$

Moreover, if $x \ne 0$, then $y = x / \|x\|$ is feasible and gives $x^T y = \|x\|_2^2 / \|x\| > 0$, so $\|x\|_* = 0$ if and only if $x = 0$.

The homogeneity property holds because (using the symmetry of the unit ball when $\alpha < 0$)

$$\|\alpha x\|_* = \max_{y \in \mathbb{R}^n : \|y\| \le 1} \alpha x^T y = |\alpha| \max_{y \in \mathbb{R}^n : \|y\| \le 1} x^T y = |\alpha| \, \|x\|_*.$$

For the triangle inequality, we have

$$\|x + z\|_* = \max_{y \in \mathbb{R}^n : \|y\| \le 1} (x + z)^T y = \max_{y \in \mathbb{R}^n : \|y\| \le 1} (x^T y + z^T y).$$

Let $y^*$ be the optimal solution of the above. Then we have

$$\|x + z\|_* = x^T y^* + z^T y^* \le \max_{y : \|y\| \le 1} x^T y + \max_{y : \|y\| \le 1} z^T y = \|x\|_* + \|z\|_*.$$

Exercise 1.2 (Example) Denote $\|x\|_{p,*}$ as the dual norm to the $\ell_p$-norm ($p \ge 1$). Show that $\|x\|_{p,*} = \|x\|_q$, where $\frac{1}{p} + \frac{1}{q} = 1$. Hence, $\|\cdot\|_{2,*} = \|\cdot\|_2$, $\|\cdot\|_{1,*} = \|\cdot\|_\infty$, $\|\cdot\|_{\infty,*} = \|\cdot\|_1$. Hint: Use Hölder's inequality.
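As a quick numerical sanity check (not part of the required solution), the sketch below verifies the three norm axioms for a dual norm. It takes the primal norm to be $\ell_\infty$, so that the inner maximization $\max_{\|y\|_\infty \le 1} x^T y$ is a linear program over a box; it assumes NumPy and SciPy are available, and the function name dual_norm_inf is our own.

```python
import numpy as np
from scipy.optimize import linprog

def dual_norm_inf(x):
    """Dual norm of ||.||_inf: maximize x^T y over the box ||y||_inf <= 1.
    linprog minimizes, so we pass -x; the optimum equals ||x||_1."""
    res = linprog(c=-x, bounds=[(-1.0, 1.0)] * len(x))
    return -res.fun

rng = np.random.default_rng(0)
x, z = rng.normal(size=5), rng.normal(size=5)
alpha = -2.5

# the three axioms of Exercise 1.1
assert dual_norm_inf(x) >= 0                                                # positivity
assert np.isclose(dual_norm_inf(alpha * x), abs(alpha) * dual_norm_inf(x))  # homogeneity
assert dual_norm_inf(x + z) <= dual_norm_inf(x) + dual_norm_inf(z) + 1e-9   # triangle

# consistent with Exercise 1.2: the dual of l_inf is l_1
assert np.isclose(dual_norm_inf(x), np.linalg.norm(x, 1))
print("all dual-norm axiom checks passed")
```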

(b) For $1 < p, q < \infty$, we have

$$\|x\|_{p,*} = \max_{y \in \mathbb{R}^n : \|y\|_p \le 1} x^T y.$$

Now consider the following bound from Hölder's inequality:

$$x^T y \le \sum_{i=1}^n |x_i y_i| \le \|y\|_p \, \|x\|_q \le \|x\|_q.$$

The remaining part is to find a $y$ for which equality holds. Let $z_i = \operatorname{sign}(x_i) |x_i|^{q-1}$ for all $i \in [1, n]$. We calculate

$$\sum_{i=1}^n x_i z_i = \sum_{i=1}^n x_i \operatorname{sign}(x_i) |x_i|^{q-1} = \sum_{i=1}^n |x_i|^q = \|x\|_q^q.$$

Further, since $(q - 1)p = q$, we calculate

$$\|z\|_p^p = \sum_{i=1}^n |z_i|^p = \sum_{i=1}^n |x_i|^{(q-1)p} = \sum_{i=1}^n |x_i|^q = \|x\|_q^q.$$

Now construct $y = z / \|z\|_p$, so that $\|y\|_p = 1$. Then we have

$$x^T y = \sum_{i=1}^n x_i \frac{z_i}{\|z\|_p} = \frac{1}{\|z\|_p} \sum_{i=1}^n x_i z_i = \frac{\|x\|_q^q}{\|x\|_q^{q/p}} = \|x\|_q,$$

where the last step uses $q - q/p = 1$. Therefore, $\|x\|_{p,*} = \|x\|_q$.

The cases $p = 1, q = \infty$ and $p = \infty, q = 1$ are the trivial ones. The proof is similar, pairing $1$ and $\infty$ accordingly.
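The equality case of this proof can be checked numerically: the sketch below evaluates $x^T y$ at the maximizer $y = z / \|z\|_p$ constructed above and confirms the value equals $\|x\|_q$ for a few values of $p$. This is an illustrative check assuming NumPy; dual_norm_lp is a name we introduce here.

```python
import numpy as np

def dual_norm_lp(x, p):
    """Evaluate x^T y at the maximizer constructed in the proof:
    z_i = sign(x_i) |x_i|^(q-1), y = z / ||z||_p, with 1/p + 1/q = 1."""
    q = p / (p - 1.0)
    z = np.sign(x) * np.abs(x) ** (q - 1.0)
    y = z / np.linalg.norm(z, p)
    assert np.isclose(np.linalg.norm(y, p), 1.0)  # y is on the unit sphere
    return x @ y

rng = np.random.default_rng(1)
x = rng.normal(size=6)
for p in (1.5, 2.0, 3.0):
    q = p / (p - 1.0)
    # the value attained at y matches ||x||_q, so ||x||_{p,*} >= ||x||_q;
    # Hölder's inequality gives the reverse direction
    assert np.isclose(dual_norm_lp(x, p), np.linalg.norm(x, q))
print("constructed maximizer attains ||x||_q for all tested p")
```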

Problem 2: Convex Sets

Exercise 2.1 (Unit Ball) The unit ball of any norm $\|\cdot\|$ is the set $B_{\|\cdot\|} = \{x \in \mathbb{R}^n : \|x\| \le 1\}$. One can easily see that $B_{\|\cdot\|}$ is symmetric w.r.t. the origin ($x \in B_{\|\cdot\|}$ if and only if $-x \in B_{\|\cdot\|}$) and compact (closed and bounded) with nonempty interior. Show that the set $B_{\|\cdot\|}$ is convex.

Let $x, y \in B_{\|\cdot\|}$ and $\lambda \in [0, 1]$. Then we have

$$\|\lambda x + (1 - \lambda) y\| \le \|\lambda x\| + \|(1 - \lambda) y\| = \lambda \|x\| + (1 - \lambda) \|y\| \le \lambda + (1 - \lambda) = 1.$$

Therefore, $\lambda x + (1 - \lambda) y \in B_{\|\cdot\|}$. Hence, $B_{\|\cdot\|}$ is a convex set.

Exercise 2.2 (Vice Versa) Let $B$ be convex, symmetric w.r.t. the origin, and compact with nonempty interior. Show that the following function

$$\|x\|_B = \inf\{t > 0 : x \in tB\}$$

is a valid norm. Moreover, its unit ball is exactly the set $B$.

The positivity property holds because $\|x\|_B = \inf\{t > 0 : x \in tB\} \ge 0$. If $\|x\|_B = 0$, then $x \in tB$ for all $t > 0$; since $B$ is bounded, this forces $x = 0$.

The homogeneity property holds because, for $\alpha \ne 0$ (using the symmetry of $B$ when $\alpha < 0$),

$$\|\alpha x\|_B = \inf\{t > 0 : \alpha x \in tB\} = \inf\{|\alpha| s > 0 : x \in sB\} = |\alpha| \inf\{s > 0 : x \in sB\} = |\alpha| \, \|x\|_B,$$

and for $\alpha = 0$ both sides are zero.

To show the triangle inequality, it is sufficient to show that

$$\frac{x + y}{\|x\|_B + \|y\|_B} \in B.$$

By the definition of $\|x\|_B$ and $\|y\|_B$ (and the closedness of $B$), $\frac{x}{\|x\|_B} \in B$ and $\frac{y}{\|y\|_B} \in B$. Since $B$ is a convex set,

$$\frac{x + y}{\|x\|_B + \|y\|_B} = \frac{\|x\|_B}{\|x\|_B + \|y\|_B} \cdot \frac{x}{\|x\|_B} + \frac{\|y\|_B}{\|x\|_B + \|y\|_B} \cdot \frac{y}{\|y\|_B} \in B.$$

Moreover,

$$\{x \in \mathbb{R}^n : \|x\|_B \le 1\} = \{x \in \mathbb{R}^n : \inf\{t > 0 : x \in tB\} \le 1\} = \{x \in \mathbb{R}^n : x \in B\} = B.$$

The last step comes from the fact that if $x \notin B$, then (by the closedness of $B$) $\inf\{t > 0 : x \in tB\} > 1$, so $x$ does not belong to the left-hand set either.

Exercise 2.3 (Neighborhood of Convex Sets) Let $X \subseteq \mathbb{R}^n$ be a convex set and let $\epsilon > 0$. The $\epsilon$-neighborhood of the set $X$ under norm $\|\cdot\|$ is defined as

$$X_\epsilon = \{x \in \mathbb{R}^n : \inf_{y \in X} \|x - y\| \le \epsilon\}.$$

Prove that $X_\epsilon$ is a convex set.
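The gauge $\|x\|_B$ can also be computed numerically whenever membership in $B$ is easy to test. The sketch below (an illustration, not part of the solution) takes $B$ to be a symmetric convex polytope $\{x : |c_i^T x| \le 1\}$ of our own choosing and evaluates the infimum by bisection on $t$; for this choice the gauge has the closed form $\max_i |c_i^T x|$, which the code checks against.

```python
import numpy as np

# a symmetric, compact, convex B with nonempty interior, given as
# B = {x : |c_i^T x| <= 1}; membership is easy to test (illustrative choice)
C = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
in_B = lambda x: np.all(np.abs(C @ x) <= 1.0 + 1e-12)

def gauge(x, hi=1e6, tol=1e-10):
    """Minkowski gauge ||x||_B = inf{t > 0 : x in tB}, by bisection on t."""
    lo = 0.0
    while hi - lo > tol:
        t = 0.5 * (lo + hi)
        if t > 0 and in_B(x / t):
            hi = t   # x in tB: shrink the interval from above
        else:
            lo = t   # x not in tB: grow t
    return hi

x = np.array([0.3, -0.8])
# for this polyhedral B the gauge equals max_i |c_i^T x|
assert np.isclose(gauge(x), np.max(np.abs(C @ x)))
# the unit ball of the gauge is B itself
assert (gauge(x) <= 1.0) == in_B(x)
print("gauge(x) =", gauge(x))
```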

Let $x$ and $y$ be two arbitrary elements in $X_\epsilon$. Then for any $\epsilon' > \epsilon$, there exist $u, v \in X$ such that $\|x - u\| \le \epsilon'$ and $\|y - v\| \le \epsilon'$. For any $\lambda \in [0, 1]$, we have $\lambda u + (1 - \lambda) v \in X$ by the convexity of $X$, and

$$\big\| [\lambda x + (1 - \lambda) y] - [\lambda u + (1 - \lambda) v] \big\| \le \lambda \|x - u\| + (1 - \lambda) \|y - v\| \le \epsilon'.$$

Since this holds for every $\epsilon' > \epsilon$, we get $\inf_{w \in X} \|\lambda x + (1 - \lambda) y - w\| \le \epsilon$. This implies that $\lambda x + (1 - \lambda) y \in X_\epsilon$, so $X_\epsilon$ is convex.
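As an optional sanity check, one can sample points of $X_\epsilon$ and verify that their convex combinations stay inside. The sketch below takes $X = [0,1]^2$ with the $\ell_2$ norm, where the distance to $X$ is available in closed form via coordinate clipping; the setup is purely illustrative.

```python
import numpy as np

# X = [0,1]^2 (convex); the l2-distance to X is attained at the clipped projection
dist = lambda p: np.linalg.norm(p - np.clip(p, 0.0, 1.0))
eps = 0.25
in_X_eps = lambda p: dist(p) <= eps + 1e-9

rng = np.random.default_rng(2)
# rejection-sample points of X_eps, then test convex combinations
pts = [p for p in rng.uniform(-1.0, 2.0, size=(4000, 2)) if in_X_eps(p)]
for _ in range(1000):
    i, j = rng.integers(len(pts), size=2)
    lam = rng.uniform()
    # convex combinations of points of X_eps remain in X_eps
    assert in_X_eps(lam * pts[i] + (1 - lam) * pts[j])
print("checked convex combinations of", len(pts), "sampled points of X_eps")
```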

Problem 3: Strong Duality

Recall that in class we have shown the

[Farkas Lemma] Let $A \in \mathbb{R}^{m \times n}$ and $b \in \mathbb{R}^m$. Exactly one of the two sets must be empty:

(i) $\{x \in \mathbb{R}^n : Ax = b, \ x \ge 0\}$

(ii) $\{y \in \mathbb{R}^m : A^T y \le 0, \ b^T y > 0\}$

Exercise 3.1 (Variant of Farkas Lemma) Prove the following variant of Farkas Lemma: exactly one of the following sets must be empty:

(i) $\{x \in \mathbb{R}^n : Ax \le b\}$

(ii) $\{y \in \mathbb{R}^m : y \ge 0, \ A^T y = 0, \ b^T y < 0\}$

We first show that system (i) feasible $\Rightarrow$ system (ii) infeasible. Suppose there is an $x \in \mathbb{R}^n$ such that $Ax \le b$. Then for any $y \ge 0$, we have

$$x^T (A^T y) = (Ax)^T y \le b^T y.$$

If system (ii) were feasible, there would exist $y \ge 0$ with $A^T y = 0$ and $b^T y < 0$, which implies

$$0 = x^T (A^T y) \le b^T y < 0,$$

a contradiction. Therefore, (ii) is infeasible.

We now show that system (i) infeasible $\Rightarrow$ system (ii) feasible. We can rewrite $Ax \le b$ as $A(x^+ - x^-) + s = b$, where $(x^+, x^-, s) \ge 0$. Hence,

$$\{x : Ax \le b\} \ne \emptyset \iff \left\{ (x^+, x^-, s) \ge 0 : \begin{bmatrix} A & -A & I \end{bmatrix} \begin{bmatrix} x^+ \\ x^- \\ s \end{bmatrix} = b \right\} \ne \emptyset.$$

By Farkas Lemma (replacing $y$ by $-y$ in alternative (ii) of the lemma), the infeasibility of this system implies that there exists $y$ such that

$$b^T y < 0, \qquad \begin{bmatrix} A & -A & I \end{bmatrix}^T y \ge 0.$$

This implies $y \ge 0$, $A^T y = 0$, $b^T y < 0$; namely, system (ii) is feasible. Overall, exactly one of the two sets is empty.

Exercise 3.2 (Duality of Linear Programs) The above lemma can be used to derive strong duality of linear programs. Consider the primal and dual pair of linear programs:

$$\min_x \ c^T x \quad \text{s.t.} \quad Ax = b, \ x \ge 0 \qquad (P)$$

$$\max_y \ b^T y \quad \text{s.t.} \quad A^T y \le c \qquad (D)$$

Assume that problem $(P)$ has an optimal solution $x^*$ with finite optimal value $p^*$.

(a) Let $y$ be a feasible solution to $(D)$; show that $b^T y \le p^*$.

(b) Apply the above variant of Farkas Lemma to show that the system $\{y : A^T y \le c, \ y^T b \ge p^*\}$ is nonempty; namely, there exists $y$ that satisfies

$$\begin{bmatrix} A^T \\ -b^T \end{bmatrix} y \le \begin{bmatrix} c \\ -p^* \end{bmatrix}.$$
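The two alternatives of the variant can be exhibited numerically: the sketch below builds a small infeasible system $Ax \le b$ and then finds a certificate $y$ for system (ii) by solving an auxiliary LP. It assumes SciPy's linprog; the example data and the extra bound $y \le 1$ (used only to keep the auxiliary LP bounded) are our own choices.

```python
import numpy as np
from scipy.optimize import linprog

# an infeasible instance of system (i): x <= 0 and x >= 1 in R^1
A = np.array([[1.0], [-1.0]])
b = np.array([0.0, -1.0])

# system (i): is {x : Ax <= b} nonempty?
res_i = linprog(c=np.zeros(1), A_ub=A, b_ub=b, bounds=[(None, None)])
print("system (i) feasible:", res_i.status == 0)   # False here

# system (ii): find y >= 0 with A^T y = 0 and b^T y < 0, by minimizing b^T y
# over {y : A^T y = 0, 0 <= y <= 1}; the box keeps this auxiliary LP bounded
res_ii = linprog(c=b, A_eq=A.T, b_eq=np.zeros(1), bounds=[(0.0, 1.0)] * 2)
y = res_ii.x
print("certificate y =", y, ", b^T y =", b @ y)    # y = [1, 1], b^T y = -1 < 0
```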

(a) Let $y$ be a feasible solution to $(D)$, so $A^T y \le c$. Since $x^* \ge 0$,

$$b^T y = (Ax^*)^T y = (x^*)^T (A^T y) \le (x^*)^T c = p^*.$$

(b) Suppose the above system is not feasible. By the variant of Farkas Lemma, there exists $\begin{bmatrix} \lambda \\ \lambda_0 \end{bmatrix} \ge 0$, $\begin{bmatrix} \lambda \\ \lambda_0 \end{bmatrix} \ne 0$, such that

$$\begin{cases} A\lambda - \lambda_0 b = 0 \\ \lambda^T c - \lambda_0 p^* < 0 \end{cases}$$

If $\lambda_0 > 0$, then $A(\lambda / \lambda_0) = b$, $\lambda / \lambda_0 \ge 0$, and $c^T (\lambda / \lambda_0) < p^*$, which contradicts $p^*$ being the optimal value.

If $\lambda_0 = 0$, then $A\lambda = 0$, $\lambda \ge 0$, and $\lambda^T c < 0$. Then we can construct a feasible solution $x = x^* + \lambda$ that achieves a smaller objective,

$$c^T x = c^T (x^* + \lambda) < c^T x^* = p^*,$$

which contradicts the optimality of $x^*$.

Therefore, the assumption is false, i.e., the above system must be feasible.

Combining (a) and (b), we see that if problem $(P)$ is solvable, then the dual problem $(D)$ is also solvable, and the optimal values are the same.
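Strong duality is easy to observe numerically: the sketch below solves a random standard-form primal and its dual with SciPy's linprog and checks that the optimal values coincide. The random instance is constructed to be feasible and bounded; this is an illustration, not a proof.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(3)
# random standard-form primal: min c^T x s.t. Ax = b, x >= 0,
# made feasible (b = A x0 with x0 > 0) and bounded below (c > 0, x >= 0)
A = rng.normal(size=(3, 6))
b = A @ rng.uniform(0.5, 1.5, size=6)
c = rng.uniform(1.0, 2.0, size=6)

primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 6)

# dual: max b^T y s.t. A^T y <= c; linprog minimizes, so negate the objective
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 3)

p_star, d_star = primal.fun, -dual.fun
print(f"p* = {p_star:.6f}, d* = {d_star:.6f}")
assert np.isclose(p_star, d_star)   # strong duality: equal optimal values
```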

Problem 4: Helly's Theorem and Duality

Exercise 4.1 (Special Case of Helly's Theorem) Using linear programming duality, prove Helly's theorem for the special case when the convex sets are halfspaces.

When the convex sets are halfspaces, the equivalent statement is the following: if $m$ halfspaces in $\mathbb{R}^d$, $m > d$, do not have a point in common, then there exist some $d + 1$ of them that do not have a point in common either.

Define the $m$ halfspaces by $Ax \le b$ with $A \in \mathbb{R}^{m \times d}$. If they do not have a point in common, then the following linear program has no feasible solution:

$$\max \ 0 \quad \text{s.t.} \quad Ax \le b$$

By LP duality, its dual problem

$$\min \ b^T y \quad \text{s.t.} \quad A^T y = 0, \ y \ge 0$$

is either infeasible or unbounded. Since $y = 0$ is feasible for the dual, the dual must be unbounded.

With respect to the simplex method, an unbounded problem means that there exists some basic feasible solution with an entering variable but no leaving variable (see Section 3.2 of Introduction to Linear Optimization). Assume, without loss of generality, that $y_1, y_2, \dots, y_d$ are the basic variables for this basic feasible solution; there are $d$ of these because the dual has $d$ equality constraints. Assume that $y_{d+1}$ is the entering variable for which there is no leaving variable. By increasing the value of $y_{d+1}$ enough, we can obtain a solution $y$ such that:

1. $y$ is feasible;
2. $b^T y < 0$;
3. the remaining non-basic variables satisfy $y_{d+2} = y_{d+3} = \dots = y_m = 0$.

Now, let $A_D^T$, $y_D$, and $b_D$ denote the restrictions of $A^T$, $y$, and $b$, respectively, to the variables $y_1, y_2, \dots, y_{d+1}$. Because the remaining non-basic variables are zero, we have $A_D^T y_D = 0$ and $b_D^T y_D < 0$ (and $y_D \ge 0$ as well).

Finally, consider the intersection of the $d + 1$ halfspaces defined by $A_D x \le b_D$. If there existed an $x$ in this intersection, then, using $y_D \ge 0$,

$$0 = x^T (A_D^T y_D) = (A_D x)^T y_D \le b_D^T y_D < 0,$$

which is a contradiction. Therefore, the intersection of the $d + 1$ halfspaces defined by $A_D x \le b_D$ is empty, and so we have found a set of $d + 1$ of the $m$ halfspaces that do not have a point in common.
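The statement can also be demonstrated computationally: given halfspaces with empty intersection, a brute-force search over all subsets of size $d + 1$ finds a subsystem that is already infeasible, as Helly's theorem guarantees. The sketch below assumes SciPy's linprog; the four halfspaces are an illustrative example of our own in $\mathbb{R}^2$.

```python
import numpy as np
from itertools import combinations
from scipy.optimize import linprog

def feasible(A, b):
    """Check whether {x : Ax <= b} is nonempty via an LP feasibility solve."""
    res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1])
    return res.status == 0

# m = 4 halfspaces in R^2 (d = 2) with empty common intersection:
# x1 >= 1, x2 >= 1, x1 + x2 <= 1, plus one redundant halfspace x1 <= 5
A = np.array([[-1.0, 0.0], [0.0, -1.0], [1.0, 1.0], [1.0, 0.0]])
b = np.array([-1.0, -1.0, 1.0, 5.0])
assert not feasible(A, b)

# Helly guarantees some d + 1 = 3 of them already have empty intersection
for S in combinations(range(len(b)), 3):
    if not feasible(A[list(S)], b[list(S)]):
        print("infeasible subsystem of size d+1: rows", S)   # expect (0, 1, 2)
        break
```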