Homework Set #6 - Solutions
EE 150 - Applications of Convex Optimization in Signal Processing and Communications
Dr. Andre Tkacenko, JPL
Third Term 2011-12

Homework Set #6 - Solutions

1. (a) The feasible set is the interval $[2, 4]$. The unique optimal point (or solution) is $x^\star = 2$, and the optimal value is $p^\star = 5$.

(b) Here, we have $f_0(x) = x^2 + 1$ and $f_1(x) = (x-2)(x-4) = x^2 - 6x + 8$. Thus, the Lagrangian $L(x, \lambda)$ is given by
$$L(x, \lambda) = f_0(x) + \lambda f_1(x) = (1 + \lambda) x^2 - 6\lambda x + 8\lambda + 1.$$
A plot of the objective function, the constraint function, the feasible set, the optimal point and value, and the Lagrangian $L(x, \lambda)$ for several positive values of $\lambda$ is shown in Figure 1. From the plot, it is clear that the minimum value of $L(x, \lambda)$ over $x$, i.e., $g(\lambda)$, is always less than $p^\star$. It increases as $\lambda$ varies from $0$, reaches its maximum at $\lambda = 2$, and then decreases again as $\lambda$ increases above $2$. We have equality, namely $p^\star = g(\lambda)$, for $\lambda = 2$.

Figure 1: Plot of the objective function $f_0(x)$, the constraint function $f_1(x)$, the feasible set $[2, 4]$, and the optimal point and value $(x^\star, p^\star) = (2, 5)$, along with the Lagrangian $L(x, \lambda)$ for several positive values of $\lambda$.

For $\lambda > -1$, the Lagrangian is a convex parabola and reaches its minimum at $x = 3\lambda/(1 + \lambda)$. On the other hand, for $\lambda \le -1$, the Lagrangian is a concave parabola and as such is unbounded below. Thus, the dual function $g(\lambda)$ is given by
$$g(\lambda) = \begin{cases} -\dfrac{9\lambda^2}{1 + \lambda} + 8\lambda + 1, & \lambda > -1 \\ -\infty, & \lambda \le -1. \end{cases}$$
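As a quick numerical sanity check (our own sketch, not part of the original solution), the closed form for $g(\lambda)$ can be compared against a brute-force minimization of the Lagrangian over a grid of $x$, and weak duality $g(\lambda) \le p^\star$ verified along the way:

```python
# Numerical check: for several lambda > -1, minimize
# L(x, lam) = (1 + lam) x^2 - 6 lam x + 8 lam + 1 over a grid of x and compare
# against the closed form g(lam) = -9 lam^2 / (1 + lam) + 8 lam + 1.
p_star = 5.0

def lagrangian(x, lam):
    return (1.0 + lam) * x**2 - 6.0 * lam * x + 8.0 * lam + 1.0

def g_closed_form(lam):
    return -9.0 * lam**2 / (1.0 + lam) + 8.0 * lam + 1.0

xs = [i / 100.0 for i in range(-1000, 1001)]  # grid on [-10, 10]
for lam in [0.5, 1.0, 2.0, 3.0]:
    g_grid = min(lagrangian(x, lam) for x in xs)
    assert abs(g_grid - g_closed_form(lam)) < 1e-3   # closed form matches grid
    assert g_grid <= p_star + 1e-9                   # weak duality: g(lam) <= p*

assert abs(g_closed_form(2.0) - p_star) < 1e-12      # equality at lam = 2
```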
A plot of the dual function $g(\lambda)$ is shown in Figure 2.

Figure 2: Plot of the dual function $g(\lambda)$ versus $\lambda$.

We can verify that the dual function is concave, that its value is equal to $p^\star = 5$ for $\lambda = 2$, and less than $p^\star$ for other values of $\lambda$. Specifically, taking the derivative of $g(\lambda)$ with respect to $\lambda$ and setting the result to zero leads to the condition
$$(\lambda - 2)(\lambda + 4) = 0,$$
which for $\lambda > -1$ can only be satisfied for $\lambda = 2$. For $\lambda = 2$, we have $g(2) = 5 = p^\star$, indeed.

(c) The Lagrange dual problem is

maximize $-\dfrac{9\lambda^2}{\lambda + 1} + 8\lambda + 1$
subject to $\lambda \ge 0$.

A second derivative test on the objective $g(\lambda)$ yields
$$g''(\lambda) = -\frac{18}{(\lambda + 1)^3} < 0, \quad \lambda > -1.$$
Thus, the objective is a concave function of $\lambda$ over the domain of the problem, and so the dual problem is indeed a concave maximization problem. Setting the derivative of the objective with respect to $\lambda$ equal to zero yields the condition $(\lambda - 2)(\lambda + 4) = 0$, which for $\lambda \ge 0$ can only be satisfied for $\lambda = 2$. We have $g(2) = d^\star = 5$ here. Hence, the dual optimal solution is $\lambda^\star = 2$ and the dual optimal value is $d^\star = 5$. As $p^\star = 5$, we can verify that strong duality indeed holds for this example, as it must, since Slater's constraint qualification is satisfied.
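The stationarity and concavity claims above can also be spot-checked numerically (a sketch of our own, using central differences):

```python
# Check: the dual objective g(lam) = -9 lam^2/(lam+1) + 8 lam + 1 has
# g'(2) = 0, g(2) = 5, and g''(lam) = -18/(lam+1)^3 < 0 for lam > -1.
def g(lam):
    return -9.0 * lam**2 / (lam + 1.0) + 8.0 * lam + 1.0

h = 1e-4
d1 = (g(2.0 + h) - g(2.0 - h)) / (2.0 * h)           # central difference ~ g'(2)
assert abs(d1) < 1e-6
assert abs(g(2.0) - 5.0) < 1e-12

for lam in [0.0, 1.0, 2.0, 10.0]:
    d2 = (g(lam + h) - 2.0 * g(lam) + g(lam - h)) / h**2   # ~ g''(lam)
    assert d2 < 0.0                                   # concave on lam > -1
```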
(d) The perturbed problem is infeasible for $u < -1$, since the infimum of $f_1(x)$ over all $x \in \mathbb{R}$ is equal to $-1$. For $u \ge -1$, the feasible set is the interval
$$\left[\, 3 - \sqrt{u + 1},\ 3 + \sqrt{u + 1} \,\right],$$
given by the two roots of $x^2 - 6x + 8 = u$. For $-1 \le u \le 8$, the optimal solution is $x^\star(u) = 3 - \sqrt{u + 1}$. On the other hand, for $u > 8$, the optimum is the unconstrained minimum of $f_0$, i.e., $x^\star(u) = 0$. In summary, we have the following:
$$x^\star(u) = \begin{cases} \text{undefined}, & u < -1 \\ 3 - \sqrt{u + 1}, & -1 \le u \le 8 \\ 0, & u > 8 \end{cases} \qquad p^\star(u) = \begin{cases} \infty, & u < -1 \\ 11 + u - 6\sqrt{u + 1}, & -1 \le u \le 8 \\ 1, & u > 8. \end{cases}$$
A plot of the optimal value function $p^\star(u)$ and its epigraph is shown in Figure 3. In addition, a supporting hyperplane (or, more appropriately, a line in this case) at $u = 0$ is shown.

Figure 3: Plot of the optimal value $p^\star(u)$ as a function of the perturbation $u$. Included with this is a plot of the epigraph of the optimal value, along with a supporting hyperplane $p^\star - \lambda^\star u$ at $u = 0$ corresponding to the nominal optimal value.

From our expression for $p^\star(u)$ above, it can be seen that it is a differentiable function of $u$ at $u = 0$. Note that we have
$$\frac{dp^\star(u)}{du} = 1 - \frac{3}{\sqrt{u + 1}} \quad \text{for } -1 < u < 8.$$
Thus, we have
$$\left.\frac{dp^\star}{du}\right|_{u=0} = -2 = -\lambda^\star.$$

2. (a) Consider the objective function of the LP, namely $x^T y$. Note that we have the following:
$$x^T y = \sum_{k=1}^{n} x_k y_k = \sum_{k=1}^{n} x_{[k]} y_{i_k},$$
where $i_k$, for $k = 1, \ldots, n$, is a permutation of the index set $\{1, \ldots, n\}$ corresponding to the elements of $x$ arranged in descending order. In other words, we have $x_{i_k} = x_{[k]}$ for all $k$. Under the constraints that $0 \le y_k \le 1$ and $\sum_{k=1}^{n} y_k = r$, it is clear that we have
$$x^T y = \sum_{k=1}^{r} x_{[k]} y_{i_k} + \sum_{k=r+1}^{n} x_{[k]} y_{i_k} \le \sum_{k=1}^{r} x_{[k]} = f(x).$$
In other words, the upper bound on $x^T y$ subject to the constraints on $y$ is obtained by extracting the $r$ largest components of $x$ and assigning the maximum possible weight of one to each of these components. But this bound can be achieved by setting $y_{i_1} = \cdots = y_{i_r} = 1$ and $y_{i_{r+1}} = \cdots = y_{i_n} = 0$. Hence, $f(x)$ is equal to the optimal value of the LP

maximize $x^T y$
subject to $0 \preceq y \preceq \mathbf{1}$, $\mathbf{1}^T y = r$,

with $y \in \mathbb{R}^n$ as the variable. This can be more compactly written as
$$f(x) = \sup\left\{ x^T y : 0 \preceq y \preceq \mathbf{1},\ \mathbf{1}^T y = r \right\}.$$

(b) First, change the objective from maximization to minimization as follows:

minimize $-x^T y$
subject to $0 \preceq y \preceq \mathbf{1}$, $\mathbf{1}^T y = r$.

Let us introduce a Lagrange multiplier $\lambda$ for the lower bound on $y$ (i.e., $y \succeq 0$), $u$ for the upper bound on $y$ (i.e., $y \preceq \mathbf{1}$), and $t$ for the equality constraint (namely, $\mathbf{1}^T y = r$). This yields the Lagrangian
$$L(y, \lambda, u, t) = -x^T y - \lambda^T y + u^T (y - \mathbf{1}) + t\left( \mathbf{1}^T y - r \right) = -rt - \mathbf{1}^T u + \left( -x - \lambda + u + t\mathbf{1} \right)^T y.$$
Minimizing over $y$ yields the dual function
$$g(\lambda, u, t) = \begin{cases} -rt - \mathbf{1}^T u, & -x - \lambda + u + t\mathbf{1} = 0 \\ -\infty, & \text{otherwise.} \end{cases}$$
The dual problem is to maximize $g$ subject to $\lambda \succeq 0$ and $u \succeq 0$. This yields the problem

maximize $-rt - \mathbf{1}^T u$
subject to $\lambda = t\mathbf{1} + u - x$, $\lambda \succeq 0$, $u \succeq 0$.

By changing the objective to minimization (by minimizing the negative of the objective above) and eliminating the slack variable $\lambda$ in the equality constraint, we obtain the problem

minimize $rt + \mathbf{1}^T u$
subject to $t\mathbf{1} + u \succeq x$, $u \succeq 0$,

as desired.
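Parts (a) and (b) can be checked together numerically (our own sketch, not from the text): for fixed $t$, the best feasible $u$ in the dual is $u_k = \max(x_k - t, 0)$, giving the one-dimensional dual value $h(t) = rt + \sum_k \max(x_k - t, 0)$, a piecewise-linear convex function of $t$ whose minimum over its breakpoints should equal $f(x)$, the sum of the $r$ largest components:

```python
import random

# Check that the minimized dual value equals f(x) = sum of r largest components,
# and that every feasible t gives an upper bound on f(x) (weak duality).
random.seed(0)

def f_sum_r_largest(x, r):
    return sum(sorted(x, reverse=True)[:r])

def dual_value(x, r, t):
    # for fixed t, the optimal u is u_k = max(x_k - t, 0)
    return r * t + sum(max(xk - t, 0.0) for xk in x)

for _ in range(20):
    n = 8
    r = random.randint(1, n)
    x = [random.uniform(-5.0, 5.0) for _ in range(n)]
    primal = f_sum_r_largest(x, r)
    dual = min(dual_value(x, r, t) for t in x)   # breakpoints of h(t) suffice
    assert abs(primal - dual) < 1e-9
    assert all(dual_value(x, r, t) >= primal - 1e-9 for t in (-10.0, 0.0, 10.0))
```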
(c) The constraint that no more than half of the total power is in any $\lfloor m/2 \rfloor$ lamps is equivalent to the condition
$$\sum_{k=1}^{\lfloor m/2 \rfloor} p_{[k]} \le \frac{1}{2} \sum_{k=1}^{m} p_k = \frac{1}{2} \mathbf{1}^T p.$$
In other words, if $n = m$ and $r = \lfloor m/2 \rfloor$, then the constraint is equivalent to $f(p) \le \alpha$, where $\alpha = \frac{1}{2}\mathbf{1}^T p$. But from the results of part (b), we know that this condition is true if and only if there exist $s \in \mathbb{R}$ and $q \in \mathbb{R}^m$ such that
$$\lfloor m/2 \rfloor\, s + \mathbf{1}^T q \le \alpha, \quad s\mathbf{1} + q \succeq p, \quad q \succeq 0.$$
Thus, with this constraint in effect, the patch illumination problem becomes

minimize $t$
subject to $(1/t)\, I_{\mathrm{des}} \le a_k^T p \le t\, I_{\mathrm{des}}$, $k = 1, \ldots, n$
$0 \preceq p \preceq p_{\max} \mathbf{1}$
$\alpha = \frac{1}{2}\mathbf{1}^T p$
$\lfloor m/2 \rfloor\, s + \mathbf{1}^T q \le \alpha$, $s\mathbf{1} + q \succeq p$, $q \succeq 0$,

with variables $t \in \mathbb{R}$, $p \in \mathbb{R}^m$, $\alpha \in \mathbb{R}$, $s \in \mathbb{R}$, and $q \in \mathbb{R}^m$. Eliminating the trivial equality constraint $\alpha = \frac{1}{2}\mathbf{1}^T p$, we obtain

minimize $t$
subject to $(1/t)\, I_{\mathrm{des}} \le a_k^T p \le t\, I_{\mathrm{des}}$, $k = 1, \ldots, n$
$0 \preceq p \preceq p_{\max} \mathbf{1}$
$\lfloor m/2 \rfloor\, s + \mathbf{1}^T q \le \frac{1}{2}\mathbf{1}^T p$, $s\mathbf{1} + q \succeq p$, $q \succeq 0$,

with variables $t \in \mathbb{R}$, $p \in \mathbb{R}^m$, $s \in \mathbb{R}$, and $q \in \mathbb{R}^m$, as desired.

3. Introducing $\lambda$ for the inequality constraint $x \succeq 0$, $\nu$ for the scalar equality constraint $\mathbf{1}^T x = 1$, and $z$ for the vector equality constraint $Px = y$, the Lagrangian becomes
$$L(x, y, \lambda, \nu, z) = c^T x + \sum_{i=1}^{m} y_i \log y_i - \lambda^T x + \nu\left( \mathbf{1}^T x - 1 \right) + z^T \left( Px - y \right) = \left( c - \lambda + \nu\mathbf{1} + P^T z \right)^T x + \sum_{i=1}^{m} \left( y_i \log y_i - z_i y_i \right) - \nu.$$
The minimum over $x$ is bounded below if and only if
$$c - \lambda + \nu\mathbf{1} + P^T z = 0.$$
To minimize over $y$, we set the derivative with respect to $y_i$ equal to zero, which gives
$$1 + \log y_i - z_i = 0 \quad \Longrightarrow \quad y_i = e^{z_i - 1},$$
so that each term contributes $e^{z_i - 1}(z_i - 1) - z_i e^{z_i - 1} = -e^{z_i - 1}$. Thus, the dual function is
$$g(\lambda, \nu, z) = \begin{cases} -\sum_{i=1}^{m} e^{z_i - 1} - \nu, & c - \lambda + \nu\mathbf{1} + P^T z = 0 \\ -\infty, & \text{otherwise} \end{cases} = \begin{cases} -\sum_{i=1}^{m} e^{z_i - 1} - \nu, & \lambda = P^T z + \nu\mathbf{1} + c \\ -\infty, & \text{otherwise.} \end{cases}$$
Hence, the dual problem is

maximize $-\sum_{i=1}^{m} e^{z_i - 1} - \nu$
subject to $P^T z + \nu\mathbf{1} + c \succeq 0$.

This can be simplified by introducing a new variable $w = z + \nu\mathbf{1}$ and exploiting the fact that $\mathbf{1} = P^T \mathbf{1}$. Specifically, the dual problem becomes

maximize $-\sum_{i=1}^{m} e^{w_i - \nu - 1} - \nu$
subject to $P^T w + c \succeq 0$.

Finally, we can maximize the objective function with respect to $\nu$ by setting the derivative equal to zero. This yields
$$\nu = \log\left( \sum_{i=1}^{m} e^{w_i} \right) - 1.$$
From this, the objective function becomes
$$-e^{-\nu - 1} \sum_{i=1}^{m} e^{w_i} - \nu = -1 - \nu = -\log\left( \sum_{i=1}^{m} e^{w_i} \right).$$
As such, the dual problem can be simplified to

maximize $-\log\left( \sum_{i=1}^{m} e^{w_i} \right)$
subject to $P^T w + c \succeq 0$,

which can be written as

minimize $\log\left( \sum_{i=1}^{m} e^{w_i} \right)$
subject to $P^T w + c \succeq 0$,

with variable $w \in \mathbb{R}^m$. Note that this is a GP in convex form with linear inequality constraints (i.e., monomial inequality constraints in the associated standard form GP).
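Two one-dimensional steps of this derivation are easy to verify numerically (our own sketch): (i) $\min_{y > 0}\, y \log y - z y$ is attained at $y = e^{z-1}$ with value $-e^{z-1}$, and (ii) $\max_\nu\, -e^{-\nu - 1} S - \nu$ (with $S = \sum_i e^{w_i}$) is attained at $\nu = \log S - 1$ with value $-\log S$:

```python
import math

# (i) inner minimization over y for a single coordinate
def phi(y, z):
    return y * math.log(y) - z * y

for z in [-1.0, 0.0, 0.7, 2.0]:
    y_star = math.exp(z - 1.0)
    val = phi(y_star, z)
    assert abs(val - (-math.exp(z - 1.0))) < 1e-12   # minimum value -e^(z-1)
    for y in [0.1, 0.5, 1.0, 2.0, 5.0]:
        assert phi(y, z) >= val - 1e-12              # y_star is the minimizer

# (ii) outer maximization over nu, for an arbitrary choice of w
S = math.exp(0.3) + math.exp(-1.2) + math.exp(2.0)
def psi(nu):
    return -math.exp(-nu - 1.0) * S - nu

nu_star = math.log(S) - 1.0
assert abs(psi(nu_star) - (-math.log(S))) < 1e-12    # optimal value -log(S)
for nu in [nu_star - 1.0, nu_star - 0.1, nu_star + 0.1, nu_star + 1.0]:
    assert psi(nu) <= psi(nu_star) + 1e-12           # nu_star is the maximizer
```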
4. (a) Introducing $\lambda$ for the inequality constraint $x \succeq 0$ and $\nu$ for the equality constraint $\mathbf{1}^T x = 1$, it follows that the KKT conditions are as follows.

Primal feasibility: $x \succeq 0$, $\mathbf{1}^T x = 1$.
Dual feasibility: $\lambda \succeq 0$.
Complementary slackness: $\lambda_k x_k = 0$, $k = 1, \ldots, n$.
Stationarity: $\nabla f_0(x) - \lambda + \nu \mathbf{1} = 0$.

Here, $f_0(x) = -\log\left( a^T x \right) - \log\left( b^T x \right)$, and so we have
$$\frac{\partial f_0(x)}{\partial x_k} = -\frac{a_k}{a^T x} - \frac{b_k}{b^T x}, \qquad \nabla f_0(x) = -\frac{a}{a^T x} - \frac{b}{b^T x}.$$
Thus, the stationarity condition becomes
$$\lambda = \nu\mathbf{1} - \frac{a}{a^T x} - \frac{b}{b^T x}.$$
Combining the results so far, we have the following simplified KKT conditions:
$$x \succeq 0, \quad \mathbf{1}^T x = 1, \quad \frac{a_k}{a^T x} + \frac{b_k}{b^T x} \le \nu, \quad x_k \left( \nu - \frac{a_k}{a^T x} - \frac{b_k}{b^T x} \right) = 0, \quad k = 1, \ldots, n. \tag{1}$$
We now show that $x^\star = (1/2, 0, \ldots, 0, 1/2)$ and $\nu^\star = 2$ satisfy all of the conditions in (1), and are hence primal and dual optimal, respectively. The feasibility conditions $x \succeq 0$, $\mathbf{1}^T x = 1$ obviously hold, and the complementary slackness conditions are trivially satisfied for $k = 2, \ldots, n-1$. Thus, from (1), it remains to verify the inequalities
$$\frac{a_k}{a^T x} + \frac{b_k}{b^T x} \le \nu, \quad k = 1, \ldots, n, \tag{2}$$
and the complementary slackness conditions
$$x_k \left( \nu - \frac{a_k}{a^T x} - \frac{b_k}{b^T x} \right) = 0, \quad k = 1, n. \tag{3}$$
For $x^\star = (1/2, 0, \ldots, 0, 1/2)$ and $\nu^\star = 2$, the inequality (2) holds with equality for $k = 1$ and $k = n$, since we have
$$\frac{a_1}{a^T x} + \frac{b_1}{b^T x} = \frac{a_1}{(a_1 + a_n)/2} + \frac{1/a_1}{(1/a_1 + 1/a_n)/2} = \frac{2a_1}{a_1 + a_n} + \frac{2a_n}{a_n + a_1} = 2$$
for $k = 1$, and
$$\frac{a_n}{a^T x} + \frac{b_n}{b^T x} = \frac{a_n}{(a_1 + a_n)/2} + \frac{1/a_n}{(1/a_1 + 1/a_n)/2} = \frac{2a_n}{a_1 + a_n} + \frac{2a_1}{a_n + a_1} = 2$$
for $k = n$. Therefore, also (3) is satisfied for $k = 1, n$, as desired. The remaining inequalities in (2) reduce to
$$\frac{a_k}{a^T x} + \frac{b_k}{b^T x} = \frac{a_k}{(a_1 + a_n)/2} + \frac{1/a_k}{(1/a_1 + 1/a_n)/2} = \frac{2\left( a_k + a_1 a_n / a_k \right)}{a_1 + a_n} \le 2,$$
which is equivalent to
$$a_k + a_1 a_n / a_k \le a_1 + a_n, \quad k = 2, \ldots, n-1. \tag{4}$$
To show that (4) is valid, note that the function $g(t) = t + a_1 a_n / t$ is convex for $t > 0$. Since the inequality in (4) holds with equality for $t = a_1$ and $t = a_n$, it follows by convexity that
$$t + a_1 a_n / t \le a_1 + a_n, \quad t \in [a_n, a_1].$$
Hence, $x^\star = (1/2, 0, \ldots, 0, 1/2)$ and $\nu^\star = 2$ satisfy all of the KKT conditions of (1), and are thus primal and dual optimal, respectively.

(b) Express $A$ in terms of its eigenvalue decomposition as $A = Q \Lambda Q^T$, and define $a_k = \lambda_k$, $b_k = 1/\lambda_k$, and $x_k = \left[ Q^T u \right]_k^2$. Note that we clearly have $x_k \ge 0$ for all $k$. Also, as $\|u\| = 1$, we have
$$\mathbf{1}^T x = \sum_{k=1}^{n} \left[ Q^T u \right]_k^2 = \left\| Q^T u \right\|^2 = u^T Q Q^T u = u^T u = 1.$$
Thus, the choice of $x_k = \left[ Q^T u \right]_k^2$ is indeed feasible. Continuing further, note that
$$a^T x = \sum_{k=1}^{n} \lambda_k \left[ Q^T u \right]_k^2 = u^T A u, \qquad b^T x = \sum_{k=1}^{n} (1/\lambda_k) \left[ Q^T u \right]_k^2 = u^T A^{-1} u.$$
Hence, from the result proven in part (a), we have
$$-\log\left( u^T A u \right) - \log\left( u^T A^{-1} u \right) \ge -\log\left( \frac{\lambda_1 + \lambda_n}{2} \right) - \log\left( \frac{1/\lambda_1 + 1/\lambda_n}{2} \right).$$
After some algebraic manipulation, this becomes
$$u^T A u \cdot u^T A^{-1} u \le \left( \frac{\lambda_1 + \lambda_n}{2} \right) \left( \frac{1/\lambda_1 + 1/\lambda_n}{2} \right) = \frac{\left( \lambda_1 + \lambda_n \right)^2}{4 \lambda_1 \lambda_n} = \frac{1}{4} \left( \sqrt{\frac{\lambda_1}{\lambda_n}} + \sqrt{\frac{\lambda_n}{\lambda_1}} \right)^2.$$
Taking square roots of both sides of the above inequality yields
$$\left( u^T A u \right)^{1/2} \left( u^T A^{-1} u \right)^{1/2} \le \frac{1}{2} \left( \sqrt{\frac{\lambda_1}{\lambda_n}} + \sqrt{\frac{\lambda_n}{\lambda_1}} \right) = \frac{\lambda_1 + \lambda_n}{2 \sqrt{\lambda_1 \lambda_n}}$$
for all $u$ with $\|u\| = 1$, which is Kantorovich's inequality.
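Both parts lend themselves to a numerical spot check (our own sketch, not part of the original solution). The first loop verifies the simplified KKT conditions (1) at $x^\star$, $\nu^\star = 2$ for a random descending positive vector $a$ with $b_k = 1/a_k$; the second verifies Kantorovich's inequality in the diagonal case $A = \mathrm{diag}(\lambda)$, which suffices since a general symmetric $A$ reduces to it through $Q$:

```python
import math
import random

random.seed(1)
n = 6
a = sorted((random.uniform(0.5, 5.0) for _ in range(n)), reverse=True)
b = [1.0 / ak for ak in a]

# Part (a): KKT conditions at x* = (1/2, 0, ..., 0, 1/2) with nu* = 2.
x_star = [0.5] + [0.0] * (n - 2) + [0.5]
aTx = sum(ak * xk for ak, xk in zip(a, x_star))   # = (a_1 + a_n)/2
bTx = sum(bk * xk for bk, xk in zip(b, x_star))   # = (1/a_1 + 1/a_n)/2
for k in range(n):
    ratio = a[k] / aTx + b[k] / bTx
    assert ratio <= 2.0 + 1e-12                    # dual feasibility (2)
    assert abs(x_star[k] * (2.0 - ratio)) < 1e-12  # complementary slackness (3)

# x* should beat random feasible points on f0(x) = -log(a^T x) - log(b^T x).
def f0(x):
    return -math.log(sum(ak * xk for ak, xk in zip(a, x))) \
           - math.log(sum(bk * xk for bk, xk in zip(b, x)))
for _ in range(200):
    w = [random.random() for _ in range(n)]
    s = sum(w)
    assert f0([wk / s for wk in w]) >= f0(x_star) - 1e-9

# Part (b): Kantorovich's inequality for A = diag(lmb), random unit-norm u.
lmb = a                                            # lmb_1 >= ... >= lmb_n > 0
bound = (lmb[0] + lmb[-1]) / (2.0 * math.sqrt(lmb[0] * lmb[-1]))
for _ in range(1000):
    u = [random.gauss(0.0, 1.0) for _ in range(n)]
    nrm = math.sqrt(sum(uk * uk for uk in u))
    u = [uk / nrm for uk in u]
    uAu = sum(l * uk * uk for l, uk in zip(lmb, u))
    uAinvu = sum(uk * uk / l for l, uk in zip(lmb, u))
    assert math.sqrt(uAu * uAinvu) <= bound + 1e-12
```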
*5. Define the following quantities:
$$A = \begin{bmatrix} -2y_1^T & 1 \\ \vdots & \vdots \\ -2y_m^T & 1 \end{bmatrix}, \quad b = \begin{bmatrix} d_1^2 - \|y_1\|^2 \\ \vdots \\ d_m^2 - \|y_m\|^2 \end{bmatrix}, \quad C = \begin{bmatrix} I_{n \times n} & 0_{n \times 1} \\ 0_{1 \times n} & 0 \end{bmatrix}, \quad f = \begin{bmatrix} 0_{n \times 1} \\ -1/2 \end{bmatrix}.$$
Also define $z = (x, t)$. Then, with this notation, the problem becomes

minimize $\left\| Az - b \right\|^2$
subject to $z^T C z + 2 f^T z = 0$.

Introducing $\nu$ for the equality constraint, we obtain the following for the Lagrangian:
$$L(z, \nu) = \left\| Az - b \right\|^2 + \nu \left( z^T C z + 2 f^T z \right) = z^T A^T A z - 2 b^T A z + b^T b + \nu z^T C z + 2\nu f^T z = z^T \left( A^T A + \nu C \right) z - 2 \left( A^T b - \nu f \right)^T z + b^T b.$$
Note that this is bounded below as a function of $z$ if and only if
$$A^T A + \nu C \succeq 0, \qquad A^T b - \nu f \in \mathcal{R}\left( A^T A + \nu C \right).$$
Therefore, the KKT conditions are as follows.

Primal feasibility: $z^T C z + 2 f^T z = 0$.
Dual feasibility: $A^T A + \nu C \succeq 0$, $A^T b - \nu f \in \mathcal{R}\left( A^T A + \nu C \right)$.
Stationarity: $\left( A^T A + \nu C \right) z = A^T b - \nu f$.

Note that stationarity implies the range condition for dual feasibility.

Method 1: We derive the dual problem. If $\nu$ is feasible, then the dual function is given by
$$g(\nu) = -\left( A^T b - \nu f \right)^T \left( A^T A + \nu C \right)^{\dagger} \left( A^T b - \nu f \right) + b^T b.$$
So, the dual problem can be expressed as the SDP

maximize $-s + b^T b$
subject to $\begin{bmatrix} A^T A + \nu C & A^T b - \nu f \\ \left( A^T b - \nu f \right)^T & s \end{bmatrix} \succeq 0$,

which is equivalent to the following SDP:

minimize $s - b^T b$
subject to $\begin{bmatrix} A^T A + \nu C & A^T b - \nu f \\ \left( A^T b - \nu f \right)^T & s \end{bmatrix} \succeq 0$.
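A quick consistency check of the lifted reformulation (synthetic data, our own sketch): in the noiseless case $d_k = \|x - y_k\|$, the point $z = (x, \|x\|^2)$ should satisfy $Az = b$ exactly, with row $k$ of $A$ equal to $[-2y_k^T,\ 1]$ and $b_k = d_k^2 - \|y_k\|^2$:

```python
import random

# Noiseless check: A z = b at z = (x_true, ||x_true||^2).
random.seed(3)
n, m = 2, 5
x_true = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [[random.uniform(-2.0, 2.0) for _ in range(n)] for _ in range(m)]

def norm2(v):
    return sum(vi * vi for vi in v)

z = x_true + [norm2(x_true)]                     # lifted point (x, t), t = ||x||^2
for yk in ys:
    dk2 = norm2([xi - yi for xi, yi in zip(x_true, yk)])   # squared range d_k^2
    row = [-2.0 * yi for yi in yk] + [1.0]                 # row k of A
    bk = dk2 - norm2(yk)                                   # entry k of b
    residual = sum(ri * zi for ri, zi in zip(row, z)) - bk
    assert abs(residual) < 1e-9
```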
Solving this in CVX gives $\nu^\star = 5898$. Using $\nu^\star$ in the stationarity condition, we get
$$z^\star = \left( A^T A + \nu^\star C \right)^{-1} \left( A^T b - \nu^\star f \right),$$
and hence $x^\star$ as the first $n$ components of $z^\star$.

Method 2: Alternatively, we can solve the KKT conditions directly. To simplify the equations, we make a change of variables $w = Q^T L^T z$, where $L$ is the lower triangular matrix obtained from the Cholesky decomposition of $A^T A$ (i.e., $A^T A = L L^T$) and $Q$ is the matrix of eigenvectors of $L^{-1} C L^{-T}$ (i.e., $L^{-1} C L^{-T} = Q \Lambda Q^T$). This transforms the KKT conditions to
$$w^T \Lambda w + 2 g^T w = 0, \qquad I + \nu \Lambda \succeq 0, \qquad \left( I + \nu \Lambda \right) w = h - \nu g,$$
where we have $g = Q^T L^{-1} f$ and $h = Q^T L^{-1} A^T b$. From the last equation of the KKT conditions, we find that
$$w_k = \frac{h_k - \nu g_k}{1 + \nu \lambda_k}, \quad k = 1, \ldots, n+1.$$
Substituting this into the first equation of the KKT conditions, we get the following nonlinear equation in $\nu$:
$$r(\nu) = \sum_{k=1}^{n+1} \left[ \lambda_k \frac{\left( h_k - \nu g_k \right)^2}{\left( 1 + \nu \lambda_k \right)^2} + 2 g_k \frac{h_k - \nu g_k}{1 + \nu \lambda_k} \right] = 0.$$
In our example, the eigenvalues are $\lambda_1 = 514$, $\lambda_2 = 735$, and $\lambda_3 = 0$. Plots of the function $r(\nu)$ for this example are shown in Figures 4(a) and (b). In Figure 4(a), we have a zoomed out plot showing all three solutions to $r(\nu) = 0$, whereas in Figure 4(b), we have a zoomed in plot showing the correct solution. The correct solution of $r(\nu) = 0$ is the one that satisfies $1 + \nu \lambda_k \ge 0$ for $k = 1, \ldots, n+1$, i.e., the solution to the right of the two singularities in this case. This solution can be determined by using Newton's method, repeating the iteration
$$\nu := \nu - \frac{r(\nu)}{r'(\nu)}$$
a few times, starting at a value close to the desired solution. This gives $\nu^\star = 5896$, which is very close to the value obtained from CVX. From $\nu^\star$, we determine $x^\star$ as in the first method. A contour plot of the objective $f$ for the given problem data, along with the sensor position vectors $y_k$ and the optimal source position vector $x^\star$, is shown in Figure 5.
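Method 2 can be exercised end to end on a small synthetic instance (all data and names below are ours, not the problem data). The sketch works in one dimension so the $2 \times 2$ linear algebra can be written out directly; instead of the Cholesky/eigenvector coordinates, it finds the root of the constraint residual $s(\nu) = z(\nu)^T C z(\nu) + 2 f^T z(\nu)$ by bisection on an interval where $A^T A + \nu C$ stays positive definite, which is the same secular equation as $r(\nu)$ in different coordinates:

```python
import random

# 1-D source localization: z(nu) from stationarity, bisection on the residual.
random.seed(4)
x_true = 0.8
ys = [-1.5, 0.3, 2.0]                                         # sensor positions
ds = [abs(x_true - y) + random.gauss(0.0, 1e-4) for y in ys]  # noisy ranges

A = [[-2.0 * y, 1.0] for y in ys]
b = [d * d - y * y for d, y in zip(ds, ys)]
M = [[sum(r[i] * r[j] for r in A) for j in range(2)] for i in range(2)]  # A^T A
h = [sum(r[i] * bk for r, bk in zip(A, b)) for i in range(2)]            # A^T b
f = [0.0, -0.5]

def z_of_nu(nu):
    # solve (M + nu*C) z = h - nu*f with C = diag(1, 0), by Cramer's rule
    m11, m12, m22 = M[0][0] + nu, M[0][1], M[1][1]
    r1, r2 = h[0] - nu * f[0], h[1] - nu * f[1]
    det = m11 * m22 - m12 * m12
    return [(r1 * m22 - r2 * m12) / det, (m11 * r2 - m12 * r1) / det]

def s(nu):
    z = z_of_nu(nu)
    return z[0] * z[0] - z[1]          # z^T C z + 2 f^T z

lo, hi = -0.5, 50.0                    # hand-picked bracket inside the PD region
assert s(lo) * s(hi) < 0.0
for _ in range(200):                   # plain bisection
    mid = 0.5 * (lo + hi)
    if s(lo) * s(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
nu_star = 0.5 * (lo + hi)
x_hat = z_of_nu(nu_star)[0]
assert abs(s(nu_star)) < 1e-8          # KKT constraint holds at the root
assert abs(x_hat - x_true) < 0.01      # recovered source close to the truth
```

Newton's method, as in the text, converges faster near the root; bisection is used here only because it needs no derivative of the residual.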
Figure 4: Plots of the nonlinear function $r(\nu)$ used to determine the optimal $\nu^\star$ which satisfies the KKT conditions: (a) zoomed out plot showing all three solutions to $r(\nu) = 0$, and (b) zoomed in plot showing a close-up of the correct solution to $r(\nu) = 0$.

Figure 5: Contour plot of the objective $f(x_1, x_2)$ for the given problem data, with the sensor position vectors $y_k$ and the optimal source position vector $x^\star$ indicated by circles.
More informationPrimal-dual Subgradient Method for Convex Problems with Functional Constraints
Primal-dual Subgradient Method for Convex Problems with Functional Constraints Yurii Nesterov, CORE/INMA (UCL) Workshop on embedded optimization EMBOPT2014 September 9, 2014 (Lucca) Yu. Nesterov Primal-dual
More informationConvex optimization problems. Optimization problem in standard form
Convex optimization problems optimization problem in standard form convex optimization problems linear optimization quadratic optimization geometric programming quasiconvex optimization generalized inequality
More informationEquality constrained minimization
Chapter 10 Equality constrained minimization 10.1 Equality constrained minimization problems In this chapter we describe methods for solving a convex optimization problem with equality constraints, minimize
More informationCO 250 Final Exam Guide
Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,
More informationCS711008Z Algorithm Design and Analysis
CS711008Z Algorithm Design and Analysis Lecture 8 Linear programming: interior point method Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China 1 / 31 Outline Brief
More informationTwo hours. To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER. xx xxxx 2017 xx:xx xx.
Two hours To be provided by Examinations Office: Mathematical Formula Tables. THE UNIVERSITY OF MANCHESTER CONVEX OPTIMIZATION - SOLUTIONS xx xxxx 27 xx:xx xx.xx Answer THREE of the FOUR questions. If
More informationEnhanced Fritz John Optimality Conditions and Sensitivity Analysis
Enhanced Fritz John Optimality Conditions and Sensitivity Analysis Dimitri P. Bertsekas Laboratory for Information and Decision Systems Massachusetts Institute of Technology March 2016 1 / 27 Constrained
More informationPart 4: Active-set methods for linearly constrained optimization. Nick Gould (RAL)
Part 4: Active-set methods for linearly constrained optimization Nick Gould RAL fx subject to Ax b Part C course on continuoue optimization LINEARLY CONSTRAINED MINIMIZATION fx subject to Ax { } b where
More informationDuality. Geoff Gordon & Ryan Tibshirani Optimization /
Duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Duality in linear programs Suppose we want to find lower bound on the optimal value in our convex problem, B min x C f(x) E.g., consider
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming
E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program
More information4. Convex optimization problems (part 1: general)
EE/AA 578, Univ of Washington, Fall 2016 4. Convex optimization problems (part 1: general) optimization problem in standard form convex optimization problems quasiconvex optimization 4 1 Optimization problem
More informationCS711008Z Algorithm Design and Analysis
.. CS711008Z Algorithm Design and Analysis Lecture 9. Algorithm design technique: Linear programming and duality Dongbo Bu Institute of Computing Technology Chinese Academy of Sciences, Beijing, China
More informationLecture 6: Conic Optimization September 8
IE 598: Big Data Optimization Fall 2016 Lecture 6: Conic Optimization September 8 Lecturer: Niao He Scriber: Juan Xu Overview In this lecture, we finish up our previous discussion on optimality conditions
More informationGauge optimization and duality
1 / 54 Gauge optimization and duality Junfeng Yang Department of Mathematics Nanjing University Joint with Shiqian Ma, CUHK September, 2015 2 / 54 Outline Introduction Duality Lagrange duality Fenchel
More informationNumerical optimization. Numerical optimization. Longest Shortest where Maximal Minimal. Fastest. Largest. Optimization problems
1 Numerical optimization Alexander & Michael Bronstein, 2006-2009 Michael Bronstein, 2010 tosca.cs.technion.ac.il/book Numerical optimization 048921 Advanced topics in vision Processing and Analysis of
More informationAdditional Exercises for Introduction to Nonlinear Optimization Amir Beck March 16, 2017
Additional Exercises for Introduction to Nonlinear Optimization Amir Beck March 16, 2017 Chapter 1 - Mathematical Preliminaries 1.1 Let S R n. (a) Suppose that T is an open set satisfying T S. Prove that
More information10 Numerical methods for constrained problems
10 Numerical methods for constrained problems min s.t. f(x) h(x) = 0 (l), g(x) 0 (m), x X The algorithms can be roughly divided the following way: ˆ primal methods: find descent direction keeping inside
More informationSemidefinite Programming Basics and Applications
Semidefinite Programming Basics and Applications Ray Pörn, principal lecturer Åbo Akademi University Novia University of Applied Sciences Content What is semidefinite programming (SDP)? How to represent
More informationThe fundamental theorem of linear programming
The fundamental theorem of linear programming Michael Tehranchi June 8, 2017 This note supplements the lecture notes of Optimisation The statement of the fundamental theorem of linear programming and the
More informationConvex Optimization Overview (cnt d)
Convex Optimization Overview (cnt d) Chuong B. Do October 6, 007 1 Recap During last week s section, we began our study of convex optimization, the study of mathematical optimization problems of the form,
More informationStructural and Multidisciplinary Optimization. P. Duysinx and P. Tossings
Structural and Multidisciplinary Optimization P. Duysinx and P. Tossings 2018-2019 CONTACTS Pierre Duysinx Institut de Mécanique et du Génie Civil (B52/3) Phone number: 04/366.91.94 Email: P.Duysinx@uliege.be
More informationConvex Optimization M2
Convex Optimization M2 Lecture 8 A. d Aspremont. Convex Optimization M2. 1/57 Applications A. d Aspremont. Convex Optimization M2. 2/57 Outline Geometrical problems Approximation problems Combinatorial
More informationReview Solutions, Exam 2, Operations Research
Review Solutions, Exam 2, Operations Research 1. Prove the weak duality theorem: For any x feasible for the primal and y feasible for the dual, then... HINT: Consider the quantity y T Ax. SOLUTION: To
More informationCSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming
More information