Two hours

To be provided by Examinations Office: Mathematical Formula Tables.

THE UNIVERSITY OF MANCHESTER

CONVEX OPTIMIZATION - SOLUTIONS

xx xxxx 2017 xx:xx xx.

Answer THREE of the FOUR questions. If more than three questions are attempted, credit will be given for the best three answers. Each question is worth 20 marks.

Electronic calculators are permitted, provided they cannot store text.
1. (Problem a) is similar to a problem from last year's exam; in addition, it tests understanding by requiring one to think about the problem before solving it. Problems b)-d) are similar to problems discussed in the tutorials.)

a) Let x_1 and x_2 denote the number of units shipped to Manchester from Liverpool and London, respectively, and x_3 and x_4 the units shipped to Oxford from Liverpool and London. The objective is the total cost of the shipments,

minimize 5x_1 + 15x_2 + 10x_3 + 4x_4. (1 mark)

The constraints are given by

x_1 + x_2 = 6
x_3 + x_4 = 4
x_1 + x_3 ≤ 7
x_2 + x_4 ≤ 8
x_i ≥ 0, 1 ≤ i ≤ 4. (2 marks)

The way the problem is posed, we can get all of our shipments to Manchester from Liverpool, and all of the ones to Oxford from London, giving the optimal solution

x_1 = 6, x_2 = 0, x_3 = 0, x_4 = 4. (2 marks)

Though this wasn't asked here, the problem can be transformed into one that depends only on 2 variables, by eliminating x_2 and x_4 using the equality constraints: x_2 = 6 - x_1, x_4 = 4 - x_3.
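Though this is not required either, the solution can be double-checked with an LP solver. A minimal sketch using scipy, with the costs, demands, and supplies from the formulation above:

import numpy as np
from scipy.optimize import linprog

c = np.array([5, 15, 10, 4])          # unit shipping costs for x_1, ..., x_4

A_eq = np.array([[1, 1, 0, 0],        # Manchester receives 6 units
                 [0, 0, 1, 1]])       # Oxford receives 4 units
b_eq = np.array([6, 4])

A_ub = np.array([[1, 0, 1, 0],        # Liverpool ships at most 7
                 [0, 1, 0, 1]])       # London ships at most 8
b_ub = np.array([7, 8])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, None)] * 4)
print(res.x, res.fun)                 # [6. 0. 0. 4.]  46.0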
b) We first do some rescaling, dividing the bounds on the right-hand side by 10. The optimization problem is then

minimize 5x_1 - 3x_2
subject to x_1 + x_2 ≤ 7
-x_1 - x_2 ≤ -2
x_1 ≤ 6
x_2 ≤ 4
-x_1 ≤ 0
-x_2 ≤ 0.

The feasible set and the objective can be visualised as follows. (2 marks) The matrix A and the vector b of this problem are

A = [1 1; -1 -1; 1 0; 0 1; -1 0; 0 -1],   b = (7, -2, 6, 4, 0, 0)^T.

[Figure: the feasible polygon, the cost direction c, and the optimal vertex x_{4,5}; omitted.]

As the image suggests, there are only 6 vertices among the 15 invertible 2×2 submatrices of A, and these are (2 marks)

A_{4,5} = [0 1; -1 0],   A_{2,5} = [-1 -1; -1 0],   A_{1,3} = [1 1; 1 0],
A_{1,4} = [1 1; 0 1],   A_{3,6} = [1 0; 0 -1],   A_{2,6} = [-1 -1; 0 -1].

From the diagram, or by solving the equations A_I x = b_I with the corresponding values of b, we get the vertices

x_{4,5} = (0, 4),   x_{2,5} = (0, 2),   x_{1,4} = (3, 4),
x_{2,6} = (2, 0),   x_{1,3} = (6, 1),   x_{3,6} = (6, 0).

The objective values at these points are

-12, -6, 3, -10, 27, 30.

This shows that x_{4,5} is the optimal point. (1 mark)
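The vertex enumeration is mechanical and can be scripted. A sketch using the A and b above (row indices are 0-based in the code, so x_{4,5} corresponds to the index pair (3, 4)):

import itertools
import numpy as np

# Data of the rescaled problem: minimize <c, x> subject to Ax <= b.
A = np.array([[1, 1], [-1, -1], [1, 0], [0, 1], [-1, 0], [0, -1]], float)
b = np.array([7, -2, 6, 4, 0, 0], float)
c = np.array([5, -3], float)

best = None
for I in itertools.combinations(range(6), 2):  # the 15 choices of two rows
    AI = A[list(I)]
    if abs(np.linalg.det(AI)) < 1e-12:
        continue                               # singular submatrix, no vertex
    x = np.linalg.solve(AI, b[list(I)])
    if np.all(A @ x <= b + 1e-9):              # keep only the feasible vertices
        print(I, x, c @ x)
        if best is None or c @ x < best[2]:
            best = (I, x, c @ x)

print("optimal vertex:", best)                 # ((3, 4), array([0., 4.]), -12.0)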
c) The problem is of the form

minimize ⟨c, x⟩ subject to Ax = b, x ≥ 0,

with A = (1, 1), b = 2, c = (1, 0)^T. The optimality conditions are

Ax - b = 0,   A^T y + s - c = 0,   XSe = 0,   x, s ≥ 0.

In our specific case, this translates to

x_1 + x_2 - 2 = 0
y + s_1 = 1
y + s_2 = 0
x_1 s_1 = 0
x_2 s_2 = 0
x_1, x_2, s_1, s_2 ≥ 0.   (1)

(2 marks)

The central path is the set of triples (x, y, s) that arises when we change the conditions x_i s_i = 0 in (1) to x_i s_i = τ, for τ > 0. To compute the central path, we first eliminate x_2 and s_2 from (1):

x_2 = 2 - x_1,   s_2 = -y = s_1 - 1.

The conditions x_i s_i = τ then become

x_1 s_1 = τ,   (2 - x_1)(s_1 - 1) = τ. (2 marks)

Multiplying out the second of these and replacing x_1 s_1 with τ and s_1 with τ/x_1 in the resulting expression gives

2τ/x_1 - 2 - τ + x_1 = τ,   that is,   x_1^2 - 2(τ + 1) x_1 + 2τ = 0.

Solving this quadratic equation gives

x_1 = τ + 1 ± √(τ^2 + 1).

From this we get the expression for x_2,

x_2 = 2 - x_1 = 1 - τ ∓ √(τ^2 + 1).   (2)

Since (τ - 1)^2 = τ^2 - 2τ + 1 < τ^2 + 1 for τ > 0, only one of the solutions in (2) (the one with +√(τ^2 + 1)) is positive, and thus valid. (2 marks) The central path is therefore defined by

x_1 = τ + 1 - √(τ^2 + 1)
x_2 = 1 - τ + √(τ^2 + 1)
s_1 = τ / x_1
s_2 = τ / x_2
y = -s_2.

In particular, every τ > 0 determines a unique point on the central path.

d) Denote by a_i^T the rows of A. We use additional variables t_1, ..., t_m, and then get

minimize t_1 + ... + t_m
subject to -t_i ≤ a_i^T x - b_i ≤ t_i, 1 ≤ i ≤ m,
x_j ≥ 0, 1 ≤ j ≤ n. (4 marks)
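The closed-form central path can be sanity-checked numerically: every τ > 0 should satisfy the perturbed conditions, and as τ → 0 the points should approach the solution x = (0, 2), y = 0, s = (1, 0) of (1). A brief sketch:

import numpy as np

def central_path(tau):
    """Central-path point for: minimize x_1 subject to x_1 + x_2 = 2, x >= 0."""
    r = np.sqrt(tau**2 + 1)
    x = np.array([tau + 1 - r, 1 - tau + r])
    s = tau / x
    y = -s[1]
    return x, y, s

for tau in [1.0, 0.1, 1e-4]:
    x, y, s = central_path(tau)
    assert abs(x[0] + x[1] - 2) < 1e-9   # Ax = b
    assert abs(y + s[0] - 1) < 1e-9      # A^T y + s = c, first component
    assert abs(y + s[1]) < 1e-12         # A^T y + s = c, second component
    assert np.allclose(x * s, tau)       # x_i s_i = tau
    print(tau, x, y, s)                  # approaches x = (0, 2), y = 0, s = (1, 0)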
2. (a) and b) are bookwork; c) and d) are similar to problems discussed in the tutorials.)

a) In supervised learning, we aim to learn a function h: X → Y from a set of input-output pairs (x_i, y_i) ∈ X × Y, 1 ≤ i ≤ N. Given a class of functions F (for example, linear functions parametrized by their coefficients) and a loss function l: Y × Y → R_+, the learning problem can be posed as an optimization problem (2 marks)

minimize_{h ∈ F} (1/N) Σ_{i=1}^N l(h(x_i), y_i).   (3)

i) Three applications would be text classification, image recognition, and linear regression. (1 mark)

ii) When applying Newton's method or gradient descent to a problem such as (3), one would have to compute the gradient and/or Hessian of each summand. Since the number of summands equals the number of data samples, this can lead to high computational costs. A way out of this problem (if one is willing to sacrifice a bit of accuracy) is to use a randomized method such as stochastic gradient descent. If we denote each summand in (3) by f_i(w), where w is a vector of parameters for the function h, then in each step k one chooses an index i_k at random and computes

w_{k+1} = w_k - α_k ∇f_{i_k}(w_k). (1 mark)

b) i) Let y ∈ R^p be arbitrary. Then

y^T (xx^T) y = (y^T x)(x^T y) = (y^T x)^2 ≥ 0,

from which it follows that the matrix is positive semidefinite (1 mark). It is in general not positive definite, since for x ≠ 0 and p ≥ 2 there is always a y ≠ 0 such that y^T x = 0 (1 mark).

ii) To show that the function f is convex, we first simplify the notation by setting x = (x_1, ..., x_n, -1)^T and w = (w_1, ..., w_n, τ)^T, so that we can write the function f as

f(w) = log(1 + e^{-y w^T x}).

To show that this function is convex, we compute the Hessian matrix. The derivatives of f are

∂f/∂w_i = -y x_i / (1 + e^{y w^T x}),   ∂^2 f/∂w_i ∂w_j = e^{y w^T x} / (1 + e^{y w^T x})^2 · x_i x_j,

where in the last equation we used that y^2 = 1 (1 mark). From the second expression we see that the Hessian is given by

∇^2 f(w) = e^{y w^T x} / (1 + e^{y w^T x})^2 · xx^T,

which was shown to be positive semidefinite in part i). From this we conclude that f is convex (1 mark).
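The derivative formulas just computed can be checked against finite differences; a quick sketch with arbitrary sample vectors:

import numpy as np

rng = np.random.default_rng(0)
x = np.append(rng.normal(size=3), -1.0)  # extended vector (x_1, ..., x_n, -1)
y = -1.0                                 # a label, so y^2 = 1

f = lambda w: np.log(1 + np.exp(-y * (w @ x)))
grad = lambda w: -y * x / (1 + np.exp(y * (w @ x)))
hess = lambda w: np.exp(y * (w @ x)) / (1 + np.exp(y * (w @ x)))**2 * np.outer(x, x)

w = rng.normal(size=4)
eps = 1e-6
fd = np.array([(f(w + eps * e) - f(w - eps * e)) / (2 * eps) for e in np.eye(4)])
assert np.allclose(fd, grad(w), atol=1e-5)             # gradient formula
assert np.min(np.linalg.eigvalsh(hess(w))) >= -1e-12   # Hessian is PSD
print("gradient and Hessian formulas verified")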
iii) Given the data x_1, ..., x_N with labels y_1, ..., y_N, we construct the loss function

f(w, τ) = (1/N) Σ_{i=1}^N log(1 + e^{-(x_i^T w - τ) y_i}). (1 mark)

We obtain w and τ by solving the optimization problem

minimize_{w, τ} f(w, τ).

The result is a linear classification function

h(x) = w^T x - τ,

and we classify a new vector x as belonging to class +1 if h(x) > 0, and to class -1 if h(x) < 0. Applications of this procedure include spam filtering or medical diagnosis (1 mark).

c) Let B be the matrix whose rows are the v_i and the -w_j, and append a last column with entry -1 in the rows coming from the v_i and entry +1 in the rows coming from the w_j; call the resulting matrix A. Then any solution of the feasibility problem Ax < 0, with x ∈ R^{n+1}, has the property that

(x_1, ..., x_n) v_i < x_{n+1} and (x_1, ..., x_n) w_j > x_{n+1},

so we get the separating hyperplane from x (3 marks). If the convex hulls of the v_i and of the w_j are disjoint, we have two disjoint, bounded, closed convex sets, and these can be separated by a hyperplane. The existence of a separating hyperplane is therefore equivalent to the linear programming feasibility problem having a solution. (2 marks)

d) i) The objective function and the equality constraint function are convex. However, for the problem to be convex, the equality constraint needs to be linear. Therefore, the problem is not convex. (1 mark)

ii) The objective function is linear (and therefore convex), but the logarithmic inequality constraint is not convex. The problem is therefore not convex (note, however, that the logarithmic inequality constraint can be rephrased as a linear constraint on x). (2 marks)

iii) The function is not convex. This can be seen by examining the second derivative,

f''(x) = -e^{2x} / (1 + e^x)^2,

which is negative. (1 mark)

iv) The equality constraint is linear. The objective function, however, is not convex. To see this, compute the Hessian (the matrix of second derivatives)

∇^2 f(x) = [2x_2 2; 2 0].

This matrix is not positive semidefinite: take, for example, the point x = (1, 1) and the direction v = (1, -1); then v^T ∇^2 f(x) v = -2 < 0. (1 mark)
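Parts a) ii) and b) combine into a working linear classifier: stochastic gradient descent on the loss f(w, τ). A minimal sketch on synthetic data (the data, step sizes, and iteration count are illustrative choices, not part of the question):

import numpy as np

rng = np.random.default_rng(42)

# Synthetic two-class data with labels in {-1, +1}.
N = 200
X = rng.normal(size=(N, 2))
y = np.where(X @ np.array([2.0, -1.0]) - 0.5 > 0, 1.0, -1.0)

# Extended vectors (x, -1) and (w, tau), so that w^T x - tau is an inner product.
Xe = np.hstack([X, -np.ones((N, 1))])
we = np.zeros(3)

for k in range(20000):
    i = rng.integers(N)                    # choose a random summand f_i
    u = y[i] * (Xe[i] @ we)
    g = -y[i] * Xe[i] / (1 + np.exp(u))    # gradient of f_i at the current point
    we -= 0.5 / (1 + k / 2000) * g         # SGD step with a decaying step size

print("w =", we[:2], "tau =", we[2])
print("training accuracy:", np.mean(np.sign(Xe @ we) == y))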
3. (Problems a)-c) are based on problems discussed in the tutorials; problem d) is bookwork.)

a) We use x^2 + 3x - 4 = (x - 1)(x + 4). The Lagrangian function is

L(x, λ) = x^2 + λ(x - 1)(x + 4) = (1 + λ) x^2 + 3λx - 4λ.

The term (x - 1)(x + 4) has a minimum at x = -3/2. By setting the derivative of the Lagrangian to zero, it follows that for λ > -1 the Lagrangian has a minimum at

x = -3λ / (2(1 + λ)),

and (2 marks)

g(λ) = inf_x L(x, λ) = (1 + λ) (3λ / (2(1 + λ)))^2 - 3λ · 3λ / (2(1 + λ)) - 4λ = -9λ^2 / (4(1 + λ)) - 4λ.

For λ ≤ -1, the Lagrangian is unbounded below and therefore g(λ) = -∞ (because the term λ(x - 1)(x + 4) can then be made arbitrarily negative). The Lagrange dual problem is

maximize -9λ^2 / (4(1 + λ)) - 4λ
subject to λ ≥ 0. (2 marks)

Strong duality holds, as Slater's conditions are satisfied (the problem is convex and has a strictly feasible point). (2 marks)
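The closed form for g(λ) can be compared with a brute-force minimization of the Lagrangian over a grid, and weak duality g(λ) ≤ p* = 0 (the primal value at the feasible point x = 0) checked for sample values of λ. A brief sketch:

import numpy as np

g = lambda lam: -9 * lam**2 / (4 * (1 + lam)) - 4 * lam  # dual function, lam > -1

xs = np.linspace(-20, 20, 400001)
for lam in [0.0, 0.5, 1.0, 5.0]:
    L = xs**2 + lam * (xs - 1) * (xs + 4)  # Lagrangian on a grid
    assert abs(L.min() - g(lam)) < 1e-6    # matches the closed form
    assert g(lam) <= 0                     # weak duality: g(lam) <= p* = 0
print("dual function and weak duality verified; max over lam >= 0 is g(0) = 0")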
b) The level sets of f(x_1, x_2) = x_1^2 + 2x_2^2 are ellipses. The constraints define a circle of radius 1 centred at (1, 0) and the line x_2 = 1. They intersect at the single point (1, 1), where the objective function has the value 3. (2 marks)

[Figure: level sets of the objective, the circle, and the line x_2 = 1; omitted.]

The optimal solution is therefore x* = (1, 1) and the optimal value p* = 3. The KKT conditions are

(x_1 - 1)^2 + x_2^2 ≤ 1
x_2 = 1
λ ≥ 0
2x_1 + 2λ(x_1 - 1) = 0
4x_2 + 2λx_2 + µ = 0
λ((x_1 - 1)^2 + x_2^2 - 1) = 0.

At the point (1, 1) they take the particularly simple form

λ ≥ 0,   2 = 0,   4 + 2λ + µ = 0. (2 marks)

These equations have no solution, so there is no way to use the Lagrange multipliers to certify optimality. (1 mark)

c) First introduce a new variable t and write

minimize t
subject to ⟨c, x⟩^2 ≤ t
Ax + b ≥ 0. (1 mark)

The condition Ax + b ≥ 0 can be written as a semidefinite condition, diag(Ax + b) ⪰ 0. (1 mark) For the other condition, consider the block-diagonal matrix

[diag(Ax + b) 0 0; 0 t ⟨c, x⟩; 0 ⟨c, x⟩ 1]. (2 marks)

The 2×2 block on the lower diagonal is positive semidefinite precisely when ⟨c, x⟩^2 ≤ t, so we can take the positive semidefiniteness of the above matrix as our constraint. (1 mark)

d) Farkas' Lemma states that the system Ax = b, x ≥ 0, has a solution x if and only if there is no y such that A^T y ≥ 0 and ⟨y, b⟩ < 0. (2 marks) The geometric interpretation is that b lies in the convex cone generated by the columns a_i of A precisely when no hyperplane through the origin (with normal y) has all the a_i on one side, ⟨y, a_i⟩ ≥ 0, and b strictly on the other, ⟨y, b⟩ < 0. (2 marks)

4. (Problems a)-d) are based on problems discussed in the tutorials. Problem d) is partly bookwork.)

a) Newton's method starts with a vector x_0 such that ∇^2 f(x_0) ≻ 0. It then computes a sequence of points x_1, x_2, ... by applying the rule:

solve ∇^2 f(x_k) v = -∇f(x_k), then set x_{k+1} = x_k + v. (2 marks)

Newton's method minimizes a quadratic function in one iteration, provided its Hessian is positive definite. (1 mark) To see this, consider the quadratic function f(x) = x^T A x + b^T x + c, with A ≻ 0. This is a convex function and has a unique minimum, which can be computed by setting the gradient to zero:

∇f(x) = 2Ax + b = 0  ⟹  x* = -(1/2) A^{-1} b.

To apply Newton's method, note that the Hessian is ∇^2 f(x) = 2A. Newton's method starts with some x_0 and then computes

x_1 = x_0 - (2A)^{-1} (2A x_0 + b) = -(1/2) A^{-1} b. (2 marks)

That is, Newton's method gives the exact result after one iteration.
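The one-iteration claim can be confirmed on a random positive definite quadratic; a minimal sketch:

import numpy as np

rng = np.random.default_rng(1)

# Random convex quadratic f(x) = x^T A x + b^T x + c with A positive definite.
M = rng.normal(size=(4, 4))
A = M @ M.T + 4 * np.eye(4)
b = rng.normal(size=4)

x0 = rng.normal(size=4)                          # arbitrary starting point
step = np.linalg.solve(2 * A, 2 * A @ x0 + b)    # Hessian is 2A, gradient 2Ax + b
x1 = x0 - step                                   # one Newton iteration

x_star = -0.5 * np.linalg.solve(A, b)            # exact minimizer -(1/2) A^{-1} b
print(np.allclose(x1, x_star))                   # True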
b) Newton's method for minimizing a function f(x) has the form

x_{k+1} = x_k - ∇^2 f(x_k)^{-1} ∇f(x_k).

While normally, for large systems, one would not compute the inverse explicitly, in this case we can do so. For f(x_1, x_2) = (x_1 + x_2^2)^2, the gradient and Hessian are given by

∇f(x_1, x_2) = ( 2(x_1 + x_2^2), 4x_2(x_1 + x_2^2) ),
∇^2 f(x_1, x_2) = [2 4x_2; 4x_2 4x_1 + 12x_2^2]. (1 mark)

The inverse is given by

∇^2 f(x_1, x_2)^{-1} = 1/(8x_1 + 8x_2^2) · [4x_1 + 12x_2^2 -4x_2; -4x_2 2]. (1 mark)

Since we are asked to perform one iteration with starting point x_0 = (1, 1), we can specialize the data we need to

∇^2 f(1, 1) = [2 4; 4 16],   ∇^2 f(1, 1)^{-1} = (1/16) [16 -4; -4 2],   ∇^2 f(1, 1)^{-1} ∇f(1, 1) = (2, 0). (1 mark)

An iteration of Newton's method with starting point (1, 1) therefore gives the value

x_1 = (1, 1) - (2, 0) = (-1, 1). (2 marks)

The function value is f(-1, 1) = 0, and since f(x_1, x_2) ≥ 0 everywhere, this is a minimizer. It follows that only one iteration of Newton's method gives the minimizer. (1 mark)

c) A sequence (x_k) converges quadratically to x* if

‖x_{k+1} - x*‖ ≤ r ‖x_k - x*‖^2

for some ratio r > 0. An example of a quadratically converging sequence would be x_k = 0.5^{2^k}. Then

x_{k+1} = 0.5^{2^{k+1}} = (0.5^{2^k})^2 = x_k^2,

so this sequence converges quadratically to 0. An easier example would be to just take a constant sequence x_k = c. (2 marks) For a sequence that does not converge quadratically, take x_k = 0.5^k. (1 mark) Then x_0 = 1 and x_{k+1} = 0.5 x_k for k ≥ 0. Assume this converges quadratically with rate r. Then x_{k+1} = 0.5 x_k ≤ r x_k^2, which would imply 0.5 ≤ r x_k, or r ≥ 0.5 / x_k. As x_k → 0 with k → ∞, the expression 0.5 / x_k eventually becomes bigger than any fixed r, contradicting the assumption of quadratic convergence. (2 marks)

d) i) The statement is false (take C = D). It would be true if we added the requirement that C and D are disjoint. (1 mark)

ii) The statement is false. Any constant function f(x) = c is convex, but has ∇^2 f(x) = 0. (1 mark)

iii) Yes, the statement is true. (1 mark)

iv) The statement is false. For example, f(x) = x^2 - cos(2x) has a unique global minimum at x = 0 but is not convex (there are countless such examples; a good way to find one is to draw a picture of what such a function should look like). (1 mark)
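The computation in b) can be replayed in a few lines; a minimal sketch, with f(x_1, x_2) = (x_1 + x_2^2)^2 as above:

import numpy as np

f = lambda x: (x[0] + x[1]**2)**2
grad = lambda x: np.array([2 * (x[0] + x[1]**2),
                           4 * x[1] * (x[0] + x[1]**2)])
hess = lambda x: np.array([[2.0, 4 * x[1]],
                           [4 * x[1], 4 * x[0] + 12 * x[1]**2]])

x = np.array([1.0, 1.0])                     # starting point
x = x - np.linalg.solve(hess(x), grad(x))    # one Newton step
print(x, f(x))  # [-1.  1.] 0.0 -- the minimizer after a single iteration

END OF EXAMINATION PAPER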