Exam in TMA4180 Optimization Theory
Norwegian University of Science and Technology
Department of Mathematical Sciences

Contact during exam: Anne Kværnø

Exam in TMA4180 Optimization Theory
Wednesday May 29, 2013

Auxiliary materials: Simple calculator (Hewlett Packard HP30S or Citizen SR-270X), Rottmann: Matematisk formelsamling.

With preliminary solutions (not proofread).

Problem 1
Let f(x, y) = ½x² + x cos y.

a) Find the gradient and the Hessian of f.

b) Find all minima of f.

For the remaining part of this problem we discuss one step of a line search method, starting from x₀ = (1, π/4), with search direction p₀ = (−1, 0).

c) Confirm that p₀ is a descent direction from x₀.

d) State the Wolfe conditions. What is the purpose of these conditions? If c₁ = 0.1 and c₂ = 0.8, what are the admissible values for the steplength α?

e) Do one step of the line search method, using the exact value for the steplength.

Solution:
a) Gradient: ∇f(x, y) = (x + cos y, −x sin y)^T.
Hessian: ∇²f(x, y) = [ 1, −sin y ; −sin y, −x cos y ].

b) There are two sets of solutions of the first-order condition ∇f(x, y) = 0:

1. The first set is x = 0, y = (n + 1/2)π for n = 0, ±1, ±2, .... In this case the Hessian is
[ 1, (−1)ⁿ⁺¹ ; (−1)ⁿ⁺¹, 0 ]
with eigenvalues λ = (1 ± √5)/2. One eigenvalue is positive and one is negative, so these points are not minima (they are saddle points).

2. The second set is x = (−1)ⁿ⁺¹, y = nπ, for n = 0, ±1, ±2, .... The Hessian at these points is simply the identity matrix, which obviously is SPD, so these are strict local minima. Further,
f((−1)ⁿ⁺¹, nπ) = 1/2 + (−1)ⁿ⁺¹ cos nπ = 1/2 − 1 = −1/2,
and since f(x, y) = ½x² + x cos y ≥ ½x² − |x| ≥ −1/2 for all (x, y), we can conclude that these are also global minima.

c) The direction p₀ is a descent direction if ∇f(x₀)^T p₀ < 0. In our case
∇f(x₀)^T p₀ = (1 + √2/2, −√2/2)(−1, 0)^T = −1 − √2/2 < 0,
so p₀ is a descent direction.

d) The Wolfe conditions are (in general)
f(x_k + α_k p_k) ≤ f(x_k) + c₁ α_k ∇f(x_k)^T p_k,
∇f(x_k + α_k p_k)^T p_k ≥ c₂ ∇f(x_k)^T p_k,
for two parameters c₁ and c₂ satisfying 0 < c₁ < c₂ < 1. Choosing the steplengths such that the Wolfe conditions are satisfied ensures a sufficient decrease of the objective function from one iteration to the next, which is needed to guarantee convergence of the line search method; see Theorem 3.2 in N&W for details. This is critical when the one-dimensional minimization problem of a line search step is only solved approximately.

With the given f, x₀ and p₀, the two conditions become
α ≤ (1 − c₁)(2 + √2),
α ≥ (1 − c₂)(1 + √2/2),
or, with c₁ = 0.1 and c₂ = 0.8,
0.3414 ≤ α ≤ 3.073.
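As an added check (not part of the exam solution), the admissible Wolfe interval can be confirmed numerically. The sketch below, in Python with NumPy, scans a grid of steplengths for the given f, x₀, p₀ and c₁ = 0.1, c₂ = 0.8; the grid range and resolution are arbitrary choices.

```python
# Numerical sanity check for Problem 1 a)-d); a minimal sketch using NumPy only.
import numpy as np

f = lambda x, y: 0.5 * x**2 + x * np.cos(y)
grad = lambda x, y: np.array([x + np.cos(y), -x * np.sin(y)])

x0, p = np.array([1.0, np.pi / 4]), np.array([-1.0, 0.0])
phi = lambda a: f(*(x0 + a * p))           # one-dimensional restriction along p
dphi = lambda a: grad(*(x0 + a * p)) @ p   # its derivative

c1, c2 = 0.1, 0.8
alphas = np.linspace(1e-3, 4.0, 4000)
wolfe = [a for a in alphas
         if phi(a) <= phi(0) + c1 * a * dphi(0)   # sufficient decrease
         and dphi(a) >= c2 * dphi(0)]             # curvature condition
print(min(wolfe), max(wolfe))   # approx. 0.3414 and 3.073, matching (1-c2)(1+sqrt(2)/2) and (1-c1)(2+sqrt(2))
```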
e) One step of the line search method means solving the one-dimensional problem
min_{α>0} f(x₀ + αp₀) = min_{α>0} { ½(1 − α)² + (1 − α)·√2/2 },
with solution α₀ = 1 + √2/2. The new iterate is then
x₁ = x₀ + α₀p₀ = (−√2/2, π/4)^T.
The value of the objective function is then f(x₁) = −1/4.
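The exact steplength and the new iterate can be cross-checked with a one-dimensional minimizer; the short sketch below uses scipy.optimize.minimize_scalar on the restriction φ(α) = f(x₀ + αp₀), with the bracket (0, 4) chosen arbitrarily. It is a verification aid only.

```python
# A small check of Problem 1 e): the exact minimizer of the 1-D restriction.
import numpy as np
from scipy.optimize import minimize_scalar

f = lambda x, y: 0.5 * x**2 + x * np.cos(y)
x0, p = np.array([1.0, np.pi / 4]), np.array([-1.0, 0.0])

res = minimize_scalar(lambda a: f(*(x0 + a * p)), bounds=(0, 4), method="bounded")
x1 = x0 + res.x * p
print(res.x, 1 + np.sqrt(2) / 2)   # both approx. 1.7071
print(x1, f(*x1))                  # x1 approx. (-0.7071, pi/4), f(x1) approx. -0.25
```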
Problem 2
Given the linear problem
min x₁ − 3x₂
subject to
−x₁ + 2x₂ ≤ 6
x₁ + x₂ ≤ 5
x₁, x₂ ≥ 0

a) Make a sketch of the feasible domain, and solve the problem graphically.

b) Write the problem in standard form.

c) Write up the dual of this problem. State the relations between the primal and the dual problem (the duality theorem).

d) Explain the idea of the simplex method for linear problems. Perform one step of the method on the given problem, starting from (x₁, x₂) = (0, 0).

Solution:

a) The minimum is at the intersection of the two constraint lines
−x₁ + 2x₂ = 6, x₁ + x₂ = 5,
that is, x₁ = 4/3 and x₂ = 11/3.
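A quick way to confirm the graphical solution is to evaluate the objective at the vertices of the feasible polytope. The following sketch (added here for verification, using exact fractions) does exactly that.

```python
# Vertex check for Problem 2 a): evaluate x1 - 3*x2 at the feasible vertices.
from fractions import Fraction as Fr

vertices = [(Fr(0), Fr(0)), (Fr(5), Fr(0)), (Fr(0), Fr(3)), (Fr(4, 3), Fr(11, 3))]
for x1, x2 in vertices:
    feasible = (-x1 + 2 * x2 <= 6) and (x1 + x2 <= 5) and x1 >= 0 and x2 >= 0
    print(f"({x1}, {x2})  feasible={feasible}  objective={x1 - 3 * x2}")
# The smallest value, -29/3, is attained at (4/3, 11/3), as found graphically.
```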
b) The standard form of an LP problem is
min c^T x subject to Ax = b, x ≥ 0.
In our case, using slack variables x₃ and x₄, this is
min x₁ − 3x₂
subject to
−x₁ + 2x₂ + x₃ = 6
x₁ + x₂ + x₄ = 5
x₁, x₂, x₃, x₄ ≥ 0,
so
A = [ −1 2 1 0 ; 1 1 0 1 ], b = (6, 5)^T, c^T = (1, −3, 0, 0).

c) The dual problem is
max b^T λ subject to A^T λ ≤ c,
where λ is the vector of Lagrange multipliers for the equality constraints. In our case this is
max 6λ₁ + 5λ₂
subject to
−λ₁ + λ₂ ≤ 1
2λ₁ + λ₂ ≤ −3
λ₁ ≤ 0
λ₂ ≤ 0.
For the relation between the primal and the dual problem (the duality theorem), see Theorem 13.1 in N&W.
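Both the primal and the dual above can be passed to an LP solver to confirm that their optimal values coincide, as the duality theorem predicts. The sketch below uses scipy.optimize.linprog with the HiGHS method (assuming a reasonably recent SciPy); it is a verification aid, not part of the required solution.

```python
# Strong duality check for Problem 2 b)-c).
from scipy.optimize import linprog

# Primal: min x1 - 3*x2  s.t.  -x1 + 2*x2 <= 6,  x1 + x2 <= 5,  x >= 0
primal = linprog(c=[1, -3], A_ub=[[-1, 2], [1, 1]], b_ub=[6, 5],
                 bounds=[(0, None), (0, None)], method="highs")

# Dual: max 6*l1 + 5*l2  s.t.  -l1 + l2 <= 1,  2*l1 + l2 <= -3,  l1, l2 <= 0
# (linprog minimizes, so the dual objective is negated)
dual = linprog(c=[-6, -5], A_ub=[[-1, 1], [2, 1]], b_ub=[1, -3],
               bounds=[(None, 0), (None, 0)], method="highs")

print(primal.x, primal.fun)   # approx. (4/3, 11/3) and -29/3
print(-dual.fun)              # equals the primal optimum, -29/3, by strong duality
```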
d) The rough idea of the simplex method is the following: the feasible domain Ω of an LP problem is a polytope. Start from a vertex (a basic feasible point) of this polytope, and move from vertex to vertex along edges, reducing the objective function c^T x. If a minimum exists, it is attained at a vertex and will eventually be found. How this is done mathematically is best explained on the example.

In our case we start with x₁ = x₂ = 0. Since the point has to be feasible, Ax = b has to be satisfied, so x₃ = 6 and x₄ = 5. This fulfils the definition of a basic feasible point (p. 363 in N&W), with basis B = {3, 4}. Do the corresponding splitting of the system, that is,
x = (0, 0, 6, 5)^T = (v^T, x_B^T)^T, A = [N, B], c^T = (c_N^T, c_B^T),
where v collects the nonbasic variables x₁, x₂ and x_B the basic variables x₃, x₄. From the definition of a basic feasible point, B has to be invertible; in this case B is simply the identity matrix.

The idea is now to search for an x(t) ∈ Ω such that x(0) = (0^T, x_B^T)^T and c^T x(t) decreases as t increases. Let x(t) = (v(t)^T, y(t)^T)^T. We get
Ax(t) = Nv(t) + By(t) = b  ⟹  y(t) = B⁻¹(b − Nv(t)) = x_B − B⁻¹Nv(t),
c^T x(t) = c_N^T v(t) + c_B^T y(t) = c_B^T x_B + (c_N^T − c_B^T B⁻¹N) v(t).
In our case the reduced cost vector is c_N^T − c_B^T B⁻¹N = (1, −3). So, by choosing v(t) = (0, t)^T, c^T x(t) will decrease with increasing t. The basic components are then y(t) = (6, 5)^T − (2, 1)^T t. They should both stay non-negative, so we can only increase t up to t = 3, in which case y(3) = (0, 2)^T, v(3) = (0, 3)^T, and the new point is x = (0, 3, 0, 2)^T, which is what was expected from the figure in point a).
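The single simplex step just described can be reproduced in a few lines of NumPy. The sketch below mirrors the splitting A = [N, B] with basis {3, 4}; the variable names are mine and the code only recomputes the numbers derived above.

```python
# One simplex step for Problem 2 d), following the splitting used in the text.
import numpy as np

A = np.array([[-1.0, 2.0, 1.0, 0.0],
              [ 1.0, 1.0, 0.0, 1.0]])
b = np.array([6.0, 5.0])
c = np.array([1.0, -3.0, 0.0, 0.0])

basic, nonbasic = [2, 3], [0, 1]           # zero-based indices of (x3, x4) and (x1, x2)
B, N = A[:, basic], A[:, nonbasic]
xB = np.linalg.solve(B, b)                 # current basic feasible point: (6, 5)

reduced = c[nonbasic] - c[basic] @ np.linalg.solve(B, N)
print(reduced)                             # [ 1. -3.]  ->  x2 enters the basis

enter = nonbasic[int(np.argmin(reduced))]  # most negative reduced cost: x2
d = np.linalg.solve(B, A[:, enter])        # direction in the basic variables: (2, 1)
ratios = np.where(d > 0, xB / d, np.inf)
t = ratios.min()                           # ratio test: t = 3, x3 leaves the basis
x_new = np.zeros(4)
x_new[basic] = xB - t * d
x_new[enter] = t
print(x_new)                               # [0. 3. 0. 2.]
```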
Problem 3

a) Define a (strictly) convex function and a convex set.

b) Let the set Ω be defined by
Ω = {x ∈ Rⁿ : c_i(x) ≤ 0, i = 1, ..., m}.
Show that Ω is convex if all the functions c_i, i = 1, 2, ..., m, are convex.

Consider the following constrained optimization problem:
min x₁
subject to
x₁ − x₂² − x₃² ≥ 0
x₂ + x₃ ≥ 1

c) Set up the KKT conditions for this problem.

d) Find the KKT point(s). What can you say about the optimality of these points (if they exist)?

Solution:

a) A set Ω is convex if, for all x, y ∈ Ω,
θx + (1 − θ)y ∈ Ω for all θ ∈ (0, 1).
A function f defined on a convex set Ω is convex if, for all x, y ∈ Ω,
f(θx + (1 − θ)y) ≤ θf(x) + (1 − θ)f(y) for all θ ∈ (0, 1).
It is strictly convex if the inequality above is strict whenever x ≠ y.

b) Let x, y ∈ Ω and θ ∈ (0, 1). For each i we have
c_i(θx + (1 − θ)y) ≤ θc_i(x) + (1 − θ)c_i(y) ≤ 0,
where the first inequality holds because c_i is convex, and the second because c_i(x) ≤ 0, c_i(y) ≤ 0 and θ, 1 − θ > 0. This holds for all i = 1, ..., m, so θx + (1 − θ)y ∈ Ω, and we conclude that Ω is convex.
c) The Lagrangian is
L(x, λ) = x₁ − λ₁(x₁ − x₂² − x₃²) − λ₂(x₂ + x₃ − 1),
and the KKT conditions are
∇ₓL(x, λ) = 0,                         (a)
λ_i c_i(x) = 0,  i = 1, 2,             (b)
λ_i ≥ 0,  i = 1, 2,                    (c)
c_i(x) ≥ 0,  i = 1, 2,                 (d)
which become
1 − λ₁ = 0,                            (a')
2λ₁x₂ − λ₂ = 0,
2λ₁x₃ − λ₂ = 0,
λ₁(x₁ − x₂² − x₃²) = 0,                (b')
λ₂(x₂ + x₃ − 1) = 0,
λ₁ ≥ 0, λ₂ ≥ 0,                        (c')
x₁ − x₂² − x₃² ≥ 0, x₂ + x₃ − 1 ≥ 0.   (d')

d) Clearly, λ₁ = 1 from (a'), so (c') is satisfied for this multiplier. Further, from (b'), x₁ = x₂² + x₃². If λ₂ = 0, then from (a') x₂ = x₃ = 0 and then x₁ = 0. But this solution does not satisfy the last inequality of (d'), so we conclude that λ₂ ≠ 0. The last two equations of (a') together with the last equation of (b') are
2x₂ − λ₂ = 0, 2x₃ − λ₂ = 0, x₂ + x₃ − 1 = 0,
with the solution λ₂ = 1 and x₂ = x₃ = 1/2. We then have the following solution:
x₁ = 1/2, x₂ = x₃ = 1/2, λ₁ = λ₂ = 1.   (*)
Finally, we have to verify that the LICQ condition holds, that is, that the gradients ∇c_i(x) of the active constraints are linearly independent at this point. In our case both constraints are active, and
∇c₁(x) = (1, −2x₂, −2x₃)^T = (1, −1, −1)^T, ∇c₂(x) = (0, 1, 1)^T,
which obviously are linearly independent. So (*) is a KKT point.

Since the objective function x₁ is linear and thus convex, the feasible domain Ω is convex (both c_i, i = 1, 2, are concave, so the sets {c_i(x) ≥ 0} are convex), and we have found a KKT point, we know that this is a global minimum. And since there was only one solution of the KKT conditions, the minimum is unique.
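As an illustration, the KKT point can be cross-checked with a general-purpose NLP solver; the sketch below uses scipy.optimize.minimize with the SLSQP method and an arbitrary feasible starting point, and is not part of the required solution.

```python
# Numerical cross-check of the KKT point in Problem 3 d).
import numpy as np
from scipy.optimize import minimize

cons = [{"type": "ineq", "fun": lambda x: x[0] - x[1]**2 - x[2]**2},  # x1 - x2^2 - x3^2 >= 0
        {"type": "ineq", "fun": lambda x: x[1] + x[2] - 1.0}]          # x2 + x3 - 1 >= 0
res = minimize(lambda x: x[0], x0=np.array([1.0, 0.3, 0.8]),
               constraints=cons, method="SLSQP")
print(res.x)    # approx. [0.5, 0.5, 0.5], the unique KKT point found above
print(res.fun)  # approx. 0.5, the global minimum value
```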
Problem 4
Let
F(y) = ∫₁² (y′(x))²/x dx
be defined on
D = {y ∈ C¹[1, 2] : y(1) = 0, y(2) = 3}.

a) Solve the problem min_{y ∈ D} F(y).

b) Solve the problem from a), but now with the additional constraint ∫₁² y(x) dx = 1.

Solution:

a) First of all, notice that the integrand only depends on y′. On [1, 2] the function f(x, z) = z²/x is strongly convex in z, since
f(x, z + w) − f(x, z) − f_z(x, z)w = (1/x)((z + w)² − z² − 2zw) = w²/x ≥ 0,
with equality if and only if w = 0. The Euler–Lagrange equation d/dx f_z[y(x)] = f_y[y(x)] becomes
d/dx (2y′(x)/x) = 0,
so 2y′(x)/x = C for some constant C. Thus y′(x) = ½Cx and
y(x) = ¼Cx² + D.
Inserting the boundary conditions from D, y(1) = 0 and y(2) = 3, the solution becomes
y(x) = x² − 1,
which is a minimizer since F is strictly convex on D.
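A small numerical experiment supports the claim that y(x) = x² − 1 is the minimizer: perturbing it by functions that vanish at the endpoints can only increase F. The sketch below, using scipy.integrate.quad and one particular family of perturbations chosen for illustration, shows this; it is an added check, not part of the exam.

```python
# Check of Problem 4 a): F(y) for y(x) = x^2 - 1 versus admissible perturbations.
from scipy.integrate import quad

def F(yprime):
    # F depends only on y', so it is enough to pass the derivative
    return quad(lambda x: yprime(x)**2 / x, 1.0, 2.0)[0]

ystar = lambda x: 2.0 * x                   # derivative of y(x) = x^2 - 1
print(F(ystar))                             # 6.0, the minimum value

for eps in (0.5, -0.5, 2.0):
    # perturbation eps*(x-1)*(2-x) vanishes at x = 1 and x = 2
    ypert = lambda x, e=eps: 2.0 * x + e * (3.0 - 2.0 * x)   # derivative of y + eps*(x-1)*(2-x)
    print(F(ypert) - F(ystar))              # always > 0: y(x) = x^2 - 1 is the minimizer
```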
b) In this case we want to minimize
F(y) + λ ∫₁² y(x) dx = ∫₁² ( (y′(x))²/x + λ y(x) ) dx
over D. The Euler–Lagrange equation becomes
d/dx (2y′(x)/x) = λ,
with solution
y(x) = (λ/6)x³ + ¼Cx² + D.
Inserting the boundary conditions gives C = 4 − 14λ/9 and D = −1 + 2λ/9, so
y(x) = (λ/6)x³ + (1 − 7λ/18)x² + 2λ/9 − 1.
Finally, λ is determined from the constraint:
∫₁² y(x) dx = 1  ⟹  λ = 72/13,
and the solution becomes
y(x) = (1/13)(12x³ − 15x² + 3).
Since the constraint is linear and thus convex, and F is strictly convex, y(x) is the minimizer of the constrained problem.
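The closed-form solution can be checked against the boundary conditions, the integral constraint and the Euler–Lagrange equation; the following sketch does so numerically and is only a verification aid.

```python
# Verification sketch for Problem 4 b).
import numpy as np
from scipy.integrate import quad

y = lambda x: (12.0 * x**3 - 15.0 * x**2 + 3.0) / 13.0
print(y(1.0), y(2.0))               # 0.0 and 3.0: the boundary conditions hold
print(quad(y, 1.0, 2.0)[0])         # 1.0: the integral constraint holds

# Euler-Lagrange check: d/dx (2*y'(x)/x) should be the constant lambda = 72/13
yprime = lambda x: (36.0 * x**2 - 30.0 * x) / 13.0
g = lambda x: 2.0 * yprime(x) / x   # = (72*x - 60)/13, a linear function
xs = np.linspace(1.0, 2.0, 5)
print(np.gradient(g(xs), xs))       # approx. 72/13 = 5.538 everywhere
```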
Problem 5
Consider the integral functional
J(y) = ∫₀¹ ( y(x) + (y′(x))² + y(x)y′(x) ) dx
for y ∈ C²[0, 1].

a) Find the Gâteaux derivative δJ(y; v) of J.

b) Find y ∈ C²[0, 1] such that δJ(y; v) = 0 for all v ∈ C²[0, 1]. Is this a minimizer of J? Justify your answer.

Solution:

a) Use
δJ(y; v) = d/dε J(y + εv) |_{ε=0},
which becomes
δJ(y; v) = ∫₀¹ d/dε ( (y + εv) + (y′ + εv′)² + (y + εv)(y′ + εv′) ) |_{ε=0} dx
         = ∫₀¹ ( v + 2y′v′ + y′v + yv′ ) dx.
By partial integration the Gâteaux derivative can be rewritten:
δJ(y; v) = ∫₀¹ v dx − ∫₀¹ 2y″v dx + [2y′v]₀¹ + ∫₀¹ y′v dx − ∫₀¹ y′v dx + [yv]₀¹
         = ∫₀¹ (1 − 2y″)v dx + [(2y′ + y)v]₀¹.    (1)

b) From (1), δJ(y; v) = 0 for all v ∈ C²[0, 1] requires
1 − 2y″(x) = 0 on (0, 1),  2y′(0) + y(0) = 0,  2y′(1) + y(1) = 0,
which gives
y(x) = ¼x² − (5/4)x + 5/2.
This is not a minimizer, nor a maximizer. For instance, y(x) = K (some constant) is a function in C²[0, 1], and for this choice J(y) = K, which can be made as small or as large as we want. So J does not have a minimizer.
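Assuming the stationary function derived above, a finite-difference check of the Gâteaux derivative and an evaluation of J on constant functions illustrate both conclusions of part b). The directions v used below are arbitrary test functions, and the whole block is an added verification sketch.

```python
# Numerical check for Problem 5 b).
import numpy as np
from scipy.integrate import quad

def J(y, yp):
    return quad(lambda x: y(x) + yp(x)**2 + y(x) * yp(x), 0.0, 1.0)[0]

ystar = lambda x: 0.25 * x**2 - 1.25 * x + 2.5    # stationary function found above
ystarp = lambda x: 0.5 * x - 1.25

eps = 1e-4
for v, vp in [(np.sin, np.cos), (lambda x: x**2 + 1.0, lambda x: 2.0 * x)]:
    plus = J(lambda x: ystar(x) + eps * v(x), lambda x: ystarp(x) + eps * vp(x))
    minus = J(lambda x: ystar(x) - eps * v(x), lambda x: ystarp(x) - eps * vp(x))
    print((plus - minus) / (2.0 * eps))           # approx. 0: delta J vanishes in both directions

for K in (0.0, -100.0, -10000.0):
    print(J(lambda x: K, lambda x: 0.0))          # equals K: J is unbounded below, no minimizer
```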