SF2822 Applied nonlinear optimization, final exam Wednesday June 3, 2015


SF2822 Applied nonlinear optimization, final exam Wednesday June 3 2015, 14.00-19.00. Examiner: Anders Forsgren, tel. 08-790 71 27.

Allowed tools: pen/pencil, ruler and eraser. Note! Calculator is not allowed.

Solution methods: Unless otherwise stated in the text, the problems should be solved by systematic methods, which do not become unrealistic for large problems. Motivate your conclusions carefully. If you use methods other than those taught in the course, you must explain them carefully.

Note! Personal number must be written on the title page. Write only one question per sheet. Number the pages and write your name on each page.

22 points are sufficient for a passing grade. For 20-21 points, a completion to a passing grade may be made within three weeks from the date when the results of the exam are announced.

1. Consider the quadratic program (QP) defined by

(QP)  minimize ½ xᵀHx + cᵀx
      subject to Ax ≤ b,

where

H = [2 0; 0 2],  c = [-2; -9],  A = [-1 0; 0 -1; 1 0],  b = [0; 0; 6].

The problem may be illustrated geometrically in the figure below.

(a) Solve (QP) by an active-set method. Start at x = (2, 0)ᵀ with the constraint x2 ≥ 0 in the working set. You need not calculate any exact numerical values, but you may utilize the fact that the problem is two-dimensional and make a purely geometric solution. Illustrate your iterations in the figure corresponding to Question 1a, which can be found on the last sheet of the exam. Motivate each step carefully. (4p)
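One step of the active-set method in Question 1a amounts to solving an equality-constrained QP on the current working set. The sketch below shows the mechanics; the numerical data (H, c, the start point and the working-set constraint x2 ≥ 0) follow a tentative reading of the garbled problem statement and should be treated as assumptions, not as the exam's exact values.

```python
import numpy as np

def eq_qp_step(H, c, A_w, x):
    """Solve the working-set subproblem: minimize 1/2 (x+p)^T H (x+p) + c^T (x+p)
    subject to A_w p = 0, via its KKT system.  The multiplier sign follows the
    convention L(x, lambda) = f(x) - lambda^T g(x), so a negative multiplier
    signals that the corresponding constraint should be dropped."""
    m, n = A_w.shape[0], len(x)
    K = np.block([[H, -A_w.T], [A_w, np.zeros((m, m))]])
    rhs = np.concatenate([-(H @ x + c), np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]

H = np.diag([2.0, 2.0])        # assumed Hessian (tentative reading of the data)
c = np.array([-2.0, -9.0])     # assumed linear term (tentative reading)
x = np.array([2.0, 0.0])       # start point of Question 1a
A_w = np.array([[0.0, 1.0]])   # working set: the constraint x2 >= 0

p, lam = eq_qp_step(H, c, A_w, x)
print(p, lam)                  # step keeps x2 = 0; lam < 0 means drop the constraint
```

With these data the step moves along x2 = 0 to the one-dimensional minimizer, and the negative multiplier then tells the method to delete x2 ≥ 0 from the working set, which is exactly the kind of decision the geometric solution must motivate.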

(b) Assume that b3 is changed from 6 to 4, so that the third constraint reads x1 ≤ 4. Solve the corresponding quadratic program by an active-set method. Start at x = (2, 0)ᵀ with no constraint in the working set. You need not calculate any exact numerical values, but you may utilize the fact that the problem is two-dimensional and make a purely geometric solution. Add the modified constraint and illustrate your iterations in the figure corresponding to Question 1b, which can be found on the last sheet of the exam. Motivate each step carefully. (6p)

2. Consider the nonlinear program (NLP) given by

(NLP)  minimize (x1 − 1)² + ½(x2 − 2)²
       subject to 3/2 − ½x1² − ½x2² ≥ 0.

Assume that one wants to solve (NLP) by a sequential quadratic programming method from the initial point x(0) = (1, 2)ᵀ and λ(0) = 2.

(a) Your friend AF claims that there is no need to perform any iterations. He claims that x(0) must be a global minimizer of (NLP), since (NLP) is a convex optimization problem and f(x(0)) = 0. Explain why he is wrong. (2p)

(b) Perform one iteration of sequential quadratic programming for solving (NLP) for the given x(0) and λ(0), i.e., calculate x(1) and λ(1). You may solve the subproblem in an arbitrary way that need not be systematic, and you do not need to perform any linesearch. (8p)

Remark: In accordance with the notation of the textbook, the sign of λ is chosen such that L(x, λ) = f(x) − λᵀg(x).

3. Consider again the nonlinear program (NLP) of Question 2, given by

(NLP)  minimize (x1 − 1)² + ½(x2 − 2)²
       subject to 3/2 − ½x1² − ½x2² ≥ 0.

Assume that one wants to solve (NLP) by a primal-dual interior method. Your friend AF insists on initializing with x(0) = (1, 2)ᵀ and λ(0) = 2. (In accordance with the notation of the textbook, we define the Lagrangian as L(x, λ) = f(x) − λᵀg(x).)
He claims that this can be done, but unfortunately he has forgotten how. Help AF set up a primal-dual interior method for which he can make use of the given x(0) and λ(0) as initial point. Let µ = 2 and set up the linear system of equations to be solved at the first iteration for the given x(0), λ(0) and µ. First give the linear equations in general form and then insert numerical values. You need not solve the linear equations, but state how you would use the solution to generate the next iterate x(1), λ(1). (10p)

Hint: You may want to introduce additional variables and/or constraints for which you need to set appropriate initial values.
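The linear system asked for in Question 3 can be assembled concretely. The sketch below follows the hint by introducing a slack variable s with g(x) − s = 0 and perturbed complementarity sλ = µ; the objective and constraint formulas, and the initial values x(0) = (1, 2), λ(0) = 2, µ = 2, are a tentative reading of the garbled transcription, and the choice s(0) = µ/λ(0) is an assumption made here so that complementarity holds exactly at the start.

```python
import numpy as np

# Assumed problem data (tentative reading of the transcription):
#   f(x) = (x1 - 1)^2 + (1/2)(x2 - 2)^2
#   g(x) = 3/2 - (1/2)x1^2 - (1/2)x2^2 >= 0
x = np.array([1.0, 2.0])
lam, mu = 2.0, 2.0
s = mu / lam                       # assumed slack initialization: s*lam = mu

grad_f = np.array([2.0 * (x[0] - 1.0), x[1] - 2.0])
grad_g = np.array([-x[0], -x[1]])
g_val = 1.5 - 0.5 * x[0] ** 2 - 0.5 * x[1] ** 2
hess_f = np.diag([2.0, 1.0])
hess_g = -np.eye(2)
hess_L = hess_f - lam * hess_g     # Hessian of L(x, lambda) = f(x) - lambda*g(x)

# Newton system on: grad_f - lam*grad_g = 0,  g - s = 0,  s*lam = mu,
# in the unknowns (dx1, dx2, ds, dlam).
K = np.array([
    [hess_L[0, 0], hess_L[0, 1], 0.0, -grad_g[0]],
    [hess_L[1, 0], hess_L[1, 1], 0.0, -grad_g[1]],
    [grad_g[0],    grad_g[1],   -1.0,  0.0      ],
    [0.0,          0.0,          lam,  s        ],
])
rhs = -np.array([
    grad_f[0] - lam * grad_g[0],
    grad_f[1] - lam * grad_g[1],
    g_val - s,
    s * lam - mu,
])
delta = np.linalg.solve(K, rhs)
print(delta)   # the next iterate is x + alpha*dx, s + alpha*ds, lam + alpha*dlam,
               # with a steplength alpha keeping s and lam strictly positive
```

Note that the starting point is infeasible (g(x(0)) < 0), which the slack formulation tolerates: only s > 0 and λ > 0 are required, not feasibility of x itself.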

4. Consider the semidefinite programming problem (P) defined as

(P)  minimize cᵀx
     subject to G(x) ⪰ 0,

where G(x) = A1 x1 + … + An xn − B, for B and Aj, j = 1, …, n, symmetric m×m matrices. The corresponding dual problem is given by

(D)  maximize trace(BY)
     subject to trace(Aj Y) = cj, j = 1, …, n,
                Y = Yᵀ ⪰ 0.

A barrier transformation of (P) for a fixed positive barrier parameter µ gives the problem

(Pµ)  minimize cᵀx − µ ln(det(G(x))).

(a) Show that the first-order necessary optimality conditions for (Pµ) are equivalent to the system of nonlinear equations

cj − trace(Aj Y) = 0, j = 1, …, n,
G(x)Y − µI = 0,

assuming that G(x) ≻ 0 and Y ≻ 0 are kept implicitly. (5p)

(b) Show that a solution x(µ) and Y(µ) to the system of nonlinear equations, such that G(x(µ)) ≻ 0 and Y(µ) ≻ 0, is feasible to (P) and (D) respectively, with duality gap mµ. (3p)

(c) In linear programming, when G(x) and Y are diagonal, it is not an issue how the equation G(x)Y − µI = 0 is written. The linearizations of G(x)Y − µI = 0 and YG(x) − µI = 0 are then identical. Explain why this is in general not the case for semidefinite programming and how it can be handled. (2p)

Remark: For a symmetric matrix M we above use M ≻ 0 and M ⪰ 0 to denote that M is positive definite and positive semidefinite, respectively. You may use the relation

∂ ln(det(G(x))) / ∂xj = trace(Aj G(x)⁻¹) for j = 1, …, n,

without proof.

5. Consider the indefinite quadratic program (QP) defined by

(QP)  minimize ½ xᵀDx + cᵀx
      subject to Ax ≥ b,

with

D = [1 0 0 0; 0 2 0 0; 0 0 -3 0; 0 0 0 -4],  c = [2; 2; 4; 3],  A = [1 1 1 1],  b = [4].

You may throughout use the fact that D is diagonal.

(a) Determine how many negative eigenvalues D has. Also determine the maximum number of active constraints (QP) may have at any feasible point. Use these two facts to motivate why (QP) cannot have any local minimizer. (3p)

(b) Is it possible to add a bound constraint to (QP) such that (1, 1, 1, 1)ᵀ is a local minimizer of the resulting problem? If so, determine such a constraint. (7p)

Note: A bound constraint is a constraint of the form xi ≥ li or xi ≤ ui, for some i, i = 1, …, 4, where li or ui is a given numeric value.

Good luck!
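The inertia argument behind Question 5a can be checked numerically. The data below (D = diag(1, 2, −3, −4), a single constraint row A = (1 1 1 1)) follow a tentative reading of the garbled transcription and should be treated as assumptions; the structure of the argument, not the exact numbers, is the point.

```python
import numpy as np

D = np.diag([1.0, 2.0, -3.0, -4.0])   # assumed diagonal Hessian (tentative reading)
A = np.array([[1.0, 1.0, 1.0, 1.0]])  # assumed single constraint row (tentative reading)

# Number of negative eigenvalues of D (trivial for a diagonal matrix).
neg_eigs = int(np.sum(np.linalg.eigvalsh(D) < 0))

# With one constraint row, at most one constraint can be active, so the null
# space of the active constraints has dimension >= 3.  A null-space basis of A
# via the SVD (the rows of Vt beyond the rank span the null space):
_, _, Vt = np.linalg.svd(A)
Z = Vt[1:].T                          # 4 x 3 null-space basis

# Any 3-dimensional subspace of R^4 must intersect the 2-dimensional subspace
# on which D is negative definite, so the reduced Hessian is indefinite and no
# feasible point can satisfy the second-order necessary optimality conditions.
reduced = Z.T @ D @ Z
print(neg_eigs, np.linalg.eigvalsh(reduced).min())
```

The same computation with an added bound constraint (a second row e_i in A) shows how Question 5b reduces to asking whether some 2-dimensional null space makes the reduced Hessian positive semidefinite.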

Name: ................  Personal number: ................  Sheet number: ................

Figure for Question 1a:

Figure for Question 1b: