SF2822 Applied nonlinear optimization, final exam Saturday December 20 2008


SF2822 Applied nonlinear optimization, final exam
Saturday December 20 2008, 8.00-13.00
Examiner: Anders Forsgren, tel. 790 71 27.

Allowed tools: Pen/pencil, ruler and eraser; a calculator is provided by the department.

Solution methods: Unless otherwise stated in the text, the problems should be solved by systematic methods that do not become unrealistic for large problems. If you use methods other than those taught in the course, you must explain them carefully.

Note! The personal number must be written on the title page. Write only one exercise per sheet. Number the pages and write your name on each page. 22 points are sufficient for a passing grade. For 20-21 points, a completion to a passing grade may be made within three weeks from the date when the results of the exam are announced.

1. Consider the nonlinear optimization problem (NLP) defined as

   (NLP)   minimize    e^{x1} + (1/2)(x1 + x2 - 4)^2 + (x1 - x2)^2
           subject to  -(x1 - 3)^2 - x2^2 + 9 >= 0.

You have obtained a printout from a sequential quadratic programming solver for this problem. The initial point is x = (0, 0)^T and lambda = 0. Six iterations, without linesearch, have been performed. The printout, where the floating-point numbers are given with four decimal places, reads:

   It   x1       x2       lambda   ||grad f(x) - grad g(x) lambda||
   0    0        0        0        5.0000
   1    0        1.3333   0.7222   1.9259
   2    1.0117   2.9430   0.5596   1.1891
   3    0.7084   2.1240   0.4384   0.2028
   4    0.7990   2.0422   0.3294   0.0335
   5    0.7983   2.0378   0.3227   0.0001
   6    0.7984   2.0378   0.3227   0.0000

(a) Why can the above printout not be correct? Give a reason that does not require any calculations beyond the printout above. (2p)

(b) Formulate the first QP problem. Solve this QP problem by any method; the method need not be systematic. (6p)

(c) The printout corresponds to a problem related to (NLP). Which one? (2p)

Note: Following the convention of the book, we define the Lagrangian L(x, lambda) as L(x, lambda) = f(x) - lambda^T g(x), where f(x) is the objective function and g(x) is the constraint function, and the inequality constraint is written as g(x) >= 0.
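For part (b), the data of the first QP subproblem at x = (0, 0)^T with lambda = 0 can be checked numerically. The sketch below assumes the objective and constraint exactly as reconstructed above; the helper names are illustrative only, not from any course code.

```python
import numpy as np

# Sketch (under the reconstructed sign conventions):
#   f(x) = e^{x1} + (1/2)(x1 + x2 - 4)^2 + (x1 - x2)^2
#   g(x) = -(x1 - 3)^2 - x2^2 + 9 >= 0

def grad_f(x):
    x1, x2 = x
    return np.array([np.exp(x1) + (x1 + x2 - 4.0) + 2.0 * (x1 - x2),
                     (x1 + x2 - 4.0) - 2.0 * (x1 - x2)])

def hess_f(x):
    x1, _ = x
    return np.array([[np.exp(x1) + 3.0, -1.0],
                     [-1.0, 3.0]])

def g(x):
    x1, x2 = x
    return -(x1 - 3.0) ** 2 - x2 ** 2 + 9.0

def grad_g(x):
    x1, x2 = x
    return np.array([-2.0 * (x1 - 3.0), -2.0 * x2])

x0 = np.array([0.0, 0.0])
# With lambda = 0 the Hessian of the Lagrangian equals hess_f, so the
# first QP subproblem in the step p reads
#   minimize   grad_f(x0)^T p + (1/2) p^T hess_f(x0) p
#   subject to g(x0) + grad_g(x0)^T p >= 0,
# which here reduces to the constraint 6 p_1 >= 0, since g(x0) = 0.
print(grad_f(x0))           # gradient at x0: (-3, -4)
print(hess_f(x0))           # Hessian at x0: [[4, -1], [-1, 3]]
print(g(x0), grad_g(x0))    # 0 and (6, 0)
```

Note that the constraint is active at the starting point, so the QP linearization passes through the origin in p-space.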

2. Consider the quadratic program (QP) defined by

   (QP)   minimize    3x1^2 - 2x1x2 + 3x2^2 - 24x1 - 8x2
          subject to  x1 + x2 <= 5,
                      x1 >= 0,
                      x2 >= 0.

The problem may be illustrated geometrically in the figure below.

(a) Solve (QP) by an active-set method. Start at x = (1, 0)^T with the constraint x2 >= 0 active. You need not calculate any exact numerical values; you may utilize the fact that the problem is two-dimensional and make a purely geometric solution. Illustrate your iterations in the figure corresponding to Exercise 2a, which can be found on the last sheet of the exam. Motivate each step carefully. (4p)

(b) Assume that the constraint x2 - x1 <= 3 is added to (QP), so that we obtain the problem (QP') according to

   (QP')  minimize    3x1^2 - 2x1x2 + 3x2^2 - 24x1 - 8x2
          subject to  x1 + x2 <= 5,
                      x1 >= 0,
                      x2 >= 0,
                      x2 - x1 <= 3.

Solve (QP') by an active-set method. Start at x = (1, 0)^T with the constraint x2 >= 0 active. You need not calculate any exact numerical values; you may utilize the fact that the problem is two-dimensional and make a purely geometric solution. Add the constraint x2 - x1 <= 3 and illustrate your iterations in the figure corresponding to Exercise 2b, which can be found on the last sheet of the exam. Motivate each step carefully. (6p)
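A numerical companion to the geometric reasoning in part (a), assuming the objective and constraints exactly as reconstructed above. This is only an aid for checking the final point, not the systematic active-set solution the exam asks for.

```python
import numpy as np

# q(x) = 3x1^2 - 2x1x2 + 3x2^2 - 24x1 - 8x2, written as
# q(x) = (1/2) x^T H x + c^T x:
H = np.array([[6.0, -2.0], [-2.0, 6.0]])
c = np.array([-24.0, -8.0])

# The unconstrained minimizer solves H x = -c; it violates x1 + x2 <= 5,
# so the active-set iterations must end on that constraint.
x_unc = np.linalg.solve(H, -c)          # (5, 3), infeasible

# Equality-constrained subproblem on x1 + x2 = 5: solve the KKT system
#   [ H   a ] [ x]    [-c]
#   [ a^T 0 ] [mu] =  [ b]    with  a = (1, 1)^T, b = 5.
a = np.array([1.0, 1.0])
K = np.block([[H, a[:, None]], [a[None, :], np.zeros((1, 1))]])
x1, x2, mu = np.linalg.solve(K, np.append(-c, 5.0))
# x = (3.5, 1.5) with multiplier mu = 6 >= 0 (Lagrangian q + mu(a^T x - b)),
# and the bound constraints are inactive there, so this point satisfies
# the optimality conditions for (QP).
print(x_unc, x1, x2, mu)
```

The same final point remains feasible for the added constraint x2 - x1 <= 3 in part (b); what changes there is the iteration path, which the figure on the last sheet is meant to capture.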

3. Consider the nonlinear programming problem

   (P)   minimize    f(x)
         subject to  g(x) >= 0,

where f : R^n -> R and g : R^n -> R^m are continuously differentiable. A barrier transformation of (P) for a fixed positive barrier parameter mu gives the problem

   (P_mu)   minimize  f(x) - mu * sum_{i=1}^{m} ln(g_i(x)).

Show that the first-order necessary optimality conditions for (P_mu) are equivalent to the system of nonlinear equations

   grad f(x) - grad g(x) lambda = 0,
   g_i(x) lambda_i - mu = 0,   i = 1, ..., m,

assuming that g(x) > 0, where lambda > 0 is kept implicitly. (10p)

4. Consider the nonlinear programming problem (NLP) defined by

   (NLP)  minimize    e^{x1} + (1/2)x1^2 + x1x2 + (1/2)x2^2 + x3^2 - 2x1 - x2 - 3x3
          subject to  x1^2 + x2^2 + x3^2 + x1 + x2 - x3 = 0,
                      x1^2 - x2^2 - x3^2 >= -2.

Let x = (0, 0, 1)^T.

(a) Show that x fulfils the first-order necessary optimality conditions for (NLP). (3p)

(b) Show that x fulfils the second-order sufficient optimality conditions for (NLP). (4p)

(c) Does it hold that x is a global minimizer to (NLP)? (3p)

5. For a given symmetric n x n matrix W, the so-called two-way partitioning problem may be posed as

   minimize_{x in R^n}  x^T W x
   subject to           x_i^2 = 1,  i = 1, ..., n,

which is a nonconvex problem in general. By Lagrangian relaxation, one may obtain the dual problem as a semidefinite program (SDP1) on the form

   (SDP1)  maximize_{y in R^n}  e^T y
           subject to           diag(y) <= W  (meaning W - diag(y) is positive semidefinite),

Alternatively, one may reformulate the original problem as

   minimize_{X in S^n}  trace(W X)
   subject to           X_ii = 1,  i = 1, ..., n,
                        rank(X) = 1,
                        X positive semidefinite,

where S^n denotes the set of symmetric n x n matrices. If the rank constraint is relaxed, one obtains the semidefinite program (SDP2) given by

   (SDP2)  minimize_{X in S^n}  trace(W X)
           subject to           X_ii = 1,  i = 1, ..., n,
                                X positive semidefinite.

What is the relation between (SDP1) and (SDP2)? (10p)

Note: You need not consider the original problem; you may consider (SDP1) and (SDP2) only.

Hint 1: Relate (SDP1) and (SDP2) via duality. Recall that a primal-dual pair of semidefinite programs is given by

   (PSDP)  minimize_{X in S^n}  trace(C X)
           subject to           trace(A_i X) = b_i,  i = 1, ..., m,
                                X positive semidefinite,

and

   (DSDP)  maximize_{y in R^m}  sum_{i=1}^{m} b_i y_i
           subject to           sum_{i=1}^{m} A_i y_i <= C  (in the semidefinite sense).

Hint 2: Note that if M is a symmetric n x n matrix and u is an n-vector, then u^T M u = trace(M u u^T).

Good luck!
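The duality relation asked about in question 5 can be illustrated on a tiny instance. The instance below (n = 2 with a hand-checkable W) is an assumed example for illustration, not part of the exam.

```python
import numpy as np

W = np.array([[0.0, 1.0], [1.0, 0.0]])

# Feasible point for (SDP1): need W - diag(y) positive semidefinite.
y = np.array([-1.0, -1.0])
assert np.all(np.linalg.eigvalsh(W - np.diag(y)) >= -1e-12)

# Feasible point for (SDP2): X_ii = 1 and X positive semidefinite.
X = np.array([[1.0, -1.0], [-1.0, 1.0]])
assert np.all(np.linalg.eigvalsh(X) >= -1e-12)

# Weak duality: the (SDP1) objective e^T y never exceeds the (SDP2)
# objective trace(W X); here both equal -2, so each point is in fact
# optimal for its problem and strong duality holds on this instance.
print(y.sum(), np.trace(W @ X))   # -2.0 -2.0
```

Note also that X = x x^T for x = (1, -1)^T, a feasible point of the original partitioning problem with x^T W x = -2, so the rank relaxation happens to be tight on this instance.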

Name: ................  Personal number: ................  Sheet number: ................

Figure for Exercise 2a:

Figure for Exercise 2b: