TMA947/MAN280 APPLIED OPTIMIZATION


Chalmers/GU Mathematics
EXAM
TMA947/MAN280 APPLIED OPTIMIZATION
Date: 06-08-31
Time: House V, morning
Aids: Text, memory-less calculator
Number of questions: 7; to pass a question requires 2 of its 3 points. The questions are not numbered by difficulty. To pass the exam requires 10 points and three passed questions.
Examiner: Michael Patriksson
Teacher on duty: Elisabeth Wulcan (0762-721860)
Result announced: 06-09-19. Short answers are also given at the end of the exam, and on the notice board for optimization in the MV building.

Exam instructions

When you answer the questions:
- Use generally valid theory and methods. State your methodology carefully.
- Only write on one page of each sheet. Do not use a red pen. Do not answer more than one question per page.

At the end of the exam:
- Sort your solutions by the order of the questions.
- Mark on the cover the questions you have answered.
- Count the number of sheets you hand in and fill in the number on the cover.

Question 1 (the simplex method)

Consider the following linear program:

  minimize  z = x1 + αx2 + x3,
  subject to  x1 + 2x2 − 2x3 ≥ 0,
              x1 + x3 ≥ 1,
              x1, x2, x3 ≥ 0.

a) (2p) Solve this problem for α = 1 by using phase I and phase II of the simplex method.

[Aid: Utilize the identity

  ( a  b )^(-1)                 (  d  -b )
  ( c  d )        =  1/(ad-bc)・( -c   a )

for producing basis inverses.]

b) (1p) Which values of α lead to an unbounded dual problem? Motivate without additional calculations!

Question 2 (necessary local and sufficient global optimality conditions) (3p)

Consider an optimization problem of the following general form:

  minimize  f(x),      (1a)
  subject to  x ∈ S,   (1b)

where S ⊆ R^n is nonempty, closed and convex, and f : R^n → R ∪ {+∞} is in C^1 on S. Establish the following two results on the local/global optimality of a vector x* ∈ S in this problem.

Proposition 1 (necessary optimality conditions, C^1 case) If x* ∈ S is a local minimum of f over S, then

  ∇f(x*)^T (x − x*) ≥ 0 for all x ∈ S   (2)

holds.

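As a reading aid for condition (2), a minimal numeric illustration (the example f and S below are chosen for this sketch, not taken from the exam): for f(x) = (x − 2)^2 minimized over S = [0, 1], the constrained minimizer is x* = 1, and the variational inequality (2) can be checked by sampling S.

```python
# Illustration of condition (2) on an assumed example:
# f(x) = (x - 2)^2, S = [0, 1], minimizer x* = 1, grad f(x*) = -2.

def grad_f(x):
    """Derivative of f(x) = (x - 2)^2."""
    return 2.0 * (x - 2.0)

x_star = 1.0

# Sample S = [0, 1] and check grad f(x*) * (x - x*) >= 0 for all samples.
samples = [i / 100.0 for i in range(101)]
ok = all(grad_f(x_star) * (x - x_star) >= 0.0 for x in samples)
print(ok)  # → True
```

Note that (2) holds here even though ∇f(x*) ≠ 0: the gradient may be nonzero at a boundary minimizer, as long as no feasible direction is a descent direction.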
Theorem 2 (necessary and sufficient global optimality conditions, C^1 case) Suppose that f : R^n → R is convex on S. Then x* is a global minimum of f over S if and only if (2) holds.

Question 3 (Newton's method revisited)

Consider the unconstrained optimization problem to

  minimize f(x), subject to x ∈ R^n,

where f : R^n → R is in C^1 on R^n. Notice that we may not have access to second derivatives of f at every point of R^n. Newton's method referred to below should be understood as follows: in each iteration step, one solves the Newton equation, followed by a line search with respect to f in the direction obtained.

a) (2p) Explain in some detail how Newton's method can be extended to the above problem.

b) (1p) Suppose now that f ∈ C^2 on R^n. Explain why Newton's method must be modified when the Hessian matrix is not guaranteed to be positive definite. Also provide at least one such modification.

Question 4 (modelling) (3p)

You are responsible for the planning of a soccer tournament in which all 14 teams of the Swedish national league will participate. The teams shall be put into two groups of 7 each, in which all teams will play each other once. The winners of the two groups will then play a final. The decision to make is which teams will play in which group. The objective is to minimize the total expected travelling

distance. The distances between the home towns of two teams i and j are given by the constants d_ij (= d_ji), i, j ∈ {1, ..., 14}. The constants p_i, i ∈ {1, ..., 14}, represent the number of points team i took in the national league last year. Assume that the teams are sorted so that the team with the highest number of points is represented by i = 1, the team with the second highest by i = 2, and so on. The chance of team i winning its group is assumed to be the ratio between p_i and the sum of the p_i's in its group.

You are not allowed to put the two teams with the highest p_i's (team 1 and team 2) in the same group. Neither are you allowed to arrange the groups so that the difference between the sum of points of the teams in one group and the sum of points of the teams in the other group exceeds 15% of the total number of points. All games are played at the home ground of one of the two participating teams; which one is not important, since d_ij = d_ji.

Your task is to model this problem as a nonlinear (integer) program. All functions defined have to be differentiable and explicit!

Figure 1: The 14 teams shall be put into two groups of 7 each.

[Note: Optimization problems of this type have been used, e.g., for the planning of college baseball series in the US.]
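One possible formulation is sketched below as a reading aid; the binary variable y_i (introduced here, not prescribed by the exam) indicates that team i plays in group 1. The first objective term counts the within-group games, the second the expected distance of the final; the products of variables make the program nonlinear, and all functions are differentiable wherever both groups are nonempty.

```latex
% Sketch of one possible model (assumption: y_i = 1 iff team i is in group 1).
\begin{align*}
\min_{y\in\{0,1\}^{14}}\quad
  & \sum_{i<j} d_{ij}\,\bigl[\,y_i y_j + (1-y_i)(1-y_j)\,\bigr]
    + \sum_{i=1}^{14}\sum_{j\neq i}
        \frac{p_i y_i}{\sum_k p_k y_k}\cdot
        \frac{p_j (1-y_j)}{\sum_k p_k (1-y_k)}\, d_{ij}\\
\text{subject to}\quad
  & \textstyle\sum_{i=1}^{14} y_i = 7, && \text{(two groups of 7)}\\
  & y_1 + y_2 = 1, && \text{(teams 1 and 2 split)}\\
  & -0.15\textstyle\sum_i p_i \;\le\; \sum_i p_i(2y_i-1) \;\le\; 0.15\sum_i p_i.
    && \text{(15\% balance)}
\end{align*}
```

The 15% rule is written as two linear inequalities rather than with an absolute value, keeping every function differentiable as the exam requires.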

Question 5 (interior penalty methods)

Consider the problem to

  minimize  f(x) := (x1 − 2)^4 + (x1 − 2x2)^2,
  subject to  g(x) := x1^2 − x2 ≤ 0.

We attack this problem with an interior penalty (barrier) method, using the barrier function φ(s) = −s^(−1). The penalty problem is to

  minimize over x ∈ R^n:  f(x) + ν χ̂_S(x),   (1)

where χ̂_S(x) = φ(g(x)), for a sequence of positive, decreasing values of the penalty parameter ν. We repeat a general convergence result for the interior penalty method below.

Theorem 3 (convergence of an interior point algorithm) Let the objective function f : R^n → R and the functions g_i, i = 1, ..., m, defining the inequality constraints be in C^1(R^n). Further assume that the barrier function φ : R_− → R_+ is in C^1 and that φ'(s) ≥ 0 for all s < 0. Consider a sequence {x_k} of points that are stationary for the sequence of problems (1) with ν = ν_k, for some positive sequence of penalty parameters {ν_k} converging to 0. Assume that lim_{k→+∞} x_k = x̂, and that LICQ holds at x̂. Then x̂ is a KKT point of the problem at hand. In other words,

  x_k stationary in (1), x_k → x̂ as k → +∞, and LICQ holds at x̂  ⟹  x̂ stationary in our problem.

a) (2p) Does the above theorem apply to the problem at hand and the selection of the penalty function?

b) (1p) Implementing the above-mentioned procedure, the first value of the penalty parameter was set to ν_0 = 10, which is then divided by ten in each iteration,

and the initial problem (1) was solved from the strictly feasible point (0, 1)^T. The algorithm terminated after six iterations with the following results: x_6 ≈ (0.94389, 0.89635)^T, and the multiplier estimate (given by ν_6 φ'(g(x_6))) μ̂_6 ≈ 3.385. Confirm that the vector x_6 is close to being a KKT point. Is it also near-globally optimal? Why/why not?

Question 6 (linear programming)

Consider the linear program

  z(b) := maximum  2x1 + 3x2 + x3,
  subject to  x1 − x2 + 2x3 ≤ 1,
              4x1 + 2x2 − x3 ≤ b,
              x1, x2, x3 ≥ 0.

a) For b = 2, its optimal dual solution is claimed to be y = (5/3, 7/3)^T. Examine in a suitable way whether this is correct. (Here it is not suitable to first solve the linear program or its corresponding LP dual problem!)

b) Use linear programming duality to determine the value of z(b) for each b ≥ 0 and give a principal graphical description of the function z(b). Which are its most important mathematical properties?

c) Find, for each b ≥ 0, the marginal value of an increase of the right-hand side of the second constraint; that is, find for each b ≥ 0 the value of the right derivative of the function z(b). Which marginal value is achieved for b = 2?
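One suitable examination for Question 6(a) is dual feasibility plus complementary slackness; the following sketch (a verification script written for this text, assuming the constraint signs as stated above) carries it out in exact arithmetic.

```python
from fractions import Fraction as F

# Question 6(a), b = 2: check y = (5/3, 7/3) without solving either LP.
y1, y2 = F(5, 3), F(7, 3)
b = F(2)

# Dual constraints, one per primal variable x1, x2, x3:
#   y1 + 4*y2 >= 2,  -y1 + 2*y2 >= 3,  2*y1 - y2 >= 1,  y >= 0.
slacks = [y1 + 4 * y2 - 2, -y1 + 2 * y2 - 3, 2 * y1 - y2 - 1]
assert all(s >= 0 for s in slacks) and y1 >= 0 and y2 >= 0  # dual feasible

# Complementary slackness: the first dual constraint is slack, so x1 = 0;
# y1, y2 > 0 forces both primal constraints to hold with equality:
#   -x2 + 2*x3 = 1,  2*x2 - x3 = b.
x2 = (1 + 2 * b) / 3  # solution of the 2x2 system
x3 = (2 + b) / 3
primal_obj = 3 * x2 + x3       # objective with x1 = 0
dual_obj = 1 * y1 + b * y2

print(primal_obj, dual_obj)  # equal values certify optimality of y
```

With these signs the primal candidate (0, 5/3, 4/3) is feasible and both objectives equal 19/3, so the claimed dual solution passes the test.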

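The KKT claim in Question 5(b) can also be checked numerically; the short script below (a verification sketch written for this text, using the reported iterate and multiplier estimate) evaluates the stationarity residual ∇f(x_6) + μ̂_6 ∇g(x_6) and the constraint value.

```python
import math

# Question 5(b): is x6 close to a KKT point of
#   minimize (x1 - 2)^4 + (x1 - 2*x2)^2  subject to x1^2 - x2 <= 0 ?
x1, x2 = 0.94389, 0.89635   # reported iterate x6
mu = 3.385                  # reported multiplier estimate

grad_f = (4 * (x1 - 2) ** 3 + 2 * (x1 - 2 * x2), -4 * (x1 - 2 * x2))
grad_g = (2 * x1, -1.0)

# Stationarity residual grad f + mu * grad g should be near zero,
# and g should be near zero (constraint nearly active, so complementary
# slackness nearly holds with mu > 0).
res = tuple(gf + mu * gg for gf, gg in zip(grad_f, grad_g))
res_norm = math.hypot(*res)
g_val = x1 ** 2 - x2

print(res_norm, g_val)
```

The residual norm comes out on the order of 10^-2 and g(x_6) ≈ -0.005, consistent with x_6 being close to a KKT point.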
Question 7 (Lagrangian duality)

Consider the following linear programming problem:

  minimize  z = x2,          (1)
  subject to  x1 ≤ 3/2,      (2)
              2x1 + 3x2 ≥ 6, (3)
              x1, x2 ≥ 0.    (4)

We will attack this problem by using Lagrangian duality.

a) Consider Lagrangian relaxation of the complicating constraint (3). Write down explicitly the resulting Lagrangian subproblem of minimizing the Lagrange function over the remaining constraints. By varying the multiplier, construct an explicit formula for the Lagrangian dual function. Plot the dual function against the (only) dual variable, and state explicitly the Lagrangian dual problem.

b) Pick three primal feasible vectors and evaluate their respective objective values. Pick also three dual feasible values and evaluate their respective dual objective values. Using these six numbers, provide an interval in which the optimal value of both the primal and the dual problem must lie, and thereby also illustrate the Weak Duality Theorem.

c) Solve the Lagrangian dual problem from a). By using the primal-dual optimality conditions from Chapter 6, generate the (unique) optimal primal solution to the problem given above. Verify the Strong Duality Theorem.

Good luck!
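For orientation on Question 7(a), the dual function can be worked out and tabulated; the sketch below assumes the reading above (relax 2x1 + 3x2 ≥ 6 with a multiplier μ ≥ 0 and keep X = {0 ≤ x1 ≤ 3/2, x2 ≥ 0} in the subproblem).

```python
# Lagrangian subproblem: minimize x2 + mu*(6 - 2*x1 - 3*x2) over
# 0 <= x1 <= 3/2, x2 >= 0.  The x1-term has coefficient -2*mu <= 0,
# so x1 = 3/2 is optimal; the x2-term has coefficient 1 - 3*mu, so the
# minimum over x2 is attained at x2 = 0 if 1 - 3*mu >= 0, and the
# subproblem is unbounded below otherwise.

def dual_function(mu):
    """Lagrangian dual function q(mu), mu >= 0, for the relaxation above."""
    if mu > 1.0 / 3.0:
        return float("-inf")            # unbounded below in x2
    return 6.0 * mu - 2.0 * mu * 1.5    # = 3*mu, with x1 = 3/2, x2 = 0

# Dual problem: maximize q(mu) over mu >= 0; q is linear up to mu = 1/3.
grid = [i / 300.0 for i in range(0, 121)]  # mu in [0, 0.4]
best_mu = max(grid, key=dual_function)
print(best_mu, dual_function(best_mu))
```

Under this reading the dual optimum q(1/3) = 1 matches the primal optimum at (x1, x2) = (3/2, 1), illustrating the Strong Duality Theorem asked about in part c).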