IE 5531 Midterm #2 Solutions

IE 5531 Midterm #2 Solutions
Prof. John Gunnar Carlsson
November 9, 2011

Before you begin: This exam has 9 pages and a total of 5 problems. Make sure that all pages are present. To obtain credit for a problem, you must show all your work. If you use a formula to answer a problem, write the formula down. Do not open this exam until instructed to do so.

Problem 1: True or False (10 points)

State whether each of the following statements is true or false. Assume all auxiliary information given is correct. For each part, only your answer, which should be one of "True" or "False", will be graded. Any explanation will not be read.

1. Although its worst-case running time is faster than that of the simplex method, the ellipsoid method is rarely used because it performs poorly in practical situations.

True.

2. One disadvantage of the Wolfe line-search conditions is that, in any given iteration, we may erroneously remove local minimizers from the search space.

False. (Local minimizers have φ′(α) = 0, which is never discarded.)

3. At a point satisfying the KKT conditions, if the Lagrange multiplier λ of a constraint is 0, then that constraint must be inactive.

False. Minimizing x^2 subject to x ≥ 0 has a zero Lagrange multiplier, but the constraint is active at optimality.

4. Consider the problem

minimize f(x)
s.t. a^T x ≤ b

where f(x) is a convex function. If x* satisfies a^T x* < b and ∇f(x*) = 0, then x* must be a globally optimal solution to our problem.

True.

5. Newton's method, applied to the function below [figure not reproduced], should converge to the marked root of the function if our initial guess lies in the shaded interval.

True.
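As a quick check of the counterexample in statement 3, here is a minimal Python sketch (not part of the original exam): for min x^2 s.t. x ≥ 0, the optimum x* = 0 makes the constraint active while its KKT multiplier is zero.

```python
# Counterexample for statement 3: minimize f(x) = x^2 subject to g(x) = -x <= 0.
# KKT stationarity: f'(x*) + lam * g'(x*) = 2*x* - lam = 0.
x_star = 0.0
lam = 2 * x_star            # stationarity forces lam = 0
active = (x_star == 0.0)    # the constraint x >= 0 holds with equality
print(lam, active)          # 0.0 True: zero multiplier, active constraint
```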

Problem 2: Line search conditions (20 points)

Suppose that you are performing a line search to find an unconstrained minimum of the function f(x) in R^n. At the current iteration, you have chosen your descent direction d_k and you are currently trying to determine the optimal step length α_k. Sketch the ranges of feasible step lengths in the figure below [figure not reproduced] for the Goldstein conditions. You may assume that φ′(0) = ∇f(x_k)^T d_k = -1, as suggested by the figure. Assume that c = 1/4 and consequently that 1 - c = 3/4.
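For reference, a small Python sketch (the function and iterate below are illustrative assumptions, not from the exam) of how the Goldstein conditions with c = 1/4 accept or reject candidate step lengths:

```python
# Goldstein conditions: accept alpha iff
#   phi(0) + (1-c)*alpha*phi'(0) <= phi(alpha) <= phi(0) + c*alpha*phi'(0),
# where phi(alpha) = f(x_k + alpha * d_k) and 0 < c < 1/2.

def goldstein_ok(phi, phi0, dphi0, alpha, c=0.25):
    val = phi(alpha)
    return (phi0 + (1 - c) * alpha * dphi0 <= val
            <= phi0 + c * alpha * dphi0)

# Example: f(x) = x^2 from x_k = 1 along d_k = -1, so phi(a) = (1 - a)^2,
# phi(0) = 1, phi'(0) = -2; the accepted steps turn out to be 0.5 <= a <= 1.5.
phi = lambda a: (1.0 - a) ** 2
for a in [0.1, 0.5, 1.0, 1.5, 1.9]:
    print(a, goldstein_ok(phi, 1.0, -2.0, a))
```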

Problem 3: Cobb-Douglas Utility Function (20 points)

The Cobb-Douglas production function is widely used in economics to represent the relationship between inputs and outputs of a firm. It takes the form Y = A L^α K^β, where Y represents output, L labor, and K capital. The parameters α and β are constants that determine how production is scaled. The Cobb-Douglas function can also be applied to utility maximization, where it takes the general form ∏_{i=1}^N x_i^{α_i}. Consider the following utility maximization problem:

maximize u(x) = x_1^α x_2^(1-α)
s.t. p_1 x_1 + p_2 x_2 ≤ w

Here p is a price vector for x_1 and x_2, and w represents a user's budget. Assume that p > 0 and w > 0.

1. Perform an appropriate transformation to turn this into a minimization problem with a convex objective function.

Taking logarithms and multiplying by -1, we obtain the equivalent convex problem

minimize -[α log x_1 + (1 - α) log x_2]
s.t. p_1 x_1 + p_2 x_2 ≤ w

2. Write the KKT conditions and find an explicit solution for x as a function of α, p, and w. (Hint: can you assume that certain constraints must be tight? Can you assume that certain constraints must be slack?) Are these conditions sufficient for optimality?

The KKT conditions for the transformed problem are

(-α/x_1, -(1 - α)/x_2) + λ_0 (p_1, p_2) + λ_1 (-1, 0) + λ_2 (0, -1) = 0
λ_0 (p_1 x_1 + p_2 x_2 - w) = 0
λ_1 x_1 = 0
λ_2 x_2 = 0
p_1 x_1 + p_2 x_2 ≤ w
λ_0, λ_1, λ_2 ≥ 0

The constraint p_1 x_1 + p_2 x_2 ≤ w must be tight because the objective function of the transformed problem is monotonically decreasing in x_1 and x_2. The sign constraints on x_1 and x_2 must be slack, because otherwise the objective function of the transformed problem is +∞, and thus λ_1 = λ_2 = 0. The conditions above therefore tell us that

-α/x_1 + λ_0 p_1 = 0
-(1 - α)/x_2 + λ_0 p_2 = 0
p_1 x_1 + p_2 x_2 = w

which admits the unique solution λ_0 = 1/w, so that x_1 = αw/p_1 and x_2 = (1 - α)w/p_2. Since the transformed problem is convex (a convex objective with linear constraints), the KKT conditions are sufficient for optimality.

3. Find the corresponding Lagrange multiplier for the budget constraint and give an interpretation of it.

The Lagrange multiplier is λ_0 = 1/w; the economic interpretation is that it represents the marginal benefit of one extra unit of budget.
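As a sanity check, here is a short Python sketch (the values of α, p, and w are illustrative assumptions) comparing the closed-form solution x_1 = αw/p_1, x_2 = (1 - α)w/p_2 against a numerical solve of the transformed problem:

```python
import numpy as np
from scipy.optimize import minimize

alpha, p, w = 0.3, np.array([2.0, 5.0]), 10.0

# Transformed convex objective: -(alpha*log x1 + (1-alpha)*log x2).
obj = lambda x: -(alpha * np.log(x[0]) + (1 - alpha) * np.log(x[1]))
budget = {"type": "ineq", "fun": lambda x: w - p @ x}   # p.x <= w

res = minimize(obj, x0=[1.0, 1.0], method="SLSQP",
               bounds=[(1e-9, None)] * 2, constraints=[budget])

print(res.x)                                        # numerical optimum
print([alpha * w / p[0], (1 - alpha) * w / p[1]])   # closed form: [1.5, 1.4]
```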


Problem 4: A linearly constrained problem (25 points)

Consider the following linearly constrained convex optimization problem:

minimize x_1^2 - 6x_1 + x_2^3 - 3x_2
s.t. x_1 + x_2 ≤ 1
x_1, x_2 ≥ 0

1. Derive the KKT conditions for this problem.

The KKT conditions are

2x_1 - 6 + λ_0 - λ_1 = 0
3x_2^2 - 3 + λ_0 - λ_2 = 0
λ_0 (x_1 + x_2 - 1) = 0
λ_1 x_1 = 0
λ_2 x_2 = 0
x_1 + x_2 ≤ 1
x_1, x_2 ≥ 0
λ_0, λ_1, λ_2 ≥ 0

2. Determine whether or not x = (1/2, 1/2) is an optimal solution.

If x = (1/2, 1/2), then by complementary slackness we must have λ_1 = λ_2 = 0. However, the first two equations then read

2(1/2) - 6 + λ_0 = 0 and 3(1/2)^2 - 3 + λ_0 = 0,

which give λ_0 = 5 and λ_0 = 9/4 respectively; this cannot happen, and thus (1/2, 1/2) is not optimal.

3. Given that the optimal solution lies on a corner point of the feasible region, find the optimal solution to the above problem.

The feasible region has three corner points: (0, 0), (1, 0), and (0, 1). Evaluating the objective gives f(0, 0) = 0, f(1, 0) = -5, and f(0, 1) = -2, so the optimal point is (1, 0).
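A brief Python sketch (not part of the original solution) that evaluates the three corners and verifies the KKT conditions hold at (1, 0) with the multipliers λ_0 = 4, λ_1 = 0, λ_2 = 1:

```python
f = lambda x1, x2: x1**2 - 6*x1 + x2**3 - 3*x2

corners = [(0, 0), (1, 0), (0, 1)]
print({c: f(*c) for c in corners})   # {(0,0): 0, (1,0): -5, (0,1): -2}

# Stationarity at (1, 0) with lam0 = 4, lam1 = 0, lam2 = 1:
x1, x2, lam0, lam1, lam2 = 1, 0, 4, 0, 1
print(2*x1 - 6 + lam0 - lam1)     # 0
print(3*x2**2 - 3 + lam0 - lam2)  # 0; all multipliers are >= 0 and
                                  # complementary slackness holds as well
```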


Problem 5: Interior point method for LP (25 points)

Suppose we want to solve the following linear program:

minimize x_1 + x_2
s.t. x_1 + x_2 + x_3 = 1
x_1, x_2, x_3 ≥ 0

We will use a barrier method on the inequality constraints (not the primal-dual potential function!) to solve this problem.

1. Write the KKT conditions for the barrier-function relaxation to this linear program.

The barrier-function relaxation is

minimize x_1 + x_2 - μ (log x_1 + log x_2 + log x_3)
s.t. x_1 + x_2 + x_3 = 1

and the KKT conditions are

1 - μ/x_1 + λ_0 = 0
1 - μ/x_2 + λ_0 = 0
-μ/x_3 + λ_0 = 0
x_1 + x_2 + x_3 = 1

2. Show that for every μ > 0, the optimal solution to the barrier-function relaxation is

x_1(μ) = (1 + 3μ - √(9μ^2 - 2μ + 1))/4
x_2(μ) = x_1(μ)
x_3(μ) = 1 - 2x_1(μ)

The KKT conditions give us four equations in four unknowns. Isolating λ_0, we can write

1 - μ/x_1 = 1 - μ/x_2 = -μ/x_3
x_1 + x_2 + x_3 = 1

which tells us that x_2 = x_1 at optimality. Since x_1 + x_2 + x_3 = 1, this gives x_3 = 1 - 2x_1. To solve for x_1 we write

1 - μ/x_1 = -μ/(1 - 2x_1),

which gives the quadratic equation 2x_1^2 - (1 + 3μ)x_1 + μ = 0. Solving it yields

x_1 = (1 + 3μ ± √(9μ^2 - 2μ + 1))/4.

Of the two roots, the one with the plus sign satisfies x_1 ≥ 1/2, which forces x_3 = 1 - 2x_1 ≤ 0 and is therefore infeasible for the barrier problem; choosing the minus sign gives the feasible solution, and the desired result follows.

3. Show that as μ → 0, the optimal solution to the barrier problem approaches the unique optimal solution.

It is straightforward to verify that lim_{μ→0} (x_1(μ), x_2(μ), x_3(μ)) = (0, 0, 1), which is the unique optimal solution of the linear program.

4. (Bonus; 5 points) Suppose that our objective function is just to minimize x_1. Where does the solution converge to now?

Using the same approach, the solution converges to (0, 1/2, 1/2).
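To illustrate part 3 numerically, here is a short Python sketch (not part of the original exam) tracing the central-path formula from part 2 as μ → 0:

```python
import numpy as np

def central_path(mu):
    """Barrier solution from part 2: x1 = x2, x3 = 1 - 2*x1."""
    x1 = (1 + 3*mu - np.sqrt(9*mu**2 - 2*mu + 1)) / 4
    return np.array([x1, x1, 1 - 2*x1])

for mu in [1.0, 0.1, 0.01, 1e-4]:
    x = central_path(mu)
    print(mu, x, x.sum())   # x.sum() stays 1; x approaches (0, 0, 1)
```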
