IE 5531 Practice Midterm #2

Prof. John Gunnar Carlsson
November 23, 2010

Problem 1: Nonlinear programming

You are a songwriter who writes Top 40 style songs for the radio. Each song you write can be described by a feature vector x, which encodes information about the song (for example, the number of times the word "love" is used, the number of guitar riffs, the average duration of the song, and so forth). Each song you write will be a hit with probability

    p(x) = exp(c^T x + d) / (1 + exp(c^T x + d)),

where c and d are known parameters (this function is commonly used in logistic regression, which you may have seen previously; note that p lies between 0 and 1 by construction). In writing songs, you must also obey restrictions imposed by the local radio station, which we can express as the system Ax ≤ b.

1. Consider the problem of choosing x so as to maximize the probability of making a hit song while obeying the radio station's restrictions. Write an equivalent minimization problem with a convex objective function f(·), using a transformation of the objective function (hint: the function log(e^t + 1) is convex).

2. Suppose that c = (1, 2) and d = 1. Sketch some level sets of f(·).

3. Suppose that a hit song will generate a profit w^T x + q, where q is a positive constant so that there exists a feasible x satisfying w^T x + q > 0 (a non-hit song will generate a profit of 0). Write the problem of maximizing the expected profit due to this song. Is there an equivalent convex optimization problem for this?

Solution. Take the negative logarithm of p(x) and you get

    f(x) := -log p(x) = -log(exp(c^T x + d)) + log(1 + exp(c^T x + d)) = -c^T x - d + log(1 + exp(c^T x + d)).

Setting t = c^T x + d, the last term is log(e^t + 1), which is convex by the hint and remains convex under the affine substitution t = c^T x + d; the term -c^T x - d is affine. Hence f is a sum of convex functions and is therefore convex, and the equivalent problem is to minimize f(x) subject to Ax ≤ b.

Since f(x) depends only on c^T x + d (and is strictly monotone in that quantity), the level sets of f are just the level sets of c^T x + d, which are straight lines of the form x_1 + 2x_2 + 1 = κ for various constants κ.

The problem of maximizing the expected profit is

    maximize (w^T x + q) p(x)  subject to Ax ≤ b.

Again, taking the negative logarithm of the objective function, the problem becomes

    minimize -log((w^T x + q) p(x)) = -log(w^T x + q) - c^T x - d + log(1 + exp(c^T x + d)),

which is a sum of convex functions (the negative logarithm of a positive affine function is convex, and the remaining terms are f(x) from part 1) and is therefore an equivalent convex problem.
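
As a quick numerical sanity check (a sketch, not part of the original solution; it assumes NumPy is available), the snippet below evaluates f with the c and d from part 2 and spot-checks the midpoint convexity inequality f((u+v)/2) ≤ (f(u) + f(v))/2 at random pairs of points:

    import numpy as np

    c = np.array([1.0, 2.0])   # c and d from part 2
    d = 1.0

    def f(x):
        # f(x) = -c^T x - d + log(1 + exp(c^T x + d)) = -log p(x)
        t = c @ x + d
        return -t + np.log1p(np.exp(t))

    rng = np.random.default_rng(0)
    for _ in range(1000):
        u, v = rng.normal(size=2), rng.normal(size=2)
        # midpoint convexity, up to floating-point tolerance
        assert f((u + v) / 2) <= (f(u) + f(v)) / 2 + 1e-12
    print("midpoint convexity holds at all sampled pairs")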

Problem 2: Multi-firm alliance revisited

Suppose you are the mayor of Minneapolis and there are three firms in your city: firm 1 (capital), firm 2 (labor), and firm 3 (technology). The three firms are considering possible cooperations. Let x_1, x_2, x_3 denote the inputs of firms 1, 2, and 3, respectively. The payoff function is

    f(x_1, x_2, x_3) = x_1 + x_3 + (x_1 + x_3)(3x_2 - 2x_2^2).

1. Assume each input variable x_1, x_2, x_3 takes values in [0, 1]; that is, 0 ≤ x_i ≤ 1 for i = 1, 2, 3. Derive the KKT conditions for maximizing the social profit function f(x_1, x_2, x_3). Also show that x_1 = 1, x_2 = 0.75, and x_3 = 1 is an optimal solution. (Hint: the KKT conditions are NOT sufficient for this non-convex program; you should look for additional arguments to identify global optimality.)

2. Suppose the social profit is assigned to each firm proportionally to its input, so that each firm i (i = 1, 2, 3) receives the profit

    x_i / (x_1 + x_2 + x_3) · f(x_1, x_2, x_3).

Suppose we do not consider the cost of the inputs. Show that if x_1 = 1 and x_3 = 1 are fixed, then x_2 = 0.75 is NOT an optimal strategy for firm 2 if firm 2 just aims to maximize its own profit, which is x_2/(x_2 + 2) · f(1, x_2, 1).

3. From part 2, you may have realized that under this mechanism the firms will never cooperate towards the social optimum. So you change the input rules so that each firm either inputs x_i = 1 or inputs x_i = 0. For example, if firm 1 and firm 2 form a sub-alliance, then their total payoff is f(1, 1, 0) = 2. The payoffs of all possible sub-alliances S ⊆ {1, 2, 3} are listed in Table 1.

    S            f(S)
    ∅            0
    {1}          1
    {2}          0
    {3}          1
    {1, 2}       2
    {2, 3}       2
    {1, 3}       2
    {1, 2, 3}    4

    Table 1: Payoffs of all possible sub-alliances.

Obviously, the grand alliance maximizes the social payoff (= 4). We know the core is the set of payoff allocation vectors (z_1, z_2, z_3) under the grand alliance such that no subgroup can do better by deserting the grand alliance. Write out the expression for the core using the given data. Also show that this core is nonempty.

4. Due to the recent economic recession, your city has run out of budget, so you have to ask the grand alliance to pay a tax of T. However, you still want to maintain the grand alliance; in other words, you want to make sure that the core is nonempty. Under this condition, find the maximal T you can charge the grand alliance. (The grand alliance has payoff f({1, 2, 3}) - T after tax, but we suppose any sub-alliance S ⊂ {1, 2, 3} is exempt from tax and still has payoff f(S).)

Solution. First, note that 3x_2 - 2x_2^2 ≥ 0 for x_2 ∈ [0, 1]. Therefore the social objective function is monotonically increasing as a function of x_1 and x_3, so x_1 = 1 and x_3 = 1 at an optimum. Finally, it is easy to see that 3x_2 - 2x_2^2 is maximized at the point x_2 = 0.75, where its derivative 3 - 4x_2 vanishes, as desired.

Next, suppose that x_1 = x_3 = 1 and consider the profit of firm 2. The derivative of firm 2's profit at the point x_2 = 0.75 is approximately 1.12 ≠ 0, and therefore 0.75 is not a local maximizer of firm 2's profit. Indeed, if we set x_2 = 1, then firm 2 receives a profit of 4/3 ≈ 1.33, as compared with a profit of about 1.16 at x_2 = 0.75.
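
This derivative can be checked numerically; the short sketch below (plain Python, not part of the original solution) evaluates firm 2's profit g(x_2) = x_2/(x_2 + 2) · f(1, x_2, 1) and a central-difference estimate of g'(0.75):

    def f(x1, x2, x3):
        # social payoff from the problem statement
        return x1 + x3 + (x1 + x3) * (3 * x2 - 2 * x2**2)

    def g(x2):
        # firm 2's own profit when x1 = x3 = 1
        return x2 / (x2 + 2) * f(1.0, x2, 1.0)

    h = 1e-6
    print((g(0.75 + h) - g(0.75 - h)) / (2 * h))  # about 1.124, not zero
    print(g(0.75), g(1.0))                        # about 1.159 vs. 4/3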

The core consists of the allocations (z_1, z_2, z_3) satisfying

    z_1 + z_2 + z_3 = 4
    z_1 ≥ 1,  z_2 ≥ 0,  z_3 ≥ 1
    z_1 + z_2 ≥ 2,  z_2 + z_3 ≥ 2,  z_1 + z_3 ≥ 2.

It is nonempty because, for example, the allocation z_1 = 1.5, z_2 = 1, z_3 = 1.5 satisfies all of these constraints.

Finally, to determine the maximum tax, we can formulate the linear program

    maximize  T
    s.t.      z_1 + z_2 + z_3 + T = 4
              z_1 ≥ 1,  z_2 ≥ 0,  z_3 ≥ 1
              z_1 + z_2 ≥ 2,  z_2 + z_3 ≥ 2,  z_1 + z_3 ≥ 2.

Substituting T = 4 - (z_1 + z_2 + z_3), removing the constant 4 from the objective function, and omitting the redundant constraint z_1 + z_3 ≥ 2 (it is implied by z_1 ≥ 1 and z_3 ≥ 1), this simplifies to

    minimize  z_1 + z_2 + z_3
    s.t.      z_1 ≥ 1,  z_2 ≥ 0,  z_3 ≥ 1
              z_1 + z_2 ≥ 2,  z_2 + z_3 ≥ 2.

Note that z_1 + z_2 ≥ 2 and z_3 ≥ 1 together imply z_1 + z_2 + z_3 ≥ 3, so 3 is a lower bound on the new objective function, which implies a tax of at most T = 1 unit. This bound is attained by setting z_1 = z_2 = z_3 = 1, which is feasible, and therefore the optimal tax is T = 1 unit.
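
This small LP can also be solved mechanically; the sketch below (assuming NumPy and SciPy are available, and not part of the original solution) minimizes z_1 + z_2 + z_3 over the core constraints with scipy.optimize.linprog and recovers T = 1:

    import numpy as np
    from scipy.optimize import linprog

    # rows: {1}, {2}, {3}, {1,2}, {2,3}, {1,3}; each constraint
    # z(S) >= f(S) is written in <= form as -z(S) <= -f(S)
    A_ub = -np.array([[1, 0, 0],
                      [0, 1, 0],
                      [0, 0, 1],
                      [1, 1, 0],
                      [0, 1, 1],
                      [1, 0, 1]])
    b_ub = -np.array([1, 0, 1, 2, 2, 2])

    res = linprog(c=[1, 1, 1], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * 3)
    print(res.x, 4 - res.fun)   # an optimal allocation and the tax T = 1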

Problem 3: Interior point methods

In addition to solving linear programming problems, the barrier function method also works for solving quadratic programming problems. Consider the following quadratic programming problem:

    minimize  (x_1 - 3)^2 - x_2
    s.t.      2 - x_1 - x_2 ≥ 0
              x_1 ≥ 0

1. Solve this quadratic programming problem using the KKT conditions.

Next, consider the following optimization problem:

    minimize  φ_µ(x_1, x_2) = (x_1 - 3)^2 - x_2 - µ(log(2 - x_1 - x_2) + log x_1)

2. Find the unconstrained minimal solution of φ_µ(x_1, x_2) for any given µ > 0. Note: since x_1, x_2 are functions of µ, the optimal solution can be written as (x_1(µ), x_2(µ)), where µ is a parameter.

3. For the above problem, would the minimal solution be a local or a global optimum? Why?

4. Verify that (x_1(µ), x_2(µ)) converges to the solution of part 1 as µ → 0.

Solution. With multipliers λ_1, λ_2 ≥ 0 for the two constraints, the KKT conditions are

    2(x_1 - 3) + λ_1 - λ_2 = 0
    -1 + λ_1 = 0
    λ_1 (2 - x_1 - x_2) = 0
    λ_2 x_1 = 0

Since there are only two constraints, by trial and error we find that the optimal solution has x_1 = 5/2 and x_2 = -1/2, with λ_1 = 1 and λ_2 = 0.

The optimality conditions for the barrier problem are

    2x_1 - 6 + µ (1/(2 - x_1 - x_2) - 1/x_1) = 0
    -1 + µ/(2 - x_1 - x_2) = 0

The second equation gives 2 - x_1 - x_2 = µ, so that x_2(µ) = 2 - x_1(µ) - µ; substituting this into the first equation yields 2x_1^2 - 5x_1 - µ = 0, whose positive root is

    x_1(µ) = (5 + sqrt(25 + 8µ)) / 4.

The minimizer is a global minimizer because the barrier objective function is convex. Finally, as µ → 0, we find that x_1(µ) → 5/2 and x_2(µ) → -1/2, as desired.
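
A short numerical check of this convergence (a sketch assuming NumPy, not part of the original solution) evaluates the central path for a sequence of decreasing µ:

    import numpy as np

    for mu in [1.0, 0.1, 0.01, 0.001]:
        x1 = (5 + np.sqrt(25 + 8 * mu)) / 4   # x1(mu) from part 2
        x2 = 2 - x1 - mu                      # x2(mu) from part 2
        print(f"mu = {mu:g}: x(mu) = ({x1:.6f}, {x2:.6f})")
    # the printed points tend to (2.5, -0.5), the KKT point from part 1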

Problem 4: KKT system

Consider the nonlinear program

    minimize  xy
    s.t.      (x - 3)^2 + (y - 2)^2 = 1

1. Write the KKT conditions for optimality of this problem.

2. We know Newton's method can be used to find roots of an equation. How can it be applied to find the KKT points here? To that end, apply one iteration of Newton's method starting from the point (x_0, y_0, λ_0) = (1, 1, 1).

Solution. The KKT conditions, together with the feasibility condition, are

    y + 2λ(x - 3) = 0
    x + 2λ(y - 2) = 0
    (x - 3)^2 + (y - 2)^2 - 1 = 0

The Jacobian matrix of this system is

    J = [ 2λ        1         2(x - 3) ]
        [ 1         2λ        2(y - 2) ]
        [ 2(x - 3)  2(y - 2)  0        ]

and the iteration scheme is

    x_{k+1} = x_k - J^{-1} f(x_k),

where x = (x, y, λ) and

    f(x, y, λ) = ( y + 2λ(x - 3),  x + 2λ(y - 2),  (x - 3)^2 + (y - 2)^2 - 1 ).

Computing one iteration with x_0 = (1, 1, 1), we have f(x_0) = (-3, -1, 4); solving J(x_0) Δ = -f(x_0) gives Δ = (7/6, -1/3, -1/4), so that

    x_1 = x_0 + Δ = (13/6, 2/3, 3/4) ≈ (2.17, 0.67, 0.75).
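
The step can be reproduced numerically; the sketch below (assuming NumPy, not part of the original solution) implements one iteration of the scheme above:

    import numpy as np

    def F(v):
        # the KKT system f(x, y, lambda)
        x, y, lam = v
        return np.array([y + 2 * lam * (x - 3),
                         x + 2 * lam * (y - 2),
                         (x - 3)**2 + (y - 2)**2 - 1])

    def J(v):
        # Jacobian of the KKT system
        x, y, lam = v
        return np.array([[2 * lam, 1.0, 2 * (x - 3)],
                         [1.0, 2 * lam, 2 * (y - 2)],
                         [2 * (x - 3), 2 * (y - 2), 0.0]])

    v0 = np.array([1.0, 1.0, 1.0])
    v1 = v0 - np.linalg.solve(J(v0), F(v0))
    print(v1)   # -> [2.1667, 0.6667, 0.75]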