Handout 1: Introduction to Dynamic Programming
SEEM 3470: Dynamic Optimization and Applications, Second Term
Handout 1: Introduction to Dynamic Programming
Instructor: Shiqian Ma
January 6, 2014

Suggested Reading: Sections of Chapter I of Richard Bellman, Dynamic Programming, Dover Publications, Inc. Also review material from SEEM 3440: Operations Research II.

1 Dynamic Programming: Introduction and Examples

Operations Research: a science about decision making.

- Operations: activities carried out in an organization, related to the attainment of its goals; decision making among different options (example: shortest path).
- Research: scientific methods to study the operations.
- Operations Research: develop scientific methods to help people make decisions about activities so as to achieve a specific objective.

Two features:
- Decision making: which path to take?
- Achieving some objective, e.g., maximizing profits or minimizing costs.

Two types of models:
- Deterministic model: all information and data are deterministic. Example: producing chairs and tables from two materials.
- Stochastic model: some information and data are stochastic. Example: the lifespan of a USB drive is random; when should it be replaced?

Where is operations research used?
- Airlines: scheduling aircraft and crews (minimizing the number of crews).
- Logistics and supply chain: inventory control (how many units to order, given demand, ordering costs, and inventory costs).
- Revenue management: pricing (a retailer selecting which products to display).
- Financial industry: portfolio selection, asset allocation.
- Civil engineering: traffic analysis and transportation system design (the routes and frequencies of buses, emergency evacuation systems).

Dynamic programming is multi-stage optimization: we take advantage of the new information in each stage to make a new decision. Examples:
- Scheduling (shortest path)
- Inventory control
- Two-game chess match
- Machine replacement

2 Basic Terminologies in Optimization

An optimization problem typically takes the form

    minimize f(x)
    subject to x ∈ X.     (P)

Here, f : R^n → R is called the objective function, and X ⊆ R^n is called the feasible region. Thus, x = (x_1, ..., x_n) is an n-dimensional vector, and we shall agree that it is represented in column form; in other words, we treat x as an n × 1 matrix. The entries x_1, ..., x_n are called the decision variables of (P). If X = R^n, then (P) is called an unconstrained optimization problem; otherwise, it is called a constrained optimization problem.

As the above formulation suggests, we are interested in an optimal solution to (P), which is defined as a point x* ∈ X such that f(x*) ≤ f(x) for all x ∈ X. We call f(x*) the optimal value of (P).

To illustrate the above concepts, let us consider the following example:

Example 1. Suppose that f : R^2 → R is given by f(x_1, x_2) = x_1^2 + 2x_2^2, and X = {(x_1, x_2) ∈ R^2 : 0 ≤ x_1 ≤ 1, 1 ≤ x_2 ≤ 3}. Then, we can write (P) as

    minimize x_1^2 + 2x_2^2
    subject to 0 ≤ x_1 ≤ 1, 1 ≤ x_2 ≤ 3.     (P)

This is a constrained optimization problem, and it is easy to verify that f(x_1, x_2) ≥ f(0, 1) = 2 for all (x_1, x_2) ∈ X. Thus, we say that (0, 1) is an optimal solution to (P), and f(0, 1) = 2 is the optimal value. It is worth computing the derivative of f at (0, 1):

    ∇f(x_1, x_2) = [∂f/∂x_1, ∂f/∂x_2]^T = [2x_1, 4x_2]^T,  so that ∇f(0, 1) = [0, 4]^T.

This shows that for a constrained optimization problem, the derivative at the optimal solution need not be zero.

The different structures of f and X in (P) give rise to different classes of optimization problems. Some important classes include:

1. discrete optimization problems, when the set X consists of countably many points;
2. linear optimization problems, when f takes the form a_1 x_1 + a_2 x_2 + ... + a_n x_n for some given a_1, ..., a_n, and X is a set defined by linear inequalities;

3. nonlinear optimization problems, when f is nonlinear or X cannot be defined by linear inequalities alone;

4. stochastic optimization problems, where f takes the form

    f(x) = E_Z[F(x, Z)],

where Z is a random parameter.

To illustrate the above concepts, let us consider the following problem, which will serve as our running example:

Resource Allocation Problem. Suppose that we have an initial wealth of S_0 dollars, and we want to allocate it to two investment options. By allocating x_0 dollars to the first option, one earns a return of g(x_0). The remaining S_0 - x_0 dollars will earn a return of h(S_0 - x_0). Here, we are assuming that 0 ≤ x_0 ≤ S_0, so that we are not borrowing extra money to fund our investments. Now, a natural goal is to choose the allocation amount x_0 to maximize our total return, which is given by f(x_0) = g(x_0) + h(S_0 - x_0). In our notation, the resource allocation problem is nothing but the following optimization problem:

    maximize g(x_0) + h(S_0 - x_0)
    subject to x_0 ∈ X = [0, S_0].     (RAP)

Consider the following scenarios:

1. Suppose that both g and h are linear, i.e., g(x) = ax + b and h(x) = cx + d for some a, b, c, d ∈ R. Then, (RAP) becomes

    maximize (a - c)x_0 + b + d + cS_0
    subject to x_0 ∈ [0, S_0],     (RAP-L)

which is a linear optimization problem. In this case, the optimal solution to (RAP-L) can be determined explicitly. Indeed, if a - c ≥ 0, then it is profitable to make x_0 as large as possible, and hence the optimal solution is x_0* = S_0. On the other hand, if a - c < 0, then a similar argument shows that the optimal solution should be x_0* = 0.

Suppose that we change the constraint in (RAP-L) from x_0 ∈ [0, S_0] to

    x_0 ∈ X = {0, S_0/M, 2S_0/M, ..., S_0},

where M ≥ 2 is some integer. Then, the problem becomes a discrete optimization problem, as the feasible region X now consists of only a finite number of points.

2.
Suppose that g(x) = a log x and h(x) = b log x for some a, b > 0. Then, (RAP) becomes

    maximize a log x_0 + b log(S_0 - x_0)
    subject to x_0 ∈ [0, S_0],     (RAP-LOG)
which is a nonlinear optimization problem. Observe that if x_0* is an optimal solution to (RAP-LOG), then we must have 0 < x_0* < S_0; in other words, the boundary points x_0 = 0 and x_0 = S_0 cannot be optimal for (RAP-LOG). This implies that the optimal solution x_0* can be found by differentiating the objective function and setting the derivative to zero; i.e., x_0* satisfies

    df/dx_0 = a/x_0 - b/(S_0 - x_0) = 0.

In particular, we obtain x_0* = aS_0/(a + b).

3. Let Z be a random variable with

    Pr(Z = 1) = 1/4,  Pr(Z = -1) = 3/4.

Consider the functions G and g defined by

    G(x, Z) = Zx + b,  g(x) = E_Z[G(x, Z)],

where b ∈ R is a given constant. Furthermore, suppose that h(x) = cx + d, where c, d ∈ R are given. Then, (RAP) becomes

    maximize E_Z[G(x, Z)] + cx + d
    subject to x ∈ [0, S_0],     (RAP-S)

which is a stochastic optimization problem. Note that by definition of expectation, we have

    E_Z[G(x, Z)] = G(x, 1) Pr(Z = 1) + G(x, -1) Pr(Z = -1)
                 = (1/4)(x + b) + (3/4)(-x + b)
                 = -x/2 + b

for any x. Hence, (RAP-S) can be written as

    maximize (c - 1/2)x + b + d,

which is a simple linear optimization problem.

3 Introduction to Dynamic Programming

Observe that all the optimization problems introduced in the previous section involve only a one-stage decision, namely, to choose a point x in the feasible region X to minimize an objective function f. However, in reality, information is often released in stages, and we are allowed to take advantage of the new information in each stage to make a new decision. This gives rise to multi-stage optimization problems, which we shall refer to as dynamic programming or dynamic optimization problems.
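Before turning to the multi-stage setting, it is worth checking the one-stage calculus numerically. The sketch below is a simple grid search (the parameter values a = 2, b = 3, S_0 = 10 are illustrative assumptions, not from the handout) that recovers the (RAP-LOG) solution x_0* = aS_0/(a + b):

```python
import math

# Grid search for (RAP-LOG): maximize a*log(x) + b*log(S0 - x) over (0, S0).
# Parameter values are illustrative assumptions, not from the handout.
a, b, S0 = 2.0, 3.0, 10.0

# Sample points strictly inside (0, S0) to avoid log(0) at the boundary.
grid = [S0 * (i + 1) / 100001 for i in range(100000)]
x_best = max(grid, key=lambda x: a * math.log(x) + b * math.log(S0 - x))

# Calculus predicts x0* = a*S0/(a + b).
x_star = a * S0 / (a + b)
print(x_best, x_star)
```

Since the objective is strictly concave, the best grid point lies within one grid spacing of the true maximizer, so x_best agrees with x_star = 4 to about four decimal places here.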
Before we introduce the theory of dynamic programming, let us study an example and understand some of the difficulties of dynamic optimization. Consider a two-stage generalization of the resource allocation problem, in which the first stage proceeds as before. However, as a price of obtaining the return g(x_0), the original allocation x_0 to the first option is reduced to ax_0, where 0 < a < 1. Similarly, the allocation S_0 - x_0 for obtaining the return h(S_0 - x_0) is reduced to b(S_0 - x_0), where 0 < b < 1. In particular, at the end of the first stage, the available wealth for investment in the next stage is

    S_1 = ax_0 + b(S_0 - x_0).

Now, in the second stage, one can again split the S_1 dollars into the two investment options, obtaining a return of g(x_1) + h(S_1 - x_1) if x_1 dollars is allocated to the first option and the remaining amount S_1 - x_1 is allocated to the second option. The goal now is to choose the allocation amounts x_0 and x_1 in both stages to maximize the total return

    f_{S_0}(x_0, x_1) = g(x_0) + h(S_0 - x_0) + g(x_1) + h(S_1 - x_1).

In other words, we can formulate the two-stage resource allocation problem as follows:

    maximize g(x_0) + h(S_0 - x_0) + g(x_1) + h(S_1 - x_1)
    subject to 0 ≤ x_0 ≤ S_0, 0 ≤ x_1 ≤ S_1,
               S_1 = ax_0 + b(S_0 - x_0).     (RAP-2)

Of course, there is no reason to stop at a two-stage problem. By iterating the above process, we obtain an N-stage resource allocation problem, where at the end of the k-th stage (for k = 1, 2, ..., N-1), the available wealth is

    S_k = ax_{k-1} + b(S_{k-1} - x_{k-1}),

where x_{k-1} is the amount allocated to the first option in the k-th stage. Mathematically, the N-stage problem can be formulated as follows:

    maximize sum_{k=0}^{N-1} [g(x_k) + h(S_k - x_k)]
    subject to 0 ≤ x_0 ≤ S_0, 0 ≤ x_1 ≤ S_1, ..., 0 ≤ x_{N-1} ≤ S_{N-1},
               S_k = ax_{k-1} + b(S_{k-1} - x_{k-1}) for k = 1, ..., N-1.     (RAP-N)

Now, an important question is: how would one solve (RAP-N)? If g and h are linear, then (RAP-N) is a linear optimization problem, and hence it can in principle be solved by, say, the simplex method.
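To make the coupling between stages concrete, the following sketch (a hypothetical helper, not code from the handout) evaluates the (RAP-N) objective for a given allocation sequence, applying the wealth update S_{k+1} = a x_k + b(S_k - x_k) as it goes:

```python
def total_return(g, h, a, b, S0, xs):
    """Evaluate the N-stage objective, the sum of g(x_k) + h(S_k - x_k),
    where the wealth evolves as S_{k+1} = a*x_k + b*(S_k - x_k)."""
    S, total = S0, 0.0
    for x in xs:
        assert 0.0 <= x <= S, "infeasible: need 0 <= x_k <= S_k"
        total += g(x) + h(S - x)
        S = a * x + b * (S - x)  # wealth available at the next stage
    return total

# Illustrative linear returns g(x) = 2x, h(x) = x with a = 0.5, b = 0.4:
# a 2-stage problem allocating everything to the first option each stage.
val = total_return(lambda x: 2 * x, lambda x: x, 0.5, 0.4, 100.0, [100.0, 50.0])
print(val)
```

Note how the feasible set for x_1 depends on x_0 through S_1; this is exactly the sequential structure exploited by the recurrence derived below.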
However, the problem becomes more difficult if g and h are nonlinear. One possibility is to use calculus. Towards that end, suppose that the optimal solution (x_0*, x_1*, ..., x_{N-1}*) to (RAP-N) satisfies 0 < x_k* < S_k for k = 0, 1, ..., N-1. Let

    f_{S_0}(x_0, x_1, ..., x_{N-1}) = sum_{k=0}^{N-1} [g(x_k) + h(S_k - x_k)].

Then, we set all the partial derivatives of f_{S_0} to zero and solve for x_0, x_1, ..., x_{N-1}:

    ∂f_{S_0}/∂x_{N-1} = g'(x_{N-1}) - h'(S_{N-1} - x_{N-1}) = 0,
    ∂f_{S_0}/∂x_{N-2} = g'(x_{N-2}) - h'(S_{N-2} - x_{N-2}) + (a - b)h'(S_{N-1} - x_{N-1}) = 0,
    ...
This approach requires us to solve a system of N nonlinear equations in N unknowns, which in general is not an easy task. Worse yet, we also have to check the boundary points x_k = 0 and x_k = S_k for optimality.

Fortunately, not all is lost. Observe that in the above approach, we have not taken into account the sequential nature of the problem, i.e., that the allocations x_0, x_1, ..., x_{N-1} should be determined sequentially. This motivates us to consider approaches that can take advantage of such a structure. Towards that end, observe that the maximum total return of the N-stage resource allocation problem depends only on N and the initial wealth S_0. Hence, we can define a function q_N by

    q_N(S_0) = max {f_{S_0}(x_0, x_1, ..., x_{N-1}) : 0 ≤ x_k ≤ S_k for k = 0, 1, ..., N-1}.     (1)

In words, q_N(S_0) is the maximum return of the N-stage resource allocation problem if the initial wealth is S_0. For instance, we have

    q_1(S_0) = max {g(x_0) + h(S_0 - x_0) : 0 ≤ x_0 ≤ S_0},     (2)

which coincides with (RAP). Now, although we can use the definition of q_2(S_0) as given in (1), we can also express it in terms of q_1. To see this, recall that the total return of the two-stage problem is the first-stage return plus the second-stage return. Clearly, whatever we choose the first-stage allocation x_0 to be, the wealth available at the end of the first stage, i.e., S_1 = ax_0 + b(S_0 - x_0), must be allocated optimally in the second stage if we wish to maximize the total return. Thus, if x_0 is our allocation in the first stage, then we will obtain a return of q_1(S_1) in the second stage by choosing x_1 optimally. It follows that

    q_2(S_0) = max {g(x_0) + h(S_0 - x_0) + q_1(ax_0 + b(S_0 - x_0)) : 0 ≤ x_0 ≤ S_0}.     (3)

More generally, by using the same idea, we obtain the following recurrence relation for q_N(S_0):

    q_N(S_0) = max {g(x_0) + h(S_0 - x_0) + q_{N-1}(ax_0 + b(S_0 - x_0)) : 0 ≤ x_0 ≤ S_0}.
(4)

An important feature of (4) is that it has only one decision variable (namely x_0), as opposed to the N decision variables x_0, x_1, ..., x_{N-1} in the definition of q_N(S_0) given by (1). Now, starting with q_1(S_0), as given by (2), we can use (3) to compute q_2(S_0), which in turn can be used to compute q_3(S_0), and so on using (4). Thus, the formulation (4) allows us to turn the original N-variable formulation (RAP-N) into N one-dimensional problems. We shall see the computational advantage of such a formulation later in the course. As an illustration, consider the following example:

Example 2. Consider the two-stage resource allocation problem, where g(x) = a log x and h(x) = b log x for some a, b > 0, and the initial wealth is S_0. Recall that the maximum total return of this problem is given by

    q_2(S_0) = max {g(x_0) + h(S_0 - x_0) + q_1(ax_0 + b(S_0 - x_0)) : 0 ≤ x_0 ≤ S_0}.

To determine q_2(S_0), we start with q_1(S_1), where S_1 = ax_0 + b(S_0 - x_0). By definition, we have

    q_1(S_1) = max {a log x + b log(S_1 - x) : 0 ≤ x ≤ S_1}.

Observe that the optimal solution x* to q_1(S_1) must satisfy 0 < x* < S_1. Hence, by differentiating the objective function and setting the derivative to zero, we obtain

    a/x - b/(S_1 - x) = 0,  i.e.,  x* = (a/(a + b)) S_1.
In particular,

    q_1(S_1) = a log(rS_1) + b log((1 - r)S_1),  where r = a/(a + b).

Upon substituting this into q_2(S_0), we have

    q_2(S_0) = max {a log x_0 + b log(S_0 - x_0) + a log(rS_1) + b log((1 - r)S_1) : 0 ≤ x_0 ≤ S_0}.

Again, the optimal solution x_0* to q_2(S_0) must satisfy 0 < x_0* < S_0. Hence, by differentiating the objective function and setting the derivative to zero, we have

    a/x_0 - b/(S_0 - x_0) + a(a - b)/(ax_0 + b(S_0 - x_0)) + b(a - b)/(ax_0 + b(S_0 - x_0)) = 0.

This is just a quadratic equation in x_0, and hence the optimal solution x_0* can be found easily. We leave this as an exercise to the reader.
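As a sanity check on Example 2, the sketch below (with illustrative parameter values, not from the handout) computes q_2(S_0) two ways: by the definition (1), i.e., a brute-force search over both x_0 and x_1, and by the dynamic-programming recursion (3) using the closed form for q_1. The two values agree up to the grid resolution:

```python
import math

# Example 2 setup: g(x) = a*log(x), h(x) = b*log(x), with the same a, b
# also acting as the wealth-reduction factors. Illustrative values:
a, b, S0 = 0.6, 0.3, 10.0
r = a / (a + b)

def interior(S, n=400):
    """n grid points strictly inside (0, S), avoiding log(0)."""
    return [S * (i + 1) / (n + 1) for i in range(n)]

def q1(S):
    """Closed-form one-stage value from Example 2: allocate x* = r*S."""
    return a * math.log(r * S) + b * math.log((1 - r) * S)

# Recursion (3): a single one-dimensional maximization over x0.
q2_dp = max(a * math.log(x0) + b * math.log(S0 - x0) + q1(a * x0 + b * (S0 - x0))
            for x0 in interior(S0))

# Definition (1): brute force over both decision variables x0 and x1.
def two_stage(x0):
    S1 = a * x0 + b * (S0 - x0)
    best_second = max(a * math.log(x1) + b * math.log(S1 - x1)
                      for x1 in interior(S1))
    return a * math.log(x0) + b * math.log(S0 - x0) + best_second

q2_brute = max(two_stage(x0) for x0 in interior(S0))
print(q2_dp, q2_brute)
```

The recursion does N one-dimensional searches (here N = 2) instead of one N-dimensional search, which is exactly the computational advantage promised by (4): the brute-force cost grows like (grid size)^N, while the recursive cost grows only linearly in N.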
More informationIV. Violations of Linear Programming Assumptions
IV. Violations of Linear Programming Assumptions Some types of Mathematical Programming problems violate at least one condition of strict Linearity - Deterministic Nature - Additivity - Direct Proportionality
More informationSection K MATH 211 Homework Due Friday, 8/30/96 Professor J. Beachy Average: 15.1 / 20. ), and f(a + 1).
Section K MATH 211 Homework Due Friday, 8/30/96 Professor J. Beachy Average: 15.1 / 20 # 18, page 18: If f(x) = x2 x 2 1, find f( 1 2 ), f( 1 2 ), and f(a + 1). # 22, page 18: When a solution of acetylcholine
More informationMath Exam Jam Concise. Contents. 1 Algebra Review 2. 2 Functions and Graphs 2. 3 Exponents and Radicals 3. 4 Quadratic Functions and Equations 4
Contents 1 Algebra Review 2 2 Functions and Graphs 2 3 Exponents and Radicals 3 4 Quadratic Functions and Equations 4 5 Exponential and Logarithmic Functions 5 6 Systems of Linear Equations 6 7 Inequalities
More informationMATH 445/545 Homework 1: Due February 11th, 2016
MATH 445/545 Homework 1: Due February 11th, 2016 Answer the following questions Please type your solutions and include the questions and all graphics if needed with the solution 1 A business executive
More informationStructured Problems and Algorithms
Integer and quadratic optimization problems Dept. of Engg. and Comp. Sci., Univ. of Cal., Davis Aug. 13, 2010 Table of contents Outline 1 2 3 Benefits of Structured Problems Optimization problems may become
More informationApplications of Linear Programming - Minimization
Applications of Linear Programming - Minimization Drs. Antonio A. Trani and H. Baik Professor of Civil Engineering Virginia Tech Analysis of Air Transportation Systems June 9-12, 2010 1 of 49 Recall the
More informationRELATIONS AND FUNCTIONS
RELATIONS AND FUNCTIONS Definitions A RELATION is any set of ordered pairs. A FUNCTION is a relation in which every input value is paired with exactly one output value. Example 1: Table of Values One way
More informationMATH2070 Optimisation
MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints
More information5 Flows and cuts in digraphs
5 Flows and cuts in digraphs Recall that a digraph or network is a pair G = (V, E) where V is a set and E is a multiset of ordered pairs of elements of V, which we refer to as arcs. Note that two vertices
More informationis called an integer programming (IP) problem. model is called a mixed integer programming (MIP)
INTEGER PROGRAMMING Integer Programming g In many problems the decision variables must have integer values. Example: assign people, machines, and vehicles to activities in integer quantities. If this is
More informationMS&E 246: Lecture 18 Network routing. Ramesh Johari
MS&E 246: Lecture 18 Network routing Ramesh Johari Network routing Last lecture: a model where N is finite Now: assume N is very large Formally: Represent the set of users as a continuous interval, [0,
More informationOPERATIONS RESEARCH. Linear Programming Problem
OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for
More information1 Strict local optimality in unconstrained optimization
ORF 53 Lecture 14 Spring 016, Princeton University Instructor: A.A. Ahmadi Scribe: G. Hall Thursday, April 14, 016 When in doubt on the accuracy of these notes, please cross check with the instructor s
More informationALGEBRA CLAST MATHEMATICS COMPETENCIES
2 ALGEBRA CLAST MATHEMATICS COMPETENCIES IC1a: IClb: IC2: IC3: IC4a: IC4b: IC: IC6: IC7: IC8: IC9: IIC1: IIC2: IIC3: IIC4: IIIC2: IVC1: IVC2: Add and subtract real numbers Multiply and divide real numbers
More informationSample Mathematics 106 Questions
Sample Mathematics 106 Questions x 2 + 8x 65 (1) Calculate lim x 5. x 5 (2) Consider an object moving in a straight line for which the distance s (measured in feet) it s travelled from its starting point
More informationMath 120 Final Exam Practice Problems, Form: A
Math 120 Final Exam Practice Problems, Form: A Name: While every attempt was made to be complete in the types of problems given below, we make no guarantees about the completeness of the problems. Specifically,
More informationMath Practice Final - solutions
Math 151 - Practice Final - solutions 2 1-2 -1 0 1 2 3 Problem 1 Indicate the following from looking at the graph of f(x) above. All answers are small integers, ±, or DNE for does not exist. a) lim x 1
More informationLinear Programming. Businesses seek to maximize their profits while operating under budget, supply, Chapter
Chapter 4 Linear Programming Businesses seek to maximize their profits while operating under budget, supply, labor, and space constraints. Determining which combination of variables will result in the
More informationChapter 7. Extremal Problems. 7.1 Extrema and Local Extrema
Chapter 7 Extremal Problems No matter in theoretical context or in applications many problems can be formulated as problems of finding the maximum or minimum of a function. Whenever this is the case, advanced
More informationStochastic Optimization
Chapter 27 Page 1 Stochastic Optimization Operations research has been particularly successful in two areas of decision analysis: (i) optimization of problems involving many variables when the outcome
More informationConcept and Definition. Characteristics of OR (Features) Phases of OR
Concept and Definition Operations research signifies research on operations. It is the organized application of modern science, mathematics and computer techniques to complex military, government, business
More informationInterior Point Methods for Mathematical Programming
Interior Point Methods for Mathematical Programming Clóvis C. Gonzaga Federal University of Santa Catarina, Florianópolis, Brazil EURO - 2013 Roma Our heroes Cauchy Newton Lagrange Early results Unconstrained
More information32. Use a graphing utility to find the equation of the line of best fit. Write the equation of the line rounded to two decimal places, if necessary.
Pre-Calculus A Final Review Part 2 Calculator Name 31. The price p and the quantity x sold of a certain product obey the demand equation: p = x + 80 where r = xp. What is the revenue to the nearest dollar
More informationTopic 8: Optimal Investment
Topic 8: Optimal Investment Yulei Luo SEF of HKU November 22, 2013 Luo, Y. SEF of HKU) Macro Theory November 22, 2013 1 / 22 Demand for Investment The importance of investment. First, the combination of
More informationThe Boundary Problem: Markov Chain Solution
MATH 529 The Boundary Problem: Markov Chain Solution Consider a random walk X that starts at positive height j, and on each independent step, moves upward a units with probability p, moves downward b units
More information