Optimization
Thomas Dangl
March 2010

Course Preparation

We use the program package R throughout the course. Please download the package and install it on your system. Please also download and read the short introduction to the R language and try to get started with the basics of R. For those using MS-Windows and MS-Excel: also consider installing the package RExcel (with the load packages function inside the R console) to perfectly integrate R with MS-Excel.

Literature

The slides used to teach the course are still under construction! I will provide you with the most current version on a week-to-week basis. Please find the pdf of the slides at learn@wu. Additional literature for the course is

Daniel Léonard and Ngo van Long (1992), Optimal Control Theory and Static Optimization in Economics, Cambridge University Press.
Rangarajan K. Sundaram (1996), A First Course in Optimization Theory, Cambridge University Press.
Morton I. Kamien and Nancy L. Schwartz (1991), Dynamic Optimization, Elsevier North-Holland.

Some more applied texts:

Kenneth L. Judd (1998), Numerical Methods in Economics, MIT Press.
Mario J. Miranda and Paul L. Fackler (2002), Applied Computational Economics and Finance, MIT Press.

Classes

The classes will take place on March 1, March 8, March 15, March 22, April 19, April 26, and April 28. There will be a mid-term exam on March 22 and a final exam on April 28 (approx. 90 minutes each). These exams contribute 35% each to the final score. Preparation of the home exercises contributes 30% to the final score. Active class participation, e.g., pointing out typos and errors in the slides, can earn up to an additional 10%. A score of at least 50% is required to pass the course.

Course Outline

1 Introduction
2 Static Optimization
  Unconstrained Optimization
  Equality Constraints: The Method of Lagrange
  Inequality Constraints: The Method of Kuhn-Tucker
  Special Cases: LP, MILP, Portfolio Optimization
3 Dynamic Optimization
  The Bellman Principle
  Theory of Optimal Control
  Theory of Optimal Control: Phase Diagram Analysis

Unconstrained Optimization (1)

We deal with the choice of finitely many variables in order to maximize (minimize) some objective. Sometimes the values of these variables are unrestricted; other times they are restricted by equality or inequality constraints. The key concept of concavity (convexity) is introduced and will be associated with "nice" optimization problems. Assumptions, theorems, lemmas and corollaries will be stated in the context in which we use and need them, i.e., there will be no section like "Mathematical Preliminaries".

Assumption: Differentiability
If not stated otherwise, all functions we use are $C^2$, i.e., they have continuous second-order derivatives.

Unconstrained Optimization (2)

Choose the values $(x_1, x_2, \dots, x_n) \in \mathbb{R}^n$ in order to maximize the real-valued function $f(x_1, x_2, \dots, x_n)$. Consider the example
$$f(x_1, x_2) = 300 - (10 - x_1)^3 - 10 x_1 - 2 (10 - x_2)^3 - 20 x_2.$$
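Since the course works in R, a quick numerical look at this example may help; the following is a minimal sketch (the starting point and the use of optim are our choices, not part of the slides).

# Example objective from the slide
f <- function(x) 300 - (10 - x[1])^3 - 10*x[1] - 2*(10 - x[2])^3 - 20*x[2]

# Maximize by minimizing -f; start near the interior critical point.
opt <- optim(c(8, 8), function(x) -f(x), method = "BFGS")
opt$par      # approx. (30 - sqrt(30))/3 = 8.174 in each coordinate
-opt$value   # the (local) maximum value of f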

Unconstrained Optimization (3)

Def. Convex Subsets of $\mathbb{R}^n$
Consider a set $D \subseteq \mathbb{R}^n$. If for all $x, y \in D$ we have $t x + (1-t) y \in D$ for all $t \in [0,1]$, then $D$ is called convex.

Def. Open Subsets of $\mathbb{R}^n$
The set $D \subseteq \mathbb{R}^n$ is called open if for each $x \in D$ there exists $r > 0$ such that every $\tilde{x} \in \mathbb{R}^n$ with $\|\tilde{x} - x\| < r$ lies in $D$. I.e., each $x$ in $D$ is surrounded by a neighborhood that is entirely contained in $D$; or, each $x \in D$ is an interior point.

Unconstrained Optimization (4)

Assumption: Domains of Objective Functions
We only consider objective functions defined on domains that are convex and open subsets of $\mathbb{R}^n$.

Def: Unconstrained Optimization, Global Maximum
Consider $f: D \subseteq \mathbb{R}^n \to \mathbb{R}$. Find $x^* \in D$ such that $x^* \in \operatorname{argmax}_{x \in D} f(x)$, i.e., $f(x^*) \ge f(x)$ for all $x \in D$. Then $x^*$ is a global maximum. Sometimes a maximum is only the maximum value in a certain neighborhood, i.e., the max. property is only valid locally.

Unconstrained Optimization (5)

Def: Local Maximum
$x^*$ is called a local maximum if there exists $r > 0$ such that $f(x^*) \ge f(x)$ for all $x \in D$ with $\|x - x^*\| < r$.

Def: Gradient
The gradient $f_x \in \mathbb{R}^n$ is the column vector
$$f_x = \frac{\partial f}{\partial x} = \nabla f := \begin{pmatrix} \partial f / \partial x_1 \\ \partial f / \partial x_2 \\ \vdots \\ \partial f / \partial x_n \end{pmatrix}.$$
The gradient $f_x$ evaluated at some point $x$ and its components characterize the tangent plane to $f$ at $x$, and $f_x$ indicates the direction of the steepest increase in $f$.

Unconstrained Optimization (6)

The Hesse matrix is the matrix that contains the second-order partial derivatives of $f$.

Def. Hesse Matrix
The Hesse matrix $f_{xx}$ is the symmetric $n \times n$ matrix
$$f_{xx} = \frac{\partial^2 f}{\partial x^2} = H_f := \begin{pmatrix} \frac{\partial^2 f}{\partial x_1^2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{pmatrix}.$$
According to our assumptions, $f_{xx}$ exists on the entire domain $D$ of $f$ and is continuous.

Unconstrained Optimization (7)

To linearize a function at some $\bar{x}$, i.e., to approximate it by the tangent plane to $f$ at the chosen $\bar{x}$, we use Taylor's Theorem.

First-Order Taylor-Series Expansion
Consider $x, \bar{x} \in D \subseteq \mathbb{R}^n$; then we can write
$$f(x) = f(\bar{x}) + (x - \bar{x})' f_x(\bar{x}) + R_1(\bar{x}, x)$$
with
$$R_1(\bar{x}, x) = \tfrac{1}{2} (x - \bar{x})' f_{xx}(x_t) (x - \bar{x}),$$
where $x_t = t \bar{x} + (1-t) x$ for some $t \in [0,1]$. Furthermore,
$$\lim_{x \to \bar{x}} \frac{R_1(\bar{x}, x)}{\|x - \bar{x}\|} = 0,$$
with $\|\cdot\|$ the Euclidean norm, i.e., $R_1(\bar{x}, x) = o(\|x - \bar{x}\|)$.

Unconstrained Optimization (8)

Linearization
Approximate the function $f$ by ignoring terms of the order $o(\|x - \bar{x}\|)$:
$$f(x) \approx \hat{f}(x) = f(\bar{x}) + (x - \bar{x})' f_x(\bar{x}).$$
Take the above example, $f(x_1, x_2) = 300 - (10 - x_1)^3 - 10 x_1 - 2 (10 - x_2)^3 - 20 x_2$:
$$\hat{f}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = f\begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \end{pmatrix} + \left[ \begin{pmatrix} x_1 \\ x_2 \end{pmatrix} - \begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \end{pmatrix} \right]' \begin{pmatrix} 3 (10 - \bar{x}_1)^2 - 10 \\ 6 (10 - \bar{x}_2)^2 - 20 \end{pmatrix}.$$
See the R implementation for an illustration.
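The R implementation referred to above is not part of this transcription; a minimal sketch of the linearization could look as follows (the expansion and evaluation points are arbitrary):

# First-order Taylor approximation of the example function
f  <- function(x) 300 - (10 - x[1])^3 - 10*x[1] - 2*(10 - x[2])^3 - 20*x[2]
fx <- function(x) c(3*(10 - x[1])^2 - 10,    # df/dx1
                    6*(10 - x[2])^2 - 20)    # df/dx2

fhat <- function(x, xbar) f(xbar) + sum((x - xbar) * fx(xbar))

xbar <- c(9, 9)
x    <- c(9.2, 8.9)
c(exact = f(x), linearized = fhat(x, xbar))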

Unconstrained Optimization (9)

First-Order Necessary Conditions
Suppose $x^*$ is the desired maximum, i.e., $f(x^*) \ge f(x)$ for all $x \in D$. Then it follows that $f_x(x^*) = 0$.

That is, at a maximum all partial derivatives of $f$ must be zero; or, in other words, the tangent plane at $x^*$ must be flat. Suppose $x^*$ is a maximum but $f_x(x^*) \ne 0$. Then we can find some $\tilde{x}$ in the neighborhood of $x^*$ such that $f(\tilde{x}) > f(x^*)$. (The proof is a simple application of Taylor's rule and will be presented in class.)

Unconstrained Optimization (10)

In order to capture the curvature of the objective function, we use a second-order Taylor-series expansion.

Second-Order Taylor-Series Expansion
Consider $x, \bar{x} \in D \subseteq \mathbb{R}^n$; then we can write
$$f(x) = f(\bar{x}) + (x - \bar{x})' f_x(\bar{x}) + \tfrac{1}{2} (x - \bar{x})' f_{xx}(\bar{x}) (x - \bar{x}) + R_2(\bar{x}, x)$$
with
$$\lim_{x \to \bar{x}} \frac{R_2(\bar{x}, x)}{\|x - \bar{x}\|^2} = 0,$$
i.e., $R_2(\bar{x}, x) = o(\|x - \bar{x}\|^2)$.

The second-order Taylor-series expansion of $f$ at a (local) maximum $x^*$ results in
$$f(x) = f(x^*) + \tfrac{1}{2} (x - x^*)' f_{xx}(x^*) (x - x^*) + o(\|x - x^*\|^2).$$

Unconstrained Optimization (11)

[Figure: sectional drawing of $f$ from the example above at $x_2 = (30 - \sqrt{30})/3$, together with the first- and second-order approximations at some $\bar{x}_1$.]

Unconstrained Optimization (12)

[Figure: sectional drawing of $f$ from the example above at $x_2 = (30 - \sqrt{30})/3$, together with the first- and second-order approximations at $\bar{x}_1 = (30 - \sqrt{30})/3$.]

Unconstrained Optimization (13)

See Taylor.R for a 3D illustration of a Taylor-series approximation of the above example, using the package rgl.

Def. Definiteness
Consider a symmetric matrix $A$ such that $x' A x \le 0$ for all $x \in \mathbb{R}^n$; then $A$ is called negative-semidefinite. If $x' A x < 0$ for all $x \ne 0$, then $A$ is called negative-definite.

Unconstrained Optimization (14)

Th: Second-Order Necessary Condition
If $f(x)$ reaches a (local) maximum at $x^*$, then $f_{xx}$ is negative-semidefinite at $x^*$.

Suppose there exists an $\tilde{x}$ such that $\tilde{x}' f_{xx}(x^*) \tilde{x} > 0$; then we can show that there exists some $\hat{x}$ in the neighborhood of $x^*$ such that $f(\hat{x}) > f(x^*)$. This can be shown using the second-order Taylor-series expansion of $f$ at $x^*$.

Unconstrained Optimization (15)

How to check the definiteness of a matrix?

Def: Leading Principal Minors
Let $A$ be an $n \times n$ matrix; then the $r$-th leading principal minor $B_r$ is the determinant of the matrix that results from deleting the last $n - r$ rows and the last $n - r$ columns of $A$.

Th: Definiteness
An $n \times n$ symmetric matrix $A$ is negative-definite if and only if all its eigenvalues are negative. $A$ is negative-semidefinite if and only if all its eigenvalues are non-positive. ($A$ is real-valued and symmetric; the spectral theorem guarantees that all eigenvalues are real-valued.) The matrix $A$ is negative-definite if and only if its leading principal minors alternate in sign beginning with a negative sign, i.e., $(-1)^r B_r > 0$.
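Both checks are easy to script in R; the following is a hedged sketch (the helper names are ours, not from the course material):

# Check definiteness of a symmetric matrix via its eigenvalues
is_negative_definite <- function(A, tol = 1e-12) {
  all(eigen(A, symmetric = TRUE, only.values = TRUE)$values < -tol)
}

# Leading principal minors B_1, ..., B_n
leading_principal_minors <- function(A) {
  sapply(seq_len(nrow(A)), function(r) det(A[1:r, 1:r, drop = FALSE]))
}

A <- matrix(c(-2, 1, 1, -3), nrow = 2)
is_negative_definite(A)        # TRUE
leading_principal_minors(A)    # -2, 5: signs alternate, starting negative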

Unconstrained Optimization (16)

Some notes on minima: The first-order necessary condition for a (local) minimum at some $x^*$ is identical to the first-order necessary condition for a maximum. The second-order necessary condition for a (local) minimum requires that the quadratic form $y' f_{xx} y \ge 0$ for all $y \in \mathbb{R}^n$.

Def. Definiteness
Consider a symmetric matrix $A$ such that $x' A x \ge 0$ for all $x \in \mathbb{R}^n$; then $A$ is called positive-semidefinite. If $x' A x > 0$ for all $x \ne 0$, then $A$ is called positive-definite.

Unconstrained Optimization (17)

To check for the definiteness of a matrix:

Th: Definiteness (cont.)
An $n \times n$ symmetric matrix $A$ is positive-definite if and only if all its eigenvalues are positive. It is positive-semidefinite if and only if all its eigenvalues are non-negative. The matrix $A$ is positive-definite if and only if all its leading principal minors are positive.

Th: Second-Order Necessary Condition (cont.)
If $f(x)$ reaches a (local) minimum at $x^*$, then $f_{xx}$ is positive-semidefinite at $x^*$.

Unconstrained Optimization (18)

Sufficient conditions for a (local) maximum (minimum) are somewhat stricter than the necessary first- and second-order conditions, since semi-definiteness of the Hesse matrix is not sufficient to ensure that the quadratic term in the Taylor-series expansion actually dominates the residual.

Th: Sufficient Conditions
If $f_x(x^*) = 0$ and $f_{xx}(x^*)$ is negative-definite, then $f(x)$ reaches a local maximum at $x^*$.

Note: It is not sufficient for a maximum that $f$ is concave at $x^*$ in each variable separately, i.e., that $\partial^2 f / \partial x_i^2 \le 0$ for all $i = 1, \dots, n$. Consider the example
$$f(x_1, x_2) = -x_1^2 + a x_1 x_2 - x_2^2$$
for various values of $a \ge 0$.

Unconstrained Optimization (19)

The gradient of $f$ is
$$f_x\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} -2 x_1 + a x_2 \\ -2 x_2 + a x_1 \end{pmatrix}.$$
Independent of $a$, the point $x^* = (0, 0)'$ satisfies the first-order conditions. The Hesse matrix of $f$ is given by
$$f_{xx}(x) = \begin{pmatrix} -2 & a \\ a & -2 \end{pmatrix};$$
thus, the second-order partial derivatives with respect to $x_1$ and $x_2$, respectively, are negative, independent of $a$. The leading principal minors are $B_1 = -2$ and $B_2 = 4 - a^2$; hence, the Hesse matrix is negative-semidefinite for $a \le 2$.
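Applying an eigenvalue check (see the sketch above) to this Hesse matrix confirms the threshold at $a = 2$; for $a > 2$ one eigenvalue turns positive and $x^* = (0,0)'$ is a saddle point:

hesse <- function(a) matrix(c(-2, a, a, -2), nrow = 2)

# Eigenvalues are -2 + a and -2 - a, so both are negative only for a < 2
for (a in c(0, 1, 2, 3))
  cat("a =", a, " eigenvalues:",
      eigen(hesse(a), symmetric = TRUE, only.values = TRUE)$values, "\n")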

Unconstrained Optimization (20)

[Figure: contour plots for $a = 0$, $a = 1$, $a = 2$, and $a = 3$.]

Unconstrained Optimization (21)

See SaddlePoint.R for a 3D illustration.

We next derive some global results for concave functions. Take a look at the exact form of Taylor's expansion: there exists $t \in [0,1]$ such that
$$f(x) = f(x^*) + (x - x^*)' f_x(x^*) + \tfrac{1}{2} (x - x^*)' f_{xx}(x_t) (x - x^*)$$
with $x_t = t x^* + (1-t) x$, i.e., $f_{xx}$ is evaluated at some convex combination of $x$ and $x^*$. Let us state three different definitions of concavity.

Unconstrained Optimization (22)

Def. a: Concavity
$f \in C^2$ defined on a convex set $D$ is concave if and only if $f_{xx}$ is negative-semidefinite everywhere on $D$.

Def. b: Concavity
$f \in C^1$ defined on a convex set $D$ is concave if and only if
$$f(\hat{x}) - f(\bar{x}) \le (\hat{x} - \bar{x})' f_x(\bar{x}), \quad \forall \hat{x}, \bar{x} \in D.$$

Def. c: Concavity
$f$ defined on a convex set $D$ is concave if and only if
$$f(t \hat{x} + (1-t) \bar{x}) \ge t f(\hat{x}) + (1-t) f(\bar{x}), \quad 0 \le t \le 1, \ \forall \hat{x}, \bar{x} \in D.$$

Unconstrained Optimization (23)

With the help of concavity we can state global conditions.

Th. Sufficient Conditions for Concave Functions
Let $f(x)$ be a concave function; then it reaches a global maximum at $x^*$ if and only if $f_x(x^*) = 0$.

How to find an extremum numerically? One possibility: Newton's method. Assume $x^*$ is a local maximum and $f_{xx}$ is negative-definite in a neighborhood of $x^*$. Take a start solution $x_0$ in the neighborhood of $x^*$, approximate $f$ by a second-order Taylor series at $x_0$, and take the derivative with respect to $x$:
$$\hat{f}_x(x) = f_x(x_0) + f_{xx}(x_0)(x - x_0).$$

Unconstrained Optimization (24)

Compute the next candidate for a solution to $f_x = 0$ by solving the approximation $\hat{f}_x = 0$. This yields the iteration
$$x_{i+1} = x_i - (f_{xx}(x_i))^{-1} f_x(x_i).$$
Convergence is generally not guaranteed; see the Newton-Kantorovich theorem for conditions. See simplenewton.r for a simple R implementation, and see also the man pages for nlm. Newton's method does not distinguish between minima and maxima.
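simplenewton.r itself is not reproduced here; a minimal sketch of the iteration, applied to the running example, might look like this (the stopping rule and tolerance are our choices):

newton <- function(x, grad, hess, tol = 1e-10, maxit = 100) {
  for (i in seq_len(maxit)) {
    step <- solve(hess(x), grad(x))   # (f_xx)^{-1} f_x via a linear solve
    x <- x - step
    if (sqrt(sum(step^2)) < tol) break
  }
  x
}

# Running example: f = 300 - (10-x1)^3 - 10*x1 - 2*(10-x2)^3 - 20*x2
grad <- function(x) c(3*(10 - x[1])^2 - 10, 6*(10 - x[2])^2 - 20)
hess <- function(x) diag(c(-6*(10 - x[1]), -12*(10 - x[2])))

newton(c(8, 8), grad, hess)   # converges to (30 - sqrt(30))/3 = 8.174...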

Optimization Under Equality Constraints (1)

Typically, when searching for an optimum of the objective function, we are restricted in the choice of $x$ by constraints: consumers have budget constraints, society faces resource constraints, etc.

The Classical Equality Constrained Problem (L.1)
Find $x^* = (x_1^*, \dots, x_n^*)$ that maximizes $f(x)$ subject to
$$g^1(x_1, \dots, x_n) = 0, \ \dots, \ g^m(x_1, \dots, x_n) = 0, \quad m < n,$$
where $f$ is the objective function and $g^j$, $j = 1, \dots, m$, are the constraints.

Optimization Under Equality Constraints (2)

Example: Maximize the objective function
$$f(x_1, x_2) = 10 - (10 - x_1)^2 - 2 (10 - x_2)^2$$
subject to
$$g(x_1, x_2) = 27 - 2 x_1 - x_2 = 0.$$

[Figure: contours of $f$ (levels 0, 9, ...) together with the constraint line $g = 0$.]

Optimization Under Equality Constraints (3)

Compare the gradient of the objective function, $f_x$, to the gradient of the constraint, $g_x$, at a non-optimal choice of $(x_1, x_2)$.

[Figure: contour plot with the vectors $f_x$ and $g_x$ at a non-optimal point on $g = 0$.]

Optimization Under Equality Constraints (4)

Compare the gradient of the objective function, $f_x$, to the gradient of the constraint, $g_x$, at another non-optimal choice of $(x_1, x_2)$.

[Figure: contour plot with the vectors $f_x$ and $g_x$ at a second non-optimal point on $g = 0$.]

Optimization Under Equality Constraints (5)

In the optimum, the gradient of the objective function, $f_x$, has to be parallel to the gradient of the constraint, $g_x$, i.e., $f_x$ has to be some multiple of $g_x$.

[Figure: contour plot with $f_x$ parallel to $g_x$ at the optimum on $g = 0$.]

Optimization Under Equality Constraints (6)

To formalize this idea, extend the dimensionality of the problem (introducing the before-mentioned multiples) and define the Lagrangian $L$.

Def. The Lagrangian
$$L(\lambda_1, \dots, \lambda_m, x_1, \dots, x_n) = f(x_1, \dots, x_n) + \sum_{j=1}^m \lambda_j g^j(x_1, \dots, x_n),$$
or more compactly
$$L(\lambda, x) = f(x) + \lambda' g(x).$$

Optimization Under Equality Constraints (7)

Define a matrix that contains the first-order derivatives of all the $g$s, where the rows correspond to the constraints and the columns to the variables, i.e., the matrix is $(m \times n)$.

Def. G
Let the $(m \times n)$ matrix $G$ be defined as
$$G = \left[ \frac{\partial g^j}{\partial x_i} \right]_{j,i} = \begin{pmatrix} \frac{\partial g^1}{\partial x_1} & \cdots & \frac{\partial g^1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial g^m}{\partial x_1} & \cdots & \frac{\partial g^m}{\partial x_n} \end{pmatrix}.$$

Optimization Under Equality Constraints (8)

Th. First-Order Necessary Conditions
Let $x^*$ be a solution to problem (L.1) and let the $(m \times n)$ matrix $G$ have full rank $m$ (this is known as the rank condition). Then there must exist a unique vector $\lambda^* = (\lambda_1^*, \dots, \lambda_m^*)' \in \mathbb{R}^m$ such that
$$L_\lambda = g(x^*) = 0,$$
$$L_x = f_x(x^*) + G'(x^*) \lambda^* = 0.$$

The first set of equations expresses that at the optimum the constraints must be satisfied. The second set of equations states that at the optimum the gradient of the objective function $f_x$ can be expressed as a linear combination of the gradients of the constraints.

Optimization Under Equality Constraints (9)

In more detail, this set of equations is
$$\frac{\partial L}{\partial \lambda_1} = g^1(x^*) = 0, \ \dots, \ \frac{\partial L}{\partial \lambda_m} = g^m(x^*) = 0,$$
$$\frac{\partial L}{\partial x_1} = f_{x_1}(x^*) + \sum_{j=1}^m \lambda_j^* g^j_{x_1}(x^*) = 0, \ \dots, \ \frac{\partial L}{\partial x_n} = f_{x_n}(x^*) + \sum_{j=1}^m \lambda_j^* g^j_{x_n}(x^*) = 0.$$

Optimization Under Equality Constraints (10)

Some more intuition: Assume $x^*$ is a constrained maximum. Any small change in $x$ that is feasible must satisfy
$$dg = 0 = G(x^*)\,dx,$$
i.e., $dx$ is feasible if it does not lead to the violation of any of the constraints. The first-order necessary condition of optimality requires that at $x^*$ such a feasible change results in
$$df = f_x'(x^*)\,dx = 0,$$
otherwise $dx$ or $-dx$ leads to a marginal improvement. I.e., we do not require that $f_x(x^*) = 0$ (as in the unconstrained optimum), but only that the derivative of $f$ vanishes in feasible directions.

Optimization Under Equality Constraints (11)

Every marginal feasible change $dx$ that satisfies $G\,dx = 0$ must also satisfy $f_x'\,dx = 0$, i.e., every $dx$ that satisfies the system of $m$ linear equations in $n$ variables
$$g^1_{x_1} dx_1 + \dots + g^1_{x_n} dx_n = 0, \ \dots, \ g^m_{x_1} dx_1 + \dots + g^m_{x_n} dx_n = 0$$
must also satisfy
$$f_{x_1} dx_1 + \dots + f_{x_n} dx_n = 0.$$
Hence $f_x$ must be a linear combination of the $g_x$, and since the rank of $G$ equals $m$, the weights $\lambda^*$ are unique.

Optimization Under Equality Constraints (12)

What if the rank condition is not satisfied? Then multipliers might not be unique (the system of equations is over-determined), or it might be impossible to write $f_x$ as a linear combination of the $g_x$ (the case of "cusps"). We will show an example of a cusp later.

First, we return to our example: maximize
$$f(x_1, x_2) = 10 - (10 - x_1)^2 - 2 (10 - x_2)^2$$
subject to
$$g(x_1, x_2) = 27 - 2 x_1 - x_2 = 0,$$
and solve the necessary conditions for a candidate solution:
$$x_1^* = 26/3, \quad x_2^* = 29/3, \quad f(x_1^*, x_2^*) = 8.$$
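Because the first-order conditions of this example are linear in $(x_1, x_2, \lambda)$, the candidate solution can be obtained with a single linear solve; a minimal R sketch:

# FOCs of L = 10 - (10-x1)^2 - 2*(10-x2)^2 + lambda*(27 - 2*x1 - x2):
#   2*(10 - x1) - 2*lambda = 0
#   4*(10 - x2) -   lambda = 0
#   27 - 2*x1 - x2         = 0
# Rearranged as a linear system in (x1, x2, lambda):
A <- matrix(c(2, 0, 2,
              0, 4, 1,
              2, 1, 0), nrow = 3, byrow = TRUE)
b <- c(20, 40, 27)
solve(A, b)   # x1 = 26/3, x2 = 29/3, lambda = 4/3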

Optimization Under Equality Constraints (13)

Th. Second-Order Necessary Conditions
Let $x^*$ be a local maximum for problem (L.1) and let $(x^*, \lambda^*)$ satisfy the first-order conditions. Then the matrix $L^* = L_{xx}$ is negative-semidefinite for all vectors $z \in \mathbb{R}^n$ satisfying $Gz = 0$, i.e., $z' L^* z \le 0$ for all such $z$. For a minimum, negative-semidefinite has to be modified to positive-semidefinite.

Th. Second-Order Sufficient Conditions
Let $(x^*, \lambda^*)$ satisfy the first-order conditions, and in addition let the matrix $L^* = L_{xx}$ be negative-definite for all vectors $z \ne 0$ satisfying $Gz = 0$, where $L^*$ and $G$ are evaluated at $(x^*, \lambda^*)$; then $x^*$ is a local maximum for problem (L.1). For a minimum, negative-definite has to be modified to positive-definite.

Optimization Under Equality Constraints (14)

While it is intuitively clear that definiteness only matters in feasible directions, it is generally not easy to check for conditional definiteness. There is a theorem for sufficiency that is easy to check (the variables and derivatives have to be ordered exactly in the manner stated here).

Def. B is the Hesse Matrix of L
Define $B$ as the Hesse matrix of the Lagrangian $L$, where we start with the derivatives with respect to the multipliers $\lambda$ and continue with the derivatives with respect to the variables $x$:
$$B = \begin{pmatrix} L_{\lambda\lambda} & L_{\lambda x} \\ L_{x\lambda} & L_{xx} \end{pmatrix}.$$

Optimization Under Equality Constraints (15)

Explicitly:
$$B = \begin{pmatrix}
0 & \cdots & 0 & g^1_{x_1} & \cdots & g^1_{x_n} \\
\vdots & \ddots & \vdots & \vdots & & \vdots \\
0 & \cdots & 0 & g^m_{x_1} & \cdots & g^m_{x_n} \\
g^1_{x_1} & \cdots & g^m_{x_1} & f_{x_1 x_1} + \sum_{j=1}^m \lambda_j g^j_{x_1 x_1} & \cdots & f_{x_1 x_n} + \sum_{j=1}^m \lambda_j g^j_{x_1 x_n} \\
\vdots & & \vdots & \vdots & \ddots & \vdots \\
g^1_{x_n} & \cdots & g^m_{x_n} & f_{x_1 x_n} + \sum_{j=1}^m \lambda_j g^j_{x_1 x_n} & \cdots & f_{x_n x_n} + \sum_{j=1}^m \lambda_j g^j_{x_n x_n}
\end{pmatrix}.$$
Or more compactly:
$$B = \begin{pmatrix} 0 & G \\ G' & L^* \end{pmatrix}, \qquad L^* = f_{xx} + \sum_{j=1}^m \lambda_j g^j_{xx}.$$

Optimization Under Equality Constraints (16)

$B$ is $(m+n) \times (m+n)$, and the four submatrices $0$, $G$, $G'$ and $L^*$ are of order $m \times m$, $m \times n$, $n \times m$, and $n \times n$, respectively.

Th. Sufficient Conditions
Let $(x^*, \lambda^*)$ satisfy the first-order conditions, and in addition let the last $(n - m)$ leading principal minors of $B$ alternate in sign beginning with that of $(-1)^{m+1}$, where $B$ is evaluated at $(x^*, \lambda^*)$; then $x^*$ is a local maximum of problem (L.1). For a minimum, the last $(n - m)$ leading principal minors of $B$ must be of the same sign as $(-1)^m$.

Optimization Under Equality Constraints (17)

Some examples:
Optimal consumption of an exhaustible resource.
Socially optimal pricing of an exhaustible resource.

Optimization Under Equality Constraints (18)

Continue our example. The Hesse matrix $B$ is
$$B = \begin{pmatrix} 0 & -2 & -1 \\ -2 & -2 & 0 \\ -1 & 0 & -4 \end{pmatrix},$$
and since $n = 2$ and $m = 1$, only the last leading principal minor is relevant, i.e., $|B| = 18$ must have the same sign as $(-1)^{m+1} = 1$, which is satisfied. Thus, the point $x_1^* = 26/3$, $x_2^* = 29/3$, $\lambda^* = 4/3$ is a local maximum.
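The determinant is quickly verified in R (the bordering follows the $(\lambda, x_1, x_2)$ ordering defined above):

B <- matrix(c( 0, -2, -1,
              -2, -2,  0,
              -1,  0, -4), nrow = 3, byrow = TRUE)
det(B)   # 18 > 0, same sign as (-1)^(m+1) with m = 1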

Optimization Under Equality Constraints (19)

How does the optimized (maximum) value of the objective function change if some parameter of the problem is altered? Suppose the objective function $f$ as well as the constraints $g^j$ are affected by some vector of model parameters $p$, e.g., the total amount of the resource available. The natural question that arises: what is the total differential of $f$ with respect to changes in $p$? The total differential takes into account that when $p$ is altered, the optimal choice of $x$ changes as well!

Th. The Envelope Theorem
Let $(x^*, \lambda^*)$ solve problem (L.1) for some parameter vector $p \in \mathbb{R}^k$, i.e., $f(x^*; p)$ is a local maximum under the constraints $g(x^*; p) = 0$. Then
$$\frac{d f(x^*; p)}{d p_s} = \frac{\partial L(\lambda^*, x^*; p)}{\partial p_s}, \quad s = 1, \dots, k.$$

Optimization Under Equality Constraints (20)

Back to our simple example:
$$L = 10 - (10 - x_1)^2 - 2 (10 - x_2)^2 + \lambda (p - 2 x_1 - x_2), \quad \text{with } p = 27.$$
How does the optimal value ($f(x^*) = 8$) change if the parameter $p$ of the constraint is altered by $dp$? Answer:
$$\frac{d f(x^*; p)}{d p} = \frac{\partial L(\lambda^*, x^*; p)}{\partial p} = \lambda^* = 4/3.$$
If $p$ moves to $p + dp$, then the optimized objective moves from $f(x^*; p)$ to $f(x^*; p) + \lambda^* dp$.

Def. Shadow Price
If the vector $p$ characterizes the constraints in the form $g^j = p_j - h^j(x) = 0$, then the associated Lagrange multipliers $\lambda^*$ can be interpreted as shadow prices of the constraints. I.e., a rational agent maximizing the objective $f$ is willing to pay up to $\lambda_j^* dp_j$ in order to increase $p_j$ by a marginal $dp_j$.
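A small numerical check of the envelope theorem, reusing the linear first-order-condition system from above with a perturbed $p$ (the step size is our choice):

# Solve the FOCs for a given constraint parameter p, return the optimized f
solve_example <- function(p) {
  A <- matrix(c(2, 0, 2,
                0, 4, 1,
                2, 1, 0), nrow = 3, byrow = TRUE)
  z <- solve(A, c(20, 40, p))            # (x1, x2, lambda)
  10 - (10 - z[1])^2 - 2*(10 - z[2])^2   # optimized objective
}

dp <- 1e-6
(solve_example(27 + dp) - solve_example(27)) / dp   # approx. lambda = 4/3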

Optimization Under Inequality Constraints (1)

In many cases, constraints are not as strict as equality constraints. E.g., resource constraints do not imply that one has to fully exploit the resource.

Def. The Standard Problem of Optimization Under Inequality Constraints (KT.1)
$$\max_x f(x)$$
subject to
$$g(x) \ge 0, \quad x \ge 0.$$

Optimization Under Inequality Constraints (2)

Example: Consider a firm that produces a non-durable good in two consecutive time periods. Production cost is a constant EUR 10 per unit produced. Let $x_1$ and $x_2$ denote the quantities produced in periods 1 and 2. Inverse demand in the first period is given by $p_1 = \dots - x_1$; in period 2, the attained price is $p_2 = 50 - 2 x_2$. In the course of production, emissions are unavoidable, and legal restrictions admit a maximum of 33 units of total production (sum of $x_1$ and $x_2$). Capacity constraints restrict production in each of the periods to less than 28 units. The time value of money is 0.

a) Determine the optimal production decision $x_1^*$ and $x_2^*$ that maximizes the total profit of the firm.
b) Give an interpretation of the result. What can be derived from the magnitude of the Lagrange multipliers?
c) Assume that the emission constraint is relaxed and substituted by an emission tax of EUR 12 per unit produced. What is then the optimal production decision $x_1^*$, $x_2^*$?

Optimization Under Inequality Constraints (3)

Def. Binding Constraint
A constraint $g^j \ge 0$ is called binding at some point $\tilde{x}$ if $g^j(\tilde{x}) = 0$.

Regularity Condition
The set of constraints $g$ meets the regularity condition if one of the following is satisfied:
(i) $g^j$ is linear for all $j = 1, \dots, m$.
(ii) $g^j$ is concave and there exists $x > 0$ such that $g^j(x) > 0$, for all $j = 1, \dots, m$.
(iii) The feasible set ($g^j \ge 0$, $x \ge 0$) is convex and has a nonempty interior, and $g^j_x \ne 0$ if constraint $j$ is binding.
(iv) Renumber the constraints so that the first $k \le m$ constraints are binding. Then the matrix $[\partial g^j / \partial x_i]_{j,i}$, $i = 1, \dots, n$, $j = 1, \dots, k$, has full rank $k$.

Optimization Under Inequality Constraints (4)

Th. Kuhn-Tucker
Let $x^*$ be a solution to problem (KT.1) and assume that the regularity condition applies (see above). Then there must exist a set of multipliers $\lambda_1^*, \dots, \lambda_m^*$ such that
$$g^j(x^*) \ge 0, \quad \lambda_j^* \ge 0, \quad \lambda_j^* g^j(x^*) = 0, \quad j = 1, \dots, m,$$
$$f_{x_i}(x^*) + \sum_{j=1}^m \lambda_j^* g^j_{x_i}(x^*) \le 0, \quad x_i^* \ge 0, \quad x_i^* \left[ f_{x_i}(x^*) + \sum_{j=1}^m \lambda_j^* g^j_{x_i}(x^*) \right] = 0, \quad i = 1, \dots, n.$$

Optimization Under Inequality Constraints (5)

Th. Kuhn-Tucker (cont.)
More compactly:
$$L_\lambda(\lambda^*, x^*) \ge 0, \quad \lambda^* \ge 0, \quad \lambda^{*\prime} L_\lambda(\lambda^*, x^*) = 0,$$
$$L_x(\lambda^*, x^*) \le 0, \quad x^* \ge 0, \quad x^{*\prime} L_x(\lambda^*, x^*) = 0.$$
The subset of the Kuhn-Tucker conditions
$$\lambda^{*\prime} L_\lambda(\lambda^*, x^*) = 0, \quad x^{*\prime} L_x(\lambda^*, x^*) = 0$$
is called the complementary slackness conditions. In words: only binding constraints can have non-zero shadow prices. Sloppily speaking: Kuhn-Tucker is the Lagrange method for an arbitrarily chosen subset of binding constraints, plus a criterion for deciding whether the chosen subset is consistent with the problem (KT.1).

Optimization Under Inequality Constraints (6)

Contra: There is no guideline for finding the proper selection of binding constraints. I.e., we need to have some (economic) intuition and/or have to solve up to $2^m$ Lagrange problems.

Th. Sufficient Condition
Suppose $(x^*, \lambda^*)$ satisfies the Kuhn-Tucker conditions and the $g$s satisfy the regularity condition (at $x^*$). Suppose further that the first $k \le m$ constraints are binding at $x^*$ and the set of non-binding constraints is strictly non-binding in an entire neighborhood of $x^*$. If $(x^*, \lambda_j^*)$, $j = 1, \dots, k$, together with the binding constraints $g^j = 0$, $j = 1, \dots, k$, satisfy the sufficient optimality condition for the implied (L.1) Lagrange problem, then $x^*$ is a local maximum of (KT.1).

Optimization Under Inequality Constraints (7)

Example (Cusp): Maximize $f(x_1, x_2) = x_1$ subject to
$$g^1: \ x_2 \le -(x_1 - 2)^3 + 1, \qquad g^2: \ x_2 \ge 0.$$

[Figure: the feasible set with the gradients $g^1_x$, $g^2_x$ and $f_x$ at the cusp.]

Optimization Under Inequality Constraints (8)

Some notes: Let the $g$s depend on the selection of some parameters $p$. Let $U \ne \{\}$ be the feasible range of $x$.
If a change $p \to \tilde{p}$ leads to a feasible range $\tilde{U}$ with $\tilde{U} \subseteq U$, we call it a reinforcement of constraint $g$.
If a change $p \to \tilde{p}$ leads to a feasible range $\tilde{U}$ with $U \subseteq \tilde{U}$, we call it a relaxation of constraint $g$.
Let $f(x^*(p))$ be a (local) maximum. A change $p \to \tilde{p}$ that reinforces the constraint(s) implies $f(x^*(\tilde{p})) \le f(x^*(p))$.
Let $f(x^*(p))$ be a (local) maximum. A change $p \to \tilde{p}$ that relaxes the constraint(s) implies $f(x^*(\tilde{p})) \ge f(x^*(p))$.
In particular, the introduction of an additional constraint is always a reinforcement of the set of constraints.

Linear Programming (1)

Linear programming is the special case where the objective function as well as all constraints are linear functions. The regularity condition is met in any case (see above). The set of feasible solutions (i.e., the $x$ that meet the boundary conditions) is either empty or a convex, $n$-dimensional polyhedron. Due to the linearity of the problem, we know: if there exists a maximum, one of the vertices of the polyhedron is always part of the optimal solution.

The Simplex method was developed by George Dantzig in the 1940s. It is a method that systematically searches the vertices of the polyhedron created by the restrictions, along a path that follows the steepest improvement of the objective function. The Simplex method and Simplex tableau are not covered here.

Linear Programming (2)

Solve a simple example and learn how to use the R package lpSolve.

Example: Resource Allocation
A cabinetmaker is planning the coming year. She wants to invest a maximum of 1200 working hours in the production of chairs and tables. There are only 282 units of wood available per year.
(i) Due to space constraints, not more than 320 tables per year can be produced.
(ii) The production of one table and one chair takes 2 hours and 4 hours, respectively.
(iii) Producing a table and a chair requires 0.8 and 0.4 units of wood, respectively.
The profit margin of a table is EUR 90, the profit margin of a chair is EUR 60. What is the maximum attainable profit margin? What is the optimal allocation of resources over the year?

Linear Programming (3)

The linear program is:
$$\max_{x_t, x_c} f(x_t, x_c) = 90 x_t + 60 x_c$$
subject to
$$g^1(x_t, x_c) = 320 - x_t \ge 0,$$
$$g^2(x_t, x_c) = 1200 - 2 x_t - 4 x_c \ge 0,$$
$$g^3(x_t, x_c) = 282 - 0.8 x_t - 0.4 x_c \ge 0.$$

Linear Programming (4)

[Figure: the region of feasible production schedules together with iso-value lines of the objective function $f$ (levels 0, 100, ...).]

Linear Programming (5)

At a non-optimal vertex: the gradient of the objective function can be composed as a linear combination of the gradients of the active boundaries. But ...

[Figure: a non-optimal vertex with $f_x$ and the gradients of the active constraints, together with an iso-value line $f = \text{const}$.]

Linear Programming (6)

... the sign of the Lagrange multipliers is not consistent with the Kuhn-Tucker conditions.

[Figure: at the non-optimal vertex, writing $f_x = \lambda_1 g^1_x + \lambda_2 g^2_x$ requires a negative multiplier.]

Linear Programming (7)

Implementation: see LP.R.

library(lpSolve)
# coefficients of the objective function
obj <- c(90, 60)
# constraints
# space constraint: x_t <= 320
lhs <- matrix(c(1, 0), nrow = 1)
rhs <- 320
dir <- "<="
# time constraint: 2 x_t + 4 x_c <= 1200
lhs <- rbind(lhs, c(2, 4))
rhs <- c(rhs, 1200)
dir <- c(dir, "<=")

Linear Programming (8)

LP.R cont.:

# wood constraint: 0.8 x_t + 0.4 x_c <= 282
lhs <- rbind(lhs, c(0.8, 0.4))
rhs <- c(rhs, 282)
dir <- c(dir, "<=")
# compute the solution (compute.sens = TRUE also returns the duals)
sol <- lp("max", obj, lhs, dir, rhs, compute.sens = TRUE)

The components of sol can be seen with names(sol). The most important are sol$objval, sol$solution, and sol$duals (first the duals for the constraints, then for the variables).

Linear Programming (9)

Result: optimal production plan

        number
tables     270
chairs     165

max. objective: 34200

Dual variables of the constraints (shadow prices):

     lambda
g1        0
g2        5
g3      100

Linear Programming (10)

Result (cont.): duals of the variables (relative profit margins):

     rel. margin
x1             0
x2             0

The $\lambda$s say that the cabinetmaker values one additional hour at EUR 5 and one additional unit of wood at EUR 100. Additional space for producing tables has a value of EUR 0, since the space constraint is not binding. The relative profit margins constitute the marginal opportunity costs of deviating from the optimal production plan. Both variables are strictly positive; thus, the relative profit margins are 0.

Linear Programming (11)

Now change the profit margins such that only one of the products is in the production plan:
$$\max_{x_t, x_c} f(x_t, x_c) = 60 x_t + 130 x_c$$

New result: optimal production plan

        number
tables       0
chairs     300

max. objective: 39000

Linear Programming (12)

New results (cont.): dual variables of the constraints (shadow prices):

     lambda
g1      0.0
g2     32.5
g3      0.0

Duals of the variables (relative profit margins):

     rel. margin
x1            -5
x2             0

Now tables are not in the production plan. The opportunity cost of producing a (marginal) table is EUR 5 per unit.

Mixed Integer Linear Programming (1)

While large LPs can be solved very efficiently and fast, things change if some or all of the free variables $x$ are restricted to integer values. Since differentiability is lost in this case, the Lagrange and Kuhn-Tucker conditions cannot be applied. As long as there exists an efficient algorithm for solving the related non-integer problem, the branch-and-bound algorithm is a suitable method to solve the integer problem. The simplest approach to integer problems is complete enumeration, i.e., the evaluation of the objective function at each of the possible combinations of the integer variables.

Mixed Integer Linear Programming (2)

Assume a problem with $n$ binary variables (integers that can either be 0 or 1). Suppose that a computer is able to calculate (and compare) the objective function at 1 billion locations per second. Then the time to completely enumerate the problem ($2^n$ evaluations) is:
for $n = 10$ it takes 1.024 µs,
for $n = 20$: 1.05 ms,
for $n = 30$: 1.07 s,
for $n = 40$: 18.3 min,
for $n = 50$: 13.0 d,
for $n = 60$: 36.6 y,
for $n = 70$: approx. 37,400 y.
I.e., we must use a method that avoids complete enumeration!
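These figures are simply $2^n / 10^9$ seconds converted to convenient units, which can be checked in a few lines of R:

n <- seq(10, 70, by = 10)
secs <- 2^n / 1e9   # evaluations at 1e9 per second
data.frame(n, seconds = secs, days = secs / 86400, years = secs / 31557600)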

Mixed Integer Linear Programming (3)

Branch-and-bound algorithm for mixed integer linear programming: consider a linear maximization program where the first $k \le n$ variables in $x$ are required to be integers. Some considerations:
First ignore all integer conditions and solve the linear program. Call this initial problem P0 and the solution of the problem $f_0$.
If $f_0$ satisfies all integer constraints, then $f_0$ is also a solution to the overall problem.
If not, take one of the variables for which the integer condition is not satisfied, say $x_i$. Let $x_{0,i}$ be the value of $x_i$ in the solution $f_0$, and $\lfloor x_{0,i} \rfloor$ the largest integer with $\lfloor x_{0,i} \rfloor \le x_{0,i}$. Furthermore, $\lceil x_{0,i} \rceil$ is the smallest integer with $\lceil x_{0,i} \rceil \ge x_{0,i}$.
Create two new models (branches) and add them to the candidate list:
P1 is P0 with the additional constraint $x_i \le \lfloor x_{0,i} \rfloor$.
P2 is P0 with the additional constraint $x_i \ge \lceil x_{0,i} \rceil$.

Mixed Integer Linear Programming (4)

We can now solve P1 and P2, and in case there are still variables that do not satisfy the required integer conditions, create new branches. So far, the branching approach is only a systematic approach to total enumeration. Fortunately, we can calculate upper limits for the maximum attainable objective value in a branch (bounds): consider a problem Ph with maximum LP-solution $f_h$. All problems that follow in the branch rooted at Ph have additional constraints, i.e., they are reinforcements of Ph. Thus, $f_l \le f_h$ for all successor problems Pl. Now we are ready to formulate the branch-and-bound algorithm.

Mixed Integer Linear Programming (5)

Branch-and-Bound:
1 Start with P0 by ignoring all integer conditions and add it as the first problem to the candidate list. The model status is non-evaluated and the parent objective is set to $-\infty$.
2 Select among the non-evaluated candidate problems one with maximum parent objective, say Ph, and compute the LP-solution $f_h$.
3 If none of the following stop criteria is met, set the status to evaluated, branch into two models at one of the variables for which the LP-solution $f_h$ does not meet the integer condition, and pass $f_h$ as parent objective to these two child problems, status non-evaluated.
4 Continue at step 2 as long as there are non-evaluated models in the candidate list.
5 After no non-evaluated model is left:
If none of the evaluated models satisfies all integer conditions, the overall model is infeasible.
If one or more candidate models satisfy all integer conditions, report the maximum as the overall solution.

Mixed Integer Linear Programming (6)

Stopping condition: Do not branch the tree further after evaluation of problem Ph if one of the following criteria is met:
(i) The LP-solution $f_h$ satisfies all integer conditions. Then $f_h$ is a feasible solution to the overall problem. Compare $f_h$ to the best overall solution found so far. If it exceeds the maximum found so far, remember Ph and $f_h$. Search for non-evaluated problems in the candidate list with parent objectives that are less than $f_h$. Change the status of these models from non-evaluated to dominated. I.e., the total branch rooted at such a model is dominated, and we need not care about these successor models.
(ii) Ph is infeasible. Set its status to infeasible.

Mixed Integer Linear Programming (7)

Consider the following integer linear program:
$$\max f(x_1, x_2) = x_1 + 2 x_2$$
subject to
$$6 x_1 + 5 x_2 \le 27,$$
$$2 x_1 + 6 x_2 \le 21,$$
$$x_j \ge 0 \text{ and integer}, \quad j = 1, 2.$$

Mixed Integer Linear Programming (8)

[Figure: the initial problem P0 with the constraints $6 x_1 + 5 x_2 \le 27$ and $2 x_1 + 6 x_2 \le 21$, together with the grid of feasible integer pairs $(x_1, x_2)$.]

The LP-solution to P0 is: $f_0 = 7.73$, $x_{0,1} = 2.19$, $x_{0,2} = 2.77$.

Mixed Integer Linear Programming (9)

Branching at $x_{0,1}$ leads to further candidates P1, P2.

[Figure: the feasible region split into P1 ($x_1 \le 2$) and P2 ($x_1 \ge 3$).]

Mixed Integer Linear Programming (10)

The candidate list:

model  status         parent objective  boundaries
P0     evaluated      -inf              P0
P1     non-evaluated  7.73              P0 & x1 ≤ 2
P2     non-evaluated  7.73              P0 & x1 ≥ 3

Continue with P1. The LP-solution is: $f_1 = 7.67$, $x_{1,1} = 2.0$, $x_{1,2} = 2.83$. New candidate list:

model  status         parent objective  boundaries
P0     evaluated      -inf              P0
P1     evaluated      7.73              P0 & x1 ≤ 2
P2     non-evaluated  7.73              P0 & x1 ≥ 3
P3     non-evaluated  7.67              P0 & x1 ≤ 2 & x2 ≤ 2
P4     non-evaluated  7.67              P0 & x1 ≤ 2 & x2 ≥ 3

Mixed Integer Linear Programming (11)

[Figure: illustration of the problems P3 ($x_1 \le 2$, $x_2 \le 2$) and P4 ($x_1 \le 2$, $x_2 \ge 3$) within the feasible region.]

Mixed Integer Linear Programming (12)

We continue with problem P2. The LP-solution is: $f_2 = 6.6$, $x_{2,1} = 3.0$, $x_{2,2} = 1.8$. Branch again at $x_{2,2}$; the new candidate list:

model  status         parent objective  boundaries
P0     evaluated      -inf              P0
P1     evaluated      7.73              P0 & x1 ≤ 2
P2     evaluated      7.73              P0 & x1 ≥ 3
P3     non-evaluated  7.67              P0 & x1 ≤ 2 & x2 ≤ 2
P4     non-evaluated  7.67              P0 & x1 ≤ 2 & x2 ≥ 3
P5     non-evaluated  6.60              P0 & x1 ≥ 3 & x2 ≤ 1
P6     non-evaluated  6.60              P0 & x1 ≥ 3 & x2 ≥ 2

Next evaluate P3. The LP-solution is: $f_3 = 6$, $x_{3,1} = 2$, $x_{3,2} = 2$. The solution satisfies the integer conditions! Do not branch further. This is the best overall solution found so far. No branches are dominated.

Mixed Integer Linear Programming (13)

Continue with P4. The LP-solution is: $f_4 = 7.5$, $x_{4,1} = 1.5$, $x_{4,2} = 3$. Branching creates a new list:

model  status         parent objective  boundaries
P0     evaluated      -inf              P0
P1     evaluated      7.73              P0 & x1 ≤ 2
P2     evaluated      7.73              P0 & x1 ≥ 3
P3     solution       7.67              P0 & x1 ≤ 2 & x2 ≤ 2
P4     evaluated      7.67              P0 & x1 ≤ 2 & x2 ≥ 3
P5     non-evaluated  6.60              P0 & x1 ≥ 3 & x2 ≤ 1
P6     non-evaluated  6.60              P0 & x1 ≥ 3 & x2 ≥ 2
P7     non-evaluated  7.50              P0 & x1 ≤ 1 & x2 ≥ 3
P8     non-evaluated  7.50              P0 & x1 = 2 & x2 ≥ 3

Continue with problem P7: $f_7 = 7.33$, $x_{7,1} = 1$, $x_{7,2} = 3.17$. Branch again at $x_{7,2} = 3.17$.

Mixed Integer Linear Programming (14)

The new candidate list (skipping evaluated problems):

model  status         parent objective  boundaries
P3     solution       7.67              P0 & x1 ≤ 2 & x2 ≤ 2
P5     non-evaluated  6.60              P0 & x1 ≥ 3 & x2 ≤ 1
P6     non-evaluated  6.60              P0 & x1 ≥ 3 & x2 ≥ 2
P8     non-evaluated  7.50              P0 & x1 = 2 & x2 ≥ 3
P9     non-evaluated  7.33              P0 & x1 ≤ 1 & x2 = 3
P10    non-evaluated  7.33              P0 & x1 ≤ 1 & x2 ≥ 4

P8 is infeasible; the stop criterion is met. Next P9: $f_9 = 7$, $x_{9,1} = 1$, $x_{9,2} = 3$. This is an integer solution. It is better than the solution of P3 ($f_3 = 6$). Problems P5 and P6 are dominated by P9.

Mixed Integer Linear Programming (15)

The only remaining problem, P10, is infeasible. The overall maximum is given by the solution to problem P9:
$$x_1^* = 1, \quad x_2^* = 3, \quad f(x_1^*, x_2^*) = 7.$$
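lpSolve applies branch-and-bound internally, so the walkthrough above can be cross-checked in a few lines of R (int.vec marks the variables that must be integer):

library(lpSolve)

obj <- c(1, 2)
lhs <- matrix(c(6, 5,
                2, 6), nrow = 2, byrow = TRUE)
sol <- lp("max", obj, lhs, rep("<=", 2), c(27, 21), int.vec = 1:2)

sol$solution   # 1 3
sol$objval     # 7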

Portfolio Optimization (1)

Suppose there is an investment universe consisting of $n$ assets. Some notation:
$r$ ... return in excess of some riskless asset
$\Sigma$ ... variance-covariance matrix of returns
$x$ ... portfolio weights
$\mu$ ... return expectations, $E(r)$
$\mathbf{1}$ ... an $(n \times 1)$ column vector consisting of 1s
$r_p$ ... portfolio return, $r_p = x' r$

Assume that the riskiness of an investment is measured by the return variance (one period). So our first objective in portfolio selection is: minimize the return variance for a desired expected portfolio return.

Portfolio Optimization (2)

The problem:
$$\min_x \tfrac{1}{2} \mathrm{Var}[r_{p,t+1}] = \tfrac{1}{2} x' \Sigma x$$
subject to:
$$E(r_{p,t+1}) = x' \mu = \bar{\mu}, \qquad x' \mathbf{1} = 1.$$
The Lagrangian of this equality-constrained problem is
$$L = \tfrac{1}{2} x' \Sigma x + \lambda (1 - x' \mathbf{1}) + \delta (\bar{\mu} - x' \mu).$$

Portfolio Optimization (3)

At the optimal selection the partial derivatives of the Lagrangian with respect to its variables must vanish:
$$\text{(i)} \quad L_x = \Sigma x - \lambda \mathbf{1} - \delta \mu = 0,$$
$$\text{(ii)} \quad L_\lambda = x' \mathbf{1} - 1 = 0,$$
$$\text{(iii)} \quad L_\delta = x' \mu - \bar{\mu} = 0.$$
From (i) and the fact that $\lambda$ and $\delta$ are real numbers it follows that
$$x = \lambda \Sigma^{-1} \mathbf{1} + \delta \Sigma^{-1} \mu.$$
Using (ii) we get $x' \mathbf{1} = 1$.

Portfolio Optimization (4)

Together with the expression for $x$ this yields
$$\lambda \mathbf{1}' \Sigma^{-1} \mathbf{1} + \delta \mu' \Sigma^{-1} \mathbf{1} = 1.$$
From (iii) it follows that $x' \mu = \bar{\mu}$, and thus
$$\lambda \mathbf{1}' \Sigma^{-1} \mu + \delta \mu' \Sigma^{-1} \mu = \bar{\mu}.$$
Compactly written:
$$\begin{pmatrix} \mathbf{1}' \Sigma^{-1} \mathbf{1} & \mu' \Sigma^{-1} \mathbf{1} \\ \mathbf{1}' \Sigma^{-1} \mu & \mu' \Sigma^{-1} \mu \end{pmatrix} \begin{pmatrix} \lambda \\ \delta \end{pmatrix} = \begin{pmatrix} 1 \\ \bar{\mu} \end{pmatrix}.$$

Portfolio Optimization (5)

With the notation
$$A = \mathbf{1}' \Sigma^{-1} \mathbf{1}, \quad B = \mathbf{1}' \Sigma^{-1} \mu, \quad C = \mu' \Sigma^{-1} \mu,$$
we can write this as
$$\underbrace{\begin{pmatrix} A & B \\ B & C \end{pmatrix}}_{M} \begin{pmatrix} \lambda \\ \delta \end{pmatrix} = \begin{pmatrix} 1 \\ \bar{\mu} \end{pmatrix}.$$
We can show that from $\Sigma$ positive-definite it follows that $M$ is invertible.

Portfolio Optimization (6)

The Lagrange multipliers are given by
$$\begin{pmatrix} \lambda \\ \delta \end{pmatrix} = M^{-1} \begin{pmatrix} 1 \\ \bar{\mu} \end{pmatrix} = \frac{1}{|M|} \begin{pmatrix} C & -B \\ -B & A \end{pmatrix} \begin{pmatrix} 1 \\ \bar{\mu} \end{pmatrix}.$$
The Lagrange multiplier associated with the condition $x' \mathbf{1} = 1$ (portfolio weights sum to 1), $\lambda$, is given by
$$\lambda = \frac{C - \bar{\mu} B}{|M|}.$$
The Lagrange multiplier associated with the condition $x' \mu = \bar{\mu}$ (given expected return), $\delta$, is given by
$$\delta = \frac{\bar{\mu} A - B}{|M|}.$$

Portfolio Optimization (7)

Optimal portfolio weights:
$$x^* = \frac{C - \bar{\mu} B}{|M|} \Sigma^{-1} \mathbf{1} + \frac{\bar{\mu} A - B}{|M|} \Sigma^{-1} \mu = \underbrace{\frac{1}{|M|} (C \Sigma^{-1} \mathbf{1} - B \Sigma^{-1} \mu)}_{\Lambda_1} + \underbrace{\frac{1}{|M|} (A \Sigma^{-1} \mu - B \Sigma^{-1} \mathbf{1})}_{\Lambda_2} \bar{\mu} = \Lambda_1 + \Lambda_2 \bar{\mu}.$$
That is, the optimal portfolio weights are a linear affine function of $\bar{\mu}$.
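A minimal R sketch of these formulas; the three-asset covariance matrix and expected excess returns below are made-up illustration inputs, not course data:

# Illustrative inputs (assumed, not from the slides)
Sigma <- matrix(c(0.04, 0.01, 0.00,
                  0.01, 0.09, 0.02,
                  0.00, 0.02, 0.16), nrow = 3, byrow = TRUE)
mu  <- c(0.05, 0.08, 0.12)
one <- rep(1, 3)

Si <- solve(Sigma)
A <- drop(one %*% Si %*% one)
B <- drop(one %*% Si %*% mu)
C <- drop(mu  %*% Si %*% mu)
dM <- A*C - B^2                        # det(M)

L1 <- (C * Si %*% one - B * Si %*% mu) / dM
L2 <- (A * Si %*% mu  - B * Si %*% one) / dM

mubar <- 0.09
x <- L1 + L2 * mubar                   # frontier weights for target mubar
c(sum(x), drop(t(x) %*% mu))           # checks: 1 and mubar
drop(t(x) %*% Sigma %*% x)             # minimum variance at mubar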

Portfolio Optimization (8)

The portfolio variance at the optimum is given by
$$\mathrm{Var}(r_p) = \sigma_p^2 = x' \Sigma x = x' \Sigma (\lambda \Sigma^{-1} \mathbf{1} + \delta \Sigma^{-1} \mu) = \lambda x' \mathbf{1} + \delta x' \mu = \lambda + \delta \bar{\mu} = \frac{1}{|M|} (C - \bar{\mu} B + \bar{\mu}^2 A - \bar{\mu} B) = \frac{1}{|M|} (\bar{\mu}^2 A - 2 \bar{\mu} B + C).$$

Portfolio Optimization (9)

These minimum-variance portfolios (for different values of $\bar{\mu}$) form the so-called efficient portfolio frontier.

[Figure: the efficient frontier for the case of three assets.]

Portfolio Optimization (10)

The location of the global minimum-variance portfolio can be computed in two ways:
$$\frac{d \sigma_p^2}{d \bar{\mu}} = \frac{1}{|M|} (2 \bar{\mu} A - 2 B) = 0,$$
and thus $\bar{\mu} = B/A$.
The second possibility to derive the location of the minimum-variance portfolio: at the minimum-variance portfolio, the Lagrange multiplier associated with $x' \mu = \bar{\mu}$ must be zero, because this boundary condition is not binding in that case:
$$\delta = 0 \ \Leftrightarrow \ \frac{1}{|M|} (\bar{\mu} A - B) = 0 \ \Leftrightarrow \ \bar{\mu} = \frac{B}{A}.$$

Portfolio Optimization (11)

Now we have characterized the diversification potential inherent in the portfolio, i.e., we know the return variance that an investor must accept if he/she demands an expected portfolio return of $\bar{\mu}$. Consider a representative mean-variance optimizing investor, i.e., an investor with utility
$$u(r) = E(r) - \tfrac{1}{2} \gamma \mathrm{Var}(r) = \bar{\mu} - \tfrac{1}{2} \gamma \sigma^2.$$
The representative investor must hold the market. Therefore
$$u = \bar{\mu} - \tfrac{1}{2} \gamma \frac{1}{|M|} (\bar{\mu}^2 A - 2 \bar{\mu} B + C).$$
The first-order condition for the maximum-utility choice is
$$\frac{du}{d\bar{\mu}} = 1 - \tfrac{1}{2} \gamma \frac{1}{|M|} (2 A \bar{\mu} - 2 B) = 0.$$

Portfolio Optimization (12)

Using the expression for $\delta$, this leads to
$$\gamma \frac{\bar{\mu} A - B}{|M|} = \gamma \delta = 1,$$
i.e., the mean-variance optimizing investor with a relative risk aversion of $\gamma$ maximizes her utility by choosing the efficient portfolio with expected return $\bar{\mu}$ given by
$$\bar{\mu} = \frac{|M|}{\gamma A} + \frac{B}{A}.$$
For this utility-maximizing portfolio it holds that $\delta = 1/\gamma$.

Portfolio Optimization (13)

[Figure: utility maximization. The investor selects the portfolio which lies on the highest attainable iso-utility line.]

Portfolio Optimization (14)

[Figure: utility maximization. Higher risk aversion means the choice of a portfolio with lower variance.]

Portfolio Optimization (15)

Now consider the riskless asset (which is in zero net supply). The riskless asset has an excess return of 0 (by definition) and a variance of 0. An individual investor can now mix a risky portfolio and the riskless asset and has, thus, two parameters of choice:
a portfolio of risky assets (characterized by $\bar{\mu}$),
the weight $w$ of the risky portfolio in the overall investment.
For a given $\bar{\mu}$ we get a minimum-variance portfolio $x^*(\bar{\mu})$ with expected return $\bar{\mu}$ and variance $\sigma^2(\bar{\mu}) = x^{*\prime} \Sigma x^*$. If the investor chooses a weight of $w$ for the risky part of her investment, it is characterized by
$$\mu_p = w \bar{\mu}, \qquad \sigma_p^2 = w^2 x^{*\prime} \Sigma x^* = w^2 \sigma^2(\bar{\mu}).$$

Portfolio Optimization (16)

What is the optimal choice of $w$ (for a given $\bar{\mu}$)? For different $w$ the utility of the mean-variance optimizer is
$$u = w \bar{\mu} - \tfrac{1}{2} \gamma w^2 \sigma^2(\bar{\mu}).$$
Then the first-order condition for the optimal choice of $w$ is
$$\frac{du}{dw} = \bar{\mu} - \gamma w \sigma^2(\bar{\mu}) = 0,$$
which yields
$$w = \frac{\bar{\mu}}{\gamma \sigma^2(\bar{\mu})}.$$

Portfolio Optimization (17)

[Figure: optimal weight $w$ for a non-optimally chosen portfolio $\bar{\mu}$.]

Portfolio Optimization (18)

What is the optimal choice of $\bar{\mu}$? Select $\bar{\mu}$ such that the investor can reach the highest attainable iso-utility line. This is achieved by selecting the so-called tangency portfolio. I.e., independent of the risk aversion of the individual investor, all investors will choose to invest a certain fraction of their wealth in the tangency portfolio and the rest in the riskless asset. The weight $w$, however, depends on the investor's risk aversion!

Portfolio Optimization (19)

[Figure: the tangency portfolio and two different investors' choices of the weight $w$ of the risky sub-portfolio.]


More information

OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003

OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003 OPTIMIZATION THEORY IN A NUTSHELL Daniel McFadden, 1990, 2003 UNCONSTRAINED OPTIMIZATION 1. Consider the problem of maximizing a function f:ú n 6 ú within a set A f ú n. Typically, A might be all of ú

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Microeconomic Theory. Microeconomic Theory. Everyday Economics. The Course:

Microeconomic Theory. Microeconomic Theory. Everyday Economics. The Course: The Course: Microeconomic Theory This is the first rigorous course in microeconomic theory This is a course on economic methodology. The main goal is to teach analytical tools that will be useful in other

More information

The Envelope Theorem

The Envelope Theorem The Envelope Theorem In an optimization problem we often want to know how the value of the objective function will change if one or more of the parameter values changes. Let s consider a simple example:

More information

Chapter 1: Linear Programming

Chapter 1: Linear Programming Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of

More information

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition)

NONLINEAR. (Hillier & Lieberman Introduction to Operations Research, 8 th edition) NONLINEAR PROGRAMMING (Hillier & Lieberman Introduction to Operations Research, 8 th edition) Nonlinear Programming g Linear programming has a fundamental role in OR. In linear programming all its functions

More information

Module 04 Optimization Problems KKT Conditions & Solvers

Module 04 Optimization Problems KKT Conditions & Solvers Module 04 Optimization Problems KKT Conditions & Solvers Ahmad F. Taha EE 5243: Introduction to Cyber-Physical Systems Email: ahmad.taha@utsa.edu Webpage: http://engineering.utsa.edu/ taha/index.html September

More information

Integer programming: an introduction. Alessandro Astolfi

Integer programming: an introduction. Alessandro Astolfi Integer programming: an introduction Alessandro Astolfi Outline Introduction Examples Methods for solving ILP Optimization on graphs LP problems with integer solutions Summary Introduction Integer programming

More information

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to

g(x,y) = c. For instance (see Figure 1 on the right), consider the optimization problem maximize subject to 1 of 11 11/29/2010 10:39 AM From Wikipedia, the free encyclopedia In mathematical optimization, the method of Lagrange multipliers (named after Joseph Louis Lagrange) provides a strategy for finding the

More information

Microeconomics, Block I Part 1

Microeconomics, Block I Part 1 Microeconomics, Block I Part 1 Piero Gottardi EUI Sept. 26, 2016 Piero Gottardi (EUI) Microeconomics, Block I Part 1 Sept. 26, 2016 1 / 53 Choice Theory Set of alternatives: X, with generic elements x,

More information

3.10 Lagrangian relaxation

3.10 Lagrangian relaxation 3.10 Lagrangian relaxation Consider a generic ILP problem min {c t x : Ax b, Dx d, x Z n } with integer coefficients. Suppose Dx d are the complicating constraints. Often the linear relaxation and the

More information

Chapter 2: Unconstrained Extrema

Chapter 2: Unconstrained Extrema Chapter 2: Unconstrained Extrema Math 368 c Copyright 2012, 2013 R Clark Robinson May 22, 2013 Chapter 2: Unconstrained Extrema 1 Types of Sets Definition For p R n and r > 0, the open ball about p of

More information

15-780: LinearProgramming

15-780: LinearProgramming 15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear

More information

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem:

Motivation. Lecture 2 Topics from Optimization and Duality. network utility maximization (NUM) problem: CDS270 Maryam Fazel Lecture 2 Topics from Optimization and Duality Motivation network utility maximization (NUM) problem: consider a network with S sources (users), each sending one flow at rate x s, through

More information

Scientific Computing: Optimization

Scientific Computing: Optimization Scientific Computing: Optimization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 March 8th, 2011 A. Donev (Courant Institute) Lecture

More information

OPTIMISATION /09 EXAM PREPARATION GUIDELINES

OPTIMISATION /09 EXAM PREPARATION GUIDELINES General: OPTIMISATION 2 2008/09 EXAM PREPARATION GUIDELINES This points out some important directions for your revision. The exam is fully based on what was taught in class: lecture notes, handouts and

More information

Optimization. Charles J. Geyer School of Statistics University of Minnesota. Stat 8054 Lecture Notes

Optimization. Charles J. Geyer School of Statistics University of Minnesota. Stat 8054 Lecture Notes Optimization Charles J. Geyer School of Statistics University of Minnesota Stat 8054 Lecture Notes 1 One-Dimensional Optimization Look at a graph. Grid search. 2 One-Dimensional Zero Finding Zero finding

More information

Second Welfare Theorem

Second Welfare Theorem Second Welfare Theorem Econ 2100 Fall 2015 Lecture 18, November 2 Outline 1 Second Welfare Theorem From Last Class We want to state a prove a theorem that says that any Pareto optimal allocation is (part

More information

Gradient Descent. Dr. Xiaowei Huang

Gradient Descent. Dr. Xiaowei Huang Gradient Descent Dr. Xiaowei Huang https://cgi.csc.liv.ac.uk/~xiaowei/ Up to now, Three machine learning algorithms: decision tree learning k-nn linear regression only optimization objectives are discussed,

More information

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions

January 29, Introduction to optimization and complexity. Outline. Introduction. Problem formulation. Convexity reminder. Optimality Conditions Olga Galinina olga.galinina@tut.fi ELT-53656 Network Analysis Dimensioning II Department of Electronics Communications Engineering Tampere University of Technology, Tampere, Finl January 29, 2014 1 2 3

More information

Nonlinear Programming (NLP)

Nonlinear Programming (NLP) Natalia Lazzati Mathematics for Economics (Part I) Note 6: Nonlinear Programming - Unconstrained Optimization Note 6 is based on de la Fuente (2000, Ch. 7), Madden (1986, Ch. 3 and 5) and Simon and Blume

More information

EC400 Math for Microeconomics Syllabus The course is based on 6 sixty minutes lectures and on 6 ninety minutes classes.

EC400 Math for Microeconomics Syllabus The course is based on 6 sixty minutes lectures and on 6 ninety minutes classes. London School of Economics Department of Economics Dr Francesco Nava Offi ce: 32L.3.20 EC400 Math for Microeconomics Syllabus 2016 The course is based on 6 sixty minutes lectures and on 6 ninety minutes

More information

Notes on Constrained Optimization

Notes on Constrained Optimization Notes on Constrained Optimization Wes Cowan Department of Mathematics, Rutgers University 110 Frelinghuysen Rd., Piscataway, NJ 08854 December 16, 2016 1 Introduction In the previous set of notes, we considered

More information

Maximum Theorem, Implicit Function Theorem and Envelope Theorem

Maximum Theorem, Implicit Function Theorem and Envelope Theorem Maximum Theorem, Implicit Function Theorem and Envelope Theorem Ping Yu Department of Economics University of Hong Kong Ping Yu (HKU) MIFE 1 / 25 1 The Maximum Theorem 2 The Implicit Function Theorem 3

More information

Constrained Optimization and Lagrangian Duality

Constrained Optimization and Lagrangian Duality CIS 520: Machine Learning Oct 02, 2017 Constrained Optimization and Lagrangian Duality Lecturer: Shivani Agarwal Disclaimer: These notes are designed to be a supplement to the lecture. They may or may

More information

CHAPTER 2: QUADRATIC PROGRAMMING

CHAPTER 2: QUADRATIC PROGRAMMING CHAPTER 2: QUADRATIC PROGRAMMING Overview Quadratic programming (QP) problems are characterized by objective functions that are quadratic in the design variables, and linear constraints. In this sense,

More information

56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker

56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker 56:171 Operations Research Midterm Exam - October 26, 1989 Instructor: D.L. Bricker Answer all of Part One and two (of the four) problems of Part Two Problem: 1 2 3 4 5 6 7 8 TOTAL Possible: 16 12 20 10

More information

ARE211, Fall 2005 CONTENTS. 5. Characteristics of Functions Surjective, Injective and Bijective functions. 5.2.

ARE211, Fall 2005 CONTENTS. 5. Characteristics of Functions Surjective, Injective and Bijective functions. 5.2. ARE211, Fall 2005 LECTURE #18: THU, NOV 3, 2005 PRINT DATE: NOVEMBER 22, 2005 (COMPSTAT2) CONTENTS 5. Characteristics of Functions. 1 5.1. Surjective, Injective and Bijective functions 1 5.2. Homotheticity

More information

Gestion de la production. Book: Linear Programming, Vasek Chvatal, McGill University, W.H. Freeman and Company, New York, USA

Gestion de la production. Book: Linear Programming, Vasek Chvatal, McGill University, W.H. Freeman and Company, New York, USA Gestion de la production Book: Linear Programming, Vasek Chvatal, McGill University, W.H. Freeman and Company, New York, USA 1 Contents 1 Integer Linear Programming 3 1.1 Definitions and notations......................................

More information

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers

Optimization for Communications and Networks. Poompat Saengudomlert. Session 4 Duality and Lagrange Multipliers Optimization for Communications and Networks Poompat Saengudomlert Session 4 Duality and Lagrange Multipliers P Saengudomlert (2015) Optimization Session 4 1 / 14 24 Dual Problems Consider a primal convex

More information

Constrained maxima and Lagrangean saddlepoints

Constrained maxima and Lagrangean saddlepoints Division of the Humanities and Social Sciences Ec 181 KC Border Convex Analysis and Economic Theory Winter 2018 Topic 10: Constrained maxima and Lagrangean saddlepoints 10.1 An alternative As an application

More information

Paul Schrimpf. October 17, UBC Economics 526. Constrained optimization. Paul Schrimpf. First order conditions. Second order conditions

Paul Schrimpf. October 17, UBC Economics 526. Constrained optimization. Paul Schrimpf. First order conditions. Second order conditions UBC Economics 526 October 17, 2012 .1.2.3.4 Section 1 . max f (x) s.t. h(x) = c f : R n R, h : R n R m Draw picture of n = 2 and m = 1 At optimum, constraint tangent to level curve of function Rewrite

More information

3E4: Modelling Choice. Introduction to nonlinear programming. Announcements

3E4: Modelling Choice. Introduction to nonlinear programming. Announcements 3E4: Modelling Choice Lecture 7 Introduction to nonlinear programming 1 Announcements Solutions to Lecture 4-6 Homework will be available from http://www.eng.cam.ac.uk/~dr241/3e4 Looking ahead to Lecture

More information

CSC Design and Analysis of Algorithms. LP Shader Electronics Example

CSC Design and Analysis of Algorithms. LP Shader Electronics Example CSC 80- Design and Analysis of Algorithms Lecture (LP) LP Shader Electronics Example The Shader Electronics Company produces two products:.eclipse, a portable touchscreen digital player; it takes hours

More information

2.098/6.255/ Optimization Methods Practice True/False Questions

2.098/6.255/ Optimization Methods Practice True/False Questions 2.098/6.255/15.093 Optimization Methods Practice True/False Questions December 11, 2009 Part I For each one of the statements below, state whether it is true or false. Include a 1-3 line supporting sentence

More information

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints

ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints ISM206 Lecture Optimization of Nonlinear Objective with Linear Constraints Instructor: Prof. Kevin Ross Scribe: Nitish John October 18, 2011 1 The Basic Goal The main idea is to transform a given constrained

More information

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization

Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Extreme Abridgment of Boyd and Vandenberghe s Convex Optimization Compiled by David Rosenberg Abstract Boyd and Vandenberghe s Convex Optimization book is very well-written and a pleasure to read. The

More information

MATH2070 Optimisation

MATH2070 Optimisation MATH2070 Optimisation Nonlinear optimisation with constraints Semester 2, 2012 Lecturer: I.W. Guo Lecture slides courtesy of J.R. Wishart Review The full nonlinear optimisation problem with equality constraints

More information

Sensitivity Analysis and Duality in LP

Sensitivity Analysis and Duality in LP Sensitivity Analysis and Duality in LP Xiaoxi Li EMS & IAS, Wuhan University Oct. 13th, 2016 (week vi) Operations Research (Li, X.) Sensitivity Analysis and Duality in LP Oct. 13th, 2016 (week vi) 1 /

More information

Dynamic Macroeconomic Theory Notes. David L. Kelly. Department of Economics University of Miami Box Coral Gables, FL

Dynamic Macroeconomic Theory Notes. David L. Kelly. Department of Economics University of Miami Box Coral Gables, FL Dynamic Macroeconomic Theory Notes David L. Kelly Department of Economics University of Miami Box 248126 Coral Gables, FL 33134 dkelly@miami.edu Current Version: Fall 2013/Spring 2013 I Introduction A

More information

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY

STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY STATIC LECTURE 4: CONSTRAINED OPTIMIZATION II - KUHN TUCKER THEORY UNIVERSITY OF MARYLAND: ECON 600 1. Some Eamples 1 A general problem that arises countless times in economics takes the form: (Verbally):

More information

MS-E2140. Lecture 1. (course book chapters )

MS-E2140. Lecture 1. (course book chapters ) Linear Programming MS-E2140 Motivations and background Lecture 1 (course book chapters 1.1-1.4) Linear programming problems and examples Problem manipulations and standard form problems Graphical representation

More information

Constrained Optimization

Constrained Optimization Constrained Optimization Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 13, 2013 1 General Problem Consider the following general constrained optimization problem:

More information

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy.

CHAPTER 4: HIGHER ORDER DERIVATIVES. Likewise, we may define the higher order derivatives. f(x, y, z) = xy 2 + e zx. y = 2xy. April 15, 2009 CHAPTER 4: HIGHER ORDER DERIVATIVES In this chapter D denotes an open subset of R n. 1. Introduction Definition 1.1. Given a function f : D R we define the second partial derivatives as

More information

Fundamentals of Operations Research. Prof. G. Srinivasan. Indian Institute of Technology Madras. Lecture No. # 15

Fundamentals of Operations Research. Prof. G. Srinivasan. Indian Institute of Technology Madras. Lecture No. # 15 Fundamentals of Operations Research Prof. G. Srinivasan Indian Institute of Technology Madras Lecture No. # 15 Transportation Problem - Other Issues Assignment Problem - Introduction In the last lecture

More information

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written

In view of (31), the second of these is equal to the identity I on E m, while this, in view of (30), implies that the first can be written 11.8 Inequality Constraints 341 Because by assumption x is a regular point and L x is positive definite on M, it follows that this matrix is nonsingular (see Exercise 11). Thus, by the Implicit Function

More information

CONSTRAINED NONLINEAR PROGRAMMING

CONSTRAINED NONLINEAR PROGRAMMING 149 CONSTRAINED NONLINEAR PROGRAMMING We now turn to methods for general constrained nonlinear programming. These may be broadly classified into two categories: 1. TRANSFORMATION METHODS: In this approach

More information

Calculus and optimization

Calculus and optimization Calculus an optimization These notes essentially correspon to mathematical appenix 2 in the text. 1 Functions of a single variable Now that we have e ne functions we turn our attention to calculus. A function

More information

Convex Functions and Optimization

Convex Functions and Optimization Chapter 5 Convex Functions and Optimization 5.1 Convex Functions Our next topic is that of convex functions. Again, we will concentrate on the context of a map f : R n R although the situation can be generalized

More information

Final Exam - Answer key

Final Exam - Answer key Fall4 Final Exam - Answer key ARE Problem (Analysis) [7 points]: A) [ points] Let A = (,) R be endowed with the Euclidean metric. Consider the set O = {( n,) : n N}. Does the open cover O of A have a finite

More information

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered:

LINEAR PROGRAMMING 2. In many business and policy making situations the following type of problem is encountered: LINEAR PROGRAMMING 2 In many business and policy making situations the following type of problem is encountered: Maximise an objective subject to (in)equality constraints. Mathematical programming provides

More information

ANSWER KEY. University of California, Davis Date: June 22, 2015

ANSWER KEY. University of California, Davis Date: June 22, 2015 ANSWER KEY University of California, Davis Date: June, 05 Department of Economics Time: 5 hours Microeconomic Theory Reading Time: 0 minutes PRELIMINARY EXAMINATION FOR THE Ph.D. DEGREE Please answer four

More information

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method

Yinyu Ye, MS&E, Stanford MS&E310 Lecture Note #06. The Simplex Method The Simplex Method Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapters 2.3-2.5, 3.1-3.4) 1 Geometry of Linear

More information

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014

Convex Optimization. Dani Yogatama. School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA. February 12, 2014 Convex Optimization Dani Yogatama School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA February 12, 2014 Dani Yogatama (Carnegie Mellon University) Convex Optimization February 12,

More information

Outline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution

Outline. Roadmap for the NPP segment: 1 Preliminaries: role of convexity. 2 Existence of a solution Outline Roadmap for the NPP segment: 1 Preliminaries: role of convexity 2 Existence of a solution 3 Necessary conditions for a solution: inequality constraints 4 The constraint qualification 5 The Lagrangian

More information

EE364a Review Session 5

EE364a Review Session 5 EE364a Review Session 5 EE364a Review announcements: homeworks 1 and 2 graded homework 4 solutions (check solution to additional problem 1) scpd phone-in office hours: tuesdays 6-7pm (650-723-1156) 1 Complementary

More information

Chapter 2: Preliminaries and elements of convex analysis

Chapter 2: Preliminaries and elements of convex analysis Chapter 2: Preliminaries and elements of convex analysis Edoardo Amaldi DEIB Politecnico di Milano edoardo.amaldi@polimi.it Website: http://home.deib.polimi.it/amaldi/opt-14-15.shtml Academic year 2014-15

More information

Chapter 4: Production Theory

Chapter 4: Production Theory Chapter 4: Production Theory Need to complete: Proposition 48, Proposition 52, Problem 4, Problem 5, Problem 6, Problem 7, Problem 10. In this chapter we study production theory in a commodity space. First,

More information

. This matrix is not symmetric. Example. Suppose A =

. This matrix is not symmetric. Example. Suppose A = Notes for Econ. 7001 by Gabriel A. ozada The equation numbers and page numbers refer to Knut Sydsæter and Peter J. Hammond s textbook Mathematics for Economic Analysis (ISBN 0-13- 583600-X, 1995). 1. Convexity,

More information

Final Exam - Math Camp August 27, 2014

Final Exam - Math Camp August 27, 2014 Final Exam - Math Camp August 27, 2014 You will have three hours to complete this exam. Please write your solution to question one in blue book 1 and your solutions to the subsequent questions in blue

More information

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur

END3033 Operations Research I Sensitivity Analysis & Duality. to accompany Operations Research: Applications and Algorithms Fatih Cavdur END3033 Operations Research I Sensitivity Analysis & Duality to accompany Operations Research: Applications and Algorithms Fatih Cavdur Introduction Consider the following problem where x 1 and x 2 corresponds

More information

Duality Theory, Optimality Conditions

Duality Theory, Optimality Conditions 5.1 Duality Theory, Optimality Conditions Katta G. Murty, IOE 510, LP, U. Of Michigan, Ann Arbor We only consider single objective LPs here. Concept of duality not defined for multiobjective LPs. Every

More information

Optimization Methods

Optimization Methods Optimization Methods Decision making Examples: determining which ingredients and in what quantities to add to a mixture being made so that it will meet specifications on its composition allocating available

More information

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010

Section Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010 Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts

More information

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM

TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM TMA 4180 Optimeringsteori KARUSH-KUHN-TUCKER THEOREM H. E. Krogstad, IMF, Spring 2012 Karush-Kuhn-Tucker (KKT) Theorem is the most central theorem in constrained optimization, and since the proof is scattered

More information

Numerical Optimization

Numerical Optimization Constrained Optimization Computer Science and Automation Indian Institute of Science Bangalore 560 012, India. NPTEL Course on Constrained Optimization Constrained Optimization Problem: min h j (x) 0,

More information

Optimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112

Optimal Control. Macroeconomics II SMU. Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Optimal Control Ömer Özak SMU Macroeconomics II Ömer Özak (SMU) Economic Growth Macroeconomics II 1 / 112 Review of the Theory of Optimal Control Section 1 Review of the Theory of Optimal Control Ömer

More information

Lecture Support Vector Machine (SVM) Classifiers

Lecture Support Vector Machine (SVM) Classifiers Introduction to Machine Learning Lecturer: Amir Globerson Lecture 6 Fall Semester Scribe: Yishay Mansour 6.1 Support Vector Machine (SVM) Classifiers Classification is one of the most important tasks in

More information

Optimality Conditions for Constrained Optimization

Optimality Conditions for Constrained Optimization 72 CHAPTER 7 Optimality Conditions for Constrained Optimization 1. First Order Conditions In this section we consider first order optimality conditions for the constrained problem P : minimize f 0 (x)

More information