Production Possibility Frontier


Division of the Humanities and Social Sciences
Production Possibility Frontier
KC Border
v 20151111::1410

This is a very simple model of the production possibilities of an economy, which was formulated by Abba P. Lerner [1]. There are $m$ outputs $y_1, \dots, y_m$ and $n$ productive factors $x_1, \dots, x_n$. Each output is produced according to the production function
\[
y_i = f^i(x^i).
\]
There are no intermediate goods, no joint production, and only one (industry) production function for each output. The supplies of the productive factors in the economy are fixed at levels $\bar{x}_1, \dots, \bar{x}_n$.

Assume that for each $i$ the production function satisfies: $f^i \colon \mathbf{R}^n_+ \to \mathbf{R}$ is continuous, $C^2$ on $\mathbf{R}^n_{++}$, $\nabla f^i \gg 0$ on $\mathbf{R}^n_{++}$, and the Hessian matrix $\bigl[ f^i_{kj} \bigr]_{k,j = 1,\dots,n}$ is negative definite on the subspace orthogonal to $\nabla f^i$. (Throughout this note, subscripts on the production functions denote partial derivatives.) These assumptions guarantee that all the second order conditions hold as strict inequalities.

1 Production possibility frontier

The production possibility set (PPS) is
\[
\Bigl\{\, y \in \mathbf{R}^m : 0 \le y_i \le f^i(x^i),\ i = 1,\dots,m, \text{ for some } x^1,\dots,x^m \ge 0 \text{ with } \sum_{i=1}^m x^i \le \bar{x} \,\Bigr\}.
\]
Note that the PPS is compact, since the $f^i$'s are continuous and the PPS is the continuous image of the compact set
\[
\Bigl\{\, (x^1,\dots,x^m) \in \mathbf{R}^{nm} : x^i \ge 0,\ i = 1,\dots,m, \text{ and } \sum_{i=1}^m x^i \le \bar{x} \,\Bigr\}.
\]
(This implicitly assumes free disposal of factors and outputs, but this is not crucial.) The production possibility frontier (PPF) is the outer boundary of the PPS.

The production possibility frontier can be described by the following maximization problem. (Here each $x^i \in \mathbf{R}^n_+$ and $x^i_j$ denotes the quantity of factor $j$ used to produce good $i$.)
\[
\begin{aligned}
\max_{x^1,\dots,x^m}\quad & f^1(x^1) \\
\text{subject to}\quad
& f^i(x^i) = \eta_i, && i = 2,\dots,m,\\
& \sum_{i=1}^m x^i_j = \bar{x}_j, && j = 1,\dots,n,\\
& x^i \ge 0, && i = 1,\dots,m.
\end{aligned}
\]
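For intuition, the maximization above can be solved numerically on a small example. The following sketch is not part of the original note: it assumes $m = n = 2$, Cobb-Douglas technologies $f^i(x^i) = A_i (x^i_1)^{a_i} (x^i_2)^{b_i}$ with $a_i + b_i < 1$ (so the curvature assumptions above hold on the interior), factor endowments $\bar{x} = (1,1)$, and SciPy's SLSQP solver; the names f and ppf_point and all parameter values are illustrative choices.

# Numerical sketch of the PPF problem (illustrative; not part of the original note).
# Two goods, two factors, Cobb-Douglas technologies, endowment xbar = (1, 1).
import numpy as np
from scipy.optimize import minimize

A = np.array([1.0, 1.0])
a = np.array([0.3, 0.5])      # exponents on factor 1
b = np.array([0.4, 0.3])      # exponents on factor 2
xbar = np.array([1.0, 1.0])   # fixed factor supplies

def f(i, xi):
    # Output of good i (indexed 0 and 1 here) from the factor bundle xi = (x^i_1, x^i_2).
    return A[i] * xi[0] ** a[i] * xi[1] ** b[i]

def ppf_point(eta2):
    # Maximize f^1(x^1) subject to f^2(x^2) = eta2 and x^1_j + x^2_j = xbar_j.
    # The decision vector is z = (x^1_1, x^1_2, x^2_1, x^2_2).
    obj = lambda z: -f(0, z[:2])
    cons = [
        {"type": "eq", "fun": lambda z: f(1, z[2:]) - eta2},      # output target for good 2
        {"type": "eq", "fun": lambda z: z[0] + z[2] - xbar[0]},   # factor 1 adds up
        {"type": "eq", "fun": lambda z: z[1] + z[3] - xbar[1]},   # factor 2 adds up
    ]
    z0 = np.full(4, 0.5)
    res = minimize(obj, z0, method="SLSQP", constraints=cons,
                   bounds=[(1e-6, None)] * 4)
    return -res.fun, res.x

for eta2 in (0.2, 0.4, 0.6, 0.8):
    y1, _ = ppf_point(eta2)
    print(f"eta_2 = {eta2:.1f}  ->  max y_1 = {y1:.4f}")

As eta_2 rises, the maximized y_1 falls, tracing out points on the (downward-sloping) frontier.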

The Lagrangean for this maximization is
\[
L(x, \lambda, \mu; \eta, \bar{x})
= f^1(x^1_1,\dots,x^1_n)
+ \sum_{i=2}^m \lambda_i \bigl( f^i(x^i_1,\dots,x^i_n) - \eta_i \bigr)
+ \sum_{j=1}^n \mu_j \Bigl( \bar{x}_j - \sum_{i=1}^m x^i_j \Bigr).
\]

Q: Are the gradients of the constraints (with respect to $x$) linearly independent? The answer is yes. To see this it might help to consult Figure 1.
\[
\begin{array}{c|ccc|ccc}
 & \lambda_2 & \cdots & \lambda_m & \mu_1 & \cdots & \mu_n \\ \hline
x^1_1 & 0 & \cdots & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
x^1_n & 0 & \cdots & 0 & 0 & \cdots & 1 \\
x^2_1 & f^2_1 & \cdots & 0 & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
x^2_n & f^2_n & \cdots & 0 & 0 & \cdots & 1 \\
\vdots & \vdots & & \vdots & \vdots & & \vdots \\
x^m_1 & 0 & \cdots & f^m_1 & 1 & \cdots & 0 \\
\vdots & \vdots & & \vdots & \vdots & \ddots & \vdots \\
x^m_n & 0 & \cdots & f^m_n & 0 & \cdots & 1
\end{array}
\]
Figure 1. The columns are the gradients of the constraints; each column is labeled by its multiplier, and the rows are indexed by the components of $x$.

Suppose $\lambda_2,\dots,\lambda_m,\mu_1,\dots,\mu_n$ yield a linear combination of the gradients that adds up to the zero vector. Looking at the rows for $x^1_1,\dots,x^1_n$, clearly $\mu_1 = \dots = \mu_n = 0$. Then, since each $f^i_j > 0$, we get $\lambda_i = 0$ for all $i$. Thus by the Lagrange Multiplier Theorem the first order conditions are (assuming each $x^i_j > 0$):
\[
\lambda_i f^i_j - \mu_j = 0, \qquad i = 1,\dots,m,\ j = 1,\dots,n,
\]
where for symmetry we define $\lambda_1 = 1$. This implies
\[
\lambda_i = \frac{f^1_j}{f^i_j} \quad \text{for any factor } j = 1,\dots,n.
\]
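A quick numerical sanity check of these first order conditions, again illustrative and continuing the sketch above (it assumes np, f, ppf_point, A, a, b from that block are in scope): at the computed optimum the ratio $f^1_j / f^2_j$ should be (approximately) the same for both factors $j$, and that common value is $\lambda_2$.

# Continues the sketch above: reuses np, f, ppf_point, A, a, b.
# At an optimum, lambda_2 = f^1_j / f^2_j should not depend on the factor j.
def grad_f(i, xi):
    # Analytic gradient of the Cobb-Douglas f^i at xi.
    y = f(i, xi)
    return np.array([a[i] * y / xi[0], b[i] * y / xi[1]])

_, z = ppf_point(0.4)
g1, g2 = grad_f(0, z[:2]), grad_f(1, z[2:])
print("f^1_j / f^2_j by factor:", g1 / g2)   # the two entries should agree (= lambda_2)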

Let $y_1(\eta, \bar{x})$ be the optimal value function. Then by the Envelope Theorem, the slope of the PPF satisfies
\[
\frac{\partial y_1}{\partial \eta_i} = \frac{\partial L}{\partial \eta_i} = -\lambda_i = -\frac{f^1_j}{f^i_j} \quad\text{for any } j = 1,\dots,n.
\]
In other words, $\lambda_i$ is the marginal opportunity cost of a unit of $y_i$ in terms of $y_1$.

Also note that
\[
\frac{f^i_j}{f^i_k} = \frac{\mu_j}{\mu_k},
\]
which is independent of $i$. That is, in every industry the slopes of the isoquants are the same.

1.1 Second order conditions

While we're at it, let's check the second order conditions. The Hessian of the Lagrangean is the $mn \times mn$ block-diagonal matrix
\[
H = \begin{bmatrix}
\lambda_1 \bigl[ f^1_{jk} \bigr]_{j,k=1}^{n} & & \\
& \ddots & \\
& & \lambda_m \bigl[ f^m_{jk} \bigr]_{j,k=1}^{n}
\end{bmatrix},
\]
whose $i$-th diagonal block is $\lambda_i$ times the Hessian of $f^i$. Let $v = (v^1,\dots,v^m)$ belong to $\mathbf{R}^{nm}$. The second order condition is that the quadratic form $v' H v$ is negative semidefinite on the subspace orthogonal to the gradients of the constraints. Again referring to Figure 1, it is straightforward to see that this means
\[
\sum_{i=1}^m \lambda_i \Bigl[ \sum_{j=1}^n \sum_{k=1}^n f^i_{jk} v^i_j v^i_k \Bigr] \le 0
\]
for all nonzero $v$ satisfying
\[
\nabla f^i \cdot v^i = \sum_{j=1}^n f^i_j v^i_j = 0, \qquad i = 2,\dots,m,
\]
and
\[
\sum_{i=1}^m v^i_j = 0, \qquad j = 1,\dots,n.
\]

What about the case $i = 1$? If we can show that $\nabla f^1 \cdot v^1 = 0$, then each $v^i$ lies in the subspace orthogonal to $\nabla f^i$; since each $\lambda_i > 0$ (by our assumption on the gradients of the $f^i$'s), the assumption on the Hessians of the $f^i$'s implies that each bracketed term is nonpositive, and at least one is strictly negative (since at least one $v^i \ne 0$). To see that $\nabla f^1 \cdot v^1 = 0$, observe that for each $j$, $v^1_j = -\sum_{i=2}^m v^i_j$. Thus
\[
\nabla f^1 \cdot v^1
= \sum_{j=1}^n f^1_j v^1_j
= -\sum_{j=1}^n f^1_j \sum_{i=2}^m v^i_j
= -\sum_{i=2}^m \lambda_i \sum_{j=1}^n f^i_j v^i_j
= 0.
\]
The penultimate equality follows from the first order condition that $\lambda_i f^i_j = \mu_j = f^1_j$ for all $i$.
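Continuing the same illustrative Cobb-Douglas example (this sketch assumes np, f, grad_f, ppf_point, A, a, b from the earlier blocks are in scope), one can check both results of this section numerically: the quadratic form $v' H v$ is negative on the subspace orthogonal to the constraint gradients (with $a_i + b_i < 1$ the Cobb-Douglas Hessians are in fact negative definite on the interior, which is stronger than the note assumes), and the finite-difference slope of the PPF at $\eta_2$ is approximately $-\lambda_2$, as the Envelope Theorem asserts.

# Continues the sketches above: reuses np, f, grad_f, ppf_point, A, a, b.
# Checks v' H v < 0 on the null space of the constraint gradients, and the PPF slope.
from scipy.linalg import null_space

def hess_f(i, xi):
    # Analytic Hessian of the Cobb-Douglas f^i at xi.
    y = f(i, xi)
    return np.array([
        [a[i] * (a[i] - 1) * y / xi[0] ** 2, a[i] * b[i] * y / (xi[0] * xi[1])],
        [a[i] * b[i] * y / (xi[0] * xi[1]), b[i] * (b[i] - 1) * y / xi[1] ** 2],
    ])

_, z = ppf_point(0.4)
x1, x2 = z[:2], z[2:]
lam2 = (grad_f(0, x1) / grad_f(1, x2))[0]   # lambda_2 = f^1_j / f^2_j

# Block-diagonal Hessian of the Lagrangean (lambda_1 = 1).
H = np.zeros((4, 4))
H[:2, :2] = hess_f(0, x1)
H[2:, 2:] = lam2 * hess_f(1, x2)

# Constraint gradients: the columns of Figure 1 for this two-good example.
G = np.column_stack([
    np.r_[0, 0, grad_f(1, x2)],   # gradient of f^2(x^2) = eta_2
    [1, 0, 1, 0],                 # factor 1 adding-up constraint
    [0, 1, 0, 1],                 # factor 2 adding-up constraint
])
for v in null_space(G.T).T:       # basis of the subspace orthogonal to the gradients
    print("v' H v =", v @ H @ v)  # should be negative

# Envelope theorem check: the PPF slope dy_1/d(eta_2) should be about -lambda_2.
h = 1e-4
slope = (ppf_point(0.4 + h)[0] - ppf_point(0.4 - h)[0]) / (2 * h)
print("finite-difference slope:", slope, "   -lambda_2:", -lam2)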

2 Relation to cost minimization

Assume that each producer faces the same wages $w = (w_1,\dots,w_n)$ for the factors and minimizes costs. To ease notation in this section, I shall suppress the superscripts denoting the particular output. The cost minimization problem is to minimize $w \cdot x$ subject to $y \le f(x)$. Form the Lagrangean
\[
L(x, \gamma; w, y) = w \cdot x + \gamma \bigl( y - f(x) \bigr).
\]
The value function is the cost function $c(w, y)$. By the Envelope Theorem, the marginal cost is
\[
\mathit{MC} = \frac{\partial c}{\partial y} = \frac{\partial L}{\partial y} = \gamma.
\]
We also have the first order conditions (check the gradient of the constraint):
\[
w_j - \gamma f_j = 0, \qquad j = 1,\dots,n,
\]
assuming each $x_j > 0$. (Note that these imply $\gamma > 0$.) In other words,
\[
f_j = \frac{w_j}{\mathit{MC}}.
\]
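To make these formulas concrete, here is a short worked example that is not in the original note: take a single Cobb-Douglas technology $f(x) = x_1^a x_2^b$ with $a, b > 0$ and $a + b < 1$. The first order conditions $w_j = \gamma f_j$ give $w_1 x_1 / a = w_2 x_2 / b\ (= \gamma y)$, and substituting back into $y = x_1^a x_2^b$ yields the cost function
\[
c(w, y) = (a + b)\Bigl(\frac{w_1}{a}\Bigr)^{\frac{a}{a+b}} \Bigl(\frac{w_2}{b}\Bigr)^{\frac{b}{a+b}} \, y^{\frac{1}{a+b}},
\]
so that
\[
\mathit{MC} = \frac{\partial c}{\partial y}
= \Bigl(\frac{w_1}{a}\Bigr)^{\frac{a}{a+b}} \Bigl(\frac{w_2}{b}\Bigr)^{\frac{b}{a+b}} \, y^{\frac{1-a-b}{a+b}} = \gamma,
\]
and one checks directly that $f_j = w_j/\mathit{MC}$ at the cost-minimizing input bundle. With decreasing returns ($a + b < 1$), marginal cost is increasing in $y$.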

Now back to the PPF. If all firms face the same wages and minimize costs, then
\[
-\frac{\partial y_1}{\partial \eta_i} = \lambda_i = \frac{f^1_j}{f^i_j}
= \frac{w_j / \mathit{MC}^1}{w_j / \mathit{MC}^i}
= \frac{\mathit{MC}^i}{\mathit{MC}^1}.
\]
That is, the marginal opportunity cost of one unit of $y_i$ expressed in terms of $y_1$ is exactly the ratio of the marginal cost of a unit of $y_i$ (calculated in terms of wages) to the marginal cost of a unit of $y_1$. What this tells us is that marginal costs (derived from wages) indicate real opportunity costs!

2.1 Extensions

What if there are several production functions for each $y_i$? Call them $f^{i,1},\dots,f^{i,p_i}$. Then
\[
\lambda_{i,k} f^{i,k}_j - \mu_j = 0,
\]
and we proceed as before.

What if there are joint products? Describe feasibility as
\[
T(y_1,\dots,y_m, x^1_1,\dots,x^1_n,\dots,x^m_1,\dots,x^m_n) \ge 0,
\]
where each $T_{y_i} < 0$ and each $T_{x^i_j} > 0$, and consider the Lagrangean
\[
y_1 + \lambda T(y_1, \eta_2,\dots,\eta_m, x^1_1,\dots,x^1_n,\dots,x^m_1,\dots,x^m_n)
+ \sum_{j=1}^n \mu_j \Bigl( \bar{x}_j - \sum_{i=1}^m x^i_j \Bigr).
\]

Also, how do we deal seriously with the nonnegativity constraints?

References

[1] Lerner, A. P. 1934. The concept of monopoly and the measurement of monopoly power. Review of Economic Studies 1(3):157-175.