CENTER FOR CYBERNETIC STUDIES

CCS Research Report No. 626

Data Envelopment Analysis

by
A. Charnes
W.W. Cooper

December 1989

This research was partly supported by National Science Foundation Grant SES-8722504 with the Center for Cybernetic Studies, The University of Texas at Austin. Reproduction in whole or in part is permitted for any purpose of the United States Government.

CENTER FOR CYBERNETIC STUDIES
A. Charnes, Director
College of Business Administration, 5.202
The University of Texas at Austin
Austin, Texas 78712-1177
(512) 471-1821

DATA ENVELOPMENT ANALYSIS
National Contribution of U.S.A.
IFORS '90, 12th Triennial Conference on Operations Research
Athens, Greece, June 25-29, 1990

A. Charnes and W.W. Cooper
University of Texas, Austin, Texas, U.S.A.

ABSTRACT

The origin, history, current status and problems of Data Envelopment Analysis (DEA) on empirical multi-input, multi-output data are surveyed in relation to efficiency valuation, production function determination and stochastic frontier estimation.

KEYWORDS

Data Envelopment Analysis, Efficiency Valuation, Production Functions, Frontier Estimation

ORIGIN

Data Envelopment Analysis (DEA) began as a generalization of the usual scientific/engineering efficiency valuation of a single-input, single-output system, the ratio of output to input (in the same physical measure, e.g., energy), to multi-input, multi-output systems (or organizations or production units) without known "physical" laws or a common measure for all inputs and outputs. This was accomplished by (i) reduction of the multiple inputs and outputs to single "virtual" inputs and outputs, (ii) replacing absolute efficiency by efficiency relative to all members of a sample of units (called DMU's) having the same inputs and outputs, and (iii) evaluating a unit's relative efficiency as the maximum of the ratio of its virtual output to virtual input, subject to virtual outputs being less than or equal to virtual inputs for each (all) of the DMU's.

Evidently there are infinitely many ways to construct virtual inputs and outputs. One involves multiplying the inputs and outputs by non-negative "virtual multipliers" and adding to get a single virtual input or output. Another is raising them to non-negative powers and multiplying them together to get a virtual input or output. Out of (iii), the dual mathematical programming problem arising in the first way (Charnes et al. 1978) reproduces (and generalizes) M. Farrell's efficiency evaluation (Farrell 1957) based on a "production possibility set" consisting of the conical hull of the input-output vectors of the sample points. The economic efficient production function is then a piecewise linear function based on the efficient DMU's' inputs and outputs. The second "multiplicative" way (Charnes et al. 1981, 1983) leads to piecewise Cobb-Douglas functions.

ECONOMIC PRODUCTION FUNCTIONS

Farrell, working from the production function side, thought he had to assume constant returns to scale to obtain his (single output) results. Others like R. Shephard thought a constant elasticity assumption was needed to get log-linear (or Cobb-Douglas) results. Coming from

the "valuation" side via the dual mathematical programming problem to the "production function" or "DEA" side, Charnes et al. (1978, 1981, 1983) showed that neither assumption was necessary and that piecewise constancy would hold for returns to scale, respectively elasticity, in the efficient economic "empirical" production function based on sample data. In Charnes et al. (1985a), exploring the production function side, it was shown that all (and more) "models" for testing the efficiency of DMU's were the Charnes-Cooper test (see Charnes and Cooper 1961 and Ben-Israel et al. 1977) for multi-criteria ("goal programming") optimality, here specialized to "Pareto-Koopmans efficiency" relative to the specified production possibility sets. These involve envelopment of the inputs from below and the outputs from above, hence the name Data Envelopment Analysis (DEA) for all "models" of such efficiency type.

The first way started with the "CCR ratio" form:

    max  u^T y_o / v^T x_o
    s.t. u^T y_j / v^T x_j <= 1,  j = 1,...,n,    (1)
         u, v >= 0

where y_j = (y_1j,...,y_sj) and x_j = (x_1j,...,x_mj) are the output, respectively input, vectors of DMU_j, assumed positive, and (x_o, y_o) is the pair for DMU_o, one of the DMU's. To eliminate false technical efficiency determinations (recognized by Farrell) stemming from optimal entries of u or v being zero, it was immediately replaced by the non-Archimedean CCR form:

    max  u^T y_o / v^T x_o
    s.t. u^T y_j / v^T x_j <= 1,  j = 1,...,n,    (2)
         u^T / v^T x_o >= eps e^T,
         v^T / v^T x_o >= eps e^T

where eps is a non-Archimedean infinitesimal and the e^T are vectors of ones.
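The ratio construction in (1) can be illustrated numerically. The sketch below uses invented data and one fixed feasible multiplier choice (all ratios at most 1); model (1) would instead maximize this ratio over u and v for each DMU_o in turn.

```python
import numpy as np

# Invented sample: m = 2 inputs, s = 1 output, n = 3 DMU's (one column each).
X = np.array([[2.0, 4.0, 8.0],   # input 1
              [3.0, 1.0, 3.0]])  # input 2
Y = np.array([[1.0, 1.0, 2.0]])  # output 1

# One fixed non-negative choice of virtual multipliers (feasible for (1)).
u = np.array([1.0])              # output multipliers
v = np.array([0.4, 0.3])         # input multipliers

virtual_output = u @ Y           # u^T y_j for each DMU j
virtual_input = v @ X            # v^T x_j for each DMU j
ratio = virtual_output / virtual_input
print(np.round(ratio, 4))        # one ratio-efficiency value per DMU, all <= 1
```

Under this particular (u, v), no DMU's virtual output exceeds its virtual input, as the constraints of (1) require.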

Using the Charnes-Cooper transformation

    mu^T = t u^T,  nu^T = t v^T,  t = (v^T x_o)^{-1},    (3)

the equivalent dual linear programs are:

    CCR:  max  mu^T y_o                                  (4.1)
          s.t. nu^T x_o = 1,
               mu^T Y - nu^T X <= 0,
               mu^T >= eps e^T,  nu^T >= eps e^T

    DEA:  min  theta - eps e^T s^+ - eps e^T s^-         (4.2)
          s.t. Y lambda - s^+ = y_o,
               theta x_o - X lambda - s^- = 0,
               lambda, s^+, s^- >= 0

where Y = [y_1 ... y_n] and X = [x_1 ... x_n]. To be noted is that Färe and Hunsaker (Management Science, February 1986) present them erroneously, with erroneous conclusions and all examples erroneously solved (see Charnes et al. 1987). Computation is done on the DEA side. No non-Archimedean quantities need be used; see Charnes and Cooper (1961), also I. Ali (University of Massachusetts, College of Business), J. Stutz (University of Miami, Quantitative Management Department) and the DEA software of the Center for Cybernetic Studies. See also Charnes et al. (1986) for a much more complicated Archimedean approach using multiple linear programs for each efficiency determination.

Also in Charnes et al. (1985a) is a most useful DEA model, today called the "additive" model:

    min  -e^T s^+ - e^T s^-                              (5)
    s.t. Y lambda - s^+ = y_o,
         X lambda + s^- = x_o,
         e^T lambda = 1,
         lambda, s^+, s^- >= 0

Interestingly, by taking logs of the virtual input-output vectors in the multiplicative model, it reduces to this form.
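Computationally, (4.2) and (5) are ordinary linear programs. The sketch below solves both with an off-the-shelf LP solver on invented data; the non-Archimedean slack terms of (4.2) are omitted (the usual first-stage surrogate), and this is an illustration, not the Center's DEA software.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_theta(X, Y, o):
    """First stage of (4.2): min theta s.t. X lam <= theta x_o,
    Y lam >= y_o, lam >= 0 (epsilon slack terms dropped)."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate(([1.0], np.zeros(n)))           # variables: (theta, lam)
    A_ub = np.block([[-X[:, [o]], X],                  # X lam - theta x_o <= 0
                     [np.zeros((s, 1)), -Y]])          # -Y lam <= -y_o
    b_ub = np.concatenate((np.zeros(m), -Y[:, o]))
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

def additive_slacks(X, Y, o):
    """Additive model (5): min -e's+ - e's- s.t. Y lam - s+ = y_o,
    X lam + s- = x_o, e'lam = 1, all variables >= 0."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.concatenate((np.zeros(n), -np.ones(s + m)))  # variables: (lam, s+, s-)
    A_eq = np.block([
        [Y, -np.eye(s), np.zeros((s, m))],
        [X, np.zeros((m, s)), np.eye(m)],
        [np.ones((1, n)), np.zeros((1, s + m))],
    ])
    b_eq = np.concatenate((Y[:, o], X[:, o], [1.0]))
    res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return res.x[n:n + s], res.x[n + s:]                # (s+, s-)

# Invented data: DMU 4 (last column) uses one more unit of each input
# than DMU 2 for the same output, so it is clearly dominated.
X = np.array([[2.0, 4.0, 8.0, 5.0],
              [3.0, 1.0, 3.0, 2.0]])
Y = np.array([[1.0, 1.0, 2.0, 1.0]])

thetas = [ccr_theta(X, Y, o) for o in range(4)]
print(np.round(thetas, 4))            # DMU's 1 and 2 score 1; DMU's 3 and 4 do not

s_plus, s_minus = additive_slacks(X, Y, 3)
print(s_plus, s_minus)                # positive input slacks flag DMU 4 as inefficient
```

The facet enveloping an inefficient DMU can be read off from the positive components of lambda in the optimal solution.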

To ensure that the efficiency determined in the additive model is independent of the units of measurement of the inputs and outputs, the s^+ and s^- in (5) can be replaced by ŝ^+ and ŝ^- with ŝ_r^+ = s_r^+ / y_ro and ŝ_i^- = s_i^- / x_io, r = 1,...,s, i = 1,...,m. This also improves numerical stability in the calculations. To allow for the important possibilities of thresholds on possible inputs and ceilings on possible outputs, the "extended additive" model (see Charnes et al. 1987a) puts individual bounds on the DEA-side "slacks", which do not require additional rows of constraints in usual LP software.

In all the above models, each inefficient DMU determination provides it with a "facet" of similar efficient DMU's, which is the convex hull of the DMU's with zero "reduced costs" in an optimal basic simplex tableau for the inefficient DMU's problem. These facets are the pieces of the empirical efficient production function on which the function is linear (log-linear in the multiplicative case). The union of the input sets of the facets, however, is often not that of the desired input set for the production possibility set. It is an open problem to extend the function to all of this set, i.e., how best to estimate an approximation to what a corresponding efficient output to each input therein might be.

EFFICIENCY VALUATION

Every DEA analysis involves suitable selection of inputs and outputs to assure a reasonable production function. Also required are sufficient sample data. Sometimes proper inputs-outputs or sample data are unavailable. Sometimes the objective is only to determine a single (or few) "most" efficient DMU. For example, Thompson & Thrall (1986) determined Waxahachie, Texas to be the "most efficient" site for the Super Collider of six possible locations.

Starting with 4 inputs, 4 outputs and 6 DMU's and employing the CCR ratio model, 5 DMU's were efficient. By placing additional restrictions on pairs of virtual multipliers (called "assurance regions"), i.e., by requiring that the relative valuations of certain inputs or outputs lie in specified ranges, their new DEA model recognized only one DMU, Waxahachie, to be efficient. This means, however, that the corresponding efficient production function so determined consists only of all positive multiples of the Waxahachie input-output vector. The restrictions on the virtual multipliers placed them in cones which were the intersection of half-spaces with the non-negative orthant. Working with the Pareto-optimality or multi-criteria (or "dominance") DEA basis and the dual convex programming forms of Ben-Israel et al. (1971), with one side's variables in a closed convex cone and the dual side's variables in the (negative) polar cone, Charnes et al. (1987b) generalized the CCR ratio model of (1) to a "cone-ratio" model which, with trivial extension, includes all assurance region embellishments and which does not require the cones to be given as intersections of half-spaces.

Trying to determine a more objective measure of managerial performance of bank managers from Call Report data in D. B. Sun's Ph.D. thesis, the CCR ratio form rated two notoriously inefficient banks (in particular years) as efficient (see Charnes et al. 1988). A cone-ratio model with virtual multiplier cones as the conical hull of the CCR optimal virtual multipliers of 3 banks unanimously top-rated by bank experts was essayed. It correctly rated the notorious ones as inefficient. In this "sum" form, computation is reduced to the old CCR computation with the input-output matrix multiplied by the matrix of the old optimal virtual multipliers of the selected top-rated DMU's. Thus no major new software is required. All "intersection" form cones for assurance regions can be transformed by a matrix multiplication into "sum" form. From sum to intersection form, the half-spaces are often more complicated than assurance regions (see Charnes et al. 1989).
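The "sum"-form computation described above amounts to a data transformation before an ordinary CCR run. In the sketch below, the columns of U and V stand for optimal CCR virtual multipliers of selected exemplar DMU's; all numbers are invented for illustration (in practice they would come from prior CCR runs on the sample).

```python
import numpy as np

# Hypothetical optimal CCR multipliers of k = 2 exemplar DMU's (one column
# per exemplar); invented values standing in for results of earlier runs.
U = np.array([[1.0, 0.8]])          # s x k output multipliers
V = np.array([[0.2, 0.1],
              [0.1, 0.4]])          # m x k input multipliers

X = np.array([[2.0, 4.0, 8.0],      # original m x n input matrix
              [3.0, 1.0, 3.0]])
Y = np.array([[1.0, 1.0, 2.0]])     # original s x n output matrix

# Sum-form cone-ratio: run the ordinary CCR model on the transformed data.
X_cone = V.T @ X                    # k x n transformed inputs
Y_cone = U.T @ Y                    # k x n transformed outputs
print(X_cone)
print(Y_cone)
```

Since the transformed data are again ordinary non-negative input-output matrices, existing CCR software can be applied to them unchanged, which is the point made in the text.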

STOCHASTIC ASPECTS OF DEA

Every DEA analysis involves sample data of inputs and outputs which are converted by definite mathematical operations into other quantities. By definition such quantities are "statistics." Therefore every DEA "model" is a stochastic model. Since, however, the distribution functions of managerial performance at the different DMU's are unknown, we lack appropriate statistical theory for our real statistical structures. Development of such theory and appropriate computation is a major task for DEA research. The current state of progress is perhaps best evaluated in Jati Sengupta's outstanding 1989 monograph, Efficiency Analysis by Production Frontiers: The Non-Parametric Approach, which surveys and develops some DEA models in relation to past and current econometric concepts. These "risk" elaborations (i.e., known statistical distributions) are almost entirely for single-output situations and fail to consider appropriately the "waste" resident in inefficient "uncertain" managerial performance.

At a minimum, since efficient production function pieces have numbers of parameters equal to the sum of the number of inputs plus outputs, one should have at least 3 times more DMU's than this sum. Practically this has been accomplished in real studies with DMU observations over multiple time periods by "window analysis" (see Charnes et al. 1985b). There a DMU in each different period of a "window" of periods is treated as if it were a different DMU. The same DMU in, say, three successive periods is treated as three DMU's, thus tripling the number of DMU's in the sample window. Then the window is moved ahead one period from the old start and an analysis is done on the new window, and so forth. From the pattern of, say, efficiency scores for each particular DMU, a good deal of practical information on stochastic variability is at hand. E.g., a drastic change in the score for a real DMU at a particular time period across the windows is a strong signal that something unusual was happening in that DMU at that time period which should be investigated. Again, taking the median of the scores through the windows at each time period for a DMU gives a reasonable temporal estimate of the efficiency performance of the DMU. The totality for all DMU's then gives the temporal pattern of efficiency performance of them all. It goes without saying that there is as yet no time series theory developed for such constructs.

Another useful tool is the "envelopment map", a matrix (a_ij) which records the number of times DMU_j is a facet generator for DMU_i. By summing the columns one can have instant determination of which DMU's are most (or least) consistently efficient. These window analysis techniques can also be applied to quantities other than efficiency scores, e.g., rates of change of a particular output with respect to a particular input, which would be important in the specification and temporal analysis of the relative effectiveness of the different DMU's for this output with this input. Again, only pieces of the efficient empirical production function are determined, and these, stochastically, with robustness corresponding to the number of DMU's enveloped by the associated facet. As mentioned, additional means for production function estimation across at least the whole desired production possibility set domain (of inputs) are important to determine.

Despite these challenging research problems, DEA has proved itself a powerful tool for investigation of real managerial or production situations, with the most unusual feature of developing assessments applicable to the individual productive units instead of averages across the mass, which are in error for every individual.
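The window-analysis bookkeeping and the envelopment map discussed above are straightforward to mechanize. A minimal sketch, with invented reference sets standing in for the facet generators that actual DEA runs would produce:

```python
import numpy as np

def windows(n_periods, width):
    """Window-analysis index sets: each window is `width` consecutive
    periods, advanced one period at a time, so a DMU observed in every
    period contributes `width` pseudo-DMU's to each window."""
    return [list(range(t, t + width)) for t in range(n_periods - width + 1)]

# Hypothetical reference sets from some DEA run: refs[i] lists the
# DMU's spanning the facet that envelops DMU i (an efficient DMU
# is its own facet generator).
refs = {0: [0], 1: [1], 2: [0, 1], 3: [1]}
n = 4

# Envelopment map: a[i, j] = number of times DMU j generates a facet
# for DMU i; column sums flag the most consistently efficient DMU's.
a = np.zeros((n, n), dtype=int)
for i, peers in refs.items():
    for j in peers:
        a[i, j] += 1

print(windows(5, 3))    # three overlapping 3-period windows
print(a.sum(axis=0))    # the DMU at index 1 generates the most facets
```

Repeating the envelopment-map count within each window, and tracking each DMU's scores across windows, yields the stability diagnostics described in the text.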

REFERENCES

Ben-Israel, A., A. Ben-Tal and A. Charnes (1977). Necessary and sufficient conditions for a Pareto optimum in convex programming. Econometrica, 45, 811-820.

Ben-Israel, A., A. Charnes and K.O. Kortanek (1971). Asymptotic duality over closed convex sets. Journal of Mathematical Analysis and Applications, 35, 677-691.

Charnes, A. and W.W. Cooper (1961). Management Models and Industrial Applications of Linear Programming, Vol. II. J. Wiley and Sons.

Charnes, A., W.W. Cooper and E. Rhodes (1978). Measuring the efficiency of decision making units. European Journal of Operational Research, 2, 429-444.

Charnes, A., W.W. Cooper, L. Seiford and J. Stutz (1981). A multiplicative model for efficiency analysis. Socio-Economic Planning Sciences, 16, 223-224.

Charnes, A., W.W. Cooper, L. Seiford and J. Stutz (1983). Invariant multiplicative efficiency and piecewise Cobb-Douglas envelopments. Operations Research Letters, 2, 101-103.

Charnes, A., W.W. Cooper, B. Golany, L. Seiford and J. Stutz (1985a). Foundations of data envelopment analysis for Pareto-Koopmans efficient empirical production functions. Journal of Econometrics, 30, 91-107.

Charnes, A., W.W. Cooper, C.T. Clark and B. Golany (1985b). A developmental study of data envelopment analysis in measuring the efficiency of maintenance units in the U.S. Air Forces. Annals of Operations Research, 2, 95-112.

Charnes, A., W.W. Cooper and R. Thrall (1986). Classifying and characterizing efficiencies in data envelopment analysis. Operations Research Letters, 5, 105-110.

Charnes, A., W.W. Cooper, J. Rousseau and J. Semple (1987a). Data envelopment analysis and axiomatic notions of efficiency and reference sets. Research Report CCS 558, University of Texas at Austin, Center for Cybernetic Studies.

Charnes, A., W.W. Cooper, Z.M. Huang and Q.L. Wei (1987b). Cone-ratio data envelopment analysis and multi-objective programming. Research Report CCS 559, University of Texas at Austin, Center for Cybernetic Studies. Also in International Journal of Systems Science (1989), 20, 1099-1118.

Charnes, A., W.W. Cooper, Z.M. Huang and D.B. Sun (1988). Polyhedral cone-ratio DEA models with an illustrative application to large commercial banks. Research Report CCS 611, University of Texas at Austin, Center for Cybernetic Studies.

Charnes, A., W.W. Cooper, Z.M. Huang and D.B. Sun (1989). Relations between half-space and finitely generated cones in polyhedral cone-ratio DEA models. Research Report CCS 636, University of Texas at Austin, Center for Cybernetic Studies.

Farrell, M.J. (1957). The measurement of productive efficiency. Journal of the Royal Statistical Society, Series A, 120, Part 3, 253.

Thompson, R.G., F.D. Singleton, R.M. Thrall and B.A. Smith (1986). Comparative site evaluations for locating a high energy physics laboratory in Texas. Interfaces, 16, 35-49.