Advanced Mathematical Models & Applications, Vol.3, No.3, 2018, pp. 227-233

REDUCTION OF ONE BLOCK LINEAR MULTICRITERIA DECISION-MAKING PROBLEM

Rafael H. Hamidov 1, Mutallim M. Mutallimov 2, Khatin Y. Huseynova 1, Rufana R. Javadzada 1
1 Faculty of Applied Mathematics and Cybernetics, Baku State University, Baku, Azerbaijan
2 Institute of Applied Mathematics, Baku State University, Baku, Azerbaijan

Abstract. This paper analyzes a large-scale multicriteria linear programming problem whose constraint matrix A has block-triangular form. The matrix A is taken to be an M-matrix, and all of its diagonal blocks are assumed to be M-matrices as well. We construct a procedure that reduces the dimension of the problem while preserving its initial structure. This property allows existing efficient procedures to be used as auxiliary tools in organizing the decision-making process for the final solution. The suggested reduction scheme also allows the problem data to be processed in parts. We illustrate the process on a numerical example, which makes the reduction algorithm easy to follow and gives a clear idea of its efficiency. We also consider a linear dynamic multicriteria problem with a phase vector of large dimension and show that the reduction algorithm is applicable in this case as well.

Keywords: large-scale problem, dimension reduction, multicriteria programming, Pareto set, dynamic programming.

AMS Subject Classification: 46A22, 90C26.

Corresponding author: Mutallim M. Mutallimov, Institute of Applied Mathematics, Baku State University, Z. Khalilov 23, Baku, AZ1148, Azerbaijan, mmutallimov@bsu.edu.az

Received: 08 October 2018; Revised: 26 October 2018; Accepted: 16 November 2018; Published: 28 December 2018.

1 Introduction

The following multicriteria linear programming problem is considered:

Ax ≤ b, x ≥ 0, Cx → max, (1)

where A = (a_ij) is a matrix of dimension (n × n), C = (c_ij) is a matrix of dimension (k × n), b is an n-dimensional column vector and x is an unknown n-dimensional column vector.
The matrix A has the properties: a_ii > 0, i = 1, ..., n, and a_ij ≤ 0 for i, j = 1, ..., n, i ≠ j. All coordinates of b are non-negative. It is supposed that A^{-1} exists and its elements are non-negative. A matrix with these properties is called an M-matrix (Gal et al., 2013). In addition we impose the following condition on (1): A has block-triangular form and all diagonal blocks of A are M-matrices.

A reduction algorithm for problem (1) is developed. The purpose of the algorithm is to simultaneously exclude part of the variables x_i and of the conditions (Ax)_i ≤ b_i from (1) and to obtain a new problem of smaller dimension with the same Pareto set as (1). It is shown that the structure of the problem after reduction remains as in (1). This allows us to use, if necessary, the effective algorithms developed in Belen'kii (1968) and Meerov (1986) for the case k = 1 on the reduced problem. The reduction process is organized in parts and begins with the last diagonal block; the expediency of this organization is explained below. It allows the problem data to be processed in parts and avoids a number of difficulties that arise from the large dimension in the course of solving problem (1). A reduction algorithm for the case when A is not of triangular form is considered in Hamidov et al. (2017).

First we illustrate the execution of the separate steps of the algorithm on a numerical example. This makes the reduction algorithm easy to follow and gives a clear idea of its efficiency. Then the general scheme of the algorithm is presented. After that, a linear dynamic multicriteria problem with a phase vector of large dimension is considered. Such a problem arises, for example, in the optimization of oil production in the elastic mode with maximization of the full profit (Meerov, 1986). After a finite-dimensional approximation according to the scheme from Meerov (1986), the problem takes the form of problem (1). The suggested procedure can then be regarded as a reduction algorithm for the considered class of multicriteria programming problems in a functional space.

2 Problem definition

Problems in decision-making most commonly arise from the analysis of mathematical models of practical situations. Since a great variety of practical situations is subjected to such analysis, there is no uniform solution method that fits every case. However, there are several fundamental properties that the sought solutions possess independently of how they are obtained. In many decision-making problems one such property is efficiency: the decision-making process should be organized on the Pareto-optimal set. For problems of large dimension, however, organizing the decision-making process becomes considerably more complicated. Therefore, when there is an opportunity, it is reasonable first to reduce the problem and only then to organize the search for its solution.
In this work such an opportunity is considered for the following multicriteria linear programming problem with a block-triangular constraint matrix:

x_i − A_i x_i − Σ_{k=1}^{i−1} H_ik x_k ≤ b_i, x_i ≥ 0, i = 1, 2, ..., m, (2)

c_1r x_1 + c_2r x_2 + ... + c_mr x_m → max, r = 1, ..., l. (3)

Here A_i, H_ik are square (n × n) matrices with non-negative elements (A_i ≥ 0, H_ik ≥ 0); b_i, x_i are n-dimensional column vectors with b_i ≥ 0, i = 1, 2, ..., m; c_ir are n-dimensional row vectors, i = 1, 2, ..., m, r = 1, ..., l. Denote Ā_i = E − A_i, i = 1, 2, ..., m (E is the n-dimensional unit matrix). Then problem (2), (3) can be written in block-matrix form Ax ≤ b, where

A = (  Ā_1          0            0          ...    0            0
      −H_21         Ā_2          0          ...    0            0
      −H_31        −H_32         Ā_3        ...    0            0
       ...          ...          ...        ...    ...          ...
      −H_{m−1,1}   −H_{m−1,2}   −H_{m−1,3}  ...    Ā_{m−1}      0
      −H_m1        −H_m2        −H_m3       ...   −H_{m,m−1}    Ā_m ),

x = (x_1, x_2, ..., x_m)^T, b = (b_1, b_2, ..., b_m)^T, and

x_1 ≥ 0, ..., x_m ≥ 0, Cx = C_1 x_1 + C_2 x_2 + ... + C_m x_m → max.

Here C is an (l × nm) matrix and the C_i are (l × n) matrices, i = 1, 2, ..., m. Note that the assumption that all vectors x_i, diagonal blocks and vectors b_i, i = 1, 2, ..., m, have identical dimension is not necessary; it is made only for convenience of exposition. We assume that the matrices Ā_i, i = 1, 2, ..., m, are M-matrices, i.e. Ā_i^{-1} exists, Ā_i^{-1} ≥ 0, i = 1, 2, ..., m, and their off-diagonal elements are non-positive. The last condition is satisfied automatically because A_i ≥ 0 and H_ik ≥ 0. It is easy to check that from Ā_i^{-1} ≥ 0, i = 1, 2, ..., m, the condition A^{-1} ≥ 0 also follows, i.e. the matrix A itself is an M-matrix.
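As a quick numerical check (my own sketch, not part of the paper), one can assemble a small block lower-triangular A from blocks Ā_i = E − A_i and verify the M-matrix property, including the non-negative inverse; the blocks A_1, A_2 and H_21 below are made-up illustrative data:

```python
import numpy as np

# Sketch: assemble A = [[E - A1, 0], [-H21, E - A2]] and verify that A is an
# M-matrix.  The data are illustrative, not from the paper; the row sums of
# A1, A2 are below 1, so E - A_i is invertible with nonnegative inverse.
n = 3
E = np.eye(n)
A1 = 0.1 * np.ones((n, n))      # A_i >= 0
A2 = 0.2 * np.ones((n, n))
H21 = 0.3 * np.ones((n, n))     # H_ik >= 0

A = np.block([[E - A1, np.zeros((n, n))],
              [-H21,   E - A2]])

inv = np.linalg.inv(A)
print(np.all(np.diag(A) > 0))                  # positive diagonal
print(np.all(A - np.diag(np.diag(A)) <= 0))    # non-positive off-diagonal
print(np.all(inv >= -1e-12))                   # nonnegative inverse: M-matrix
```

All three checks print True: the inverse of a block lower-triangular matrix with M-matrix diagonal blocks and non-positive sub-diagonal blocks is non-negative, exactly as claimed above.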
3 Auxiliary facts

Denote X = { x ∈ R^{nm} | Ax ≤ b, x ≥ 0 }. Let x^0 ∈ X and suppose there exists j_0 such that c_{i j_0} ≥ 0, i = 1, ..., l, and (Ax^0)_{j_0} < b_{j_0}. Then the following statement holds.

Statement 1. If Σ_{i=1}^{l} c_{i j_0} > 0, then x^0 is a non-efficient solution (a dominated solution (Podinovsky & Nogin, 1982)); if Σ_{i=1}^{l} c_{i j_0} = 0, then there exists a vector estimate which is not worse, in the Pareto sense, than Cx^0 and for which (Ax')_{j_0} = b_{j_0}.

Proof: The j_0-th diagonal element of the matrix A is positive, and the other elements of the j_0-th column of A are non-positive. Hence it is possible to find a number α > 0 and x' ∈ X with x'_i = x^0_i, i ≠ j_0, x'_{j_0} = x^0_{j_0} + α, (Ax')_{j_0} = b_{j_0}. Then Cx' = Cx^0 + αC_{j_0} ≥ Cx^0, where C_{j_0} is the j_0-th column of C. If C_{j_0} ≠ 0 we have Cx' ≥ Cx^0 with Cx' ≠ Cx^0, i.e. x^0 cannot be efficient.

Denote G = { j | C_j ≥ 0 }. Let G ≠ ∅ and j_0 ∈ G.

Statement 2. Eliminate the variable x_{j_0} from the criterion functions (Cx)_i → max, i = 1, ..., l, by expressing x_{j_0} through the other variables from the equation (Ax)_{j_0} = b_{j_0}. Then any coefficient of the matrix C that changes its value as a result can change only by a positive increment.

The validity of this statement is easily checked. From Statements 1 and 2 it follows that, for C_j ≥ 0, j ∈ G, it is possible to exclude all variables x_j, j ∈ G, by means of the system of equations (Ax)_j = b_j, j ∈ G, while preserving the set of all efficient estimates. Thereby we remove all pairs (Ax)_j = b_j, x_j, j ∈ G, from further consideration. Elimination of the variables x_j, j ∈ G, from the criterion functions increases (does not reduce) the coefficients of the other variables, so we have the opportunity to form a new non-empty set G for the newly obtained problem. The process can thus be continued as long as G ≠ ∅. Below we present one effective variant of the reduction of problem (2), (3). First we illustrate this variant on a numerical example.
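The elimination in Statement 2 can be sketched in code (my own illustration; the function name and the toy data are hypothetical). Substituting x_{j0} = (b_{j0} − Σ_{j≠j0} a_{j0 j} x_j)/a_{j0 j0} into Cx adds C_{j0}·(−a_{j0 j}/a_{j0 j0}) to each remaining column; these increments are non-negative because C_{j0} ≥ 0 and a_{j0 j} ≤ 0 for j ≠ j0:

```python
import numpy as np

def eliminate_from_criteria(A, b, C, j0):
    """Substitute x_{j0} from the equality (Ax)_{j0} = b_{j0} into the
    criteria Cx.  Returns the new criteria matrix (column j0 zeroed) and the
    constant added to each criterion."""
    subst = -A[j0] / A[j0, j0]      # coefficients of the other variables in x_{j0}
    subst[j0] = 0.0
    C_new = C + np.outer(C[:, j0], subst)
    const = C[:, j0] * (b[j0] / A[j0, j0])
    C_new[:, j0] = 0.0
    return C_new, const

# Toy M-matrix data; the column of x_2 (index 1) in C is (2, 1) >= 0.
A = np.array([[4.0, -1.0], [-1.0, 4.0]])
b = np.array([2.0, 2.0])
C = np.array([[-3.0, 2.0], [1.0, 1.0]])
C_new, const = eliminate_from_criteria(A, b, C, 1)
print(C_new)   # column of x_1 grew from (-3, 1) to (-2.5, 1.25): increments >= 0
```

The surviving column only increases, which is exactly why new non-negative columns can appear after an elimination step.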
4 Numerical example

We consider problem (2), (3) with the following numerical data:

9x_1 − x_2 − x_3 ≤ 3,
−x_1 + 9x_2 − x_3 ≤ 3,
−x_1 − x_2 + 9x_3 ≤ 3,
−x_1 − x_2 − x_3 + 6x_4 − x_5 − x_6 ≤ 3,
−x_1 − x_2 − x_3 − x_4 + 6x_5 − x_6 ≤ 3,
−x_1 − x_2 − x_3 − x_4 − x_5 + 6x_6 ≤ 3,        (4)
−x_1 − x_2 − x_3 − x_4 − x_5 − x_6 + 3x_7 − x_8 − x_9 ≤ 3,
−x_1 − x_2 − x_3 − x_4 − x_5 − x_6 − x_7 + 3x_8 − x_9 ≤ 3,
−x_1 − x_2 − x_3 − x_4 − x_5 − x_6 − x_7 − x_8 + 3x_9 ≤ 3,
x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0, x_4 ≥ 0, x_5 ≥ 0, x_6 ≥ 0, x_7 ≥ 0, x_8 ≥ 0, x_9 ≥ 0,

0.65x_1 − 1.5x_2 − 2.7x_3 − 6.5x_4 + x_5 − 1.5x_6 − 4x_7 − x_8 + 6x_9 → max,        (5)
−x_1 − x_2 − x_3 − x_4 − 7x_5 − x_6 − 2x_7 + x_8 + x_9 → max.

The problem (4), (5) has three diagonal blocks. We reduce it block by block, beginning from the last one.

First step. We take the last three columns of the coefficient matrix of the criterion functions Cx and check whether there is among them a column with non-negative coordinates. The last column of C, i.e. the coefficients of the variable x_9, has this property, so G = {9} ≠ ∅. We eliminate x_9 by using the equation (Ax)_9 = b_9. As a result, the new coefficients of the variable x_8 in the criterion functions become non-negative, i.e. the set G extends to G = {8, 9}. Now we eliminate the variables x_8 and x_9 simultaneously from the criterion functions. This time, however, no new non-negative column appears among the coefficients of the remaining variable of the third block, which in our case is only x_7. Therefore we exclude the variables x_8, x_9 from the condition (Ax)_7 ≤ b_7 and remember it. After the elimination of x_8, x_9 we come to the following problem with two diagonal blocks Ā_1, Ā_2:

9x_1 − x_2 − x_3 ≤ 3,
−x_1 + 9x_2 − x_3 ≤ 3,
−x_1 − x_2 + 9x_3 ≤ 3,
−x_1 − x_2 − x_3 + 6x_4 − x_5 − x_6 ≤ 3,        (6)
−x_1 − x_2 − x_3 − x_4 + 6x_5 − x_6 ≤ 3,
−x_1 − x_2 − x_3 − x_4 − x_5 + 6x_6 ≤ 3,
x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0, x_4 ≥ 0, x_5 ≥ 0, x_6 ≥ 0,

3.15x_1 + x_2 − 0.2x_3 − 4x_4 + 3.5x_5 + x_6 − 1.5x_7 + 7.5 → max,        (7)
0·x_1 + 0·x_2 + 0·x_3 + 0·x_4 − 6x_5 + 0·x_6 − x_7 + 3 → max.

The new problem (6), (7) preserves the initial structure of problem (4), (5).

Second step. We now carry out the reduction for the second block Ā_2, following the rule of the first step. We have G = {6}. We eliminate the variable x_6 in (7) by means of the equation (Ax)_6 = b_6. After the elimination of x_6, however, no new non-negative columns appear among the coefficients of x_4, x_5. Therefore we exclude the variable x_6 from the conditions (Ax)_4 ≤ b_4, (Ax)_5 ≤ b_5 and remember them.
We come to the problem:

9x_1 − x_2 − x_3 ≤ 3,
−x_1 + 9x_2 − x_3 ≤ 3,        (8)
−x_1 − x_2 + 9x_3 ≤ 3,
x_1 ≥ 0, x_2 ≥ 0, x_3 ≥ 0,

(199/60)x_1 + (7/6)x_2 − (1/30)x_3 − (23/6)x_4 + (11/3)x_5 − (3/2)x_7 + 8 → max,        (9)
0·x_1 + 0·x_2 + 0·x_3 + 0·x_4 − 6x_5 − x_7 + 3 → max.
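The coefficient arithmetic of the first step can be checked numerically. The sketch below is my own illustration, taking the coefficients of (5) as read above; it substitutes x_9 = 1 + (x_1 + ... + x_8)/3 from (Ax)_9 = b_9, confirms that the x_8 column becomes non-negative, and then performs the simultaneous elimination x_8 = x_9 = (3 + x_1 + ... + x_6 + x_7)/2:

```python
import numpy as np

# Criteria matrix of (5): rows are the two criterion functions, columns x_1..x_9.
C = np.array([
    [0.65, -1.5, -2.7, -6.5, 1.0, -1.5, -4.0, -1.0, 6.0],
    [-1.0, -1.0, -1.0, -1.0, -7.0, -1.0, -2.0, 1.0, 1.0],
])

# Row 9 of (4): -x_1 - ... - x_8 + 3 x_9 = 3, i.e. x_9 = 1 + (x_1+...+x_8)/3.
# Substituting adds C[:, 8] / 3 to each of the remaining eight columns.
C1 = C[:, :8] + np.outer(C[:, 8], np.full(8, 1 / 3))
print(C1[:, 7])           # x_8 column becomes (1, 4/3) >= 0, so G = {8, 9}

# Rows 8 and 9 of (4) together give x_8 = x_9 = (3 + x_1+...+x_6 + x_7)/2, so
# columns x_1..x_7 gain (C[:,7] + C[:,8])/2 and the constant 3*(C[:,7]+C[:,8])/2.
g = (C[:, 7] + C[:, 8]) / 2
C2 = C[:, :7] + np.outer(g, np.ones(7))
print(np.round(C2, 2))    # coefficients of the criteria of (7)
print(3 * g)              # added constants: [7.5, 3.0]
```

The printed coefficients reproduce (7), and no new non-negative column appears among x_1..x_7 columns of the second criterion row together with the first, which is why the first step stops here.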
Third step. G = {2}. We eliminate x_2 from (9) by using the equation −x_1 + 9x_2 − x_3 = 3, after which the column of new coefficients of x_3 becomes non-negative. We then eliminate x_2 and x_3 simultaneously from the criterion functions; as a result the column of coefficients of x_1 also becomes non-negative. Finally we eliminate the variables x_1, x_2, x_3 from the criterion functions by setting x_1 = x_2 = x_3 = 3/7. In this case there is no condition to be remembered. After the three steps, problem (4), (5) takes the form:

35x_4 − 7x_5 ≤ 30,
−7x_4 + 35x_5 ≤ 30,
−7x_4 − 7x_5 + 6x_7 ≤ 30,
x_4 ≥ 0, x_5 ≥ 0, x_7 ≥ 0,

−(23/6)x_4 + (11/3)x_5 − (3/2)x_7 + 1387/140 → max,
−6x_5 − x_7 + 3 → max.

The constraints of the new problem are formed from the remembered conditions of (4), (5) after the elimination of all the variables x_1, x_2, x_3, x_6, x_8, x_9.

Remark: If the reduction process were begun not from the last diagonal block but from some other block, the reduction of problem (4), (5) would not be as effective as above. None of the variables subject to elimination when we begin with the last block is present in the higher blocks. Therefore we have the opportunity to process the initial data in parts, which is very important when reducing large-scale problems.

5 Description of the problem reduction

Step 1. We take the last diagonal block of problem (2), (3) and set up the problem:

Ā_m x_m ≤ b_m, x_m ≥ 0,        (10)
C_m x_m → max.        (11)

We reduce problem (10), (11) according to the scheme from Hamidov et al. (2017):
a) define the set G = { i | C_m^i ≥ 0, i = 1, ..., n }, where C_m^i is the i-th column of C_m;
b) eliminate all variables x_m^i, i ∈ G, from the criterion functions, then check whether non-negative columns appear among the newly obtained columns of the criterion functions. If so, eliminate all variables corresponding to these columns as well.
If not, we exclude from problem (2), (3) the conditions (Ax)_i ≤ b_i whose equations were used to eliminate variables from the criterion functions. We also remember the other constraints of problem (2), (3) containing rows of the block Ā_m after the elimination of the variables. As a result we obtain a problem of the form (2), (3): all constraint blocks of the new problem remain as in the initial one, and only the matrix of the criterion functions changes. If an element of the new criteria matrix differs from the corresponding element of the former one, then, by Statement 2, the new value is strictly larger than the old one. Thus the chance that new non-negative columns appear in the matrix of criterion functions increases. This cannot be said if the reduction process is begun with the first block Ā_1.

Step 2. We repeat Step 1 for the newly obtained problem.

Step 3. The process comes to an end when G = ∅.
6 Reduction of a linear multicriteria problem in dynamics

Let L_2^n[0, T] be the space of measurable n-dimensional vector-functions integrable with a square, and let C^i(t), i = 1, ..., l, b(t), x(t) ∈ L_2^n[0, T], where C^i(t), i = 1, ..., l, and b(t) are given and x(t) is unknown. Let A(t), H(t, τ) be matrix functions of dimension (n × n) such that all components of A(t), H(t, τ) and of the vector b(t) are non-negative. Put

F[x(t)] = A(t) x(t) + ∫_0^t H(t, τ) x(τ) dτ,

so that the operator F maps L_2^n[0, T] into itself. Let X be the set of all feasible solutions of the system

x(t) − F[x(t)] ≤ b(t), x(t) ≥ 0, 0 ≤ t ≤ T,        (12)

where all equalities and inequalities hold for almost all t ∈ [0, T]. The following problem is considered:

J = (J_1, ..., J_k) = ( ∫_0^T C^1(t) x(t) dt, ..., ∫_0^T C^k(t) x(t) dt ) → max,        (13)

where x(t) ∈ X. Problems such as (13) arise, for example, in the optimization of oil production in the elastic mode with maximization of the full profit (Meerov, 1986). Following Meerov (1986) we approximate problem (13) and obtain:

Σ_{i=1}^{N} C_i^j x^i → max, j = 1, ..., k,

x^i − A_i x^i − h Σ_{j=1}^{i−1} H_ij x^j ≤ b_i, x^i ≥ 0, i = 1, ..., N,        (14)

where N = 2^m, h = T/N, b_i = min{ b(t) }, C_i^j = min{ C^j(t) }, A_i = min{ A(t) } for (i − 1)h ≤ t ≤ ih, and H_ij = min{ H(t, τ) } for (i − 1)h ≤ t ≤ ih, (j − 1)h ≤ τ ≤ jh. For a measurable function f(t) the minimum is understood as the exact upper bound of all numbers µ for which µ ≤ f(t) for almost all t, i.e. the essential infimum. In Meerov (1986) it is proved that the optimal solution of (14) for j = 1 tends to an optimal solution of (12), (13) in the following sense. Let x^m = (x^m_1, ..., x^m_N) be an optimal solution of (14) and x^m(t) the step function with value x^m_i for (i − 1)h ≤ t ≤ ih. Then x^m(t) tends monotonically to an optimal solution of (12), (13) in the norm of L_2. A sufficient condition for the existence of an optimal solution of (12), (13) when l = 1 is the existence of (E − A)^{-1} ≥ 0 (Meerov, 1986).
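The construction of the coefficients of (14) can be sketched as follows (my own illustration with made-up scalar data; the essential infima over each subinterval are approximated here by minima on a fine grid):

```python
import numpy as np

T, m = 1.0, 3
N = 2 ** m                 # N = 2^m subintervals
h = T / N                  # step h = T/N

b_fun = lambda t: 2.0 + np.sin(t)    # hypothetical given data b(t) >= 0
a_fun = lambda t: 0.5 + 0.1 * t      # hypothetical scalar A(t) >= 0

# b_i = min of b(t) over [(i-1)h, ih] (0-based i in code); likewise A_i.
grid = lambda i: np.linspace(i * h, (i + 1) * h, 65)
b_i = np.array([b_fun(grid(i)).min() for i in range(N)])
A_i = np.array([a_fun(grid(i)).min() for i in range(N)])
print(N, h)        # 8 0.125
print(b_i[0])      # 2.0, since sin is increasing near 0 and the minimum is at t = 0
```

Replacing the data on each subinterval by its infimum keeps the approximate feasible set inside X, which is what makes the convergence statement from Meerov (1986) applicable.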
We assume that the condition (E − A)^{-1} ≥ 0 holds. Then the set Y ⊂ R^k of all vector estimates of (12), (13) is bounded, closed and convex, and the following statement follows from S. Karlin's theorem (see Karlin & Gillis, 1960).

Statement 3. The set of efficient points of (14) approximates the set of all efficient points of (12), (13), and the accuracy of the approximation depends on the number N.

It follows from Statement 3 that the suggested reduction algorithm for (2), (3) is also a reduction algorithm for (12), (13).

7 Conclusion

In this paper we considered a large-scale multicriteria linear programming problem with a block-triangular constraint matrix. We developed a dimension reduction procedure that uses the special structure of the constraints. The result is a problem with the same structure as the original one but with fewer variables and conditions. We also showed the applicability of the reduction algorithm to a linear dynamic multicriteria decision-making problem. The algorithm is illustrated on a numerical example.
References

Belen'kii, V.Z. (1968). Problems of mathematical programming which have a minimal point. Doklady Akademii Nauk, 183(1) (in Russian).

Gal, T., Stewart, T. & Hanne, T. (Eds.). (2013). Multicriteria Decision Making: Advances in MCDM Models, Algorithms, Theory, and Applications (Vol. 21). Springer Science & Business Media.

Hamidov, R., Mutallimov, M. & Amirova, L. (2017). Application of reduction in one problem of decision-making. NAUPRI, 7, 6-12 (in Russian).

Karlin, S. & Gillis, J. (1960). Mathematical Methods and Theory in Games, Programming, and Economics. Physics Today, 13, 54.

Meerov, M.V. (1986). Research and Optimization of Multiply Connected Control Systems. Nauka, Moscow (in Russian).

Podinovsky, V.V. & Nogin, V.D. (1982). Pareto-Optimal Solutions of Multicriteria Problems. Nauka, Moscow (in Russian).
Lesson Linear Programming; The Simplex Method Math 0 April 9, 006 Setup A standard linear programming problem is to maximize the quantity c x + c x +... c n x n = c T x subject to constraints a x + a x
More informationIterative Methods for Solving A x = b
Iterative Methods for Solving A x = b A good (free) online source for iterative methods for solving A x = b is given in the description of a set of iterative solvers called templates found at netlib: http
More informationReduction of the Pareto Set in Multicriteria Economic Problem with CES Functions
Reduction of the Pareto Set in Multicriteria Economic Problem with CES Functions arxiv:1805.10500v1 [math.oc] 26 May 2018 Abstract A multicriteria economic problem is considered: the basic production assets
More informationANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3
ANALYTICAL MATHEMATICS FOR APPLICATIONS 2018 LECTURE NOTES 3 ISSUED 24 FEBRUARY 2018 1 Gaussian elimination Let A be an (m n)-matrix Consider the following row operations on A (1) Swap the positions any
More informationON CALCULATING THE VALUE OF A DIFFERENTIAL GAME IN THE CLASS OF COUNTER STRATEGIES 1,2
URAL MATHEMATICAL JOURNAL, Vol. 2, No. 1, 2016 ON CALCULATING THE VALUE OF A DIFFERENTIAL GAME IN THE CLASS OF COUNTER STRATEGIES 1,2 Mikhail I. Gomoyunov Krasovskii Institute of Mathematics and Mechanics,
More informationVector Spaces. Addition : R n R n R n Scalar multiplication : R R n R n.
Vector Spaces Definition: The usual addition and scalar multiplication of n-tuples x = (x 1,..., x n ) R n (also called vectors) are the addition and scalar multiplication operations defined component-wise:
More informationVariants of Simplex Method
Variants of Simplex Method All the examples we have used in the previous chapter to illustrate simple algorithm have the following common form of constraints; i.e. a i x + a i x + + a in x n b i, i =,,,m
More informationAn introductory example
CS1 Lecture 9 An introductory example Suppose that a company that produces three products wishes to decide the level of production of each so as to maximize profits. Let x 1 be the amount of Product 1
More informationMatrices: 2.1 Operations with Matrices
Goals In this chapter and section we study matrix operations: Define matrix addition Define multiplication of matrix by a scalar, to be called scalar multiplication. Define multiplication of two matrices,
More informationDuality in LPP Every LPP called the primal is associated with another LPP called dual. Either of the problems is primal with the other one as dual. The optimal solution of either problem reveals the information
More informationFirst Welfare Theorem
First Welfare Theorem Econ 2100 Fall 2017 Lecture 17, October 31 Outline 1 First Welfare Theorem 2 Preliminaries to Second Welfare Theorem Past Definitions A feasible allocation (ˆx, ŷ) is Pareto optimal
More informationMAT016: Optimization
MAT016: Optimization M.El Ghami e-mail: melghami@ii.uib.no URL: http://www.ii.uib.no/ melghami/ March 29, 2011 Outline for today The Simplex method in matrix notation Managing a production facility The
More informationLecture 15 Newton Method and Self-Concordance. October 23, 2008
Newton Method and Self-Concordance October 23, 2008 Outline Lecture 15 Self-concordance Notion Self-concordant Functions Operations Preserving Self-concordance Properties of Self-concordant Functions Implications
More informationEE364a Review Session 5
EE364a Review Session 5 EE364a Review announcements: homeworks 1 and 2 graded homework 4 solutions (check solution to additional problem 1) scpd phone-in office hours: tuesdays 6-7pm (650-723-1156) 1 Complementary
More informationWeek 4: Calculus and Optimization (Jehle and Reny, Chapter A2)
Week 4: Calculus and Optimization (Jehle and Reny, Chapter A2) Tsun-Feng Chiang *School of Economics, Henan University, Kaifeng, China September 27, 2015 Microeconomic Theory Week 4: Calculus and Optimization
More informationDuality. Geoff Gordon & Ryan Tibshirani Optimization /
Duality Geoff Gordon & Ryan Tibshirani Optimization 10-725 / 36-725 1 Duality in linear programs Suppose we want to find lower bound on the optimal value in our convex problem, B min x C f(x) E.g., consider
More informationOptimization of Linear Systems of Constrained Configuration
Optimization of Linear Systems of Constrained Configuration Antony Jameson 1 October 1968 1 Abstract For the sake of simplicity it is often desirable to restrict the number of feedbacks in a controller.
More informationLemma 8: Suppose the N by N matrix A has the following block upper triangular form:
17 4 Determinants and the Inverse of a Square Matrix In this section, we are going to use our knowledge of determinants and their properties to derive an explicit formula for the inverse of a square matrix
More informationLinear & nonlinear classifiers
Linear & nonlinear classifiers Machine Learning Hamid Beigy Sharif University of Technology Fall 1396 Hamid Beigy (Sharif University of Technology) Linear & nonlinear classifiers Fall 1396 1 / 44 Table
More informationChapter 2: Linear Programming Basics. (Bertsimas & Tsitsiklis, Chapter 1)
Chapter 2: Linear Programming Basics (Bertsimas & Tsitsiklis, Chapter 1) 33 Example of a Linear Program Remarks. minimize 2x 1 x 2 + 4x 3 subject to x 1 + x 2 + x 4 2 3x 2 x 3 = 5 x 3 + x 4 3 x 1 0 x 3
More informationAM 205: lecture 14. Last time: Boundary value problems Today: Numerical solution of PDEs
AM 205: lecture 14 Last time: Boundary value problems Today: Numerical solution of PDEs ODE BVPs A more general approach is to formulate a coupled system of equations for the BVP based on a finite difference
More information0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture. is any organization, large or small.
0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture Operations Research is the branch of science dealing with techniques for optimizing the performance of systems. System is any organization,
More informationwhere u is the decision-maker s payoff function over her actions and S is the set of her feasible actions.
Seminars on Mathematics for Economics and Finance Topic 3: Optimization - interior optima 1 Session: 11-12 Aug 2015 (Thu/Fri) 10:00am 1:00pm I. Optimization: introduction Decision-makers (e.g. consumers,
More informationPrimal/Dual Decomposition Methods
Primal/Dual Decomposition Methods Daniel P. Palomar Hong Kong University of Science and Technology (HKUST) ELEC5470 - Convex Optimization Fall 2018-19, HKUST, Hong Kong Outline of Lecture Subgradients
More informationCopositive Plus Matrices
Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their
More informationNext topics: Solving systems of linear equations
Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:
More informationOPTIMAL CONTROL PROBLEM DESCRIBING BY THE CAUCHY PROBLEM FOR THE FIRST ORDER LINEAR HYPERBOLIC SYSTEM WITH TWO INDEPENDENT VARIABLES
TWMS J. Pure Appl. Math., V.6, N.1, 215, pp.1-11 OPTIMAL CONTROL PROBLEM DESCRIBING BY THE CAUCHY PROBLEM FOR THE FIRST ORDER LINEAR HYPERBOLIC SYSTEM WITH TWO INDEPENDENT VARIABLES K.K. HASANOV 1, T.S.
More informationCSC Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming
CSC2411 - Linear Programming and Combinatorial Optimization Lecture 10: Semidefinite Programming Notes taken by Mike Jamieson March 28, 2005 Summary: In this lecture, we introduce semidefinite programming
More informationLS.1 Review of Linear Algebra
LS. LINEAR SYSTEMS LS.1 Review of Linear Algebra In these notes, we will investigate a way of handling a linear system of ODE s directly, instead of using elimination to reduce it to a single higher-order
More informationSimplex Algorithm Using Canonical Tableaus
41 Simplex Algorithm Using Canonical Tableaus Consider LP in standard form: Min z = cx + α subject to Ax = b where A m n has rank m and α is a constant In tableau form we record it as below Original Tableau
More informationLinear Equations and Matrix
1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear
More information6. Iterative Methods for Linear Systems. The stepwise approach to the solution...
6 Iterative Methods for Linear Systems The stepwise approach to the solution Miriam Mehl: 6 Iterative Methods for Linear Systems The stepwise approach to the solution, January 18, 2013 1 61 Large Sparse
More informationCalculation in the special cases n = 2 and n = 3:
9. The determinant The determinant is a function (with real numbers as values) which is defined for quadratic matrices. It allows to make conclusions about the rank and appears in diverse theorems and
More informationOPERATIONS RESEARCH. Linear Programming Problem
OPERATIONS RESEARCH Chapter 1 Linear Programming Problem Prof. Bibhas C. Giri Department of Mathematics Jadavpur University Kolkata, India Email: bcgiri.jumath@gmail.com MODULE - 2: Simplex Method for
More informationSection Notes 9. IP: Cutting Planes. Applied Math 121. Week of April 12, 2010
Section Notes 9 IP: Cutting Planes Applied Math 121 Week of April 12, 2010 Goals for the week understand what a strong formulations is. be familiar with the cutting planes algorithm and the types of cuts
More informationThe use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis:
Sensitivity analysis The use of shadow price is an example of sensitivity analysis. Duality theory can be applied to do other kind of sensitivity analysis: Changing the coefficient of a nonbasic variable
More informationMath 240 Calculus III
The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A
More informationCHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationChapter 1 Vector Spaces
Chapter 1 Vector Spaces Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 110 Linear Algebra Vector Spaces Definition A vector space V over a field
More informationChap6 Duality Theory and Sensitivity Analysis
Chap6 Duality Theory and Sensitivity Analysis The rationale of duality theory Max 4x 1 + x 2 + 5x 3 + 3x 4 S.T. x 1 x 2 x 3 + 3x 4 1 5x 1 + x 2 + 3x 3 + 8x 4 55 x 1 + 2x 2 + 3x 3 5x 4 3 x 1 ~x 4 0 If we
More informationYORK UNIVERSITY. Faculty of Science Department of Mathematics and Statistics MATH M Test #1. July 11, 2013 Solutions
YORK UNIVERSITY Faculty of Science Department of Mathematics and Statistics MATH 222 3. M Test # July, 23 Solutions. For each statement indicate whether it is always TRUE or sometimes FALSE. Note: For
More informationA strongly polynomial algorithm for linear systems having a binary solution
A strongly polynomial algorithm for linear systems having a binary solution Sergei Chubanov Institute of Information Systems at the University of Siegen, Germany e-mail: sergei.chubanov@uni-siegen.de 7th
More information