PROGRAMMING UNDER PROBABILISTIC CONSTRAINTS WITH A RANDOM TECHNOLOGY MATRIX
Math. Operationsforsch. u. Statist. 5 (1974), Heft 2.

András Prékopa
Technological University of Budapest and Computer and Automation Institute of the Hungarian Academy of Sciences, H-1502 Budapest XI., Kende utca 13-17, Hungary

Eingereicht bei der Redaktion:

Abstract. A probabilistic constraint of the type $P(Ax \le \beta) \ge p$ is considered, and it is proved that under some conditions the constraining function is quasi-concave. The probabilistic constraint is embedded into a mathematical programming problem whose algorithmic solution is also discussed.

1 Introduction

We consider the following function of the variable $x$:
$$G(x) = P(Ax \le \beta), \qquad (1.1)$$
where $A$ is an $m \times n$ matrix, $x$ is an $n$-component vector, $\beta$ is an $m$-component vector, and it is supposed that the entries of $A$ as well as the components of $\beta$ are random variables. The function (1.1) will be used as a constraining function in a nonlinear programming problem which we formulate below:
$$\begin{array}{l} G(x) \ge p, \\ G_i(x) \ge 0, \quad i = 1,\ldots,N, \\ a_i'x \ge b_i, \quad i = 1,\ldots,M, \\ \min f(x). \end{array} \qquad (1.2)$$
Here we suppose that $a_1,\ldots,a_M$ are constant $n$-component vectors, $b_1,\ldots,b_M$ are constants, the functions $G_1,\ldots,G_N$ are quasi-concave in $R^n$, and $f(x)$ is convex in $R^n$. Many practical problems lead to the stochastic programming decision problem (1.2). Among
them we mention the nutrition problem, in which case the nutrient contents of the foods are random, and the ore buying problem, in which case the iron contents of the ores are random; in both cases (1.2) is a reasonable decision principle.

Regarding the joint distribution of the entries of $A$ and the components of $\beta$, two special cases will be considered. In the first one we suppose that this distribution is normal, satisfying further restrictive conditions, while in the second one the lognormal or some more general distribution is involved. The reason why we deal with these conditions is of a mathematical nature: in these cases we can prove the quasi-concavity of the constraining function in the probabilistic constraint.

In Section 2 we prove three theorems and in Section 3 the algorithmic solution of Problem (1.2) is discussed. The theorems of Section 2 are based on the following theorems of the author, proved in [2] resp. [3].

Theorem 1. Let $f(x)$ be a probability density function in the $n$-dimensional Euclidean space $R^n$ and suppose that it is logarithmic concave, i.e. it has the form
$$f(x) = e^{-Q(x)}, \quad x \in R^n, \qquad (1.3)$$
where $Q(x)$ is a convex function in the entire space. Let further $A$, $B$ be two convex subsets of $R^n$ and $0 < \lambda < 1$. Denoting by $P(C)$ the integral of $f(x)$ over the measurable subset $C$ of $R^n$, we have
$$P(\lambda A + (1-\lambda)B) \ge [P(A)]^{\lambda}[P(B)]^{1-\lambda}. \qquad (1.4)$$

The constant multiple of a set and the Minkowski sum of two sets are defined by $kC = \{kc \mid c \in C\}$ resp. $C + D = \{c + d \mid c \in C,\ d \in D\}$, where $k$ is a real number. Theorem 1 implies that if $A$ is a convex subset of $R^n$ then $P(A + x)$ is a logarithmic concave function in the entire space. If we take in particular $A = \{t \mid t \le 0\}$, then we see that the probability distribution function $F(x)$ belonging to the density function $f(x)$ is logarithmic concave in the entire space, since $P(A + x) = F(x)$. A probability measure $P$ in $R^n$ is said to be logarithmic concave if for every pair $A$, $B$ of convex subsets of $R^n$ and for every $0 < \lambda < 1$ the inequality (1.4) is satisfied (see [2]).
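Inequality (1.4) is easy to probe numerically. The following sketch (not part of the original paper; the interval endpoints are arbitrary illustration choices) checks it for the standard normal measure on the real line, where the Minkowski combination $\lambda A + (1-\lambda)B$ of two intervals is again an interval with endpoint-wise combined limits:

```python
# Numerical check of the log-concavity inequality (1.4),
#   P(lambda*A + (1-lambda)*B) >= P(A)^lambda * P(B)^(1-lambda),
# for the standard normal measure on R and two intervals A, B.
from scipy.stats import norm

def prob(interval):
    """P(a <= xi <= b) for xi ~ N(0,1)."""
    a, b = interval
    return norm.cdf(b) - norm.cdf(a)

A = (-1.0, 0.5)   # arbitrary convex sets (intervals) on the line
B = (0.0, 2.0)

for lam in [0.1, 0.3, 0.5, 0.7, 0.9]:
    # Minkowski combination of scaled intervals: endpoint-wise combination
    C = (lam * A[0] + (1 - lam) * B[0], lam * A[1] + (1 - lam) * B[1])
    lhs = prob(C)
    rhs = prob(A) ** lam * prob(B) ** (1 - lam)
    assert lhs >= rhs - 1e-12
    print(f"lambda={lam:.1f}: P(C)={lhs:.4f} >= P(A)^l P(B)^(1-l)={rhs:.4f}")
```

Since the normal density is of the form (1.3), Theorem 1 guarantees the inequality for every $\lambda$; the assertions merely confirm it on these sample values.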
Theorem 2. Let $g_i(x, y)$, $i = 1,\ldots,r$ be concave functions in $R^{n+m}$, where $x$ is an $n$-component and $y$ is an $m$-component vector. Let $\xi$ be an $m$-component random vector having a logarithmic concave probability distribution. Then the function of the variable $x$
$$P(g_i(x, \xi) \ge 0,\ i = 1,\ldots,r) \qquad (1.5)$$
is logarithmic concave in the space $R^n$.

The formulation of Theorem 2 is slightly different from that given in [3]. Its proof is the same as the proof of the original version; only a trivial modification is necessary.

A function $F(x)$ defined on a convex set $K$ is said to be quasi-concave if for every pair $x_1, x_2 \in K$ we have
$$F(\lambda x_1 + (1-\lambda)x_2) \ge \min(F(x_1), F(x_2)), \quad 0 < \lambda < 1.$$
It is easy to see that a necessary and sufficient condition for this is that the set $\{x \mid x \in K,\ F(x) \ge b\}$ is convex for every real $b$.
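Theorem 2 can be made concrete in its simplest instance (an illustrative sketch, not from the paper): with $r = 1$ and $g(x, \xi) = x - \xi$, which is concave (indeed linear) in $(x, \xi)$, and $\xi$ standard normal — a log-concave distribution — the function (1.5) equals $P(\xi \le x) = \Phi(x)$, so $\log\Phi$ must be concave:

```python
# Theorem 2 in the simplest instance: g(x, xi) = x - xi is concave and
# xi ~ N(0,1) is log-concave, so x |-> P(g(x, xi) >= 0) = Phi(x) must be
# log-concave.  Verify midpoint concavity of log Phi on a uniform grid.
import numpy as np
from scipy.stats import norm

xs = np.linspace(-5.0, 5.0, 201)
logF = norm.logcdf(xs)          # log Phi(x), computed stably

# Concavity: log F((x1 + x2)/2) >= (log F(x1) + log F(x2)) / 2.
# On a uniform grid the midpoint of xs[k-1], xs[k+1] is xs[k].
mid = norm.logcdf((xs[:-2] + xs[2:]) / 2)
assert np.all(mid >= (logF[:-2] + logF[2:]) / 2 - 1e-12)
print("log Phi passed the midpoint concavity check on", len(xs), "grid points")
```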
2 Quasi-concavity theorems concerning the function (1.1)

Let $\xi_1,\ldots,\xi_n$ denote the columns of the matrix $A$ and introduce the notation $\xi_{n+1} = -\beta$.

Theorem 3. Suppose that the altogether $m(n+1)$ components of the random vectors $\xi_1,\ldots,\xi_{n+1}$ have a joint normal distribution, where the cross-covariance matrices of $\xi_i$ and $\xi_j$ are constant multiples of a fixed covariance matrix $C$, i.e.
$$E[(\xi_i - \mu_i)(\xi_j - \mu_j)'] = s_{ij}C, \quad i,j = 1,\ldots,n+1, \qquad (2.1)$$
where
$$\mu_i = E(\xi_i), \quad i = 1,\ldots,n+1. \qquad (2.2)$$
Then the set of $x$ vectors satisfying
$$P(Ax \le \beta) \ge p \qquad (2.3)$$
is convex, provided $p \ge 1/2$.

Proof. Consider the covariance matrix of $Ax - \beta = \sum_{i=1}^{n+1}\xi_i x_i$ (with $x_{n+1} = 1$), which is equal to the following:
$$E\left[\left(\sum_{i=1}^{n+1}(\xi_i - \mu_i)x_i\right)\left(\sum_{i=1}^{n+1}(\xi_i - \mu_i)x_i\right)'\right] = \sum_{i,j=1}^{n+1} s_{ij}x_ix_j\,C = (z'Sz)C, \qquad (2.4)$$
where $z = (x', 1)'$ and $S$ is the matrix with entries $s_{ij}$. If all elements of $C$ are zero, then the vectors $\xi_1,\ldots,\xi_{n+1}$ are constants with probability 1 and the statement of the theorem holds trivially. If there is at least one nonzero element in $C$, then there is at least one positive element in its main diagonal. This implies that $z'Sz \ge 0$ for every $z$, since (2.4) is a covariance matrix, hence all elements in its main diagonal are nonnegative. Thus $S$ is a positive semidefinite matrix. We may suppose that both $C$ and $S$ are positive definite matrices, because otherwise we can use a perturbation of the type $C + \varepsilon I_m$, $S + \varepsilon I_{n+1}$, where $I_m$ and $I_{n+1}$ are unit matrices of the sizes $m \times m$ and $(n+1) \times (n+1)$, respectively, prove the theorem for these positive definite matrices and then take the limit $\varepsilon \to 0$. Let $x_{n+1} = 1$ everywhere in the sequel. We have
$$P(Ax \le \beta) = P\left(\frac{\sum_{i=1}^{n+1}(\xi_i - \mu_i)x_i}{(z'Sz)^{1/2}} \le \frac{-\sum_{i=1}^{n+1}\mu_i x_i}{(z'Sz)^{1/2}}\right). \qquad (2.5)$$
Introduce the notation
$$L(z) = -\sum_{i=1}^{n+1}\mu_i x_i$$
and denote by $L_i(z)$ the $i$-th component of $L(z)$ for $i = 1,\ldots,m$. If $c_{ik}$ denote the elements of $C$, then (2.5) can be rewritten as
$$P(Ax \le \beta) = \Phi\left(\frac{L_1(z)}{c_{11}^{1/2}(z'Sz)^{1/2}},\ldots,\frac{L_m(z)}{c_{mm}^{1/2}(z'Sz)^{1/2}};\ R\right), \qquad (2.6)$$
where $R$ is the correlation matrix belonging to the covariance matrix $C$. For every $i$ ($i = 1,\ldots,m$) we have
$$\Phi\left(\frac{L_i(z)}{c_{ii}^{1/2}(z'Sz)^{1/2}}\right) \ge \Phi\left(\frac{L_1(z)}{c_{11}^{1/2}(z'Sz)^{1/2}},\ldots,\frac{L_m(z)}{c_{mm}^{1/2}(z'Sz)^{1/2}};\ R\right). \qquad (2.7)$$
Thus if the right hand side of (2.7) is greater than or equal to $p \ge 1/2$, then we conclude $L_i(z) \ge 0$, $i = 1,\ldots,m$. It is well-known that $(z'Sz)^{1/2}$ is a convex function in the entire space $R^{n+1}$. Hence it follows that for every $z_1$, $z_2$,
$$\frac{L\left(\frac{1}{2}z_1 + \frac{1}{2}z_2\right)}{\left[\left(\frac{1}{2}z_1 + \frac{1}{2}z_2\right)'S\left(\frac{1}{2}z_1 + \frac{1}{2}z_2\right)\right]^{1/2}} \ge \lambda\,\frac{L(z_1)}{(z_1'Sz_1)^{1/2}} + (1-\lambda)\,\frac{L(z_2)}{(z_2'Sz_2)^{1/2}}, \qquad (2.8)$$
where
$$\lambda = \frac{(z_1'Sz_1)^{1/2}}{(z_1'Sz_1)^{1/2} + (z_2'Sz_2)^{1/2}}.$$
The probability density function of any nondegenerate normal distribution is logarithmic concave in the entire space. Hence it follows that $\log\Phi\left(v_1c_{11}^{-1/2},\ldots,v_mc_{mm}^{-1/2};\ R\right)$ is concave in the variables $v_1,\ldots,v_m$. Thus if the value of the function (2.6) is greater than or equal to $p \ge 1/2$ for $z = z_1$ and also for $z = z_2$, then the value of (2.6) will be greater than or equal to $p$ at the point determined by the right hand side of (2.8). Since $\Phi$ is monotonically increasing in all variables, it follows from (2.8) that (2.6) is greater than or equal to $p$ at the point determined by the left hand side of (2.8). This proves the theorem.

Theorem 4. Let $A_i$ denote the $i$-th row of $A$ and $\beta_i$ the $i$-th component of $\beta$, $i = 1,\ldots,m$. Suppose that the random row vectors of $n+1$ components
$$(A_i,\ -\beta_i), \quad i = 1,\ldots,m \qquad (2.9)$$
are independent and normally distributed, and their covariance matrices are constant multiples of a fixed covariance matrix $C$. Then the set of $x$ vectors on which the function (1.1) is greater than or equal to $p$ is convex, provided $p \ge 1/2$.
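The closed form (2.6) derived in the proof of Theorem 3 lends itself to a numerical check of the convexity statement. Below is a sketch (not from the paper; the matrices and means are made-up illustration values satisfying the hypotheses): $G(x)$ is evaluated as an $m$-variate normal probability with mean $\sum_i \mu_i x_i$ and covariance $(z'Sz)C$, which is equivalent to (2.6), and quasi-concavity is tested at a midpoint inside the region $\{x : G(x) \ge 1/2\}$:

```python
# Theorem 3 numerically: with z = (x, 1), the vector Ax - beta = sum_i xi_i x_i
# is N(sum_i mu_i x_i, (z'Sz) C), so G(x) = P(Ax <= beta) is an m-variate
# normal orthant probability, and {x : G(x) >= p} should be convex for p >= 1/2.
import numpy as np
from scipy.stats import multivariate_normal

m = 2
C = np.array([[1.0, 0.3], [0.3, 1.0]])          # fixed covariance factor (m x m)
S = np.array([[2.0, 0.5, 0.2],                  # (n+1) x (n+1), positive definite
              [0.5, 1.0, 0.1],
              [0.2, 0.1, 0.5]])
mu = [np.array([-1.0, -1.0]),                   # mu_i = E xi_i, i = 1, ..., n+1;
      np.array([-1.0, -1.0]),
      np.array([-2.0, -2.0])]                   # xi_{n+1} = -beta

def G(x):
    z = np.append(x, 1.0)
    mean = sum(mu_i * z_i for mu_i, z_i in zip(mu, z))
    cov = (z @ S @ z) * C
    return multivariate_normal(mean=mean, cov=cov).cdf(np.zeros(m))

x1, x2 = np.array([0.2, 0.2]), np.array([0.9, 0.1])
g1, g2 = G(x1), G(x2)
gm = G((x1 + x2) / 2)
assert min(g1, g2) >= 0.5            # both points lie in the p >= 1/2 region
assert gm >= min(g1, g2) - 1e-4      # quasi-concavity at the midpoint
print(f"G(x1)={g1:.4f}  G(x2)={g2:.4f}  G(mid)={gm:.4f}")
```

The midpoint inequality is exactly the convexity of the level set asserted by Theorem 3; the small tolerance absorbs the numerical error of the multivariate normal integration.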
Proof. We may suppose that $C$ is positive definite. We also suppose that the covariance matrix of (2.9) equals $s_i^2C$ with $s_i > 0$, $i = 1,\ldots,m$. Let $A_i^0$, $b_i$ be the expectations of $A_i$ and $\beta_i$, respectively, $i = 1,\ldots,m$, and introduce the notation
$$L_i(z) = -\frac{(A_i^0,\ -b_i)\,z}{s_i}, \quad i = 1,\ldots,m, \quad \text{where } z = (x_1,\ldots,x_n,1)'.$$
Then we have
$$P(Ax \le \beta) = P\left((A_i,\ -\beta_i)\,z \le 0,\ i = 1,\ldots,m\right) = P\left(\frac{\left[(A_i,\ -\beta_i) - (A_i^0,\ -b_i)\right]z}{s_i(z'Cz)^{1/2}} \le \frac{L_i(z)}{(z'Cz)^{1/2}},\ i = 1,\ldots,m\right)$$
$$= \Phi\left(\frac{L_1(z)}{(z'Cz)^{1/2}},\ldots,\frac{L_m(z)}{(z'Cz)^{1/2}};\ I_m\right) \ge p,$$
where $I_m$ is the $m \times m$ unit matrix. From here the proof goes in the same way as the proof of Theorem 3.

Let $a_{ij}$, $i = 1,\ldots,m$; $j = 1,\ldots,n$ denote the elements of the matrix $A$ and, for the sake of simplicity, suppose that the columns of $A$ are numbered so that those which contain random variables come first. Let $r$ be the number of these columns and introduce the following notations:

$J$ denotes the set of those ordered pairs $(i,j)$, $1 \le i \le m$, $1 \le j \le r$, for which $a_{ij}$ is a random variable;
$J_i$ denotes the set of those subscripts $j$ for which $(i,j) \in J$, where $1 \le i \le m$;
$K_i$ denotes the set $\{1,\ldots,r\} \setminus J_i$;
$L$ denotes the set of those subscripts $i$ for which $\beta_i$ is a random variable, where $1 \le i \le m$;
$T$ denotes the set $\{1,\ldots,m\} \setminus L$.

Now we prove the following

Theorem 5. Suppose that the random variables $a_{ij}$, $(i,j) \in J$ are positive with probability 1 and the constants $a_{ij}$, $1 \le i \le m$, $1 \le j \le r$, $(i,j) \notin J$ are nonnegative. Suppose further that the joint distribution of the random variables $\alpha_{ij}$, $(i,j) \in J$, $\beta_i$, $i \in L$ is a logarithmic concave probability distribution, where $\alpha_{ij} = \log a_{ij}$, $(i,j) \in J$. Under these conditions the function
$$h(x_1,\ldots,x_r,x_{r+1},\ldots,x_n) = G(e^{x_1},\ldots,e^{x_r},x_{r+1},\ldots,x_n) \qquad (2.10)$$
is logarithmic concave in the entire space $R^n$, where $G(x)$ is the function defined by (1.1).
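In the setting of Theorem 4 the correlation matrix in the closed form is $I_m$, so the $m$-variate distribution function factorizes into a product of univariate normal distribution functions and $G(x)$ needs no multivariate integration. A small sketch (the data are made-up illustration values; $L_i(z) = (b_i - A_i^0x)/s_i$ as in the proof of Theorem 4):

```python
# Theorem 4 setting: rows (A_i, -beta_i) independent with covariance s_i^2 C.
# Then G(x) = prod_i Phi(L_i(z) / sqrt(z'Cz)) with L_i(z) = (b_i - A_i^0 x)/s_i,
# a cheap product of univariate normal distribution functions.
import numpy as np
from scipy.stats import norm

A0 = np.array([[1.0, 2.0],        # A_i^0 = E A_i (illustration values)
               [0.5, 1.0]])
b  = np.array([6.0, 4.0])         # b_i = E beta_i
s  = np.array([1.0, 0.5])         # scale factors s_i
C  = np.array([[0.5, 0.1, 0.0],   # fixed (n+1) x (n+1) covariance matrix
               [0.1, 0.4, 0.0],
               [0.0, 0.0, 0.2]])

def G(x):
    z = np.append(x, 1.0)
    sd = np.sqrt(z @ C @ z)
    L = (b - A0 @ x) / s          # L_i(z) from the proof of Theorem 4
    return norm.cdf(L / sd).prod()

x1, x2 = np.array([1.0, 1.0]), np.array([0.5, 1.5])
assert G((x1 + x2) / 2) >= min(G(x1), G(x2)) - 1e-9   # quasi-concavity on {G >= 1/2}
print(f"G(x1)={G(x1):.4f}, G(x2)={G(x2):.4f}, G(mid)={G((x1 + x2) / 2):.4f}")
```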
Proof. Consider the following functions:
$$g_i = -\sum_{j \in J_i} e^{u_{ij}+x_j} - \sum_{j \in K_i} a_{ij}e^{x_j} - \sum_{j=r+1}^{n} a_{ij}x_j + v_i, \quad i \in L,$$
$$g_i = -\sum_{j \in J_i} e^{u_{ij}+x_j} - \sum_{j \in K_i} a_{ij}e^{x_j} - \sum_{j=r+1}^{n} a_{ij}x_j + b_i, \quad i \in T.$$
These are all supposed to be functions of the variables
$$u_{ij},\ (i,j) \in J; \quad x_j,\ 1 \le j \le n; \quad v_i,\ i \in L,$$
though not every variable appears explicitly in every function. The functions $g_1,\ldots,g_m$ are clearly concave in the entire space. Substituting $u_{ij}$ by $\alpha_{ij}$ for every $(i,j) \in J$ and $v_i$ by $\beta_i$ for every $i \in L$, instead of the functions $g_1,\ldots,g_m$ we obtain random variables which we denote by $\gamma_1,\ldots,\gamma_m$, respectively. By Theorem 2, the function $P(\gamma_1 \ge 0,\ldots,\gamma_m \ge 0)$ is a logarithmic concave function of the variables $x_1,\ldots,x_n$. On the other hand we have obviously
$$P(\gamma_1 \ge 0,\ldots,\gamma_m \ge 0) = G(e^{x_1},\ldots,e^{x_r},x_{r+1},\ldots,x_n),$$
thus the proof of the theorem is complete.

3 Algorithmic solution of Problem (1.2)

In this section we deal with the algorithmic solution of Problem (1.2) under the conditions of Theorem 3, Theorem 4 and Theorem 5, respectively.

Consider the case of Theorem 3. We suppose that $C$ and $S$ are positive definite matrices; the other cases can be treated in a similar way. We suppose further that $C$ is a correlation matrix, i.e. its diagonal elements are equal to 1. The random vectors $\xi_1,\ldots,\xi_{n+1}$ can always be defined so that this condition holds while the constraint $Ax \le \beta$ remains equivalent. Using the notation $L(z)$ introduced in Section 2, we can write
$$G(x) = P(Ax \le \beta) = P\left(\frac{\sum_{i=1}^{n+1}(\xi_i - \mu_i)x_i}{(z'Sz)^{1/2}} \le \frac{L(z)}{(z'Sz)^{1/2}}\right) = \Phi\left(\frac{L(z)}{(z'Sz)^{1/2}};\ C\right), \qquad (3.1)$$
where $\Phi(y;\ C)$ denotes the probability distribution function of the $m$-dimensional normal distribution with $N(0,1)$ marginal distributions and correlation matrix $C$. It is well-known that
$$\frac{\partial\Phi(y;\ C)}{\partial y_i} = \Phi\left(\frac{y_j - c_{ji}y_i}{(1 - c_{ji}^2)^{1/2}},\ j = 1,\ldots,i-1,i+1,\ldots,m;\ C_i\right)\varphi(y_i), \qquad (3.2)$$
where
$$\varphi(y) = \frac{1}{\sqrt{2\pi}}\,e^{-y^2/2}, \quad -\infty < y < \infty, \qquad (3.3)$$
and $C_i$ is an $(m-1) \times (m-1)$ correlation matrix with the entries
$$\frac{c_{jk} - c_{ji}c_{ki}}{(1 - c_{ji}^2)^{1/2}(1 - c_{ki}^2)^{1/2}}, \quad j,k = 1,\ldots,i-1,i+1,\ldots,m. \qquad (3.4)$$
With the aid of (3.2), the gradient of the function $G(x)$ can be obtained by an elementary calculation. Any nonlinear programming procedure that solves problems in which the constraining functions are quasi-concave and a convex objective function is to be minimized can be applied to solve our problem. Such a method is the method of feasible directions (see [6]), the convergence proof of which for the case mentioned above is given in [4] and in a more detailed form in [5].

The case of Theorem 4 does not present any new difficulty. In fact, we see in the proof of Theorem 4 that the function $P(Ax \le \beta)$ has the same form as the function standing on the right hand side in (3.1).

In the case of Theorem 5 we substitute $x_i$ by $e^{x_i}$ everywhere in Problem (1.2) for $i = 1,\ldots,r$. Then the first constraining function becomes logarithmic concave in all variables. It may happen that the other constraining functions are concave and the objective function is convex after this substitution. If this is the case and the original problem contained constraints of the form
$$x_i \ge D_i, \quad i = 1,\ldots,r, \qquad (3.5)$$
where $D_i$, $i = 1,\ldots,r$ are positive constants, then we can solve the problem by convex programming procedures, e.g. by the SUMT method (see [1]), which seems to be very suitable due to the logarithmic concavity of the first constraining function. In fact, using the logarithmic penalty function, the unconstrained problems are convex problems. The values of the probabilities can be obtained by simulation. If instead of (3.5) we had nonnegativity constraints for $x_1,\ldots,x_r$ in the original problem, then, using the constraints (3.5) in place of $x_1 \ge 0,\ldots,x_r \ge 0$ and selecting the constants $D_i$, $i = 1,\ldots,r$ suitably, we can come arbitrarily near the original optimum value.

References

[1] Fiacco, A. V. and G. P. McCormick (1968).
Nonlinear Programming: Sequential Unconstrained Minimization Techniques. Wiley, New York.

[2] Prékopa, A. (1971). Logarithmic concave measures with application to stochastic programming. Acta Sci. Math. (Szeged), 32.
[3] Prékopa, A. (1972). A class of stochastic programming decision problems. Math. Operationsforsch. Statist., 3.

[4] Prékopa, A. (1970). On probabilistic constrained programming. Proc. Princeton Sympos. Math. Programming, Princeton Univ. Press, Princeton, New Jersey, 113-138.

[5] Prékopa, A. (1974). Eine Erweiterung der sogenannten Methode der zulässigen Richtungen der nichtlinearen Optimierung auf den Fall quasikonkaver Restriktionsfunktionen. Math. Operationsforsch. und Statistik, 5.

[6] Zoutendijk, G. (1960). Methods of Feasible Directions. Elsevier, Amsterdam.
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationThe extreme rays of the 5 5 copositive cone
The extreme rays of the copositive cone Roland Hildebrand March 8, 0 Abstract We give an explicit characterization of all extreme rays of the cone C of copositive matrices. The results are based on the
More informationCHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS. W. Erwin Diewert January 31, 2008.
1 ECONOMICS 594: LECTURE NOTES CHAPTER 2: CONVEX SETS AND CONCAVE FUNCTIONS W. Erwin Diewert January 31, 2008. 1. Introduction Many economic problems have the following structure: (i) a linear function
More informationMATH 5720: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 2018
MATH 57: Unconstrained Optimization Hung Phan, UMass Lowell September 13, 18 1 Global and Local Optima Let a function f : S R be defined on a set S R n Definition 1 (minimizers and maximizers) (i) x S
More informationA Tropical Extremal Problem with Nonlinear Objective Function and Linear Inequality Constraints
A Tropical Extremal Problem with Nonlinear Objective Function and Linear Inequality Constraints NIKOLAI KRIVULIN Faculty of Mathematics and Mechanics St. Petersburg State University 28 Universitetsky Ave.,
More informationChapter 1: Linear Programming
Chapter 1: Linear Programming Math 368 c Copyright 2013 R Clark Robinson May 22, 2013 Chapter 1: Linear Programming 1 Max and Min For f : D R n R, f (D) = {f (x) : x D } is set of attainable values of
More informationLecture Unconstrained optimization. In this lecture we will study the unconstrained problem. minimize f(x), (2.1)
Lecture 2 In this lecture we will study the unconstrained problem minimize f(x), (2.1) where x R n. Optimality conditions aim to identify properties that potential minimizers need to satisfy in relation
More informationNotes on Linear Algebra and Matrix Theory
Massimo Franceschet featuring Enrico Bozzo Scalar product The scalar product (a.k.a. dot product or inner product) of two real vectors x = (x 1,..., x n ) and y = (y 1,..., y n ) is not a vector but a
More information1 Convexity, concavity and quasi-concavity. (SB )
UNIVERSITY OF MARYLAND ECON 600 Summer 2010 Lecture Two: Unconstrained Optimization 1 Convexity, concavity and quasi-concavity. (SB 21.1-21.3.) For any two points, x, y R n, we can trace out the line of
More informationPlanning and Optimization
Planning and Optimization C23. Linear & Integer Programming Malte Helmert and Gabriele Röger Universität Basel December 1, 2016 Examples Linear Program: Example Maximization Problem Example maximize 2x
More informationMCE693/793: Analysis and Control of Nonlinear Systems
MCE693/793: Analysis and Control of Nonlinear Systems Lyapunov Stability - I Hanz Richter Mechanical Engineering Department Cleveland State University Definition of Stability - Lyapunov Sense Lyapunov
More informationE5295/5B5749 Convex optimization with engineering applications. Lecture 5. Convex programming and semidefinite programming
E5295/5B5749 Convex optimization with engineering applications Lecture 5 Convex programming and semidefinite programming A. Forsgren, KTH 1 Lecture 5 Convex optimization 2006/2007 Convex quadratic program
More informationFour new upper bounds for the stability number of a graph
Four new upper bounds for the stability number of a graph Miklós Ujvári Abstract. In 1979, L. Lovász defined the theta number, a spectral/semidefinite upper bound on the stability number of a graph, which
More informationRobust linear optimization under general norms
Operations Research Letters 3 (004) 50 56 Operations Research Letters www.elsevier.com/locate/dsw Robust linear optimization under general norms Dimitris Bertsimas a; ;, Dessislava Pachamanova b, Melvyn
More informationMATH 445/545 Test 1 Spring 2016
MATH 445/545 Test Spring 06 Note the problems are separated into two sections a set for all students and an additional set for those taking the course at the 545 level. Please read and follow all of these
More informationLecture: Examples of LP, SOCP and SDP
1/34 Lecture: Examples of LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:
More informationMSc Quantitative Techniques Mathematics Weeks 1 and 2
MSc Quantitative Techniques Mathematics Weeks 1 and 2 DEPARTMENT OF ECONOMICS, MATHEMATICS AND STATISTICS LONDON WC1E 7HX September 2017 For MSc Programmes in Economics and Finance Who is this course for?
More informationMSc Quantitative Techniques Mathematics Weeks 1 and 2
MSc Quantitative Techniques Mathematics Weeks 1 and 2 DEPARTMENT OF ECONOMICS, MATHEMATICS AND STATISTICS LONDON WC1E 7HX September 2014 MSc Economics / Finance / Financial Economics / Financial Risk Management
More informationORIGINS OF STOCHASTIC PROGRAMMING
ORIGINS OF STOCHASTIC PROGRAMMING Early 1950 s: in applications of Linear Programming unknown values of coefficients: demands, technological coefficients, yields, etc. QUOTATION Dantzig, Interfaces 20,1990
More informationAn LP-based inconsistency monitoring of pairwise comparison matrices
arxiv:1505.01902v1 [cs.oh] 8 May 2015 An LP-based inconsistency monitoring of pairwise comparison matrices S. Bozóki, J. Fülöp W.W. Koczkodaj September 13, 2018 Abstract A distance-based inconsistency
More informationAbstract. We find Rodrigues type formula for multi-variate orthogonal. orthogonal polynomial solutions of a second order partial differential
Bull. Korean Math. Soc. 38 (2001), No. 3, pp. 463 475 RODRIGUES TYPE FORMULA FOR MULTI-VARIATE ORTHOGONAL POLYNOMIALS Yong Ju Kim a, Kil Hyun Kwon, and Jeong Keun Lee Abstract. We find Rodrigues type formula
More informationSolving Dual Problems
Lecture 20 Solving Dual Problems We consider a constrained problem where, in addition to the constraint set X, there are also inequality and linear equality constraints. Specifically the minimization problem
More informationOn well definedness of the Central Path
On well definedness of the Central Path L.M.Graña Drummond B. F. Svaiter IMPA-Instituto de Matemática Pura e Aplicada Estrada Dona Castorina 110, Jardim Botânico, Rio de Janeiro-RJ CEP 22460-320 Brasil
More informationSOME INDEFINITE METRICS AND COVARIANT DERIVATIVES OF THEIR CURVATURE TENSORS
C O L L O Q U I U M M A T H E M A T I C U M VOL. LXII 1991 FASC. 2 SOME INDEFINITE METRICS AND COVARIANT DERIVATIVES OF THEIR CURVATURE TENSORS BY W. R O T E R (WROC LAW) 1. Introduction. Let (M, g) be
More informationLINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM
LINEAR ALGEBRA BOOT CAMP WEEK 4: THE SPECTRAL THEOREM Unless otherwise stated, all vector spaces in this worksheet are finite dimensional and the scalar field F is R or C. Definition 1. A linear operator
More informationOn pairwise comparison matrices that can be made consistent by the modification of a few elements
Noname manuscript No. (will be inserted by the editor) On pairwise comparison matrices that can be made consistent by the modification of a few elements Sándor Bozóki 1,2 János Fülöp 1,3 Attila Poesz 2
More informationUNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems
UNDERGROUND LECTURE NOTES 1: Optimality Conditions for Constrained Optimization Problems Robert M. Freund February 2016 c 2016 Massachusetts Institute of Technology. All rights reserved. 1 1 Introduction
More informationLinear Programming. (Com S 477/577 Notes) Yan-Bin Jia. Nov 28, 2017
Linear Programming (Com S 4/ Notes) Yan-Bin Jia Nov 8, Introduction Many problems can be formulated as maximizing or minimizing an objective in the form of a linear function given a set of linear constraints
More information