A LINEAR PROGRAMMING BASED ANALYSIS OF THE CP-RANK OF COMPLETELY POSITIVE MATRICES


Int. J. Appl. Math. Comput. Sci.

YINGBO LI, ANTON KUMMERT
Department of Electrical and Information Engineering
University of Wuppertal, Rainer-Gruenter-Str., Wuppertal, Germany
{yingbo, kummert}@uni-wuppertal.de

ANDREAS FROMMER
Department of Mathematics
University of Wuppertal, Gaußstraße, Wuppertal, Germany
frommer@math.uni-wuppertal.de

A real matrix A is said to be completely positive (CP) if it can be decomposed as A = BB^T, where the real matrix B has exclusively non-negative entries. Let k be the rank of A and Φ_k the least possible number of columns of the matrix B, the so-called completely positive rank (cp-rank) of A. The present work is devoted to a study of a general upper bound for the cp-rank of an arbitrary completely positive matrix A and its dependence on the ordinary rank k. This general upper bound of the cp-rank has been proved to be at most k(k+1)/2. In a recent pioneering work of Barioli and Berman it was slightly reduced by one, which means that Φ_k ≤ k(k+1)/2 − 1 holds for k ≥ 2. An alternative constructive proof of the same result is given in the present paper, based on the properties of the simplex algorithm known from linear programming. Our proof illuminates complete positivity from a different point of view; discussions concerning dual cones are not needed here. In addition to that, the proof is of a constructive nature, i.e. starting from an arbitrary decomposition A = B_1 B_1^T (B_1 ≥ 0), a new decomposition A = B_2 B_2^T (B_2 ≥ 0) can be generated in a constructive manner, where the number of column vectors of B_2 does not exceed k(k+1)/2 − 1. This algorithm is based mainly on well-known techniques stemming from linear programming, where the pivot step of the simplex algorithm plays a key role.

Keywords: completely positive matrices, cp-rank, linear programming, simplex algorithm, basic feasible solution, pivot process

1. Introduction

An n × n real symmetric matrix A is called completely positive (belonging to the set CP of completely positive matrices) if an entry-wise non-negative n × m matrix B exists with A = BB^T. The product A = BB^T can alternatively be written in its rank-1 representation as A = Σ_{i=1}^m b_i b_i^T, in which each b_i ∈ R^n_+ denotes the i-th non-negative column of the matrix B. The minimum number of columns m for which such a factorization A = BB^T exists is called the cp-rank of A, and denoted by Φ_k in relation to the ordinary rank k of A. It is to be pointed out that all vectors throughout this paper are column vectors, unless explicitly defined to be row vectors.

The non-negative decomposition of completely positive matrices is of interest in various applications including, e.g., the study of block designs arising in combinatorial analysis (Hall, 1963), problems in statistics, the theory of inequalities, energy conservation (Gray and Wilson, 1980) and models of DNA evolution (Kelly, 1994).

No definitive test is known yet to determine whether a given real symmetric matrix A is completely positive. Many studies on this problem have been performed (Berman and Plemmons, 1994; Berman and Shaked-Monderer, 2003). One sufficient condition for a non-negative symmetric matrix to be completely positive, given in (Drew et al., 1994), says that the comparison matrix of A is an M-matrix. A special case of this sufficient condition is that the matrix A is diagonally dominant (Kaykobad, 1988). A qualitative characterization of completely positive matrices using graphs is that the graph of a CP matrix contains no odd cycle of length 5 or more (Berman, 1993; Kogan and Berman, 1993).
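The defining factorization can be checked numerically. The following minimal sketch uses a hypothetical non-negative factor B (an illustration, not an example taken from this paper) to verify A = BB^T and its rank-1 representation:

```python
import numpy as np

# Hypothetical non-negative factor B >= 0; then A = B B^T is completely
# positive by definition, with the columns b_i of B as the CP factors.
B = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])
A = B @ B.T

# Rank-1 representation: A = sum_i b_i b_i^T over the columns of B.
A_rank1 = sum(np.outer(B[:, i], B[:, i]) for i in range(B.shape[1]))

print(np.allclose(A, A_rank1))             # the two representations coincide
print((B >= 0).all(), np.allclose(A, A.T))  # B is non-negative, A symmetric
```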

The question of a general upper bound for the cp-rank which works for any completely positive matrix of dimension n × n was asked by Hall and Newman (1963). The definition of the cp-rank as above implies that Φ_k is obviously greater than or equal to the ordinary rank k of A. Hannah and Laffey (1983) derived the inequality Φ_k ≤ n(n+1)/2 (which is independent of k, with k ≤ n). Recently, the sharpened upper bound k(k+1)/2 for Φ_k was reduced by Barioli and Berman (2003) to k(k+1)/2 − 1 if k ≥ 2. In (Drew et al., 1994) it was conjectured that Φ_k ≤ n²/4. However, this conjecture was proved only for matrices with special graphs: graphs which are bipartite with the two parts as balanced as possible, graphs which contain no odd cycle of length 5 or more (Drew and Johnson, 1996), or all graphs on 5 vertices which are not a complete graph (Berman, 1993).

In the present paper we give a constructive proof, alternative to Barioli and Berman's, of the result that Φ_k ≤ k(k+1)/2 − 1 for k ≥ 2. The technique of the proof presented here involves the properties of the simplex algorithm known from linear programming and illuminates the problem from a different point of view. Above all, discussions concerning dual cones are not required.

Linear programming is the process of minimizing (or maximizing) a linear real-valued objective function subject to a finite number of linear equality and inequality constraints. Linear programming problems are encountered in many branches of technology, science, and economics. In this work, the problem of determining an upper bound for the cp-rank of completely positive matrices is reduced to the problem of determining the maximal number of non-zero elements in the basic feasible solutions of a linear programming problem. By means of appropriate matrix operations and based on the properties of the simplex algorithm, the desired result will be achieved. Furthermore, according to the described constructive procedure, such a decomposition which satisfies the claimed bound can be generated step by step from a known decomposition of an arbitrary size.

2. First Reduction Step

We start with a known rank-1 representation A = Σ_{i=1}^m b_i b_i^T of a given n × n completely positive matrix, where m is an arbitrarily large, however finite, integer. From now on, we assume that no column b_i of B is zero, since otherwise such a column could have been removed from B. After the application of the vec-operator (stacking the columns of a matrix to form a column vector) to both sides, the following expression can be obtained:

vec(A) = vec(Σ_{i=1}^m b_i b_i^T) = Σ_{i=1}^m vec(b_i b_i^T) = Σ_{i=1}^m b_i ⊗ b_i,   (1)

where ⊗ denotes the Kronecker product. Consequently, we may define an n² × m matrix

C = [b_1 ⊗ b_1, b_2 ⊗ b_2, …, b_m ⊗ b_m]   (2)

with c_i = b_i ⊗ b_i, i = 1, …, m, so that (1) can be represented as

vec(A) = C [1, 1, …, 1]^T.   (3)

It is evident that the non-negative linear system of equations vec(A) = Cλ with λ_i ≥ 0 has at least one valid solution, namely the column vector λ_0 = [1, 1, …, 1]^T. In fact, from now on we may look upon Cλ = vec(A) with λ_i ≥ 0 as the linear constraints for an arbitrary linear objective function, which is a linear programming problem in standard form with all variables of λ non-negatively constrained.

First of all, it is important to indicate that the feasible polyhedron of this linear programming problem is on the one hand non-empty, due to the existence of the feasible solution λ_0, and on the other hand bounded, due to the non-negativity of C, vec(A) and λ, and the fact that no column in B, and thus no column in C, is zero. Thus, the feasible region represents a polytope. Consequently, the available feasible solutions which satisfy these linear constraints can be expressed as convex combinations of the so-called basic feasible solutions, i.e. the extreme points (or vertices) of the polytope. According to the properties of linear programming problems, in the basic feasible solutions of Cλ = vec(A) with λ_i ≥ 0, only the elements λ_i which correspond to a maximal set of linearly independent columns of C may be non-zero. Therefore, all of the basic feasible solutions possess at most rank(C) strictly positive elements. Because of the existence of the solution λ_0 = [1, 1, …, 1]^T, it can be concluded that at least one basic feasible solution with at most rank(C) strictly positive elements exists for Cλ = vec(A) with λ_i ≥ 0. Writing rank(C) = r and, if necessary, permuting the columns of B, we may assume that there is a basic feasible solution of the form λ_B = [λ_1, λ_2, …, λ_r, 0, …, 0]^T with λ_i ≥ 0 for 1 ≤ i ≤ r. It is obvious that

vec(A) = Cλ_B = Σ_{i=1}^r c_i λ_i = Σ_{i=1}^r (√λ_i b_i) ⊗ (√λ_i b_i),   (4)

which is equivalent to

A = Σ_{i=1}^r (√λ_i b_i)(√λ_i b_i)^T.   (5)

Defining B̃ = [√λ_1 b_1, √λ_2 b_2, …, √λ_r b_r], we have established an alternative non-negative factorization of A, namely

A = B̃B̃^T = [√λ_1 b_1, …, √λ_r b_r][√λ_1 b_1, …, √λ_r b_r]^T.   (6)

It should be noted that a basic feasible solution can always be generated constructively by using linear programming techniques (Glashoff and Gustafson, 1983; Karloff, 1991).

3. Upper Bounds on the CP-Rank

In this section we first clarify that the cp-rank Φ_k of A is bounded by the rank of the matrix C, which is in turn limited by k(k+1)/2, where k = rank(A). The proof of the sharpened upper bound of Φ_k, namely k(k+1)/2 − 1, will be provided afterwards.

Lemma 1. The cp-rank Φ_k of A ∈ CP is bounded by the rank of the matrix C, i.e. we have

Φ_k ≤ r = rank(C).   (7)

Proof. Based on the non-negative factorization of A achieved in (6), which can always be realized constructively from an arbitrary non-negative decomposition of A, and the definition of Φ_k as the least number of non-zero columns in any decomposition, it is obvious that Φ_k ≤ r = rank(C).

Our final goal is to prove that Φ_k is bounded by k(k+1)/2 − 1. In order to keep the proof clear and progressive, we will first prove that rank(C) ≤ k(k+1)/2.

Theorem 1. The ordinary rank of the matrix C as defined in (2) is bounded by

rank(C) ≤ k(k+1)/2.   (8)

Proof. Since A = BB^T with rank(A) = k, it follows immediately that rank(B) = k. With respect to a geometric interpretation, this means that the dimension of the vector space generated by the b_i's is k, i.e. dim(span(b_i | 1 ≤ i ≤ m)) = k. As a consequence, there must be an isometry available which maps this subspace of R^n onto R^k. In matrix terms, this means that we may find an orthonormal matrix V (V^T V = I, where I is the identity matrix) such that VB = [F; 0], the dimension of F being k × m. Computationally, V and F can be obtained from the standard QR-decomposition of B, see (Golub and Van Loan, 1989). The following equality results from this isometry:

(V ⊗ V)C = (V ⊗ V)[b_1 ⊗ b_1, …, b_m ⊗ b_m]
         = [(Vb_1) ⊗ (Vb_1), …, (Vb_m) ⊗ (Vb_m)]
         = [[f_1; 0] ⊗ [f_1; 0], …, [f_m; 0] ⊗ [f_m; 0]],   (9)

where f_i, the i-th column vector of the matrix F, satisfies Vb_i = [f_i; 0]. The row of (V ⊗ V)C indexed by the pair (p, q) has the entries f_{pi} f_{qi}, i = 1, …, m; such a row vanishes unless both p ≤ k and q ≤ k, and the rows indexed by (p, q) and (q, p) are identical. Since there are therefore at most k(k+1)/2 distinct non-zero row vectors in the matrix (V ⊗ V)C, and V ⊗ V is orthonormal and thus non-singular, we have r = rank(C) = rank((V ⊗ V)C) ≤ k(k+1)/2.

Corollary 1. In view of (7) and Theorem 1, we can conclude that Φ_k is at least bounded by

Φ_k ≤ k(k+1)/2.   (10)

Let us note that the above proofs are constructive: given A = BB^T, we have shown how to obtain B̃ in A = B̃B̃^T by computing a basic feasible solution of a certain polytope.

The next step we want to take is proving the sharpened bound Φ_k ≤ k(k+1)/2 − 1 for k ≥ 2. The corresponding proof can be divided into distinct steps. If for a given matrix A the associated matrix C has a rank less than k(k+1)/2, we can stop; otherwise we have rank(C) = k(k+1)/2 and the reduction procedure must be continued. The outline of this algorithm is as follows: after extension of C by a column vector which is the Kronecker product of an adequately selected column of A with itself, we will show that the rank of the extended matrix C is identical to its original rank. Then a pivot step known from simplex techniques will be used, so that a new non-negative decomposition A = GG^T can be created, whereby the column number of G does not exceed that of B̃. Relying on some requirements regarding G, the column number of G can be further reduced by 1 after some straightforward computations. By this last step the proof will be completed.

Theorem 2. For k ≥ 2 the cp-rank of any completely positive matrix A with its ordinary rank equal to k is

bounded by

cp-rank(A) = Φ_k ≤ k(k+1)/2 − 1.   (11)

Proof. From Theorem 1 and Corollary 1 we know that Φ_k ≤ rank(C) ≤ k(k+1)/2. If rank(C) < k(k+1)/2, nothing has to be proven, so from now on we assume that r = rank(C) = k(k+1)/2.

Before the actual proof, we would like to point out a useful property of the matrix B̃ (cf. (6)). From the decomposition A = B̃B̃^T in (6) with B̃ = [√λ_1 b_1, √λ_2 b_2, …, √λ_r b_r], the fact that rank(B̃) = k ≤ n and the assumption r = k(k+1)/2, it can be concluded that at least one of the rows of the non-negative matrix B̃ has more than one non-zero entry. Otherwise, if each row of the non-negative matrix B̃ had at most one non-zero element, all columns of B̃ (which does not have zero columns) would be linearly independent, so k = r and thus Φ_k ≤ r = k ≤ k(k+1)/2 − 1 for k ≥ 2, and we would have a contradiction to the assumption r = k(k+1)/2. So, for the rest of the proof we may therefore assume that at least one row of B̃ contains at least two non-zero entries.

Now we extend vec(A) = Cλ_B in (3), which is associated with A = [a_1, a_2, …, a_n] and rank(C) = r = k(k+1)/2, to

vec(A) = [C, a_j ⊗ a_j] λ_E,   (12)

with λ_E = [λ_B^T, 0]^T, where, in view of the above discussion, the integer j, 1 ≤ j ≤ n, can be chosen in such a way that the j-th row of the non-negative B̃ has at least two non-zero entries. The rank of the extended matrix [C, a_j ⊗ a_j] in (12) is unmodified compared with that of the matrix C, i.e.

rank([C, a_j ⊗ a_j]) = rank(C) = r,   (13)

as we show now. The orthonormal matrix V from the proof of Theorem 1 will be reused. From A = BB^T and VB = [F; 0] it follows that VA = VBB^T = [F; 0]B^T = [E; 0] and, accordingly, Va_j = [e_j; 0] with e_j the j-th column of E, whereby the dimension of E is k × n, so that

(V ⊗ V)[C, a_j ⊗ a_j] = (V ⊗ V)[b_1 ⊗ b_1, …, b_m ⊗ b_m, a_j ⊗ a_j]
  = [(Vb_1) ⊗ (Vb_1), …, (Vb_m) ⊗ (Vb_m), (Va_j) ⊗ (Va_j)]
  = [[f_1; 0] ⊗ [f_1; 0], …, [f_m; 0] ⊗ [f_m; 0], [e_j; 0] ⊗ [e_j; 0]].   (14)

As in the proof of Theorem 1, we have rank([C, a_j ⊗ a_j]) ≤ k(k+1)/2. Since rank(C) was assumed to be equal to k(k+1)/2, and for the reason that rank(C) ≤ rank([C, a_j ⊗ a_j]), we get rank([C, a_j ⊗ a_j]) = rank(C) = k(k+1)/2.

Because all entries of the extended linear constraint system (12) are also non-negative, and there exists a feasible solution λ_E = [λ_B^T, 0]^T, the feasible region is bounded and non-empty, i.e. it is again a polytope. Consequently, basic feasible solutions of the linear programming problem (12) exist and have at most r = k(k+1)/2 strictly positive variables. Furthermore, λ_E is a basic solution where the basis is represented exactly by the variables λ_1, …, λ_r. To be more specific, since rank([C, a_j ⊗ a_j]) = rank(C), any basis of the old linear system is also a basis of the extended system. (This would not be true if the rank of the extended system were increased by one when compared with that of the old system.) Suppose that the simplex tableau associated with the extended linear system is given for the basis λ_1, …, λ_r, i.e. λ_{m+1}, the variable associated with the column a_j ⊗ a_j of the extended system, is as yet a non-basic variable.

Next, the simplex algorithm will be considered, which has been the method of choice for solving linear programming problems for decades. Due to the fact that the basic feasible solutions are nothing else than extreme points of the polytope, and the optimal point of the objective function subject to the polytope is always among these vertices, the simplex algorithm can be regarded as an ordered way of scanning through such vertices. The simplex algorithm starts from an arbitrary vertex of the feasible polytope and tries to find a cheaper adjacent vertex, until ideally no neighbouring vertex with a cheaper cost is found. The process of changing the basis to move to an adjacent vertex is called the pivot step. Basically, a pivot step takes one basic variable out of the basis and, at the same time, shifts another, originally non-basic, variable into the basis. Hence, during a pivot step carried out on the simplex tableau defined above, the last variable of λ_E,

namely λ_{m+1}, can be deliberately forced to enter the basis, for which one of the formerly basic variables, say λ_s, leaves the basis. It is worth noticing that through the exchange of the basic variable and the corresponding modification of the basis, the maximal number of non-zero elements in the new basic feasible solution, r = k(k+1)/2, is unchanged, and all of the elements are still non-negative. Denoting by d_1, …, d_r the new non-negative basic variable values, we have

vec(A) = [b_1 ⊗ b_1, …, b_{s−1} ⊗ b_{s−1}, a_j ⊗ a_j, b_{s+1} ⊗ b_{s+1}, …, b_r ⊗ b_r][d_1, …, d_s, …, d_r]^T.   (15)

Following the same procedure as above, a new non-negative factorization of A of the form

A = [√d_1 b_1, …, √d_{s−1} b_{s−1}, √d_s a_j, √d_{s+1} b_{s+1}, …, √d_r b_r][√d_1 b_1, …, √d_{s−1} b_{s−1}, √d_s a_j, √d_{s+1} b_{s+1}, …, √d_r b_r]^T   (16)

is obtained. If at least one of the non-negative numbers d_l, l = 1, …, r, is zero, then cp-rank(A) ≤ r − 1 = k(k+1)/2 − 1, so that the desired result is obtained. Otherwise we proceed as follows: after a reordering of the columns of the matrix [√d_1 b_1, …, √d_{s−1} b_{s−1}, √d_s a_j, √d_{s+1} b_{s+1}, …, √d_r b_r] in such a way that the column √d_s a_j enters the last position, we have the decomposition

A = GG^T = [g_1, …, g_{r−1}, √d_s a_j][g_1, …, g_{r−1}, √d_s a_j]^T   (17)

with g_l = [g_{1l}, …, g_{nl}]^T ≥ 0, l = 1, …, r − 1, where the g_l's denote the newly arranged non-negative columns. Based on these results, we finally can construct the non-negative decomposition

A = G̃G̃^T,   (18)

with G̃ = [g_1, …, g_{r−1}] + α a_j [g_{j1}, …, g_{j,r−1}], where

α = ( √( (1 − d_s a_{jj})² + d_s Σ_{l=1}^{r−1} g_{jl}² ) − (1 − d_s a_{jj}) ) / Σ_{l=1}^{r−1} g_{jl}².   (19)

Since d_s > 0, the scalar α is also strictly positive, i.e. α > 0, which shows that G̃ is non-negative. Furthermore, α always exists since Σ_{l=1}^{r−1} g_{jl}² ≠ 0, which is true since Σ_{l=1}^{r−1} g_{jl}² = Σ_{l=1, l≠s}^{r} d_l b_{jl}², and we have chosen j in such a manner that at least two values from among b_{jl}, l = 1, …, r, are non-zero. The validity of the decomposition in (18) can be simply checked by inserting (19) into (18) and using

[g_1, …, g_{r−1}][g_{j1}, …, g_{j,r−1}]^T = (1 − d_s a_{jj}) a_j,   (20)

which follows from

a_j = [g_1, …, g_{r−1}, √d_s a_j][g_{j1}, …, g_{j,r−1}, √d_s a_{jj}]^T,   (21)

see (17). In view of (18) and (19), the number of columns in G̃ equals r − 1 = k(k+1)/2 − 1. Hence we have proved that

cp-rank(A) = Φ_k ≤ r − 1 = k(k+1)/2 − 1.   (22)

In order to have all reduction steps in a condensed form without being interrupted by proofs, we summarize them in the form of a sequential algorithm, shown in Fig. 1.

4. Illustrative Example

In this section the constructive algorithm is demonstrated by applying it to an illustrative example. Consider a completely positive matrix A whose rank equals three, for which a non-negative decomposition A = BB^T with m = 7 columns in B is known.

start: given a CP matrix A (n × n) with rank(A) = k, let B (n × m) be any known solution to A = BB^T, B ≥ 0
  if the column number of B, m, is less than k(k+1)/2: we are done, exit
  otherwise:
    generate the linear system vec(A) = Cλ, λ ≥ 0, with C defined by C = [b_1 ⊗ b_1, b_2 ⊗ b_2, …, b_m ⊗ b_m]
    compute a basic feasible solution λ_B by a known linear programming method
    generate B̃ by means of the non-zero elements of λ_B
  if the column number of B̃, r, is less than k(k+1)/2: A = B̃B̃^T is already the desired factorization, done and exit
  otherwise:
    set λ_E = [λ_B^T, 0]^T and extend C by a_j ⊗ a_j, with 1 ≤ j ≤ n
    generate the new linear system vec(A) = [C, a_j ⊗ a_j] λ_E
    apply one pivot step to bring the non-basic variable λ_{m+1} into the basis, which leads to a new basic feasible solution d
    generate G by means of the non-zero elements of d
  if the column number of G is less than k(k+1)/2: A = GG^T is already the desired factorization, done and exit
  otherwise: compute G̃ according to (19); herewith we have A = G̃G̃^T and the column number of G̃ is at most k(k+1)/2 − 1
end

Fig. 1. Summary of all algorithmic steps.

The linear system vec(A) = Cλ involves the variables λ_1, …, λ_7. By means of standard techniques, a basic solution λ_B with six non-zero entries can be generated. From (6) we have a new non-negative factorization A = B̃B̃^T, where B̃ has only six columns. Our next step is to construct a new linear system by extending the matrix C by a_3 ⊗ a_3, which leads to the variables λ_1, …, λ_8. The Kronecker product a_3 ⊗ a_3 is used here, which fulfils the requirement that the third row of B̃ has at least two strictly positive entries. It can be verified that the rank of the new extended system matrix is the same as that of C, namely r = 6. Starting from the already available basic solution λ_E = [λ_B^T, 0]^T, we can include λ_8 in the basis by applying only one pivot step, which leads to a new basic solution and, from it, a newly arranged non-negative factorization A = GG^T. Since the matrix G still has six columns, we have to apply (19) in order to reduce the number of columns. In our example the scalar α takes a strictly positive value, which leads to the final decomposition A = G̃G̃^T.

As has been expected, the non-negative factorization A = G̃G̃^T has at most k(k+1)/2 − 1 = 5 columns. By equating G̃G̃^T to A, the correctness of this factorization can easily be verified. It has thus been demonstrated that, given any known decomposition of a CP matrix A, a decomposition with at most k(k+1)/2 − 1 columns can be generated in a constructive manner.
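The first reduction step lends itself to a direct numerical check. The sketch below is an illustration, not the authors' implementation; it assumes scipy is available and that linprog's HiGHS backend returns a vertex (i.e. basic feasible) solution, and it reduces a redundant 12-column factorization of a hypothetical random CP matrix:

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
B = rng.random((3, 12))            # hypothetical B >= 0 with m = 12 columns
A = B @ B.T                        # completely positive by construction
m = B.shape[1]

# C = [b_1 (x) b_1, ..., b_m (x) b_m]; vec stacks columns, so
# vec(A) = C [1, ..., 1]^T as in (3).
C = np.column_stack([np.kron(B[:, i], B[:, i]) for i in range(m)])
vecA = A.flatten(order='F')
assert np.allclose(C @ np.ones(m), vecA)

# Any linear objective will do; we only need a basic feasible solution
# of C lam = vec(A), lam >= 0, so the zero objective is minimized.
res = linprog(np.zeros(m), A_eq=C, b_eq=vecA, bounds=(0, None),
              method='highs')
lam = res.x
basic = np.where(lam > 1e-10)[0]

# B_tilde = [sqrt(lam_i) b_i] over the positive entries reproduces A; if
# the solver returned a vertex, it has at most rank(C) <= k(k+1)/2
# columns (here k = 3, so at most 6).
B_tilde = B[:, basic] * np.sqrt(lam[basic])
assert np.allclose(A, B_tilde @ B_tilde.T)
print(m, '->', B_tilde.shape[1])
```

Note that the factorization check A = B̃B̃^T holds for any feasible λ, not only for basic ones, so the assertion is robust even if the solver's solution is not a vertex.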


More information

Lectures 6, 7 and part of 8

Lectures 6, 7 and part of 8 Lectures 6, 7 and part of 8 Uriel Feige April 26, May 3, May 10, 2015 1 Linear programming duality 1.1 The diet problem revisited Recall the diet problem from Lecture 1. There are n foods, m nutrients,

More information

Another algorithm for nonnegative matrices

Another algorithm for nonnegative matrices Linear Algebra and its Applications 365 (2003) 3 12 www.elsevier.com/locate/laa Another algorithm for nonnegative matrices Manfred J. Bauch University of Bayreuth, Institute of Mathematics, D-95440 Bayreuth,

More information

A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization

A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization A Faster Strongly Polynomial Time Algorithm for Submodular Function Minimization James B. Orlin Sloan School of Management, MIT Cambridge, MA 02139 jorlin@mit.edu Abstract. We consider the problem of minimizing

More information

A Parametric Simplex Algorithm for Linear Vector Optimization Problems

A Parametric Simplex Algorithm for Linear Vector Optimization Problems A Parametric Simplex Algorithm for Linear Vector Optimization Problems Birgit Rudloff Firdevs Ulus Robert Vanderbei July 9, 2015 Abstract In this paper, a parametric simplex algorithm for solving linear

More information

An improved characterisation of the interior of the completely positive cone

An improved characterisation of the interior of the completely positive cone Electronic Journal of Linear Algebra Volume 2 Volume 2 (2) Article 5 2 An improved characterisation of the interior of the completely positive cone Peter J.C. Dickinson p.j.c.dickinson@rug.nl Follow this

More information

1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016

1 Overview. 2 Extreme Points. AM 221: Advanced Optimization Spring 2016 AM 22: Advanced Optimization Spring 206 Prof. Yaron Singer Lecture 7 February 7th Overview In the previous lectures we saw applications of duality to game theory and later to learning theory. In this lecture

More information

Interior points of the completely positive cone

Interior points of the completely positive cone Electronic Journal of Linear Algebra Volume 17 Volume 17 (2008) Article 5 2008 Interior points of the completely positive cone Mirjam Duer duer@mathematik.tu-darmstadt.de Georg Still Follow this and additional

More information

Copositive matrices and periodic dynamical systems

Copositive matrices and periodic dynamical systems Extreme copositive matrices and periodic dynamical systems Weierstrass Institute (WIAS), Berlin Optimization without borders Dedicated to Yuri Nesterovs 60th birthday February 11, 2016 and periodic dynamical

More information

MATH 4211/6211 Optimization Linear Programming

MATH 4211/6211 Optimization Linear Programming MATH 4211/6211 Optimization Linear Programming Xiaojing Ye Department of Mathematics & Statistics Georgia State University Xiaojing Ye, Math & Stat, Georgia State University 0 The standard form of a Linear

More information

Combinatorial Optimization

Combinatorial Optimization Combinatorial Optimization 2017-2018 1 Maximum matching on bipartite graphs Given a graph G = (V, E), find a maximum cardinal matching. 1.1 Direct algorithms Theorem 1.1 (Petersen, 1891) A matching M is

More information

CO 250 Final Exam Guide

CO 250 Final Exam Guide Spring 2017 CO 250 Final Exam Guide TABLE OF CONTENTS richardwu.ca CO 250 Final Exam Guide Introduction to Optimization Kanstantsin Pashkovich Spring 2017 University of Waterloo Last Revision: March 4,

More information

Tropical decomposition of symmetric tensors

Tropical decomposition of symmetric tensors Tropical decomposition of symmetric tensors Melody Chan University of California, Berkeley mtchan@math.berkeley.edu December 11, 008 1 Introduction In [], Comon et al. give an algorithm for decomposing

More information

On mixed-integer sets with two integer variables

On mixed-integer sets with two integer variables On mixed-integer sets with two integer variables Sanjeeb Dash IBM Research sanjeebd@us.ibm.com Santanu S. Dey Georgia Inst. Tech. santanu.dey@isye.gatech.edu September 8, 2010 Oktay Günlük IBM Research

More information

TRANSPORTATION PROBLEMS

TRANSPORTATION PROBLEMS Chapter 6 TRANSPORTATION PROBLEMS 61 Transportation Model Transportation models deal with the determination of a minimum-cost plan for transporting a commodity from a number of sources to a number of destinations

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

Strongly Regular Decompositions of the Complete Graph

Strongly Regular Decompositions of the Complete Graph Journal of Algebraic Combinatorics, 17, 181 201, 2003 c 2003 Kluwer Academic Publishers. Manufactured in The Netherlands. Strongly Regular Decompositions of the Complete Graph EDWIN R. VAN DAM Edwin.vanDam@uvt.nl

More information

Algebraic Methods in Combinatorics

Algebraic Methods in Combinatorics Algebraic Methods in Combinatorics Po-Shen Loh 27 June 2008 1 Warm-up 1. (A result of Bourbaki on finite geometries, from Răzvan) Let X be a finite set, and let F be a family of distinct proper subsets

More information

Convex Analysis 2013 Let f : Q R be a strongly convex function with convexity parameter µ>0, where Q R n is a bounded, closed, convex set, which contains the origin. Let Q =conv(q, Q) andconsiderthefunction

More information

Ω R n is called the constraint set or feasible set. x 1

Ω R n is called the constraint set or feasible set. x 1 1 Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize subject to f(x) x Ω Ω R n is called the constraint set or feasible set. any point x Ω is called a feasible point We

More information

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo

A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A lower bound for the Laplacian eigenvalues of a graph proof of a conjecture by Guo A. E. Brouwer & W. H. Haemers 2008-02-28 Abstract We show that if µ j is the j-th largest Laplacian eigenvalue, and d

More information

Solution Set 7, Fall '12

Solution Set 7, Fall '12 Solution Set 7, 18.06 Fall '12 1. Do Problem 26 from 5.1. (It might take a while but when you see it, it's easy) Solution. Let n 3, and let A be an n n matrix whose i, j entry is i + j. To show that det

More information

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A.

Midterm Review. Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. Midterm Review Yinyu Ye Department of Management Science and Engineering Stanford University Stanford, CA 94305, U.S.A. http://www.stanford.edu/ yyye (LY, Chapter 1-4, Appendices) 1 Separating hyperplane

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

Decomposing dense bipartite graphs into 4-cycles

Decomposing dense bipartite graphs into 4-cycles Decomposing dense bipartite graphs into 4-cycles Nicholas J. Cavenagh Department of Mathematics The University of Waikato Private Bag 3105 Hamilton 3240, New Zealand nickc@waikato.ac.nz Submitted: Jun

More information

3 The Simplex Method. 3.1 Basic Solutions

3 The Simplex Method. 3.1 Basic Solutions 3 The Simplex Method 3.1 Basic Solutions In the LP of Example 2.3, the optimal solution happened to lie at an extreme point of the feasible set. This was not a coincidence. Consider an LP in general form,

More information

ACO Comprehensive Exam October 18 and 19, Analysis of Algorithms

ACO Comprehensive Exam October 18 and 19, Analysis of Algorithms Consider the following two graph problems: 1. Analysis of Algorithms Graph coloring: Given a graph G = (V,E) and an integer c 0, a c-coloring is a function f : V {1,,...,c} such that f(u) f(v) for all

More information

A Review of Linear Programming

A Review of Linear Programming A Review of Linear Programming Instructor: Farid Alizadeh IEOR 4600y Spring 2001 February 14, 2001 1 Overview In this note we review the basic properties of linear programming including the primal simplex

More information

Root systems and optimal block designs

Root systems and optimal block designs Root systems and optimal block designs Peter J. Cameron School of Mathematical Sciences Queen Mary, University of London Mile End Road London E1 4NS, UK p.j.cameron@qmul.ac.uk Abstract Motivated by a question

More information

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J

Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Central Groupoids, Central Digraphs, and Zero-One Matrices A Satisfying A 2 = J Frank Curtis, John Drew, Chi-Kwong Li, and Daniel Pragel September 25, 2003 Abstract We study central groupoids, central

More information

Reconstruction and Higher Dimensional Geometry

Reconstruction and Higher Dimensional Geometry Reconstruction and Higher Dimensional Geometry Hongyu He Department of Mathematics Louisiana State University email: hongyu@math.lsu.edu Abstract Tutte proved that, if two graphs, both with more than two

More information

1 Maximal Lattice-free Convex Sets

1 Maximal Lattice-free Convex Sets 47-831: Advanced Integer Programming Lecturer: Amitabh Basu Lecture 3 Date: 03/23/2010 In this lecture, we explore the connections between lattices of R n and convex sets in R n. The structures will prove

More information

Recall the convention that, for us, all vectors are column vectors.

Recall the convention that, for us, all vectors are column vectors. Some linear algebra Recall the convention that, for us, all vectors are column vectors. 1. Symmetric matrices Let A be a real matrix. Recall that a complex number λ is an eigenvalue of A if there exists

More information

Copositive Plus Matrices

Copositive Plus Matrices Copositive Plus Matrices Willemieke van Vliet Master Thesis in Applied Mathematics October 2011 Copositive Plus Matrices Summary In this report we discuss the set of copositive plus matrices and their

More information

Definition 2.3. We define addition and multiplication of matrices as follows.

Definition 2.3. We define addition and multiplication of matrices as follows. 14 Chapter 2 Matrices In this chapter, we review matrix algebra from Linear Algebra I, consider row and column operations on matrices, and define the rank of a matrix. Along the way prove that the row

More information

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur

Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Linear Programming and its Extensions Prof. Prabha Shrama Department of Mathematics and Statistics Indian Institute of Technology, Kanpur Lecture No. # 03 Moving from one basic feasible solution to another,

More information

Operations Research Lecture 2: Linear Programming Simplex Method

Operations Research Lecture 2: Linear Programming Simplex Method Operations Research Lecture 2: Linear Programming Simplex Method Notes taken by Kaiquan Xu@Business School, Nanjing University Mar 10th 2016 1 Geometry of LP 1.1 Graphical Representation and Solution Example

More information

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices

RESEARCH ARTICLE. An extension of the polytope of doubly stochastic matrices Linear and Multilinear Algebra Vol. 00, No. 00, Month 200x, 1 15 RESEARCH ARTICLE An extension of the polytope of doubly stochastic matrices Richard A. Brualdi a and Geir Dahl b a Department of Mathematics,

More information

Jurgen Garlo. the inequality sign in all components having odd index sum. For these intervals in

Jurgen Garlo. the inequality sign in all components having odd index sum. For these intervals in Intervals of Almost Totally Positive Matrices Jurgen Garlo University of Applied Sciences / FH Konstanz, Fachbereich Informatik, Postfach 100543, D-78405 Konstanz, Germany Abstract We consider the class

More information

Chapter 2: Matrix Algebra

Chapter 2: Matrix Algebra Chapter 2: Matrix Algebra (Last Updated: October 12, 2016) These notes are derived primarily from Linear Algebra and its applications by David Lay (4ed). Write A = 1. Matrix operations [a 1 a n. Then entry

More information

Optimization methods NOPT048

Optimization methods NOPT048 Optimization methods NOPT048 Jirka Fink https://ktiml.mff.cuni.cz/ fink/ Department of Theoretical Computer Science and Mathematical Logic Faculty of Mathematics and Physics Charles University in Prague

More information

Part 1. The Review of Linear Programming

Part 1. The Review of Linear Programming In the name of God Part 1. The Review of Linear Programming 1.5. Spring 2010 Instructor: Dr. Masoud Yaghini Outline Introduction Formulation of the Dual Problem Primal-Dual Relationship Economic Interpretation

More information

Arithmetic Progressions with Constant Weight

Arithmetic Progressions with Constant Weight Arithmetic Progressions with Constant Weight Raphael Yuster Department of Mathematics University of Haifa-ORANIM Tivon 36006, Israel e-mail: raphy@oranim.macam98.ac.il Abstract Let k n be two positive

More information

CS 6820 Fall 2014 Lectures, October 3-20, 2014

CS 6820 Fall 2014 Lectures, October 3-20, 2014 Analysis of Algorithms Linear Programming Notes CS 6820 Fall 2014 Lectures, October 3-20, 2014 1 Linear programming The linear programming (LP) problem is the following optimization problem. We are given

More information

The dual simplex method with bounds

The dual simplex method with bounds The dual simplex method with bounds Linear programming basis. Let a linear programming problem be given by min s.t. c T x Ax = b x R n, (P) where we assume A R m n to be full row rank (we will see in the

More information

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2)

Note 3: LP Duality. If the primal problem (P) in the canonical form is min Z = n (1) then the dual problem (D) in the canonical form is max W = m (2) Note 3: LP Duality If the primal problem (P) in the canonical form is min Z = n j=1 c j x j s.t. nj=1 a ij x j b i i = 1, 2,..., m (1) x j 0 j = 1, 2,..., n, then the dual problem (D) in the canonical

More information

arxiv: v3 [math.ra] 10 Jun 2016

arxiv: v3 [math.ra] 10 Jun 2016 To appear in Linear and Multilinear Algebra Vol. 00, No. 00, Month 0XX, 1 10 The critical exponent for generalized doubly nonnegative matrices arxiv:1407.7059v3 [math.ra] 10 Jun 016 Xuchen Han a, Charles

More information

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p

Math Bootcamp An p-dimensional vector is p numbers put together. Written as. x 1 x =. x p Math Bootcamp 2012 1 Review of matrix algebra 1.1 Vectors and rules of operations An p-dimensional vector is p numbers put together. Written as x 1 x =. x p. When p = 1, this represents a point in the

More information

A FUNDAMENTAL PROBLEM IN LINEAR INEQUALITIES WITH APPLICATIONS TO THE TRAVELLING SALESMAN PROBLEM *

A FUNDAMENTAL PROBLEM IN LINEAR INEQUALITIES WITH APPLICATIONS TO THE TRAVELLING SALESMAN PROBLEM * Mathematical Programming 2 (1972) 296-308. orth-holland Publishing Company A FUDAMETAL PROBLEM I LIEAR IEQUALITIES WITH APPLICATIOS TO THE TRAVELLIG SALESMA PROBLEM * Katta G. MURTY University of Michigan,

More information

Properties of Matrices and Operations on Matrices

Properties of Matrices and Operations on Matrices Properties of Matrices and Operations on Matrices A common data structure for statistical analysis is a rectangular array or matris. Rows represent individual observational units, or just observations,

More information

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n

4. Duality Duality 4.1 Duality of LPs and the duality theorem. min c T x x R n, c R n. s.t. ai Tx = b i i M a i R n 2 4. Duality of LPs and the duality theorem... 22 4.2 Complementary slackness... 23 4.3 The shortest path problem and its dual... 24 4.4 Farkas' Lemma... 25 4.5 Dual information in the tableau... 26 4.6

More information

(17) (18)

(17) (18) Module 4 : Solving Linear Algebraic Equations Section 3 : Direct Solution Techniques 3 Direct Solution Techniques Methods for solving linear algebraic equations can be categorized as direct and iterative

More information

Chapter 7 Matchings and r-factors

Chapter 7 Matchings and r-factors Chapter 7 Matchings and r-factors Section 7.0 Introduction Suppose you have your own company and you have several job openings to fill. Further, suppose you have several candidates to fill these jobs and

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

Chapter 5 Linear Programming (LP)

Chapter 5 Linear Programming (LP) Chapter 5 Linear Programming (LP) General constrained optimization problem: minimize f(x) subject to x R n is called the constraint set or feasible set. any point x is called a feasible point We consider

More information

Bounds on parameters of minimally non-linear patterns

Bounds on parameters of minimally non-linear patterns Bounds on parameters of minimally non-linear patterns P.A. CrowdMath Department of Mathematics Massachusetts Institute of Technology Massachusetts, U.S.A. crowdmath@artofproblemsolving.com Submitted: Jan

More information

Fundamental Theorems of Optimization

Fundamental Theorems of Optimization Fundamental Theorems of Optimization 1 Fundamental Theorems of Math Prog. Maximizing a concave function over a convex set. Maximizing a convex function over a closed bounded convex set. 2 Maximizing Concave

More information

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5

Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Semidefinite and Second Order Cone Programming Seminar Fall 2001 Lecture 5 Instructor: Farid Alizadeh Scribe: Anton Riabov 10/08/2001 1 Overview We continue studying the maximum eigenvalue SDP, and generalize

More information

Lecture 6. Numerical methods. Approximation of functions

Lecture 6. Numerical methods. Approximation of functions Lecture 6 Numerical methods Approximation of functions Lecture 6 OUTLINE 1. Approximation and interpolation 2. Least-square method basis functions design matrix residual weighted least squares normal equation

More information

Discrete Optimization 23

Discrete Optimization 23 Discrete Optimization 23 2 Total Unimodularity (TU) and Its Applications In this section we will discuss the total unimodularity theory and its applications to flows in networks. 2.1 Total Unimodularity:

More information

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix.

Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. arxiv:math/0506382v1 [math.na] 19 Jun 2005 Necessary And Sufficient Conditions For Existence of the LU Factorization of an Arbitrary Matrix. Adviser: Charles R. Johnson Department of Mathematics College

More information