AN EFFICIENT APPROACH TO UPDATING SIMPLEX MULTIPLIERS IN THE SIMPLEX ALGORITHM

AN EFFICIENT APPROACH TO UPDATING SIMPLEX MULTIPLIERS IN THE SIMPLEX ALGORITHM

JIAN-FENG HU AND PING-QI PAN

Abstract. The simplex algorithm computes the simplex multipliers by solving a system (or two triangular systems) at each iteration. This note offers an efficient approach to updating the simplex multipliers in conjunction with the Bartels-Golub and Forrest-Tomlin updates for LU factors of the basis. It solves only one triangular system. The approach was implemented within and tested against MINOS 5.51 on 129 problems from Netlib, Kennington and BPMPD. Computational results show that the new approach improves simplex implementations.

Key words. linear programming, simplex algorithm, simplex multipliers, LU factorization, recurrence approach, Bartels-Golub update, Forrest-Tomlin update

AMS subject classifications. 65K05, 90C05

1. Introduction. Consider the following linear programming (LP) problem:

    minimize   c^T x
    subject to Ax = b, l <= x <= u,    (1.1)

where A \in R^{m \times n} (m < n), rank(A) = m, c, l, u \in R^n, and b \in R^m. Let B be the current basis and let N be the associated nonbasis. Without confusion, we will henceforth denote both the basic (nonbasic) index set and the basis (nonbasis) by the same notation. For instance, c_B \in R^m is the vector consisting of the basic components of c, c_N \in R^{n-m} is the vector of its nonbasic components, and so on.

The solution of two m x m systems at each iteration dominates the computation of the simplex algorithm (see, e.g., [5, 14]). One of them defines the simplex multipliers \pi, i.e.,

    B^T \pi = c_B.    (1.2)

Once \pi is available, the reduced costs are obtained by so-called pricing, i.e.,

    z_N = c_N - N^T \pi.    (1.3)

Let the LU factorization of B be known, say

    B = LU,    (1.4)

where L is unit lower-triangular and U is upper-triangular with nonzero diagonals. Then the simplex multipliers defined by (1.2) can be obtained by solving the following two triangular systems successively:

    U^T v = c_B,    (1.5)
    L^T \pi = v.    (1.6)

Department of Mathematics, Southeast University, Nanjing, People's Republic of China (lakerhjf@163.com). Department of Mathematics, Southeast University, Nanjing, People's Republic of China (panpq@seu.edu.cn). Project supported by the National Natural Science Foundation of China.
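For concreteness, the following is a minimal dense sketch (ours, not from the paper; the function name is hypothetical) of computing \pi via the two triangular solves (1.5)-(1.6), with numpy/scipy standing in for the sparse routines a production code would use:

    import numpy as np
    from scipy.linalg import solve_triangular

    def simplex_multipliers(L, U, c_B):
        # (1.5): U^T v = c_B; U^T is lower-triangular
        v = solve_triangular(U.T, c_B, lower=True)
        # (1.6): L^T pi = v; L^T is upper-triangular with unit diagonal
        return solve_triangular(L.T, v, lower=False, unit_diagonal=True)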

1.1. Standard approaches. Practical approaches have been proposed in the past for updating the reduced costs or the simplex multipliers. Instead of (1.2), as will be seen, these approaches share the feature of handling a system of the form

    B^T \pi_p = e_p,    (1.7)

where e_p denotes the p-th coordinate vector. System (1.7) is practically more advantageous than (1.2), since \pi_p is usually much sparser than \pi.

To simplify our exposition, we assume that the first m indices are basic at the current iteration, i.e., B = {1, ..., m}. Assume now that an entering index q \in N = {m+1, ..., n} and a leaving index p \in B have been determined. Thus, the new basic index set results from the old with p replaced by q. We use a prime to denote the new basis (nonbasis) and its associated vectors (\pi', \pi'_p, z'), and so on; e.g., we have

    B' = B + (a_q - a_p) e_p^T,    (1.8)

where a_j denotes the column of A indexed by j.

1.2. Zoutendijk's scheme. By (1.8) and the Sherman-Morrison formula [8], it holds that

    e_p^T B'^{-1} a_j = (e_p^T B^{-1} a_j) / (e_p^T B^{-1} a_q), j \in {1, ..., n}.    (1.9)

Therefore, it can further be shown that

    z'_j = z_j - z_q (a_j^T \pi'_p), j \in {1, ..., n},    (1.10)

where z_q is the old reduced cost associated with the entering index q, and \pi'_p is the solution to

    B'^T \pi'_p = e_p.    (1.11)

Ignoring the basic part of the computation, (1.10) can be written

    z'_N = z_N - z_q N'^T \pi'_p,    (1.12)

which is Zoutendijk's formula for updating the nonbasic reduced costs [15]. Note that z_p = 0 is used for the leaving index p \in N'.

1.3. Bixby's scheme. Let y be the solution to

    B y = a_q.    (1.13)

Using \pi_p, the solution to B^T \pi_p = e_p, instead of \pi'_p, a slightly different formula results from (1.10) and (1.9), that is,

    z'_j = z_j - (z_q / y_p) a_j^T \pi_p, j \in {1, ..., n},    (1.14)

or

    z'_N = z_N - (z_q / y_p) N'^T \pi_p.    (1.15)

Bixby used the preceding formula [2]. It should be attractive compared with (1.12), as (1.13) is solved for y at each iteration independently of pricing, and the computation of \pi_p is cheaper than that of \pi'_p. Moreover, Bixby additionally stored the nonbasic columns by row (computer memory being relatively cheap); therefore, with a single test for each zero component of \pi_p, he avoided all the arithmetic for the corresponding row of N in the computation of N^T \pi_p, and consequently achieved substantial speedup, especially in the case m << n.
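The row-wise pricing trick can be sketched as follows (our illustration, with hypothetical names; N_rows holds the nonbasic matrix stored by rows, and pi_p is assumed to be mostly zero):

    import numpy as np

    def bixby_price(z_N, z_q, y_p, pi_p, N_rows):
        # accumulate N^T pi_p row by row; a single zero test per component
        # of pi_p skips all arithmetic for the corresponding row of N
        NT_pi = np.zeros_like(z_N)
        for i, t in enumerate(pi_p):
            if t != 0.0:
                NT_pi += t * N_rows[i]    # row i of N
        return z_N - (z_q / y_p) * NT_pi  # the update (1.15)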

1.4. Tomlin's scheme. A disadvantage of the preceding schemes is that updating the (n-m)-vector of nonbasic reduced costs makes partial pricing (computing only a part of the costs) virtually impossible; such a would-be-very-long vector has to be kept in core or buffered in and out. This disadvantage can be removed by updating the simplex multipliers instead. To this end, substituting

    z_j = c_j - a_j^T \pi, j \in {1, ..., n},    (1.16)

into (1.10) gives

    z'_j = c_j - a_j^T \pi - z_q a_j^T \pi'_p = c_j - a_j^T (\pi + z_q \pi'_p), j \in {1, ..., n}.    (1.17)

Therefore, it holds that

    a_j^T (\pi + z_q \pi'_p) = a_j^T \pi', j \in {1, ..., n},    (1.18)

which along with rank(A) = m implies that

    \pi' = \pi + z_q \pi'_p.    (1.19)

Thus, we have proved the following result, which forms the basis of Tomlin's scheme [10].

Theorem 1.1. Let \pi be the solution to B^T \pi = c_B. If \pi'_p solves B'^T \pi'_p = e_p (1.11), then \pi' defined by (1.19) solves B'^T \pi' = c_{B'}.

The preceding says that (1.19) together with (1.11) can serve as a formula for updating \pi. According to Tomlin, doing so is favorable in the context of using Harris's Devex column selection method [9] and/or maintaining LU factors of the basis with the Forrest-Tomlin update.

It is noted that all the preceding schemes still solve two triangular systems at each iteration. In the next section, we offer a new approach for updating the simplex multipliers, which involves only one triangular system. In section 3, we report computational experiments with a sparse code, implemented within and tested against MINOS 5.51 (the latest version of MINOS) [12], showing the new approach's considerable efficiency.
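As a quick sanity check of Theorem 1.1 (ours, dense and purely illustrative), one can verify numerically that \pi' = \pi + z_q \pi'_p solves the new system:

    import numpy as np

    rng = np.random.default_rng(0)
    m, p = 5, 2                                    # p: leaving position (0-based)
    B = rng.standard_normal((m, m)) + 5 * np.eye(m)
    a_q = rng.standard_normal(m)
    c_B = rng.standard_normal(m)
    c_q = rng.standard_normal()

    B_new = B.copy(); B_new[:, p] = a_q            # new basis, cf. (1.8)
    c_B_new = c_B.copy(); c_B_new[p] = c_q

    pi = np.linalg.solve(B.T, c_B)
    pi_p = np.linalg.solve(B_new.T, np.eye(m)[p])  # (1.11)
    z_q = c_q - a_q @ pi                           # old reduced cost of q
    pi_new = pi + z_q * pi_p                       # (1.19)
    assert np.allclose(B_new.T @ pi_new, c_B_new)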

2. The new approach. In this section, we derive the recurrence approach first, and then discuss the computational complexity of the simplex algorithm using it.

2.1. Derivation. From section 1.4, it is seen that the new simplex multipliers \pi' can be computed by (1.19) along with (1.11). A key point of the proposed approach is that (1.11) can be simplified in conjunction with maintaining LU factors of the basis. We derive the approach in the context of using the Bartels-Golub update [1]. From (1.4) and (1.8), it follows that

    B' = L R,    (2.1)

where

    R = U + (h_q - U e_p) e_p^T,    (2.2)
    L h_q = a_q.    (2.3)

Note that R is just the upper triangular matrix U, except for column p replaced by h_q, and that (2.3) is solved for h_q at each iteration independently of pricing. Perform a cyclic permutation that moves column p of R to the end position m, and apply the same permutation to rows p through m. If Q denotes the permutation, then the resulting matrix can be written Q^T R Q; it is upper triangular, except for the m - p possible nonzeros in row m (in columns p through m - 1). These entries can be eliminated by a series of Gaussian transformations with some row exchanges [8]. That is to say, permutations P_i and unit lower triangular matrices L_i, i = 1, ..., s (1 <= s <= m - p), can be determined such that

    L_s^{-1} P_s ... L_1^{-1} P_1 Q^T R Q = U'    (2.4)

is upper triangular with nonzero diagonals. Thus, the LU factorization of B'Q follows from (2.1) and (2.4), i.e.,

    B'Q = L' U',    L' = L Q P_1^T L_1 ... P_s^T L_s.    (2.5)

Fortunately, the LU factorization (2.5) is also useful for updating the simplex multipliers, as claimed in the following theorem.

Theorem 2.1. Let \pi be the solution to B^T \pi = c_B, and let an LU factorization of B'Q be given by (2.5). If \pi'_p solves

    L'^T \pi'_p = (1/u'_{mm}) e_m,    (2.6)

where u'_{mm} is the m-th diagonal of U', then \pi' defined by \pi' = \pi + z_q \pi'_p (1.19) is the solution to B'^T \pi' = c_{B'}.

Proof. According to Theorem 1.1, it suffices to show that \pi'_p defined by (2.6) solves (1.11). Premultiplying the two sides of (2.6) by U'^T yields

    U'^T L'^T \pi'_p = (1/u'_{mm}) U'^T e_m,    (2.7)

which along with (2.5) and U'^T e_m = u'_{mm} e_m gives

    Q^T B'^T \pi'_p = e_m.    (2.8)

By the definition of the cyclic permutation Q, on the other hand, we have

    e_p^T Q = e_m^T and hence e_p = Q e_m,    (2.9)

combining which with (2.8) leads to (1.11).

According to Theorem 2.1, (1.19) together with (2.6) can serve as a formula for updating \pi. This approach involves only one triangular system. Although (2.5) was derived from the Bartels-Golub updating process, any LU factorization of B'Q (with or without row exchanges) applies. With respect to this, further remarks are in order.

2.2. Bartels-Golub update. LA05 [13] and LUSOL [11] are the best known Bartels-Golub implementations (LUSOL is employed within MINOS [12]). In both cases, the p-th column and row of U are permuted to position l rather than to the end (position m), where l marks the position of the last nonzero in h_q. The aim is to reduce the number of transformations L_i in (2.4). In practice it is often true that l = m, and there is little loss of efficiency in setting l = m always. This is what we did in our use of LUSOL.
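Theorem 2.1 is easy to check numerically (a dense sketch of ours; since any LU factorization of B'Q applies, scipy's generic row-pivoted LU stands in for the product form (2.5)):

    import numpy as np
    from scipy.linalg import lu, solve_triangular

    rng = np.random.default_rng(1)
    m, p = 6, 2                                # p: leaving position (0-based)
    B = rng.standard_normal((m, m)) + 4 * np.eye(m)
    a_q = rng.standard_normal(m)
    B_new = B.copy(); B_new[:, p] = a_q        # new basis (1.8)

    # cyclic permutation Q moving column p to the end position
    perm = list(range(p)) + list(range(p + 1, m)) + [p]
    Q = np.eye(m)[:, perm]

    P, L, U = lu(B_new @ Q)                    # B'Q = (P L) U, so L' = P L
    e_m = np.zeros(m); e_m[-1] = 1.0
    # (2.6): L'^T pi_p = (1/u'_mm) e_m, i.e. L^T (P^T pi_p) = (1/u'_mm) e_m
    w = solve_triangular(L.T, e_m / U[-1, -1], lower=False, unit_diagonal=True)
    pi_p = P @ w
    assert np.allclose(B_new.T @ pi_p, np.eye(m)[p])   # pi_p solves (1.11)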

2.3. Forrest-Tomlin update. The proposed approach can also be employed in conjunction with the Forrest-Tomlin update (which is used in CPLEX [3]). In fact, this update results in an LU factorization of B'Q in which all the permutations in (2.4) are P_i = I [6]. (In addition, the matrices L_i are combined into a single row transformation.) The update can be implemented more efficiently, at the expense of guaranteed stability.

2.4. Computational complexity. The proposed approach for updating \pi saves a unit lower triangular system solve. But how much does it improve the efficiency of the simplex algorithm as a whole? With infinite precision, the simplex variant using the new approach requires the same iterations as its simplex counterpart, and hence what we gain depends on how much it reduces the computational complexity of a single iteration. For dense computations, a simple counting indicates that a single iteration of the standard simplex algorithm requires

    m^2 + mn + (m-p)^2 + 2(m-p) + 4m - 1 flops,

while the variant requires

    (1/2)m^2 + mn + (m-p)^2 + 2(m-p) + (7/2)m - 1 flops.

Ignoring the terms of lower order, we obtain the flops ratio of the standard algorithm to the variant:

    ratio(m, n, p) = (2m^2 + 2mn + 2(m-p)^2) / (m^2 + 2mn + 2(m-p)^2) > 1.    (2.10)

It is clear that ratio(m, n, p) increases with p (1 <= p <= m) and that

    (4m + 2n) / (3m + 2n) <= ratio(m, n, p) <= (2m + 2n) / (m + 2n).    (2.11)

In particular, we list in Table 2.1 the leading-order flop counts and ratios for the cases n = 2m, 3m and 4m.

    Table 2.1
    The flops ratios for a single iteration

    Case          n=2m                 n=3m                 n=4m
                  p=1       p=m        p=1       p=m        p=1        p=m
    Classical     4m^2      3m^2       5m^2      4m^2       6m^2       5m^2
    Recurrence    (7/2)m^2  (5/2)m^2   (9/2)m^2  (7/2)m^2   (11/2)m^2  (9/2)m^2
    Ratio(m,n,p)  8/7       6/5        10/9      8/7        12/11      10/9

In summary, the computational complexity of the simplex algorithm is reduced, and hence the proposed approach is favorable for dense computations.
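The leading-order ratio (2.10) is easy to evaluate; the following throwaway snippet (ours, not part of the paper) reproduces the Table 2.1 ratios for large m:

    def ratio(m, n, p):
        # leading-order flops ratio (2.10) of the standard algorithm to the variant
        return (2*m**2 + 2*m*n + 2*(m - p)**2) / (m**2 + 2*m*n + 2*(m - p)**2)

    m = 10**6
    for k in (2, 3, 4):                        # n = 2m, 3m, 4m
        print(k, ratio(m, k*m, 1), ratio(m, k*m, m))
    # approaches 8/7 and 6/5; 10/9 and 8/7; 12/11 and 10/9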

3. Computational experiments. To see how the proposed approach performs in large-scale sparse computations, it was implemented within and compared with MINOS 5.51 [12]. The resulting code, named NEW, was a slight modification of MINOS 5.51, using the proposed approach in both Phases 1 and 2. In Phase 1, it was utilized to update \pi until the piecewise-linear objective was renewed. The implementation of the proposed approach is a simple matter.

Compiled using Visual Fortran 5.0, code NEW and MINOS 5.51 were both run under a Windows 2000 system on a Pentium IV 1.7GHz personal computer with 256MB of memory. The machine precision was about 16 decimal digits. The CPU time was measured in seconds with the utility routine CPUTIME.

The tested problems were classified into three sets. Set 1 included 96 problems from Netlib. In fact, these are all of the Netlib problems available, except for QAP15, which is too time-consuming to solve for both MINOS 5.51 and NEW. Set 2 included all 16 problems from Kennington, and set 3 included all 17 problems from BPMPD that were of size no less than 500KB in compressed form.

Numerical results obtained with sets 1-3 are displayed in Tables 3.1-3.3, respectively, in order of increasing sum m + n before slack variables are added. In these tables, statistics obtained with MINOS 5.51 and NEW are listed under the columns labeled MINOS 5.51 and NEW, and the total iterations and run time required for solving each problem are listed in the columns labeled Itns and Time. Final objective values reached are not listed. Tables 3.4-3.6 compare the performance of the two codes by giving iteration and time ratios of MINOS 5.51 to NEW for each problem. These results are summarized in Table 3.7, where the 96 Netlib problems are categorized into three groups: group Small includes the first 38 problems (AFIRO through DEGEN2), Medium includes the next 41 problems (FIT1D through CYCLE), and Large the last 17 problems (SHIP08L through STOCFOR3).

Serving as an overall comparison between the two codes, the bottom six lines of Table 3.7 indicate that the iteration ratios are higher than one. NEW's gain in iterations should be due to less accumulated roundoff error. Moreover, the time ratios are even higher, and they appear to increase with the sizes of the tested problems; the bottom line labeled Total gives the overall ratio. Thus, code NEW outperformed MINOS 5.51 in terms of both iterations and run time. This credit was entirely due to the use of the new recurrence approach. Finally, we conclude that the proposed approach for updating the simplex multipliers should be utilized in simplex implementations.

Acknowledgment. The authors are grateful to Professor Michael A. Saunders for providing them with the MINOS 5.51 package. They are also grateful to an anonymous referee for his very valuable comments and assistance, which improved this article considerably. In particular, we thank the referee for his idea of using the proposed approach in both Phases 1 and 2.

REFERENCES

[1] R. H. Bartels and G. H. Golub, The simplex method of linear programming using LU decomposition, Communications of the ACM, 12 (1969).
[2] R. E. Bixby, Progress in linear programming, ORSA Journal on Computing, 6 (1994).
[3] CPLEX, CPLEX Callable Library, CPLEX Optimization, Inc.
[4] G. B. Dantzig, Maximization of a linear function of variables subject to linear inequalities, in: Activity Analysis of Production and Allocation (Tj. C. Koopmans, ed.), Wiley, New York, 1951.
[5] G. B. Dantzig, Linear Programming and Extensions, Princeton University Press, Princeton, NJ.
[6] J. J. H. Forrest and J. A. Tomlin, Updating triangular factors of the basis to maintain sparsity in the product form simplex method, Mathematical Programming, 2 (1972).
[7] D. M. Gay, Electronic mail distribution of linear programming test problems, Mathematical Programming Society COAL Newsletter, 13 (1985).
[8] G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore.
[9] P. M. J. Harris, Pivot selection methods of the Devex LP code, Mathematical Programming, 5 (1973).
[10] J. A. Tomlin, On pricing and backward transformation in linear programming, Mathematical Programming, 6 (1974).
[11] P. E. Gill, W. Murray, M. A. Saunders and M. H. Wright, Maintaining LU factors of a general sparse matrix, Linear Algebra and Its Applications, 88/89 (1987).
[12] B. A. Murtagh and M. A. Saunders, MINOS 5.5 User's Guide, Technical Report SOL 83-20R, Dept. of Operations Research, Stanford University, Stanford.
[13] J. K. Reid, A sparsity-exploiting variant of the Bartels-Golub decomposition for linear programming bases, Mathematical Programming, 24 (1982).
[14] R. J. Vanderbei, Linear Programming: Foundations and Extensions, Kluwer Academic Publishers, Boston.
[15] G. Zoutendijk, Methods of Feasible Directions, Elsevier, Amsterdam, 1960.

Table 3.1
Statistics for set 1 of 96 Netlib problems
[Columns: Problem, m, n, m + n, and Itns/Time for MINOS 5.51 and NEW, for the problems AFIRO through STOCFOR3; the numeric entries were not preserved in this transcription.]

Table 3.2
Statistics for set 2 of 16 Kennington problems
[Columns: Problem, m, n, m + n, and Itns/Time for MINOS 5.51 and NEW; the numeric entries were not preserved in this transcription.]

Table 3.3
Statistics for set 3 of 17 BPMPD problems
[Columns: Problem, m, n, m + n, and Itns/Time for MINOS 5.51 and NEW; the numeric entries were not preserved in this transcription.]

Table 3.4
Ratios of MINOS 5.51 to NEW for set 1
[Per-problem Itns and Time ratios; the numeric entries were not preserved in this transcription.]


Table 3.5
Ratios of MINOS 5.51 to NEW for set 2
[Per-problem Itns and Time ratios; the numeric entries were not preserved in this transcription.]

Table 3.6
Ratios of MINOS 5.51 to NEW for set 3
[Per-problem Itns and Time ratios; the numeric entries were not preserved in this transcription.]

Table 3.7
Summary for all test problems
[Itns and Time totals for MINOS 5.51 and NEW, and their ratios, by group: Small(38), Medium(41), Large(17), Kennington(16), BPMPD(17), and Total; the numeric entries were not preserved in this transcription.]
