The practical revised simplex method (Part 2)


The practical revised simplex method (Part 2)

Julian Hall
School of Mathematics, University of Edinburgh
January 25th 2007

Overview (Part 2)

Practical implementation of the revised simplex method:
- Representation of B^{-1}
- Strategies for CHUZC
- Partial pricing
- Hyper-sparsity
- Parallel simplex
- Research frontiers

Representing B^{-1}

The key to the efficiency of the revised simplex method is the representation of B^{-1}.

Periodically: the INVERT operation determines a representation of B^{-1} using Gaussian elimination.

Each iteration: the UPDATE operation updates the representation of B^{-1} according to

B^{-1} := (I - (â_q - e_p) e_p^T / â_pq) B^{-1}

INVERT

The representation of B^{-1} is based on the LU decomposition from Gaussian elimination.
- Computing the classical factors B = LU leads to fill-in.
- Perform row and column interchanges to maintain sparsity: this yields PBQ = LU.
- The Markowitz pivot selection criterion (1957) preserves sparsity well.
- The Tomlin pivot selection criterion (1972) is more efficient for most LP basis matrices.
- The permutations P and Q can be absorbed into the ordering of the basic variables and constraints.
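The Markowitz criterion can be sketched as follows. This is an illustration of the idea, not the INVERT code from the talk: among entries passing a relative-size stability test (the threshold `tol` is an assumed parameter), choose the pivot minimising (r_i - 1)(c_j - 1), where r_i and c_j count the nonzeros in the pivot's row and column.

```python
def markowitz_pivot(A, tol=0.1):
    """A: dict {(i, j): value} holding the nonzeros of the active submatrix.
    Returns the (i, j) of the chosen pivot, or None if none is acceptable."""
    # Nonzero counts per row and column of the active submatrix
    row_count, col_count = {}, {}
    for (i, j) in A:
        row_count[i] = row_count.get(i, 0) + 1
        col_count[j] = col_count.get(j, 0) + 1
    # Largest magnitude in each column, for the threshold stability test
    col_max = {}
    for (i, j), v in A.items():
        col_max[j] = max(col_max.get(j, 0.0), abs(v))
    best, best_merit = None, None
    for (i, j), v in A.items():
        if abs(v) < tol * col_max[j]:
            continue  # too small relative to its column: numerically unsafe
        merit = (row_count[i] - 1) * (col_count[j] - 1)
        if best_merit is None or merit < best_merit:
            best, best_merit = (i, j), merit
    return best
```

A zero merit means the pivot causes no fill-in at all, which is why the criterion tends to preserve sparsity.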

Tomlin INVERT

1. Find identity columns in B.
2a. Find columns with only one nonzero.
2b. Find rows with only one nonzero.
3. Factorise the bump using Gaussian elimination.

Efficiency depends on the bump being small. For many practical LP problems there is little or no bump.
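The triangularisation phase (steps 2a and 2b; step 1 is the special case of 2a where the nonzero is a unit) can be sketched as follows. This is an illustration under an assumed data structure, not the talk's code: each column of B is given as a set of row indices, singletons are peeled off repeatedly, and whatever survives is the bump.

```python
def bump_size(cols):
    """cols: list of sets of row indices (the nonzero pattern of each
    column of B). Returns the dimension of the remaining bump."""
    active_cols = set(range(len(cols)))
    active_rows = set().union(*cols) if cols else set()
    changed = True
    while changed:
        changed = False
        # Step 2a: columns with a single nonzero in the active submatrix
        for j in list(active_cols):
            nz = cols[j] & active_rows
            if len(nz) == 1:
                active_rows -= nz
                active_cols.discard(j)
                changed = True
        # Step 2b: rows with a single nonzero in the active submatrix
        for i in list(active_rows):
            owners = [j for j in active_cols if i in cols[j]]
            if len(owners) == 1:
                active_cols.discard(owners[0])
                active_rows.discard(i)
                changed = True
    return len(active_cols)
```

A fully triangular pattern peels away completely (no bump), while a dense 2x2 block resists peeling and remains as a bump of dimension 2.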

Using B = LU from Gaussian elimination

Gaussian elimination yields B = LU, where L = [l_1 ... l_n] is unit lower triangular and U is upper triangular with columns u_1, ..., u_n and diagonal entries u_11, ..., u_nn.

Bx = r is solved by solving Ly = r and then Ux = y. In practice, r is transformed into x as follows:

r := r - r_k l_k,                     k = 1, ..., n-1;
r_k := r_k / u_kk,  r := r - r_k u_k,  k = n, ..., 2;
r_1 := r_1 / u_11.
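The in-place recurrences above can be sketched in Python for dense factors (an illustration, not the talk's implementation):

```python
def lu_solve_inplace(L, U, r):
    """Solve Bx = r given dense factors B = L U, with L unit lower
    triangular and U upper triangular, overwriting r with x using the
    column-oriented recurrences from the slide."""
    n = len(r)
    # Forward: r := r - r_k l_k for k = 1, ..., n-1
    for k in range(n - 1):
        for i in range(k + 1, n):
            r[i] -= r[k] * L[i][k]
    # Backward: r_k := r_k / u_kk, then r := r - r_k u_k, for k = n, ..., 2
    for k in range(n - 1, 0, -1):
        r[k] /= U[k][k]
        for i in range(k):
            r[i] -= r[k] * U[i][k]
    r[0] /= U[0][0]
    return r
```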

LU factors as a product of elementary matrices

B = LU may be written as

B = (∏_{k=1}^{n-1} L_k)(∏_{k=n}^{1} U_k),

where L_k is the identity except for the below-diagonal entries of l_k in column k, and U_k is the identity except for the entries u_1k, ..., u_kk of u_k in column k.

So

B^{-1} = (∏_{k=1}^{n} U_k^{-1})(∏_{k=n-1}^{1} L_k^{-1}).

Product form and the eta file

In general,

B^{-1} = ∏_{k=K_I}^{1} E_k^{-1},

where E_k^{-1} is the identity matrix except for column p_k, which is μ_k (e_{p_k} - η_k), with η^k_{p_k} = 0: its entries are -μ_k η^k_1, ..., -μ_k η^k_{p_k-1}, μ_k, -μ_k η^k_{p_k+1}, ..., -μ_k η^k_m.

The pivot is μ_k; the index of the pivotal row is p_k. The set {p_k, μ_k, η_k}_{k=1}^{K_I} is known as the eta file.

Eta file from Tomlin INVERT [figure]

Eta file (25fv47): basis matrix of dimension 821; bump of dimension 411 [figure]

Eta file (stair): basis matrix of dimension 356; bump of dimension 324 [figure]

Eta file (sgpf5y6): basis matrix of dimension ; bump of dimension 32 [figure]

Using the eta file

The solution of Bx = r is formed by transforming r into x as follows:

r_{p_k} := μ_k r_{p_k},  r := r - r_{p_k} η_k,  k = 1, ..., K_I.

This requires a sparse eta-vector to be added to r (up to) K_I times.

The solution of B^T x = r is (r^T B^{-1})^T and is formed by transforming r into x as follows:

r_{p_k} := μ_k (r_{p_k} - r^T η_k),  k = K_I, ..., 1.

This requires the inner product between a sparse eta-vector and r (up to) K_I times.
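The two sweeps can be sketched in Python (an illustration, not the lecturer's code; the assumed representation stores each eta as a triple (p_k, μ_k, η_k) with η_k a sparse dict that omits the pivotal position):

```python
def ftran(etas, r):
    """Transform r into B^{-1} r by applying the etas in order k = 1, ..., K."""
    for p, mu, eta in etas:
        if r[p] == 0.0:
            continue  # nothing to do if the pivotal entry is zero
        r[p] *= mu
        for i, v in eta.items():
            r[i] -= r[p] * v
    return r

def btran(etas, r):
    """Transform r into (r^T B^{-1})^T by applying the etas in reverse order."""
    for p, mu, eta in reversed(etas):
        r[p] = mu * (r[p] - sum(r[i] * v for i, v in eta.items()))
    return r
```

With the eta file [(0, 2.0, {1: 3.0}), (1, 0.5, {0: 1.0})], `ftran` solves Bx = r and `btran` solves B^T x = r for the basis matrix those etas encode.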

UPDATE

The update of B^{-1} is defined by

B^{-1} := (I - (â_q - e_p) e_p^T / â_pq) B^{-1}.

The update matrix is the identity except for column p, whose entries are

-â_1q/â_pq, ..., -â_{p-1,q}/â_pq, 1/â_pq, -â_{p+1,q}/â_pq, ..., -â_mq/â_pq,

so

B^{-1} := E_k^{-1} B^{-1}  for  p_k = p,  μ_k = 1/â_pq  and  η_k = â_q - â_pq e_p.
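Forming the new eta from the pivotal column is a one-liner; this sketch (an illustration, with assumed names) follows the slide's definitions μ_k = 1/â_pq and η_k = â_q - â_pq e_p:

```python
def update_eta(ahat_q, p):
    """Form the product-form update triple (p_k, mu_k, eta_k) from the
    dense pivotal column ahat_q = B^{-1} a_q and pivotal row index p."""
    mu = 1.0 / ahat_q[p]
    eta = {i: v for i, v in enumerate(ahat_q) if i != p and v != 0.0}
    return (p, mu, eta)
```

A useful sanity check: applying the new eta to â_q itself must give e_p, since the updated B^{-1} maps a_q to the unit vector of the entering variable's basic position.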

UPDATE (cont.)

In general, after K_U UPDATEs following an INVERT,

B^{-1} = ∏_{k=K_I+K_U}^{1} E_k^{-1}.

Periodically it will be more efficient, or necessary for numerical stability, to re-invert. Note that K_U ≪ K_I = O(m).

FTRAN and BTRAN

Using the representation B^{-1} = ∏_{k=K_I+K_U}^{1} E_k^{-1}:

The pivotal column â_q = B^{-1} a_q is formed by transforming r = a_q into â_q by the FTRAN operation

r_{p_k} := μ_k r_{p_k},  r := r - r_{p_k} η_k,  k = 1, ..., K_I + K_U.

The vector π^T = e_p^T B^{-1} required to find the pivotal row is formed by transforming r = e_p into π by the BTRAN operation

r_{p_k} := μ_k (r_{p_k} - r^T η_k),  k = K_I + K_U, ..., 1.

Strategies for CHUZC

The most negative reduced cost rule is not the best. Example:

minimize f = -2x_1 - x_2
subject to 100x_1 + x_2 ≤ 1,  x_1 ≥ 0 and x_2 ≥ 0.

The slack variable y_1 puts the problem in standard form:

minimize f = -2x_1 - x_2
subject to 100x_1 + x_2 + y_1 = 1,  x_1 ≥ 0, x_2 ≥ 0 and y_1 ≥ 0.

Example (cont.)

Starting from the logical basis y_1 = 1, the reduced costs are -2 for x_1 and -1 for x_2. x_1 is best under the Dantzig rule, but a unit change in x_1 requires a change of 100 in y_1. The problem is solved in two iterations.


Example (cont.)

Starting from the logical basis y_1 = 1, the rates of change of the objective along the edges are -2/√10001 for x_1 and -1/√2 for x_2, so x_2 is best under the steepest edge rule. The problem is solved in one iteration.
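The toy example can be checked in code. This sketch (an illustration with assumed variable names) compares the two rules for the single-constraint problem above, where B = I so the edge for column j is (e_j, -a_j):

```python
import math

c = [-2.0, -1.0]            # costs of x1 and x2
cols = [[100.0], [1.0]]     # constraint coefficients of x1 and x2

# Dantzig rule: most negative reduced cost
dantzig = min(range(2), key=lambda j: c[j])

# Steepest edge rule: most negative rate of change along the edge,
# i.e. c_j / ||(e_j, -B^{-1} a_j)||, with B = I here
def rate(j):
    norm = math.sqrt(1.0 + sum(v * v for v in cols[j]))
    return c[j] / norm

steepest = min(range(2), key=rate)
```

The Dantzig rule picks x_1 (index 0); the steepest edge rule picks x_2 (index 1), matching the slides.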

Edge selection strategies

Most negative reduced cost rule (Dantzig)

Greatest reduction rule (1951)
- Too expensive in practice

Steepest Edge (1963)
- Updates exact step lengths
- Requires an extra BTRAN and PRICE each iteration
- Prohibitively expensive start-up cost unless B = I initially

Devex (1965)
- Updates approximate step lengths
- No additional calculation
- No start-up cost

Partial PRICE

Some LP problems naturally have very many more columns than rows:

Name     Rows (m)   Columns (n)   Nonzeros (τ)   n/m   τ/m²
fit2d
nw

- PRICE completely dominates the solution time.
- The simplex method only needs to choose an attractive column, not necessarily the best.
- Compute only a small subset of the reduced costs.
- The number of iterations may increase, but each iteration is vastly faster.
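A minimal partial pricing loop can be sketched as follows (an illustration with assumed names, not a particular solver's scheme): reduced costs are computed one block at a time, starting where the previous iteration stopped, and the scan stops at the first block that contains an attractive column.

```python
def partial_price(reduced_cost, n, start, block):
    """Scan the n nonbasic columns in blocks of the given size, beginning
    at position `start`. Return (entering column, new start position), or
    (None, start) if no column has a negative reduced cost (optimality)."""
    scanned = 0
    j0 = start
    while scanned < n:
        blk = [(j0 + t) % n for t in range(min(block, n - scanned))]
        d = {j: reduced_cost(j) for j in blk}
        best = min(d, key=d.get)
        if d[best] < 0.0:
            # Attractive, though not necessarily the global best
            return best, (blk[-1] + 1) % n
        scanned += len(blk)
        j0 = (j0 + len(blk)) % n
    return None, start
```

Rotating the start position spreads the pricing effort over all columns across iterations, so no column is starved.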

Hyper-sparsity

Hyper-sparsity exists when
- the pivotal column â_q = B^{-1} a_q is sparse (FTRAN);
- the π vector π^T = e_p^T B^{-1} is sparse (BTRAN);
- the pivotal row â_p^T = π^T N is sparse (PRICE).

Exploit hyper-sparsity both when forming and when using these vectors.

See J. A. J. Hall and K. I. M. McKinnon. Hyper-sparsity in the revised simplex method and how to exploit it. Computational Optimization and Applications 32(3), 2005.

Hyper-sparsity in FTRAN

FTRAN forms â_q = B^{-1} a_q using r = a_q in

      do 10, k = 1, K_I + K_U
         if (r(p_k) .eq. 0) go to 10
         r(p_k) := μ_k r(p_k)
         r := r - r(p_k) η_k
10    continue

When â_q is sparse, most of the work of FTRAN is the test for zero!

Remedies:
- Identify the INVERT etas to be applied by knowing the etas associated with a nonzero in a particular row: Hall and McKinnon (1999).
- Represent the etas as a graph and perform a depth-first search: Gilbert and Peierls (1988).
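The second remedy can be sketched in the spirit of Gilbert and Peierls (an illustration under assumed data structures, not Hall's implementation): a search over "eta k can make row i nonzero" edges predicts which etas can fire, so only those are visited. The prediction is structural and may overshoot, so a numerical zero test still guards each application.

```python
def sparse_ftran(etas, r_nz):
    """etas: list of (p_k, mu_k, {row: value}); r_nz: dict {row: value}
    holding the sparse right-hand side. Returns the result as a sparse dict."""
    # Index the etas by their pivotal row
    by_pivot_row = {}
    for k, (p, _, _) in enumerate(etas):
        by_pivot_row.setdefault(p, []).append(k)
    # Search: an eta can fire if its pivotal row is structurally nonzero
    nz_rows = set(r_nz)
    fired = set()
    stack = [k for i in nz_rows for k in by_pivot_row.get(i, [])]
    while stack:
        k = stack.pop()
        if k in fired:
            continue
        fired.add(k)
        for i in etas[k][2]:
            nz_rows.add(i)
            stack.extend(by_pivot_row.get(i, []))
    # Apply only the predicted etas, in the original order
    r = dict(r_nz)
    for k in sorted(fired):
        p, mu, eta = etas[k]
        rp = r.get(p, 0.0)
        if rp == 0.0:
            continue  # structural prediction overshot; skip numerically
        rp *= mu
        r[p] = rp
        for i, v in eta.items():
            r[i] = r.get(i, 0.0) - rp * v
    return {i: v for i, v in r.items() if v != 0.0}
```

The work is proportional to the etas that actually matter, not to K_I + K_U, which is the point of exploiting hyper-sparsity.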

Hyper-sparsity in BTRAN

BTRAN forms π^T = e_p^T B^{-1} using r = e_p in

      do 10, k = K_I + K_U, 1, -1
         r(p_k) := μ_k (r(p_k) - r^T η_k)
10    continue

When π is sparse, almost all operations in BTRAN are with zeros.

Remedies:
- Store the INVERT etas row-wise.
- Use the special techniques developed for FTRAN.

Hyper-sparsity in PRICE

PRICE forms â_p^T = π^T N. When π is sparse, the components of â_p are inner products between two sparse vectors, so most operations are with zeros.

Remedy:
- Store N row-wise.
- Form â_p^T by combining the rows of N corresponding to the nonzeros in π.
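The remedy is a sparse vector-matrix product; a minimal sketch, assuming N is held row-wise as nested dicts (an illustrative data layout, not a solver's actual storage):

```python
def sparse_price(N_rows, pi):
    """Form ahat_p^T = pi^T N by combining the rows of N corresponding to
    the nonzeros of pi. N_rows: dict {row: {col: value}} (N stored
    row-wise); pi: dict {row: value} (sparse). Returns a sparse dict."""
    ahat_p = {}
    for i, pi_i in pi.items():
        for j, v in N_rows.get(i, {}).items():
            ahat_p[j] = ahat_p.get(j, 0.0) + pi_i * v
    return ahat_p
```

Only the rows of N hit by a nonzero π_i are touched, so the cost is proportional to the nonzeros involved rather than to the dimensions of N.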

Parallelising the simplex method

Parallel architectures:
- Distributed memory
- Shared memory

Progress to date

The standard simplex method parallelises beautifully, but is still slower than the sequential revised simplex method.

The revised simplex method is viewed as sequential: its computational components are performed in sequence. PRICE, CHUZC and CHUZR parallelise; BTRAN, FTRAN and INVERT are sequential.

Hall and McKinnon (1995) ParLP: n-processor asynchronous (Dantzig). Speed-up factor:
Wunderling (1996) SMoPlex: 2-processor synchronous (Steepest Edge). Speed-up factor:
Hall and McKinnon ( ) PARSMI: n-processor asynchronous (Devex). Speed-up factor:

Research frontiers

Hyper-sparsity
- Column and row selection strategies to promote hyper-sparsity in the revised simplex method. Very challenging.
- Specialist variants of Gilbert-Peierls for LP.

Parallel simplex
- SYNPLEX: a stable synchronous version of PARSMI. Implemented, but needs improvements and/or a parallel INVERT.
- Parallel Tomlin INVERT: serial simulation for hyper-sparse LPs gives very encouraging results; parallel implementation due Summer 2007.
- Parallel FTRAN and BTRAN: some scope for standard LPs; very challenging for hyper-sparse LPs!

Conclusions

- The simplex method has been around for 60 years.
- The value of the method means that the computational state of the art is very advanced.
- There is still scope for further improvements.
- Making advances is demanding but attracts attention!

Slides available as


More information

2.1 Gaussian Elimination

2.1 Gaussian Elimination 2. Gaussian Elimination A common problem encountered in numerical models is the one in which there are n equations and n unknowns. The following is a description of the Gaussian elimination method for

More information

Math 471 (Numerical methods) Chapter 3 (second half). System of equations

Math 471 (Numerical methods) Chapter 3 (second half). System of equations Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular

More information

Maths for Signals and Systems Linear Algebra for Engineering Applications

Maths for Signals and Systems Linear Algebra for Engineering Applications Maths for Signals and Systems Linear Algebra for Engineering Applications Lectures 1-2, Tuesday 11 th October 2016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON

More information

Introduction to Mathematical Programming

Introduction to Mathematical Programming Introduction to Mathematical Programming Ming Zhong Lecture 6 September 12, 2018 Ming Zhong (JHU) AMS Fall 2018 1 / 20 Table of Contents 1 Ming Zhong (JHU) AMS Fall 2018 2 / 20 Solving Linear Systems A

More information

Linear Programming The Simplex Algorithm: Part II Chapter 5

Linear Programming The Simplex Algorithm: Part II Chapter 5 1 Linear Programming The Simplex Algorithm: Part II Chapter 5 University of Chicago Booth School of Business Kipp Martin October 17, 2017 Outline List of Files Key Concepts Revised Simplex Revised Simplex

More information

Matrix Computations: Direct Methods II. May 5, 2014 Lecture 11

Matrix Computations: Direct Methods II. May 5, 2014 Lecture 11 Matrix Computations: Direct Methods II May 5, 2014 ecture Summary You have seen an example of how a typical matrix operation (an important one) can be reduced to using lower level BS routines that would

More information

Parallel Scientific Computing

Parallel Scientific Computing IV-1 Parallel Scientific Computing Matrix-vector multiplication. Matrix-matrix multiplication. Direct method for solving a linear equation. Gaussian Elimination. Iterative method for solving a linear equation.

More information

Gaussian Elimination without/with Pivoting and Cholesky Decomposition

Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A

More information

MATH 3511 Lecture 1. Solving Linear Systems 1

MATH 3511 Lecture 1. Solving Linear Systems 1 MATH 3511 Lecture 1 Solving Linear Systems 1 Dmitriy Leykekhman Spring 2012 Goals Review of basic linear algebra Solution of simple linear systems Gaussian elimination D Leykekhman - MATH 3511 Introduction

More information

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1

Scientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1 Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

5 Solving Systems of Linear Equations

5 Solving Systems of Linear Equations 106 Systems of LE 5.1 Systems of Linear Equations 5 Solving Systems of Linear Equations 5.1 Systems of Linear Equations System of linear equations: a 11 x 1 + a 12 x 2 +... + a 1n x n = b 1 a 21 x 1 +

More information

1 Implementation (continued)

1 Implementation (continued) Mathematical Programming Lecture 13 OR 630 Fall 2005 October 6, 2005 Notes by Saifon Chaturantabut 1 Implementation continued We noted last time that B + B + a q Be p e p BI + ā q e p e p. Now, we want

More information

Linear Programming: Simplex

Linear Programming: Simplex Linear Programming: Simplex Stephen J. Wright 1 2 Computer Sciences Department, University of Wisconsin-Madison. IMA, August 2016 Stephen Wright (UW-Madison) Linear Programming: Simplex IMA, August 2016

More information

SOLVING LINEAR SYSTEMS

SOLVING LINEAR SYSTEMS SOLVING LINEAR SYSTEMS We want to solve the linear system a, x + + a,n x n = b a n, x + + a n,n x n = b n This will be done by the method used in beginning algebra, by successively eliminating unknowns

More information

Solving LP and MIP Models with Piecewise Linear Objective Functions

Solving LP and MIP Models with Piecewise Linear Objective Functions Solving LP and MIP Models with Piecewise Linear Obective Functions Zonghao Gu Gurobi Optimization Inc. Columbus, July 23, 2014 Overview } Introduction } Piecewise linear (PWL) function Convex and convex

More information

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns

5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns 5.7 Cramer's Rule 1. Using Determinants to Solve Systems Assumes the system of two equations in two unknowns (1) possesses the solution and provided that.. The numerators and denominators are recognized

More information

An introduction to parallel algorithms

An introduction to parallel algorithms An introduction to parallel algorithms Knut Mørken Department of Informatics Centre of Mathematics for Applications University of Oslo Winter School on Parallel Computing Geilo January 20 25, 2008 1/26

More information

Numerical Methods - Numerical Linear Algebra

Numerical Methods - Numerical Linear Algebra Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear

More information

7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix

7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization

More information

Numerical Linear Algebra

Numerical Linear Algebra Chapter 3 Numerical Linear Algebra We review some techniques used to solve Ax = b where A is an n n matrix, and x and b are n 1 vectors (column vectors). We then review eigenvalues and eigenvectors and

More information

Sparse Rank-Revealing LU Factorization

Sparse Rank-Revealing LU Factorization Sparse Rank-Revealing LU Factorization Householder Symposium XV on Numerical Linear Algebra Peebles, Scotland, June 2002 Michael O Sullivan and Michael Saunders Dept of Engineering Science Dept of Management

More information

Lecture: Algorithms for LP, SOCP and SDP

Lecture: Algorithms for LP, SOCP and SDP 1/53 Lecture: Algorithms for LP, SOCP and SDP Zaiwen Wen Beijing International Center For Mathematical Research Peking University http://bicmr.pku.edu.cn/~wenzw/bigdata2018.html wenzw@pku.edu.cn Acknowledgement:

More information

1.5 Gaussian Elimination With Partial Pivoting.

1.5 Gaussian Elimination With Partial Pivoting. Gaussian Elimination With Partial Pivoting In the previous section we discussed Gaussian elimination In that discussion we used equation to eliminate x from equations through n Then we used equation to

More information

Consider the following example of a linear system:

Consider the following example of a linear system: LINEAR SYSTEMS Consider the following example of a linear system: Its unique solution is x + 2x 2 + 3x 3 = 5 x + x 3 = 3 3x + x 2 + 3x 3 = 3 x =, x 2 = 0, x 3 = 2 In general we want to solve n equations

More information

CO 602/CM 740: Fundamentals of Optimization Problem Set 4

CO 602/CM 740: Fundamentals of Optimization Problem Set 4 CO 602/CM 740: Fundamentals of Optimization Problem Set 4 H. Wolkowicz Fall 2014. Handed out: Wednesday 2014-Oct-15. Due: Wednesday 2014-Oct-22 in class before lecture starts. Contents 1 Unique Optimum

More information

Lecture 11 Linear programming : The Revised Simplex Method

Lecture 11 Linear programming : The Revised Simplex Method Lecture 11 Linear programming : The Revised Simplex Method 11.1 The Revised Simplex Method While solving linear programming problem on a digital computer by regular simplex method, it requires storing

More information

Matrix decompositions

Matrix decompositions Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

Solving Linear Systems Using Gaussian Elimination

Solving Linear Systems Using Gaussian Elimination Solving Linear Systems Using Gaussian Elimination DEFINITION: A linear equation in the variables x 1,..., x n is an equation that can be written in the form a 1 x 1 +...+a n x n = b, where a 1,...,a n

More information

1 GSW Sets of Systems

1 GSW Sets of Systems 1 Often, we have to solve a whole series of sets of simultaneous equations of the form y Ax, all of which have the same matrix A, but each of which has a different known vector y, and a different unknown

More information

MS&E 318 (CME 338) Large-Scale Numerical Optimization

MS&E 318 (CME 338) Large-Scale Numerical Optimization Stanford University, Management Science & Engineering (and ICME) MS&E 318 (CME 338) Large-Scale Numerical Optimization Instructor: Michael Saunders Spring 2015 Notes 4: The Primal Simplex Method 1 Linear

More information

Solving linear systems (6 lectures)

Solving linear systems (6 lectures) Chapter 2 Solving linear systems (6 lectures) 2.1 Solving linear systems: LU factorization (1 lectures) Reference: [Trefethen, Bau III] Lecture 20, 21 How do you solve Ax = b? (2.1.1) In numerical linear

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors 5 Eigenvalues and Eigenvectors 5.2 THE CHARACTERISTIC EQUATION DETERMINANATS n n Let A be an matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling),

More information

Section 5.6. LU and LDU Factorizations

Section 5.6. LU and LDU Factorizations 5.6. LU and LDU Factorizations Section 5.6. LU and LDU Factorizations Note. We largely follow Fraleigh and Beauregard s approach to this topic from Linear Algebra, 3rd Edition, Addison-Wesley (995). See

More information

I-v k e k. (I-e k h kt ) = Stability of Gauss-Huard Elimination for Solving Linear Systems. 1 x 1 x x x x

I-v k e k. (I-e k h kt ) = Stability of Gauss-Huard Elimination for Solving Linear Systems. 1 x 1 x x x x Technical Report CS-93-08 Department of Computer Systems Faculty of Mathematics and Computer Science University of Amsterdam Stability of Gauss-Huard Elimination for Solving Linear Systems T. J. Dekker

More information

A review of sparsity vs stability in LU updates

A review of sparsity vs stability in LU updates A review of sparsity vs stability in LU updates Michael Saunders Dept of Management Science and Engineering (MS&E) Systems Optimization Laboratory (SOL) Institute for Computational Mathematics and Engineering

More information

CS412: Lecture #17. Mridul Aanjaneya. March 19, 2015

CS412: Lecture #17. Mridul Aanjaneya. March 19, 2015 CS: Lecture #7 Mridul Aanjaneya March 9, 5 Solving linear systems of equations Consider a lower triangular matrix L: l l l L = l 3 l 3 l 33 l n l nn A procedure similar to that for upper triangular systems

More information

Hani Mehrpouyan, California State University, Bakersfield. Signals and Systems

Hani Mehrpouyan, California State University, Bakersfield. Signals and Systems Hani Mehrpouyan, Department of Electrical and Computer Engineering, Lecture 26 (LU Factorization) May 30 th, 2013 The material in these lectures is partly taken from the books: Elementary Numerical Analysis,

More information

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II)

Lecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II) Math 59 Lecture 2 (Tue Mar 5) Gaussian elimination and LU factorization (II) 2 Gaussian elimination - LU factorization For a general n n matrix A the Gaussian elimination produces an LU factorization if

More information

30.3. LU Decomposition. Introduction. Prerequisites. Learning Outcomes

30.3. LU Decomposition. Introduction. Prerequisites. Learning Outcomes LU Decomposition 30.3 Introduction In this Section we consider another direct method for obtaining the solution of systems of equations in the form AX B. Prerequisites Before starting this Section you

More information

CS475: Linear Equations Gaussian Elimination LU Decomposition Wim Bohm Colorado State University

CS475: Linear Equations Gaussian Elimination LU Decomposition Wim Bohm Colorado State University CS475: Linear Equations Gaussian Elimination LU Decomposition Wim Bohm Colorado State University Except as otherwise noted, the content of this presentation is licensed under the Creative Commons Attribution

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 12: Gaussian Elimination and LU Factorization Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 10 Gaussian Elimination

More information

Solving Linear Systems of Equations

Solving Linear Systems of Equations 1 Solving Linear Systems of Equations Many practical problems could be reduced to solving a linear system of equations formulated as Ax = b This chapter studies the computational issues about directly

More information

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4

Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math 365 Week #4 Linear Algebra Linear Algebra : Matrix decompositions Monday, February 11th Math Week # 1 Saturday, February 1, 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x

More information

Solving Dense Linear Systems I

Solving Dense Linear Systems I Solving Dense Linear Systems I Solving Ax = b is an important numerical method Triangular system: [ l11 l 21 if l 11, l 22 0, ] [ ] [ ] x1 b1 = l 22 x 2 b 2 x 1 = b 1 /l 11 x 2 = (b 2 l 21 x 1 )/l 22 Chih-Jen

More information

0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture. is any organization, large or small.

0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture. is any organization, large or small. 0.1 O. R. Katta G. Murty, IOE 510 Lecture slides Introductory Lecture Operations Research is the branch of science dealing with techniques for optimizing the performance of systems. System is any organization,

More information

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization

More information

Solving linear equations with Gaussian Elimination (I)

Solving linear equations with Gaussian Elimination (I) Term Projects Solving linear equations with Gaussian Elimination The QR Algorithm for Symmetric Eigenvalue Problem The QR Algorithm for The SVD Quasi-Newton Methods Solving linear equations with Gaussian

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

INVERSE OF A MATRIX [2.2]

INVERSE OF A MATRIX [2.2] INVERSE OF A MATRIX [2.2] The inverse of a matrix: Introduction We have a mapping from R n to R n represented by a matrix A. Can we invert this mapping? i.e. can we find a matrix (call it B for now) such

More information

Linear Solvers. Andrew Hazel

Linear Solvers. Andrew Hazel Linear Solvers Andrew Hazel Introduction Thus far we have talked about the formulation and discretisation of physical problems...... and stopped when we got to a discrete linear system of equations. Introduction

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information

An Enhanced Piecewise Linear Dual Phase-1 Algorithm for the Simplex Method

An Enhanced Piecewise Linear Dual Phase-1 Algorithm for the Simplex Method An Enhanced Piecewise Linear Dual Phase-1 Algorithm for the Simplex Method István Maros Department of Computing, Imperial College, London Email: i.maros@ic.ac.uk Departmental Technical Report 2002/15 ISSN

More information

CSE 160 Lecture 13. Numerical Linear Algebra

CSE 160 Lecture 13. Numerical Linear Algebra CSE 16 Lecture 13 Numerical Linear Algebra Announcements Section will be held on Friday as announced on Moodle Midterm Return 213 Scott B Baden / CSE 16 / Fall 213 2 Today s lecture Gaussian Elimination

More information

4.2 Floating-Point Numbers

4.2 Floating-Point Numbers 101 Approximation 4.2 Floating-Point Numbers 4.2 Floating-Point Numbers The number 3.1416 in scientific notation is 0.31416 10 1 or (as computer output) -0.31416E01..31416 10 1 exponent sign mantissa base

More information

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b

LU Factorization. LU factorization is the most common way of solving linear systems! Ax = b LUx = b AM 205: lecture 7 Last time: LU factorization Today s lecture: Cholesky factorization, timing, QR factorization Reminder: assignment 1 due at 5 PM on Friday September 22 LU Factorization LU factorization

More information

Practical Linear Algebra: A Geometry Toolbox

Practical Linear Algebra: A Geometry Toolbox Practical Linear Algebra: A Geometry Toolbox Third edition Chapter 12: Gauss for Linear Systems Gerald Farin & Dianne Hansford CRC Press, Taylor & Francis Group, An A K Peters Book www.farinhansford.com/books/pla

More information

A Hybrid Method for the Wave Equation. beilina

A Hybrid Method for the Wave Equation.   beilina A Hybrid Method for the Wave Equation http://www.math.unibas.ch/ beilina 1 The mathematical model The model problem is the wave equation 2 u t 2 = (a 2 u) + f, x Ω R 3, t > 0, (1) u(x, 0) = 0, x Ω, (2)

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors 5 Eigenvalues and Eigenvectors 5.2 THE CHARACTERISTIC EQUATION DETERMINANATS nn Let A be an matrix, let U be any echelon form obtained from A by row replacements and row interchanges (without scaling),

More information

LINEAR PROGRAMMING ALGORITHMS USING LEAST-SQUARES METHOD

LINEAR PROGRAMMING ALGORITHMS USING LEAST-SQUARES METHOD LINEAR PROGRAMMING ALGORITHMS USING LEAST-SQUARES METHOD A Thesis Presented to The Academic Faculty by Seunghyun Kong In Partial Fulfillment of the Requirements for the Degree Doctor of Philosophy in the

More information

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2

More information