Computation of the mtx-vec product based on storage scheme on vector CPUs
Analysis of the Matrix-Vector Product
Computation of the mtx-vec product based on the storage scheme on vector CPUs.

For i = 1, ..., n:

    c_i = A_{i,·} b = \sum_j a_{ij} b_j = \sum_{s=l_i}^{r_i} a_{i,i+s} b_{i+s} = \sum_{s=l_i}^{r_i} \tilde{a}_{i,s} b_{i+s}

General TRIAD, no SAXPY:

    for s = -\beta : \beta
        for i = max{1-s, 1} : min{n-s, n}
            c_i = c_i + \tilde{a}_{i,s} b_{i+s}
        end
    end

or, partial DOT-product:

    for i = 1 : n
        for s = max{-\beta, 1-i} : min{\beta, n-i}
            c_i = c_i + \tilde{a}_{i,s} b_{i+s}
        end
    end

Sparsity: fewer operations, but also a loss of efficiency.

(Parallel Numerics, WT 2016/17, Elementary Linear Algebra Problems)
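Both loop orders above can be sketched in Python (0-based indexing; the diagonal storage `a_tilde[i][s + beta]` holding a_{i,i+s} is my assumption, not the lecture's exact scheme, and the function names are mine):

```python
def band_matvec_dot(a_tilde, b, beta):
    """Partial DOT-product form: for each row i, accumulate over the
    valid diagonals s of the band (semi-bandwidth beta)."""
    n = len(b)
    c = [0.0] * n
    for i in range(n):
        for s in range(max(-beta, -i), min(beta, n - 1 - i) + 1):
            c[i] += a_tilde[i][s + beta] * b[i + s]   # a_tilde[i][s+beta] = a_{i,i+s}
    return c

def band_matvec_triad(a_tilde, b, beta):
    """TRIAD form: outer loop over diagonals s, inner (vectorizable) loop
    over the rows i for which both i and i+s are in range."""
    n = len(b)
    c = [0.0] * n
    for s in range(-beta, beta + 1):
        for i in range(max(-s, 0), min(n - s, n)):
            c[i] += a_tilde[i][s + beta] * b[i + s]
    return c
```

For a tridiagonal matrix (beta = 1) both forms return the same product; the TRIAD form keeps the inner loop stride-1 over i, which is what a vector CPU wants.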
Analysis of the Matrix-Vector Product: Band A·b in Parallel

Partitioning: [1, n] = \bigcup_{r=1}^{R} I_r, disjoint.

    for i \in I_r
        c_i = \sum_{s=l_i}^{r_i} \tilde{a}_{i,s} b_{i+s}
    end

Processor P_r gets the rows with index set I_r := [m_r, M_r] in order to compute its part of the final vector c. Which part of vector b does processor P_r need in order to compute its part of c?
Necessary for I_r: b_j = b_{i+s} with j = i + s, where

    j = i + s \ge m_r + l_{m_r} = m_r + max{-\beta, 1 - m_r} = max{m_r - \beta, 1}
    j = i + s \le M_r + r_{M_r} = M_r + min{\beta, n - M_r} = min{M_r + \beta, n}

Processor P_r with index set I_r therefore needs from b the indices

    j \in [max{1, m_r - \beta}, min{n, M_r + \beta}].
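The needed index range from b follows directly from the bounds above; a minimal sketch (1-based indices as in the slides; the function name is mine):

```python
def needed_b_range(m_r, M_r, beta, n):
    """Return the (first, last) 1-based indices of b that processor P_r,
    owning rows m_r..M_r of a band matrix with semi-bandwidth beta, needs."""
    return max(1, m_r - beta), min(n, M_r + beta)
```

So for n = 10, beta = 2, a processor owning rows 5..8 needs b_3..b_10, i.e. its own slice plus beta extra entries on each side (clipped at the matrix boundary).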
2.6. Analysis of the Matrix-Matrix Product

    A = (a_{ij})_{i=1,...,n; j=1,...,m} \in R^{n \times m},
    B = (b_{ij})_{i=1,...,m; j=1,...,q} \in R^{m \times q},
    C = AB = (c_{ij})_{i=1,...,n; j=1,...,q} \in R^{n \times q}

    for i = 1 : n
        for j = 1 : q
            c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj}
        end
    end
Vectorization

Algorithm 1, (ijk)-form:

    for i = 1 : n
        for j = 1 : q
            for k = 1 : m
                c_{ij} = c_{ij} + a_{ik} b_{kj}    % DOT-product of length m
            end
        end
    end

c_{ij} = A_{i,·} B_{·,j} for all i, j. All entries c_{ij} are fully computed, one after another. Access to A and C is rowwise, to B columnwise (this depends on the innermost loop!).
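Algorithm 1's triple loop can be sketched in Python (0-based; plain lists of lists; the function name is mine):

```python
def matmul_ijk(A, B):
    """(ijk)-form: each entry C[i][j] is finished as a DOT product of
    length m (row i of A with column j of B) before moving on."""
    n, m, q = len(A), len(B), len(B[0])
    C = [[0.0] * q for _ in range(n)]
    for i in range(n):
        for j in range(q):
            for k in range(m):          # innermost k: DOT product
                C[i][j] += A[i][k] * B[k][j]
    return C
```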
Other View on the Matrix-Matrix Product

Matrix A considered as a combination of columns or of rows:

    A = A_1 e_1^T + ... + A_m e_m^T = (A_1 0 ... 0) + (0 A_2 0 ... 0) + ... + (0 ... 0 A_m)

and analogously by rows, A = e_1 a_{1,·} + ... + e_n a_{n,·}. Hence

    AB = ( \sum_{j=1}^{m} A_j e_j^T ) ( \sum_{k=1}^{m} e_k b_{k,·} )
       = \sum_{k,j} A_j (e_j^T e_k) b_{k,·}
       = \sum_{k=1}^{m} A_k b_{k,·}

the full n x q matrix as a sum of full matrices A_k b_{k,·}, each the outer product of the k-th column of A and the k-th row of B.
Algorithm 2, (jki)-form:

    for j = 1 : q
        for k = 1 : m
            for i = 1 : n
                c_{ij} = c_{ij} + a_{ik} b_{kj}
            end
        end
    end

Vector update: c_{·,j} = c_{·,j} + A_{·,k} b_{kj}. Sequence of SAXPYs into the same vector: c_{·,j} = \sum_k b_{kj} A_{·,k}. C is computed columnwise; access to A is columnwise. Access to B is columnwise, but delayed.
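The column-oriented (jki)-form can be sketched the same way (0-based Python; the function name is mine). The point is that every update with innermost index i is a SAXPY with a column of A into the same target column of C, i.e. a GAXPY:

```python
def matmul_jki(A, B):
    """(jki)-form: column j of C is built by a sequence of SAXPYs
    c_j += b_kj * A_k, all into the same vector (a GAXPY)."""
    n, m, q = len(A), len(B), len(B[0])
    C = [[0.0] * q for _ in range(n)]
    for j in range(q):
        for k in range(m):
            for i in range(n):          # innermost i: SAXPY down column j
                C[i][j] += A[i][k] * B[k][j]
    return C
```

All six loop permutations compute the same C; they differ only in access pattern and vector operation.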
Algorithm 3, (kji)-form:

    for k = 1 : m
        for j = 1 : q
            for i = 1 : n
                c_{ij} = c_{ij} + a_{ik} b_{kj}
            end
        end
    end

Vector update: c_{·,j} = c_{·,j} + A_{·,k} b_{kj}. Sequence of SAXPYs for different vectors c_{·,j} (no GAXPY). Access to A is columnwise, to B rowwise and delayed. C is computed via intermediate values c_{ij}^{(k)}, which are computed columnwise.
Overview of the Different Forms

    form             ijk        ikj      kij      jik      jki        kji
                     (Alg. 1)                              (Alg. 2)   (Alg. 3)
    access to A      row        row      column   row      column     column
    access to B      column     row      row      column   column     row
    comp. of c_ij    direct     delayed  delayed  direct   delayed    delayed
    vector op        DOT        GAXPY    SAXPY    DOT      GAXPY      SAXPY
    vector length    m          q        q        m        n          n

Better: GAXPY (longer vector length). Access the matrices according to their storage scheme (rowwise or columnwise).
Matrix-Matrix Product in Parallel

Partition the index ranges disjointly:

    [1, n] = \bigcup_{r=1}^{R} I_r,   [1, m] = \bigcup_{s=1}^{S} K_s,   [1, q] = \bigcup_{t=1}^{T} J_t

Distribute the blocks relative to the index sets I_r, K_s, and J_t to the processor array P_{rst}:

1. Processor P_{rst} computes a small matrix-matrix product; all processors in parallel:

       C_{rt}^{(s)} = A_{rs} B_{st}

2. Compute the sum by fan-in over s:

       C_{rt} = \sum_{s=1}^{S} C_{rt}^{(s)}
Mtx-Mtx in Parallel: Special Case S = 1

Each processor P_{rt} can compute its part C_{rt} of C independently, without communication. Each processor needs the full block of rows of A relative to index set I_r, and the full block of columns of B relative to index set J_t, to compute C_{rt} relative to rows I_r and columns J_t.
Mtx-Mtx in Parallel: Special Case S = 1 (cont.)

With n·q processors, each processor has to compute one DOT-product

    c_{rt} = \sum_{k=1}^{m} a_{rk} b_{kt}

with O(m) parallel time steps. Fan-in by m·n·q additional processors for all DOT-products reduces the number of parallel time steps to O(log(m)).
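The fan-in reduction can be simulated sequentially to see the O(log p) step count (a sketch; the function name is mine, and each while-iteration stands for one parallel step in which disjoint pairs are combined simultaneously):

```python
def fanin_sum(partials):
    """Pairwise fan-in: combine p partial results in ceil(log2(p))
    parallel steps. Returns (total, number_of_parallel_steps)."""
    vals = list(partials)
    steps = 0
    while len(vals) > 1:
        pairs = [vals[i] + vals[i + 1] for i in range(0, len(vals) - 1, 2)]
        if len(vals) % 2:          # odd element waits for the next round
            pairs.append(vals[-1])
        vals = pairs
        steps += 1
    return vals[0], steps
```

Summing 8 partial products takes 3 parallel steps instead of 7 sequential additions.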
1D-Parallelization of A·B

1D: p processors in a line; each processor gets the full matrix A and a column slice of B, computing the related column slice of C = AB.

Communication: N^2 p for A and (N · N/p) · p = N^2 for B.
Granularity: N^3 / (N^2 (1 + p)) = N / (1 + p).

Blocking only in i, the columns of B:

    for i = 1 : n
        for j = 1 : n
            for k = 1 : n
                C_{j,i} = C_{j,i} + A_{j,k} B_{k,i}
2D-Parallelization of A·B

2D: p processors in a square, q := \sqrt{p}; each processor gets a row slice of A and a column slice of B, computing a full subblock of C = AB.

Communication: N^2 \sqrt{p} for A and N^2 \sqrt{p} for B.
Granularity: N^3 / (2 N^2 \sqrt{p}) = N / (2 \sqrt{p}).

Blocking in i and j, the columns of B and the rows of A:

    for i = 1 : n
        for j = 1 : n
            for k = 1 : n
                C_{j,i} = C_{j,i} + A_{j,k} B_{k,i}
3D-Parallelization of A·B

3D: p processors in a cube, q := p^{1/3}; each processor gets a subblock of A and a subblock of B, computing part of a subblock of C = AB. An additional fan-in collects the parts into the full subblock of C.

Communication: N^2 p^{1/3} for A and for B (= p · N^2 / p^{2/3} = N^2 p^{1/3}, i.e. p times the blocksize); fan-in: N^2 p^{1/3}.
Granularity: N^3 / (3 N^2 p^{1/3}) = N / (3 p^{1/3}).

Blocking in i, j, and k:

    for i = 1 : n
        for j = 1 : n
            for k = 1 : n
                C_{j,i} = C_{j,i} + A_{j,k} B_{k,i}
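The three granularities (computation over communication) can be compared numerically, using the formulas and constants as derived in the slides (a sketch; the function name is mine):

```python
def granularity(N, p):
    """Computation/communication ratio for the 1D, 2D, and 3D
    distributions of the N x N matrix-matrix product on p processors."""
    g1 = N**3 / (N**2 * (1 + p))            # 1D: communication N^2 (1 + p)
    g2 = N**3 / (2 * N**2 * p**0.5)         # 2D: communication 2 N^2 sqrt(p)
    g3 = N**3 / (3 * N**2 * p**(1.0 / 3))   # 3D: communication 3 N^2 p^(1/3)
    return g1, g2, g3
```

For N = 64 and p = 64 this gives roughly 0.98, 4.0, and 5.3: the higher-dimensional distributions do strictly more useful work per communicated word.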
3. Linear Systems of Equations with Dense Matrices

(Parallel Numerics, WT 2016/17, Linear Systems of Equations with Dense Matrices)
Contents

1 Introduction
  1.1 Computer Science Aspects
  1.2 Numerical Problems
  1.3 Graphs
  1.4 Loop Manipulations
2 Elementary Linear Algebra Problems
  2.1 BLAS: Basic Linear Algebra Subroutines
  2.2 Matrix-Vector Operations
  2.3 Matrix-Matrix-Product
3 Linear Systems of Equations with Dense Matrices
  3.1 Gaussian Elimination
  3.2 Parallelization
  3.3 QR-Decomposition with Householder matrices
4 Sparse Matrices
  4.1 General Properties, Storage
  4.2 Sparse Matrices and Graphs
  4.3 Reordering
  4.4 Gaussian Elimination for Sparse Matrices
5 Iterative Methods for Sparse Matrices
  5.1 Stationary Methods
  5.2 Nonstationary Methods
  5.3 Preconditioning
6 Domain Decomposition
  6.1 Overlapping Domain Decomposition
  6.2 Non-overlapping Domain Decomposition
  6.3 Schur Complements
3.1. Gaussian Elimination: Basic Properties

Linear system of equations:

    a_{11} x_1 + ... + a_{1n} x_n = b_1
    ...
    a_{n1} x_1 + ... + a_{nn} x_n = b_n

i.e. solve Ax = b with A = (a_{ij}) \in R^{n \times n}, x, b \in R^n.

Generate simpler linear systems (matrices): transform A into triangular form,

    A = A^{(1)} -> A^{(2)} -> ... -> A^{(n)} = U.
Transformation to Upper Triangular Form

Start from A with rows (1), ..., (n). The row transformations

    (2) <- (2) - (a_{21}/a_{11}) (1), ..., (n) <- (n) - (a_{n1}/a_{11}) (1)

lead to

    A^{(2)} = [ a_{11}  a_{12}        a_{13}        ...  a_{1n}
                0       a_{22}^{(2)}  a_{23}^{(2)}  ...  a_{2n}^{(2)}
                0       a_{32}^{(2)}  a_{33}^{(2)}  ...  a_{3n}^{(2)}
                ...
                0       a_{n2}^{(2)}  a_{n3}^{(2)}  ...  a_{nn}^{(2)} ]

Next transformations:

    (3) <- (3) - (a_{32}^{(2)}/a_{22}^{(2)}) (2), ..., (n) <- (n) - (a_{n2}^{(2)}/a_{22}^{(2)}) (2)
Transformation to Triangular Form (cont.)

    A^{(3)} = [ a_{11}  a_{12}        a_{13}        ...  a_{1n}
                0       a_{22}^{(2)}  a_{23}^{(2)}  ...  a_{2n}^{(2)}
                0       0             a_{33}^{(3)}  ...  a_{3n}^{(3)}
                ...
                0       0             a_{n3}^{(3)}  ...  a_{nn}^{(3)} ]

Next transformations: (4) <- (4) - (a_{43}^{(3)}/a_{33}^{(3)}) (3), ..., (n) <- (n) - (a_{n3}^{(3)}/a_{33}^{(3)}) (3), and so on, until

    A^{(n)} = [ a_{11}  a_{12}        a_{13}        ...  a_{1n}
                0       a_{22}^{(2)}  a_{23}^{(2)}  ...  a_{2n}^{(2)}
                0       0             a_{33}^{(3)}  ...  a_{3n}^{(3)}
                ...
                0       0             0        ...  a_{nn}^{(n)} ] = U
Pseudocode Gaussian Elimination (GE)

Simplification: assume that no pivoting is necessary, i.e. a_{kk}^{(k)} \ne 0 (or |a_{kk}^{(k)}| \ge \rho > 0) for k = 1, 2, ..., n.

    for k = 1 : n-1
        for i = k+1 : n
            l_{i,k} = a_{i,k} / a_{k,k}
        end
        for i = k+1 : n
            for j = k+1 : n
                a_{i,j} = a_{i,j} - l_{i,k} a_{k,j}
            end
        end
    end

In practice: include pivoting and include the right-hand side b. There still remains a triangular system in U to solve!
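The pseudocode above translates directly to Python (0-based indexing; plain lists; no pivoting, so a zero pivot would fail, as assumed in the slides; the function name is mine):

```python
def lu_kij(A):
    """Right-looking GE in (kij)-form, no pivoting.
    Works on a copy of A; returns the unit lower triangular L and upper
    triangular U with L*U = A."""
    n = len(A)
    a = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            a[i][k] = a[i][k] / a[k][k]          # multiplier l_ik, stored in place
        for i in range(k + 1, n):
            for j in range(k + 1, n):            # rank-1 update of trailing block
                a[i][j] -= a[i][k] * a[k][j]
    L = [[a[i][j] if j < i else (1.0 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    U = [[a[i][j] if j >= i else 0.0 for j in range(n)] for i in range(n)]
    return L, U
```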
Intermediate Systems

A^{(k)}, k = 1, 2, ..., n, with A = A^{(1)} and U = A^{(n)}:

    A^{(k)} = [ a_{11}^{(1)}  ...  a_{1,k-1}^{(1)}      a_{1,k}^{(1)}      ...  a_{1,n}^{(1)}
                               ...
                0             ...  a_{k-1,k-1}^{(k-1)}  a_{k-1,k}^{(k-1)}  ...  a_{k-1,n}^{(k-1)}
                0             ...  0                    a_{k,k}^{(k)}      ...  a_{k,n}^{(k)}
                               ...
                0             ...  0                    a_{n,k}^{(k)}      ...  a_{n,n}^{(k)} ]
Define Auxiliary Matrices

    L = [ 1                          ]        and   U = A^{(n)}
        [ l_{2,1}   1                ]
        [ ...            ...         ]
        [ l_{n,1}  ...  l_{n,n-1}  1 ]

    L_k := [ 0                       ]
           [    ...                  ]
           [        0                ]
           [        l_{k+1,k}  0     ]
           [        ...           .. ]
           [        l_{n,k}  0 ... 0 ]

with nonzero entries only in column k, strictly below the diagonal, so that

    L = I + \sum_{k=1}^{n-1} L_k
Elimination Step in Terms of Auxiliary Matrices

    A^{(k+1)} = (I - L_k) A^{(k)} = A^{(k)} - L_k A^{(k)}

    U = A^{(n)} = (I - L_{n-1}) A^{(n-1)} = ... = (I - L_{n-1}) ... (I - L_1) A^{(1)} = \tilde{L} A,
    \tilde{L} := (I - L_{n-1}) ... (I - L_1)

    A = \tilde{L}^{-1} U   with U upper triangular and \tilde{L}^{-1} lower triangular.

Theorem 2: \tilde{L}^{-1} = L, and therefore A = LU.

Advantage: every further problem Ax = b_j can be reduced to (LU)x = b_j for arbitrary j. Solve two triangular systems: Ly = b and Ux = y.
Theorem 2: \tilde{L}^{-1} = L, i.e. A = LU

Proof. For i \le j we have L_i L_j = 0, since L_j is nonzero only in column j (rows j+1, ..., n) and L_i picks out row entries in column i \le j, which are zero there. Hence

    (I + L_j)(I - L_j) = I + L_j - L_j - L_j^2 = I      =>   (I - L_j)^{-1} = I + L_j
    (I + L_i)(I + L_j) = I + L_i + L_j + L_i L_j = I + L_i + L_j   (for i \le j)

Therefore

    \tilde{L}^{-1} = [(I - L_{n-1}) ... (I - L_1)]^{-1}
                   = (I - L_1)^{-1} ... (I - L_{n-1})^{-1}
                   = (I + L_1)(I + L_2) ... (I + L_{n-1})
                   = I + L_1 + L_2 + ... + L_{n-1} = L
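A small numerical check of the Theorem 2 argument for n = 3 (the concrete multiplier values 2, 3, 4 are arbitrary examples of mine; the helper names are mine as well):

```python
def matmul(X, Y):
    """Plain triple-loop matrix product for the check below."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

add = lambda X, Y: [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
sub = lambda X, Y: [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

n = 3
I = [[float(i == j) for j in range(n)] for i in range(n)]

# L_k: nonzeros only in column k (0-based), strictly below the diagonal
L1 = [[0.0] * n for _ in range(n)]; L1[1][0], L1[2][0] = 2.0, 3.0
L2 = [[0.0] * n for _ in range(n)]; L2[2][1] = 4.0

# (I + L1)(I + L2) = I + L1 + L2, because L1 L2 = 0
lhs = matmul(add(I, L1), add(I, L2))
L = add(add(I, L1), L2)

# L_tilde = (I - L2)(I - L1); Theorem 2 says L is its inverse
L_tilde = matmul(sub(I, L2), sub(I, L1))
```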
Vectorization of GE

(kij)-form (standard form):

    for k = 1 : n-1
        for i = k+1 : n
            l_{i,k} = a_{i,k} / a_{k,k}
        end
        for i = k+1 : n
            for j = k+1 : n
                a_{i,j} = a_{i,j} - l_{i,k} a_{k,j}
            end
        end
    end

Vector operation: SAXPY in the rows a_{i,·} and a_{k,·}; no GAXPY. U is computed rowwise, L columnwise.
[Figure: partitioning of the matrix at step k. U (top) is already computed and remains unchanged; L (left) is already computed and not used anymore; the k-th column is newly computed; the trailing block A^{(k)} is updated in every step.]

The standard (kij)-form is also called right-looking GE.
[Figures: elimination steps of right-looking GE.]
First elimination step: from A^{(1)}, compute the first column of L, then update A^{(1)}.
Second step: from A^{(2)}, compute the second column of L, then update A^{(2)}.
Third step: from A^{(3)}, compute the third column of L, then update A^{(3)}.
(k-1)-st step: with the U and L parts already in place, compute the k-th column of L, then update A^{(k)}.
Rules for the Different (i,j,k)-Forms

In the following we again interchange the k, i, j loops. Necessary conditions:

    1 \le k < i \le n,   1 \le k < j \le n

Furthermore:
- The innermost index (i, j, or k) determines whether the computation is done row-, column-, or block-wise.
- The outermost index shows how the final parts are derived.
- The weights l_{j,k} have to be computed before they are used to eliminate the related entries.
(ikj)-form (1 \le k < i \le n, 1 \le k < j \le n):

    for i = 2 : n
        for k = 1 : i-1
            l_{i,k} = a_{i,k} / a_{k,k}
            for j = k+1 : n
                a_{i,j} = a_{i,j} - l_{i,k} a_{k,j}
            end
        end
    end

GAXPY in the row a_{i,·}.
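The (ikj)-form in Python (0-based; in-place on a copy, with L stored strictly below the diagonal and U on and above it, as usual for compact LU; the function name is mine):

```python
def lu_ikj(A):
    """(ikj)-form of GE: row i of L and row i of U are finished at step i;
    the k-loop accumulates the delayed updates into row i (a row GAXPY)."""
    n = len(A)
    a = [row[:] for row in A]
    for i in range(1, n):
        for k in range(i):
            a[i][k] = a[i][k] / a[k][k]          # l_ik
            for j in range(k + 1, n):            # SAXPY of row k into row i
                a[i][j] -= a[i][k] * a[k][j]
    return a            # L (strict lower part) and U (upper part), packed
```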
[Figure: row i is newly computed; the L and U parts above it are already computed (L is used, U above is not used anymore); the block A below is unchanged and not used.]

L and U are computed rowwise. Compute l_{i,1}, then a SAXPY of the 1st and the i-th row; then l_{i,2}, and so on.
[Figures: first step on A^{(1)}; second step on A^{(2)}; (k-1)-st step, with the L and U parts of A^{(k-1)} already computed.]
(ijk)-form (1 \le k < i \le n, 1 \le k < j \le n):

    for i = 2 : n
        for j = 2 : i
            l_{i,j-1} = a_{i,j-1} / a_{j-1,j-1}      % new row of L
            for k = 1 : j-1
                a_{i,j} = a_{i,j} - l_{i,k} a_{k,j}  % DOT product, left part
            end
        end
        for j = i+1 : n
            for k = 1 : i-1
                a_{i,j} = a_{i,j} - l_{i,k} a_{k,j}  % DOT product, right part
            end
        end
    end

Compute l_{i,1} and update a_{i,2}; then compute l_{i,2} and update a_{i,3}; and so on, accumulating each a_{i,j}.
(jki)-form (1 \le k < i \le n, 1 \le k < j \le n):

    for j = 2 : n
        for k = j : n
            l_{k,j-1} = a_{k,j-1} / a_{j-1,j-1}      % new column of L
        end
        for k = 1 : j-1
            for i = k+1 : n
                a_{i,j} = a_{i,j} - l_{i,k} a_{k,j}
            end
        end
    end

GAXPY in the column a_{·,j}.
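The (jki)-form, again as compact in-place LU in Python (0-based; the function name is mine). It is the column analogue of the (ikj)-form: finish a column of L, then apply all delayed updates to the next column as one GAXPY:

```python
def lu_jki(A):
    """(jki)-form of GE, left-looking by columns: compute column j-1 of L,
    then bring column j up to date with all delayed updates (column GAXPY)."""
    n = len(A)
    a = [row[:] for row in A]
    for j in range(1, n):
        for k in range(j, n):
            a[k][j - 1] = a[k][j - 1] / a[j - 1][j - 1]   # column j-1 of L
        for k in range(j):
            for i in range(k + 1, n):                     # update column j
                a[i][j] -= a[i][k] * a[k][j]
    return a            # L (strict lower part) and U (upper part), packed
```

On the same input it produces exactly the same packed L/U factors as the row-oriented forms; only the access pattern differs.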
Left-looking GE

[Figure: U (top) is already computed and not used anymore; L (left) is already computed and used; column j-1 is newly computed; the block A to the right is unchanged and not used.]
[Figures: first step on A; second step on A; (k-1)-st step, with the U and L parts already computed.]
Overview

    form              kij         kji         ikj      ijk      jki         jik
    access to A, U    row         column      row      row      column      column
    access to L       column      column      row      row      column      row
    comp. of U        rowwise     rowwise     rowwise  rowwise  columnwise  columnwise
    comp. of L        columnwise  columnwise  rowwise  rowwise  columnwise  columnwise
    vector operation  SAXPY       SAXPY       GAXPY    DOT      GAXPY       DOT
    vector length     2n/3        2n/3        2n/3     n/3      2n/3        n/3

Vector length = average of the occurring vector lengths. The optimal form depends on the storage of the matrices and on the vector length.
3.2. GE in Parallel: Blockwise

Main idea: blocking of GE to avoid data transfer between processors.

Basic concept: replace GE, i.e. one large LU-decomposition of the full matrix, by small intermediate steps (a sequence of small block operations):
- solving collections of small triangular systems L U_k = B_k (parallelism in the columns of U),
- updating matrices, A <- A - L U (also easy to parallelize),
- small LU-decompositions B = LU (parallelism in the rows of B).
How to Choose Blocks in L, resp. U, Satisfying LU = A

    [ L_11   0     0   ] [ U_11  U_12  U_13 ]   [ A_11  A_12  A_13 ]
    [ L_21  L_22   0   ] [  0    U_22  U_23 ] = [ A_21  A_22  A_23 ]
    [ L_31  L_32  L_33 ] [  0     0    U_33 ]   [ A_31  A_32  A_33 ]

      [ L_11 U_11   L_11 U_12               L_11 U_13                           ]
    = [ L_21 U_11   L_21 U_12 + L_22 U_22   L_21 U_13 + L_22 U_23               ]
      [ L_31 U_11   L_31 U_12 + L_32 U_22   L_31 U_13 + L_32 U_23 + L_33 U_33   ]

Different ways of computing L and U, depending on:
- the start (assume the first entry/row/column of L/U as given),
- how to compute a new entry/row/column of L/U,
- the update of the block structure of L/U, by grouping into known blocks, blocks newly to compute, and blocks to be computed later.
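A sketch of the blocking idea in Python (0-based; no pivoting; block operations written as plain loops rather than BLAS calls, and the function name is mine). Each pass factors an nb-wide panel, solves the small triangular systems for the U block row, and then performs one matrix-matrix update of the trailing block:

```python
def lu_blocked(A, nb):
    """Blocked right-looking LU: panel factorization, triangular solves
    for the U block row, then a trailing matrix-matrix update.
    Returns L and U packed in place (unit diagonal of L implicit)."""
    n = len(A)
    a = [row[:] for row in A]
    for k0 in range(0, n, nb):
        k1 = min(k0 + nb, n)
        # 1. small LU-decomposition of the panel (columns k0..k1-1, all rows below)
        for k in range(k0, k1):
            for i in range(k + 1, n):
                a[i][k] /= a[k][k]
                for j in range(k + 1, k1):
                    a[i][j] -= a[i][k] * a[k][j]
        # 2. small triangular solves: L_11 * U_12 = A_12 (rows of the panel)
        for k in range(k0, k1):
            for i in range(k + 1, k1):
                for j in range(k1, n):
                    a[i][j] -= a[i][k] * a[k][j]
        # 3. update of the trailing block: A_22 <- A_22 - L_21 * U_12
        for i in range(k1, n):
            for k in range(k0, k1):
                for j in range(k1, n):
                    a[i][j] -= a[i][k] * a[k][j]
    return a
```

Steps 1 to 3 are exactly the three kinds of small block operations named above; with nb = n the whole routine degenerates to ordinary unblocked GE, so both must produce identical factors.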
Crout Form

[Figure: block structure of the Crout form.]
Crout Form (cont.)

1. Solve for L_22, L_32, and U_22 by a small LU-decomposition of the modified part of A.
2. Solve for U_23 by solving small triangular systems of equations in L_22 U_23.

Initial steps:

    L_11 U_11 = A_11,    [ L_21 ] U_11 = [ A_21 ],    L_11 (U_12  U_13) = (A_12  A_13)
                         [ L_31 ]        [ A_31 ]
New Partitioning

Combine the already computed parts, i.e. the second column of L and the second row of U, into the first column of L and the first row of U. Split the until now ignored parts L_33 and U_33 into new columns/rows. Repeat this overall procedure until L and U are fully computed.
Block Structure

[Figures] Intermediate block structure: solve for the red blocks. Then reconfigure the block structure and repeat until done.
Left-Looking GE

[Figure: block structure of left-looking GE.]

Solve L_11 U_12 = A_12 by a couple of parallel triangular solves, update part of A,

    [ \hat{A}_22 ]   [ A_22 ]   [ L_21 ]
    [ \hat{A}_32 ] = [ A_32 ] - [ L_31 ] U_12,

and perform a small LU-decomposition

    [ L_22 ] U_22 = [ \hat{A}_22 ]
    [ L_32 ]        [ \hat{A}_32 ]

Reorder the blocks and repeat until ready. Start: L_11 U_11 = A_11, L_21 U_11 = A_21, and L_31 U_11 = A_31.
Block Structure

[Figures] Intermediate block structure: solve for the red blocks. Then reconfigure the block structure and repeat until done.
More information14.2 QR Factorization with Column Pivoting
page 531 Chapter 14 Special Topics Background Material Needed Vector and Matrix Norms (Section 25) Rounding Errors in Basic Floating Point Operations (Section 33 37) Forward Elimination and Back Substitution
More informationSolving linear systems (6 lectures)
Chapter 2 Solving linear systems (6 lectures) 2.1 Solving linear systems: LU factorization (1 lectures) Reference: [Trefethen, Bau III] Lecture 20, 21 How do you solve Ax = b? (2.1.1) In numerical linear
More informationDraft. Lecture 12 Gaussian Elimination and LU Factorization. MATH 562 Numerical Analysis II. Songting Luo
Lecture 12 Gaussian Elimination and LU Factorization Songting Luo Department of Mathematics Iowa State University MATH 562 Numerical Analysis II ongting Luo ( Department of Mathematics Iowa State University[0.5in]
More informationLU Factorization. Marco Chiarandini. DM559 Linear and Integer Programming. Department of Mathematics & Computer Science University of Southern Denmark
DM559 Linear and Integer Programming LU Factorization Marco Chiarandini Department of Mathematics & Computer Science University of Southern Denmark [Based on slides by Lieven Vandenberghe, UCLA] Outline
More informationDirect solution methods for sparse matrices. p. 1/49
Direct solution methods for sparse matrices p. 1/49 p. 2/49 Direct solution methods for sparse matrices Solve Ax = b, where A(n n). (1) Factorize A = LU, L lower-triangular, U upper-triangular. (2) Solve
More informationNumerical Methods - Numerical Linear Algebra
Numerical Methods - Numerical Linear Algebra Y. K. Goh Universiti Tunku Abdul Rahman 2013 Y. K. Goh (UTAR) Numerical Methods - Numerical Linear Algebra I 2013 1 / 62 Outline 1 Motivation 2 Solving Linear
More informationFundamentals of Engineering Analysis (650163)
Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is
More informationParallel Programming. Parallel algorithms Linear systems solvers
Parallel Programming Parallel algorithms Linear systems solvers Terminology System of linear equations Solve Ax = b for x Special matrices Upper triangular Lower triangular Diagonally dominant Symmetric
More informationGAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511)
GAUSSIAN ELIMINATION AND LU DECOMPOSITION (SUPPLEMENT FOR MA511) D. ARAPURA Gaussian elimination is the go to method for all basic linear classes including this one. We go summarize the main ideas. 1.
More informationAlgebra C Numerical Linear Algebra Sample Exam Problems
Algebra C Numerical Linear Algebra Sample Exam Problems Notation. Denote by V a finite-dimensional Hilbert space with inner product (, ) and corresponding norm. The abbreviation SPD is used for symmetric
More informationMath 304 (Spring 2010) - Lecture 2
Math 304 (Spring 010) - Lecture Emre Mengi Department of Mathematics Koç University emengi@ku.edu.tr Lecture - Floating Point Operation Count p.1/10 Efficiency of an algorithm is determined by the total
More informationLinear Algebra (Review) Volker Tresp 2018
Linear Algebra (Review) Volker Tresp 2018 1 Vectors k, M, N are scalars A one-dimensional array c is a column vector. Thus in two dimensions, ( ) c1 c = c 2 c i is the i-th component of c c T = (c 1, c
More informationLinear Algebra (Review) Volker Tresp 2017
Linear Algebra (Review) Volker Tresp 2017 1 Vectors k is a scalar (a number) c is a column vector. Thus in two dimensions, c = ( c1 c 2 ) (Advanced: More precisely, a vector is defined in a vector space.
More informationRoundoff Analysis of Gaussian Elimination
Jim Lambers MAT 60 Summer Session 2009-0 Lecture 5 Notes These notes correspond to Sections 33 and 34 in the text Roundoff Analysis of Gaussian Elimination In this section, we will perform a detailed error
More informationChapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations
Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2
More informationCHAPTER 6. Direct Methods for Solving Linear Systems
CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to
More informationCSE 160 Lecture 13. Numerical Linear Algebra
CSE 16 Lecture 13 Numerical Linear Algebra Announcements Section will be held on Friday as announced on Moodle Midterm Return 213 Scott B Baden / CSE 16 / Fall 213 2 Today s lecture Gaussian Elimination
More informationTopics. Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij
Topics Vectors (column matrices): Vector addition and scalar multiplication The matrix of a linear function y Ax The elements of a matrix A : A ij or a ij lives in row i and column j Definition of a matrix
More informationVECTORS, TENSORS AND INDEX NOTATION
VECTORS, TENSORS AND INDEX NOTATION Enrico Nobile Dipartimento di Ingegneria e Architettura Università degli Studi di Trieste, 34127 TRIESTE March 5, 2018 Vectors & Tensors, E. Nobile March 5, 2018 1 /
More informationIndex Notation for Vector Calculus
Index Notation for Vector Calculus by Ilan Ben-Yaacov and Francesc Roig Copyright c 2006 Index notation, also commonly known as subscript notation or tensor notation, is an extremely useful tool for performing
More informationThe practical revised simplex method (Part 2)
The practical revised simplex method (Part 2) Julian Hall School of Mathematics University of Edinburgh January 25th 2007 The practical revised simplex method Overview (Part 2) Practical implementation
More informationLecture Notes to Accompany. Scientific Computing An Introductory Survey. by Michael T. Heath. Chapter 2. Systems of Linear Equations
Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T. Heath Chapter 2 Systems of Linear Equations Copyright c 2001. Reproduction permitted only for noncommercial,
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University February 6, 2018 Linear Algebra (MTH
More informationV C V L T I 0 C V B 1 V T 0 I. l nk
Multifrontal Method Kailai Xu September 16, 2017 Main observation. Consider the LDL T decomposition of a SPD matrix [ ] [ ] [ ] [ ] B V T L 0 I 0 L T L A = = 1 V T V C V L T I 0 C V B 1 V T, 0 I where
More informationAMS 209, Fall 2015 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems
AMS 209, Fall 205 Final Project Type A Numerical Linear Algebra: Gaussian Elimination with Pivoting for Solving Linear Systems. Overview We are interested in solving a well-defined linear system given
More informationScientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1
Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û
More informationLecture Note 2: The Gaussian Elimination and LU Decomposition
MATH 5330: Computational Methods of Linear Algebra Lecture Note 2: The Gaussian Elimination and LU Decomposition The Gaussian elimination Xianyi Zeng Department of Mathematical Sciences, UTEP The method
More informationSolution of Linear Equations
Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass
More informationNumerical Linear Algebra
Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one
More informationSparse BLAS-3 Reduction
Sparse BLAS-3 Reduction to Banded Upper Triangular (Spar3Bnd) Gary Howell, HPC/OIT NC State University gary howell@ncsu.edu Sparse BLAS-3 Reduction p.1/27 Acknowledgements James Demmel, Gene Golub, Franc
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationIntroduction. Vectors and Matrices. Vectors [1] Vectors [2]
Introduction Vectors and Matrices Dr. TGI Fernando 1 2 Data is frequently arranged in arrays, that is, sets whose elements are indexed by one or more subscripts. Vector - one dimensional array Matrix -
More informationSolving Linear Systems of Equations
November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse
More informationScientific Computing: An Introductory Survey
Scientific Computing: An Introductory Survey Chapter 2 Systems of Linear Equations Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction
More informationChapter 1 Matrices and Systems of Equations
Chapter 1 Matrices and Systems of Equations System of Linear Equations 1. A linear equation in n unknowns is an equation of the form n i=1 a i x i = b where a 1,..., a n, b R and x 1,..., x n are variables.
More informationNumerical Analysis: Solving Systems of Linear Equations
Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office
More informationAx = b. Systems of Linear Equations. Lecture Notes to Accompany. Given m n matrix A and m-vector b, find unknown n-vector x satisfying
Lecture Notes to Accompany Scientific Computing An Introductory Survey Second Edition by Michael T Heath Chapter Systems of Linear Equations Systems of Linear Equations Given m n matrix A and m-vector
More informationSolving Linear Systems Using Gaussian Elimination. How can we solve
Solving Linear Systems Using Gaussian Elimination How can we solve? 1 Gaussian elimination Consider the general augmented system: Gaussian elimination Step 1: Eliminate first column below the main diagonal.
More informationSolving Linear Systems of Equations
1 Solving Linear Systems of Equations Many practical problems could be reduced to solving a linear system of equations formulated as Ax = b This chapter studies the computational issues about directly
More informationCME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6
CME 302: NUMERICAL LINEAR ALGEBRA FALL 2005/06 LECTURE 6 GENE H GOLUB Issues with Floating-point Arithmetic We conclude our discussion of floating-point arithmetic by highlighting two issues that frequently
More informationSolving Dense Linear Systems I
Solving Dense Linear Systems I Solving Ax = b is an important numerical method Triangular system: [ l11 l 21 if l 11, l 22 0, ] [ ] [ ] x1 b1 = l 22 x 2 b 2 x 1 = b 1 /l 11 x 2 = (b 2 l 21 x 1 )/l 22 Chih-Jen
More informationChapter 12 Block LU Factorization
Chapter 12 Block LU Factorization Block algorithms are advantageous for at least two important reasons. First, they work with blocks of data having b 2 elements, performing O(b 3 ) operations. The O(b)
More informationThis can be accomplished by left matrix multiplication as follows: I
1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method
More informationMath 471 (Numerical methods) Chapter 3 (second half). System of equations
Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular
More informationChapter 2. Solving Systems of Equations. 2.1 Gaussian elimination
Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix
More informationMatrix Arithmetic. j=1
An m n matrix is an array A = Matrix Arithmetic a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn of real numbers a ij An m n matrix has m rows and n columns a ij is the entry in the i-th row and j-th column
More informationGaussian Elimination without/with Pivoting and Cholesky Decomposition
Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A
More informationMA2501 Numerical Methods Spring 2015
Norwegian University of Science and Technology Department of Mathematics MA2501 Numerical Methods Spring 2015 Solutions to exercise set 3 1 Attempt to verify experimentally the calculation from class that
More informationMatrix decompositions
Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers
More informationMATRICES. a m,1 a m,n A =
MATRICES Matrices are rectangular arrays of real or complex numbers With them, we define arithmetic operations that are generalizations of those for real and complex numbers The general form a matrix of
More informationMatrix decompositions
Matrix decompositions How can we solve Ax = b? 1 Linear algebra Typical linear system of equations : x 1 x +x = x 1 +x +9x = 0 x 1 +x x = The variables x 1, x, and x only appear as linear terms (no powers
More informationGaussian Elimination for Linear Systems
Gaussian Elimination for Linear Systems Tsung-Ming Huang Department of Mathematics National Taiwan Normal University October 3, 2011 1/56 Outline 1 Elementary matrices 2 LR-factorization 3 Gaussian elimination
More informationThe System of Linear Equations. Direct Methods. Xiaozhou Li.
1/16 The Direct Methods xiaozhouli@uestc.edu.cn http://xiaozhouli.com School of Mathematical Sciences University of Electronic Science and Technology of China Chengdu, China Does the LU factorization always
More informationAMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)
AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical
More informationNumerical Linear Algebra
Numerical Linear Algebra By: David McQuilling; Jesus Caban Deng Li Jan.,31,006 CS51 Solving Linear Equations u + v = 8 4u + 9v = 1 A x b 4 9 u v = 8 1 Gaussian Elimination Start with the matrix representation
More information7. LU factorization. factor-solve method. LU factorization. solving Ax = b with A nonsingular. the inverse of a nonsingular matrix
EE507 - Computational Techniques for EE 7. LU factorization Jitkomut Songsiri factor-solve method LU factorization solving Ax = b with A nonsingular the inverse of a nonsingular matrix LU factorization
More information5 Solving Systems of Linear Equations
106 Systems of LE 5.1 Systems of Linear Equations 5 Solving Systems of Linear Equations 5.1 Systems of Linear Equations System of linear equations: a 11 x 1 + a 12 x 2 +... + a 1n x n = b 1 a 21 x 1 +
More informationBasic Concepts in Linear Algebra
Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University February 2, 2015 Grady B Wright Linear Algebra Basics February 2, 2015 1 / 39 Numerical Linear Algebra Linear
More informationOn the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination
On the Skeel condition number, growth factor and pivoting strategies for Gaussian elimination J.M. Peña 1 Introduction Gaussian elimination (GE) with a given pivoting strategy, for nonsingular matrices
More informationMatrix Algebra & Elementary Matrices
Matrix lgebra & Elementary Matrices To add two matrices, they must have identical dimensions. To multiply them the number of columns of the first must equal the number of rows of the second. The laws below
More informationLecture Notes 1: Matrix Algebra Part C: Pivoting and Matrix Decomposition
University of Warwick, EC9A0 Maths for Economists Peter J. Hammond 1 of 46 Lecture Notes 1: Matrix Algebra Part C: Pivoting and Matrix Decomposition Peter J. Hammond Autumn 2012, revised Autumn 2014 University
More informationApril 26, Applied mathematics PhD candidate, physics MA UC Berkeley. Lecture 4/26/2013. Jed Duersch. Spd matrices. Cholesky decomposition
Applied mathematics PhD candidate, physics MA UC Berkeley April 26, 2013 UCB 1/19 Symmetric positive-definite I Definition A symmetric matrix A R n n is positive definite iff x T Ax > 0 holds x 0 R n.
More information