Parallel Scientific Computing
- Bartholomew Dixon
1 IV-1 Parallel Scientific Computing
Topics: Matrix-vector multiplication. Matrix-matrix multiplication. Direct methods for solving a linear system: Gaussian elimination. Iterative methods for solving a linear system: Jacobi, Gauss-Seidel. Sparse linear systems and differential equations.
2 IV-2 Matrix-Matrix Multiplication
Problem: C = A*B where A and B are n x n matrices.
Sequential code:
for i = 1 to n do
  for j = 1 to n do
    sum = 0;
    for k = 1 to n do
      sum = sum + a[i,k]*b[k,j];
    endfor
    c[i,j] = sum;
  endfor
endfor
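The sequential pseudocode above translates directly; a minimal Python sketch using plain lists (no external libraries):

```python
def matmul(A, B):
    """Multiply two n x n matrices given as lists of rows (triple loop)."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19.0, 22.0], [43.0, 50.0]]
```

The loop order i-j-k matches the slide; reordering the loops changes the memory-access pattern but not the result.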
3 IV-3 An example of C = A*B (worked numeric example of the product).
4 IV-4 Task graph of C = A*B
Partitioned code:
for i = 1 to n do
  T_i: for j = 1 to n do
         sum = 0;
         for k = 1 to n do
           sum = sum + a[i,k]*b[k,j];
         endfor
         c[i,j] = sum;
       endfor
endfor
T_i: reads row A_i and matrix B; writes row C_i.
Task graph: T_1, T_2, T_3, ..., T_n (no edges: the tasks are independent).
5 IV-5 Task and data mapping for C = A*B
SPMD code: for i = 1 to n, if proc_map(i) = me, do T_i.
Data mapping:
A is partitioned using row-wise block mapping.
C is partitioned using row-wise block mapping.
B is duplicated on all processors.
Changes in T_i's code: a[i,k] becomes a[local(i),k]; c[i,j] becomes c[local(i),j].
6 IV-6 Parallel SPMD code of C = A*B
for i = 1 to n do
  if proc_map(i) = me then
    for j = 1 to n do
      sum = 0;
      for k = 1 to n do
        sum = sum + a[local(i),k]*b[k,j];
      endfor
      c[local(i),j] = sum;
    endfor
  endif
endfor
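The owner-computes rule can be simulated sequentially: loop over processor ids and let each "processor" compute only the rows it owns. A sketch, assuming a contiguous block proc_map (the slides do not fix a particular formula, so the mapping below is an illustrative choice):

```python
def proc_map(i, n, p):
    """Row-wise block mapping: rows 0..n-1 split into p contiguous blocks (assumed formula)."""
    return min(i * p // n, p - 1)

def spmd_matmul(A, B, n, p):
    """Simulate the SPMD code: each 'processor' me computes only the rows it owns.
    B is duplicated on every processor; rows of C are gathered at the end."""
    C = [None] * n
    for me in range(p):                      # each pass plays the role of one processor
        for i in range(n):
            if proc_map(i, n, p) == me:      # owner-computes rule
                C[i] = [sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(spmd_matmul(A, B, 2, 2))  # [[19, 22], [43, 50]]
```

On a real machine the outer `for me` loop would be p concurrent processes, each holding only its block of A and C.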
7 IV-7 Parallel algorithm with 1D partitioning
Partitioned code:
for i = 1 to n do
  for j = 1 to n do
    T_{i,j}: sum = 0;
             for k = 1 to n do
               sum = sum + a(i,k)*b(k,j);
             endfor
             c(i,j) = sum;
  endfor
endfor
Data access: each task T_{i,j} reads row A_i and column B_j to write the data element c_{i,j}.
8 IV-8 Task graph: n^2 independent tasks:
T_{1,1} T_{1,2} ... T_{1,n}
T_{2,1} T_{2,2} ... T_{2,n}
...
T_{n,1} T_{n,2} ... T_{n,n}
Mapping:
Matrix A is partitioned using row-wise block mapping.
Matrix C is partitioned using row-wise block mapping.
Matrix B is partitioned using column-wise block mapping.
Task T_{i,j} is mapped to the processor that owns row i of matrix A.
Cluster 1: T_{1,1} T_{1,2} ... T_{1,n}
Cluster 2: T_{2,1} T_{2,2} ... T_{2,n}
...
Cluster n: T_{n,1} T_{n,2} ... T_{n,n}
9 IV-9 Parallel algorithm:
For j = 1 to n
  Broadcast column B_j to all processors.
  Do tasks T_{1,j}, T_{2,j}, ..., T_{n,j} in parallel.
Endfor
Evaluation: each multiplication or addition counts one time unit omega. Each task T_{i,j} costs 2n*omega (n multiplications and n additions). Assume each broadcast costs (alpha + beta*n)*log p. Each processor executes n/p tasks per step, so
PT = sum_{j=1..n} [ (alpha + beta*n)*log p + (n/p)*2n*omega ]
   = n*(alpha + beta*n)*log p + (2n^3*omega)/p.
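The cost model is easy to explore numerically. A sketch, assuming the per-task cost 2n*omega and the broadcast cost (alpha+beta*n)*log2(p) as above; the constants alpha, beta, omega below are illustrative, not from the slides:

```python
import math

def parallel_time(n, p, alpha, beta, omega):
    """PT = n*(alpha + beta*n)*log2(p) + 2*n**3*omega/p  (communication + computation)."""
    comm = n * (alpha + beta * n) * math.log2(p) if p > 1 else 0.0
    comp = 2 * n**3 * omega / p
    return comm + comp

# speedup over the sequential time 2*n^3*omega for illustrative machine parameters
n, omega = 512, 1e-9
seq = 2 * n**3 * omega
for p in (2, 4, 8, 16):
    print(p, seq / parallel_time(n, p, alpha=1e-6, beta=1e-8, omega=omega))
```

The broadcast term grows with log p while the compute term shrinks as 1/p, so speedup flattens once communication dominates.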
10 IV-10 Gaussian Elimination - Direct Method for Solving a Linear System -
(1)  4x1 - 9x2 + 2x3 = 2
(2)  2x1 - 4x2 + 4x3 = 3
(3)  -x1 + 2x2 + 2x3 = 1
(2) - (1)*(2/4):    0.5x2 + 3x3 = 2          (4)
(3) - (1)*(-1/4):   -(1/4)x2 + (5/2)x3 = 3/2 (5)
(5) - (4)*(-1/2):   4x3 = 5/2                (6)
Resulting triangular system:
4x1 - 9x2 + 2x3 = 2
0.5x2 + 3x3 = 2
4x3 = 5/2
11 IV-11 Backward substitution:
x3 = (5/2)/4 = 5/8
x2 = (2 - 3x3)/0.5 = 1/4
x1 = (2 + 9x2 - 2x3)/4 = 3/4
12 IV-12 GE on Augmented Matrices
Use an augmented matrix to express the elimination process for solving Ax = b.
Augmented matrix: (A | b).
(2) = (2) - (1)*(2/4), (3) = (3) - (1)*(-1/4):
[  4  -9   2 |  2 ]       [ 4   -9    2   |  2  ]
[  2  -4   4 |  3 ]  -->  [ 0   1/2   3   |  2  ]
[ -1   2   2 |  1 ]       [ 0  -1/4  5/2  | 3/2 ]
Column n+1 of A stores the column b!
13 IV-13 Gaussian Elimination Algorithm
Forward Elimination:
For k = 1 to n-1
  For i = k+1 to n
    a[i,k] = a[i,k]/a[k,k];
    For j = k+1 to n+1
      a[i,j] = a[i,j] - a[i,k]*a[k,j];
    endfor
  endfor
endfor
Loop k controls the elimination steps; loop i controls which row is accessed and loop j controls which column is accessed.
14 IV-14 Backward Substitution
Note that x_i uses the space of a[i,n+1].
For i = n downto 1
  For j = i+1 to n
    x_i = x_i - a[i,j]*x_j;
  Endfor
  x_i = x_i / a[i,i];
Endfor
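The two phases combine into one small solver. A sketch in Python operating on the n x (n+1) augmented matrix, assuming no zero pivots (pivoting is handled later); it is tested on the 3 x 3 example system 4x1-9x2+2x3=2, 2x1-4x2+4x3=3, -x1+2x2+2x3=1:

```python
def gauss_solve(a, n):
    """Solve Ax = b where a is the n x (n+1) augmented matrix (last column is b).
    Forward elimination followed by backward substitution; no pivoting."""
    for k in range(n - 1):
        for i in range(k + 1, n):
            a[i][k] = a[i][k] / a[k][k]          # multiplier, stored in place
            for j in range(k + 1, n + 1):
                a[i][j] -= a[i][k] * a[k][j]
    x = [row[n] for row in a]                    # x_i uses the space of a[i, n+1]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= a[i][j] * x[j]
        x[i] /= a[i][i]
    return x

aug = [[4, -9, 2, 2], [2, -4, 4, 3], [-1, 2, 2, 1]]
print(gauss_solve(aug, 3))  # [0.75, 0.25, 0.625]
```

After the call, `aug` holds the multipliers below the diagonal and the triangular factor above, mirroring the in-place storage the slides use.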
15 IV-15 Algorithm Complexity
Each division, multiplication, or subtraction counts one time unit omega. Ignore loop overhead.
#Operations in forward elimination:
sum_{k=1..n-1} sum_{i=k+1..n} (1 + sum_{j=k+1..n+1} 2) omega
  = sum_{k=1..n-1} sum_{i=k+1..n} (2(n-k) + 3) omega
  ~ 2 omega sum_{k=1..n-1} (n-k)^2  ~  (2n^3/3) omega
#Operations in backward substitution:
sum_{i=1..n} (1 + 2(n-i)) omega  ~  n^2 omega
Total #Operations: about (2n^3/3) omega. Total space: n^2 double-precision numbers.
16 IV-16 Parallel Row-Oriented GE
For k = 1 to n-1
  For i = k+1 to n
    T_k^i: a[i,k] = a[i,k]/a[k,k]
           For j = k+1 to n+1
             a[i,j] = a[i,j] - a[i,k]*a[k,j]
           EndFor
  EndFor
EndFor
T_k^i: reads rows A_k and A_i; writes row A_i.
Dependence graph: stage k = 1 has tasks T_1^2, T_1^3, ..., T_1^n; stage k = 2 has T_2^3, ..., T_2^n; ...; stage k = n-1 has T_{n-1}^n. Each task of stage k depends on tasks of stage k-1.
17 IV-17 Parallelism and Scheduling
Parallelism: tasks T_k^{k+1}, T_k^{k+2}, ..., T_k^n are independent.
Parallel algorithm (basic idea):
For k = 1 to n-1
  Do T_k^{k+1}, T_k^{k+2}, ..., T_k^n in parallel on p processors.
Endfor
18 IV-18 Task Mapping
Define n clusters, one per row:
C_1 = empty, C_2 = {T_1^2}, C_3 = {T_1^3, T_2^3}, ..., C_n = {T_1^n, ..., T_{n-1}^n}.
Cluster C_k contains all tasks that write row k.
Map the n clusters to p processors: C_k goes to proc_map(k), using either block or cyclic mapping.
19 IV-19 Block vs. Cyclic Mapping
If block mapping is used, profile the computation load of C_2, C_3, ..., C_n: cluster C_k contains k-1 tasks, so the load grows with the cluster index. Then
Load(P_0) < Load(P_1) < ... < Load(P_{p-1}):
load is NOT balanced among processors!
If cyclic mapping is used, each processor gets a mix of small and large clusters, and the load is balanced.
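The imbalance can be quantified directly. A sketch assuming the simple cost model that cluster C_k costs k-1 units (one unit per task; the real tasks also shrink with k, which the model ignores):

```python
def loads(n, p, mapping):
    """Per-processor load when cluster C_k (cost k-1 tasks) is assigned by `mapping`."""
    load = [0] * p
    for k in range(1, n + 1):
        cost = k - 1                        # C_k holds tasks T_1^k .. T_{k-1}^k
        if mapping == "block":
            proc = min((k - 1) * p // n, p - 1)   # contiguous blocks of clusters
        else:                               # cyclic
            proc = (k - 1) % p
        load[proc] += cost
    return load

print(loads(16, 4, "block"))   # [6, 22, 38, 54] -- heavily skewed to the last processor
print(loads(16, 4, "cyclic"))  # [24, 28, 32, 36] -- nearly balanced
```

With block mapping the last processor does roughly p times the work of the first; cyclic mapping interleaves cheap and expensive clusters, so the spread stays small.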
20 IV-20 Parallel Algorithm:
Proc 0 broadcasts row 1.
For k = 1 to n-1
  Do T_k^{k+1}, ..., T_k^n in parallel (T_k^i runs on proc_map(i)).
  Broadcast row k+1.
endfor
SPMD Code:
me = mynode();
For i = 1 to n: if proc_map(i) == me, initialize row i;
If proc_map(1) == me, broadcast row 1, else receive it;
For k = 1 to n-1
  For i = k+1 to n
    If proc_map(i) == me, do T_k^i;
  If proc_map(k+1) == me, broadcast row k+1, else receive it.
21 IV-21 Column-Oriented GE
Interchange loops i and j of the row-oriented GE:
For k = 1 to n-1
  For i = k+1 to n
    a[i,k] = a[i,k]/a[k,k]
  EndFor
  For j = k+1 to n+1
    For i = k+1 to n
      a[i,j] = a[i,j] - a[i,k]*a[k,j]
    EndFor
  EndFor
EndFor
22 IV-22 Impact on data accessing patterns
Example: (2) = (2) - (1)*(2/4), (3) = (3) - (1)*(-1/4).
The row-oriented GE writes the updated elements row by row; the column-oriented GE writes the same elements column by column.
23 IV-23 Column-oriented backward substitution
Interchange loops i and j in the row-oriented backward substitution code:
For j = n downto 1
  x_j = x_j / a[j,j];
  For i = j-1 downto 1
    x_i = x_i - a[i,j]*x_j;
  Endfor
EndFor
For example, given:
4x1 - 9x2 + 2x3 = 2
0.5x2 + 3x3 = 2
4x3 = 5/2.
24 IV-24 The row-oriented algorithm performs:
x3 = 5/8
x2 = 2 - 3x3;  x2 = x2/0.5
x1 = 2 + 9x2;  x1 = x1 - 2x3;  x1 = x1/4.
The column-oriented algorithm performs:
x3 = 5/8
x2 = 2 - 3x3;  x1 = 2 - 2x3
x2 = x2/0.5;   x1 = x1 + 9x2
x1 = x1/4.
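The two loop orders compute identical values, only in a different sequence. A sketch of both, using the triangular system from the example (coefficients as reconstructed here: 4x1-9x2+2x3=2, 0.5x2+3x3=2, 4x3=5/2):

```python
def back_sub_row(a, b, n):
    """Row-oriented: finish all updates to x[i] before dividing by a[i][i]."""
    x = b[:]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= a[i][j] * x[j]
        x[i] /= a[i][i]
    return x

def back_sub_col(a, b, n):
    """Column-oriented: once x[j] is final, immediately subtract column j from all x[i] above."""
    x = b[:]
    for j in range(n - 1, -1, -1):
        x[j] /= a[j][j]
        for i in range(j - 1, -1, -1):
            x[i] -= a[i][j] * x[j]
    return x

U = [[4, -9, 2], [0, 0.5, 3], [0, 0, 4]]
rhs = [2, 2, 2.5]
print(back_sub_row(U, rhs, 3))  # [0.75, 0.25, 0.625]
print(back_sub_col(U, rhs, 3))  # [0.75, 0.25, 0.625]
```

The column-oriented order matters for the parallel version, where each column of A lives on one processor.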
25 IV-25 Parallel Column-Oriented GE
Partitioned code:
For k = 1 to n-1
  T_k^k: For i = k+1 to n
           a[i,k] = a[i,k]/a[k,k]
  For j = k+1 to n+1
    T_k^j: For i = k+1 to n
             a[i,j] = a[i,j] - a[i,k]*a[k,j]
26 IV-26 Task graph:
k = 1:   T_1^1 -> T_1^2, T_1^3, ..., T_1^{n+1}
k = 2:   T_2^2 -> T_2^3, T_2^4, ..., T_2^{n+1}
...
k = n-1: T_{n-1}^{n-1} -> T_{n-1}^n, T_{n-1}^{n+1}
Schedule: ?  SPMD code: ?
27 IV-27 Column-oriented backward substitution
Partitioning:
For j = n downto 1
  S_j^x: x_j = x_j / a[j,j];
         For i = j-1 downto 1
           x_i = x_i - a[i,j]*x_j;
         Endfor
EndFor
Dependence: S_n^x -> S_{n-1}^x -> ... -> S_1^x.
28 IV-28 Parallel Algorithm: execute all tasks S_j^x (j = n, ..., 1) gradually on the processor that owns x (column n+1).
For j = n downto 1
  If owner(column x) == me then
    Receive column j if it is not available locally;
    Do S_j^x;
  Else if owner(column j) == me, send column j to the owner of column x.
EndFor
29 IV-29 Problems with the GE Method
Problem 1: a[k,k] = 0. Example:
(1)  0*x1 + 2x2 + 2x3 = 2
(2)  3x1 + 2x2 - 3x3 = 2
(3)  x1 + 5x2 - 2x3 = 5
Using Gaussian elimination: Eq(2) - Eq(1)*(3/0) and Eq(3) - Eq(1)*(1/0) divide by the zero pivot a[1,1] = 0, so elimination breaks down.
Solution: at stage k, interchange rows so that |a[k,k]| is the maximum in the lower portion of column k.
30 IV-30 Gaussian Elimination with Pivoting
Row-oriented Forward Elimination:
For k = 1 to n-1
  Find m such that |a[m,k]| = max_{i >= k} |a[i,k]|;
  If a[m,k] == 0, no unique solution, stop;
  Swap row(k) with row(m);
  For i = k+1 to n
    a[i,k] = a[i,k]/a[k,k];
    For j = k+1 to n
      a[i,j] = a[i,j] - a[i,k]*a[k,j];
    endfor
    b_i = b_i - a[i,k]*b_k;
  endfor
endfor
31 IV-31 An example of GE with Pivoting: after the row interchanges and elimination steps, backward substitution gives x1 = 1, x2 = 1, x3 = 1.
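A sketch of row-oriented GE with partial pivoting on an augmented matrix. The test system is modeled on the zero-pivot example (a zero in position (1,1)); its right-hand sides are illustrative choices, picked so the solution is (1, 1, 1):

```python
def gauss_pivot(a, n):
    """GE with partial pivoting on the n x (n+1) augmented matrix a,
    followed by backward substitution. Returns the solution vector."""
    for k in range(n - 1):
        m = max(range(k, n), key=lambda i: abs(a[i][k]))   # pivot row
        if a[m][k] == 0:
            raise ValueError("no unique solution")
        a[k], a[m] = a[m], a[k]                             # swap row k and row m
        for i in range(k + 1, n):
            a[i][k] /= a[k][k]
            for j in range(k + 1, n + 1):
                a[i][j] -= a[i][k] * a[k][j]
    x = [row[n] for row in a]
    for i in range(n - 1, -1, -1):
        for j in range(i + 1, n):
            x[i] -= a[i][j] * x[j]
        x[i] /= a[i][i]
    return x

# the zero pivot in position (1,1) is handled by the row interchange
aug = [[0, 2, 3, 5], [3, 1, -3, 1], [1, 5, -2, 4]]
print(gauss_pivot(aug, 3))  # ≈ [1.0, 1.0, 1.0]
```

Without the interchange, the very first multiplier would be 3/0; with it, the largest entry in each pivot column also bounds the multipliers by 1, which improves numerical stability.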
32 IV-32 Column-Oriented GE with Pivoting
For k = 1 to n-1
  Find m such that |a[m,k]| = max_{i >= k} |a[i,k]|;
  If a[m,k] == 0, no unique solution, stop.
  Swap row(k) with row(m);
  For i = k+1 to n
    a[i,k] = a[i,k]/a[k,k]
  EndFor
  For j = k+1 to n+1
    For i = k+1 to n
      a[i,j] = a[i,j] - a[i,k]*a[k,j]
    EndFor
  EndFor
EndFor
33 IV-33 Parallel column-oriented GE with pivoting
Partitioned forward elimination:
For k = 1 to n-1
  P_k^k: Find m such that |a[m,k]| = max_{i >= k} |a[i,k]|;
         If a[m,k] == 0, no unique solution, stop.
  For j = k to n+1
    S_k^j: Swap a[k,j] with a[m,j];
  Endfor
  T_k^k: For i = k+1 to n
           a[i,k] = a[i,k]/a[k,k]
         endfor
34 IV-34
  For j = k+1 to n+1
    T_k^j: For i = k+1 to n
             a[i,j] = a[i,j] - a[i,k]*a[k,j]
           endfor
  endfor
35 IV-35 Dependence structure for iteration k
P_k^k: find the maximum element.
  | broadcast the swapping positions
S_k^k, S_k^{k+1}, ..., S_k^{n+1}: swap within each column.
T_k^k: scale column k.
  | broadcast column k
T_k^{k+1}, ..., T_k^{n+1}: update columns k+1, k+2, ..., n+1.
36 IV-36 Combining messages and merging tasks
Define task U_k^k as performing P_k^k, S_k^k, and T_k^k: find the maximum element, swap within column k, and scale column k. Then broadcast the swapping positions and column k together.
Define task U_k^j as performing S_k^j and T_k^j (k+1 <= j <= n+1): swap and update column j.
Tasks U_k^{k+1}, ..., U_k^{n+1} are independent: swap and update columns k+1, k+2, ..., n+1 in parallel.
37 IV-37 Parallel algorithm for pivoting
For k = 1 to n-1
  The owner of column k does U_k^k and broadcasts the swapping positions and column k.
  Do U_k^{k+1}, ..., U_k^{n+1} in parallel.
endfor
38 IV-38 Iterative Methods for Solving Ax = b
Ex:
(1)  6x1 - 2x2 + x3 = 11
(2)  -2x1 + 7x2 + 2x3 = 5
(3)  x1 + 2x2 - 5x3 = -1
=>
x1 = (11 - (-2x2 + x3))/6
x2 = (5 - (-2x1 + 2x3))/7
x3 = (-1 - (x1 + 2x2))/(-5)
=>
x1^(k+1) = (1/6)(11 - (-2 x2^(k) + x3^(k)))
x2^(k+1) = (1/7)(5 - (-2 x1^(k) + 2 x3^(k)))
x3^(k+1) = (1/(-5))(-1 - (x1^(k) + 2 x2^(k)))
39 IV-39 Initial approximation: x1 = 0, x2 = 0, x3 = 0.
Successive iterates approach the true solution (x1, x2, x3) = (2, 1, 1).
Stop when ||x^(k+1) - x^(k)|| < 10^-4.
Need to define the norm ||x^(k+1) - x^(k)||.
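The iteration above can be sketched for a general matrix; here it is tested on the example system, whose exact solution is (2, 1, 1). The infinity norm is used for the stopping test:

```python
def jacobi(A, b, tol=1e-4, max_iter=1000):
    """Jacobi iteration: every component of x_new is computed from x_old only."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(x_new[i] - x[i]) for i in range(n)) < tol:  # ||x_new - x||_inf
            return x_new
        x = x_new
    return x

A = [[6, -2, 1], [-2, 7, 2], [1, 2, -5]]
b = [11, 5, -1]
print(jacobi(A, b))  # converges toward (2, 1, 1)
```

Jacobi converges here because the matrix is strictly diagonally dominant; for a general matrix convergence is not guaranteed.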
40 IV-40 Iterative methods in a matrix format
The component updates can be written as one matrix-vector operation:
(x1, x2, x3)^(k+1) = H (x1, x2, x3)^(k) + d.
General iterative method:
Assign an initial value to x^(0); k = 0.
Do x^(k+1) = H x^(k) + d until ||x^(k+1) - x^(k)|| < epsilon.
41 IV-41 Norm of a Vector
Given x = (x1, x2, ..., xn):
||x||_1 = sum_{i=1..n} |x_i|
||x||_2 = sqrt(sum_i x_i^2)
||x||_inf = max_i |x_i|
Example: x = (-1, 1, 2):
||x||_1 = 4
||x||_2 = sqrt(1 + 1 + 4) = sqrt(6)
||x||_inf = 2
Application: error epsilon = ||x^(k+1) - x^(k)||.
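The three norms are one-liners; a sketch checked against the slide's example x = (-1, 1, 2):

```python
import math

def norm1(x):
    """||x||_1 = sum of absolute values."""
    return sum(abs(v) for v in x)

def norm2(x):
    """||x||_2 = Euclidean length."""
    return math.sqrt(sum(v * v for v in x))

def norm_inf(x):
    """||x||_inf = largest absolute component."""
    return max(abs(v) for v in x)

x = [-1, 1, 2]
print(norm1(x), norm2(x), norm_inf(x))  # 4, sqrt(6) ≈ 2.449, 2
```

Any of the three can serve as the stopping-test norm; the infinity norm is the cheapest per iteration.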
42 IV-42 Jacobi Method for Ax = b
x_i^(k+1) = (1/a_ii) (b_i - sum_{j != i} a_ij x_j^(k)),  i = 1, ..., n.
Example:
(1)  6x1 - 2x2 + x3 = 11
(2)  -2x1 + 7x2 + 2x3 = 5
(3)  x1 + 2x2 - 5x3 = -1
=>
x1 = (11 - (-2x2 + x3))/6
x2 = (5 - (-2x1 + 2x3))/7
x3 = (-1 - (x1 + 2x2))/(-5)
43 IV-43 Jacobi method in a matrix-vector form
For the example:
(x1)^(k+1)   (  0   1/3  -1/6 ) (x1)^(k)   (11/6)
(x2)       = ( 2/7   0   -2/7 ) (x2)     + ( 5/7)
(x3)         ( 1/5  2/5    0  ) (x3)       ( 1/5)
44 IV-44 Parallel Jacobi Method
x^(k+1) = D^(-1) B x^(k) + D^(-1) b, or in general x^(k+1) = H x^(k) + d.
Parallel solution:
Distribute rows of H to processors.
Perform computation based on the owner-computes rule.
Perform all-to-all broadcasting after each iteration.
45 IV-45 If the iterative matrix is sparse
If it contains many zeros, the code design should take advantage of this:
Do not store the known zeros.
Explicitly skip operations applied to zero elements.
Example (y_0 = y_{n+1} = 0):
y_0 - 2y_1 + y_2 = h^2
y_1 - 2y_2 + y_3 = h^2
...
y_{n-1} - 2y_n + y_{n+1} = h^2
46 IV-46 This set of equations can be rewritten as: y 1 y. y n 1 y n = h h. h h The Jacobi method in a matrix format (right side): y 1 y. y n 1 y n k 0.5 h h. h h Too time and space consuming if you multiply using the entire iterative matrix!
47 IV-47 Correct solution: write the Jacobi method as:
Repeat
  For i = 1 to n
    y_i^new = 0.5 (y_{i-1}^old + y_{i+1}^old - h^2)
  Endfor
Until ||y^new - y^old|| < epsilon
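Each update reads only the two neighboring values, so neither the tridiagonal matrix nor the iteration matrix is ever stored. A sketch, assuming a unit interval with mesh width h = 1/(n+1) (the slides do not fix h, so this is an illustrative choice):

```python
def jacobi_tridiag(n, tol=1e-8, max_iter=100000):
    """Jacobi for y[i-1] - 2y[i] + y[i+1] = h^2 with y[0] = y[n+1] = 0.
    Only the two neighbors are read per unknown -- no matrix is stored."""
    h2 = (1.0 / (n + 1)) ** 2
    y = [0.0] * (n + 2)                      # includes boundary entries y[0], y[n+1]
    for _ in range(max_iter):
        y_new = y[:]
        for i in range(1, n + 1):
            y_new[i] = 0.5 * (y[i - 1] + y[i + 1] - h2)
        if max(abs(y_new[i] - y[i]) for i in range(1, n + 1)) < tol:
            return y_new
        y = y_new
    return y

# the difference equations are solved exactly by y_i = (x_i^2 - x_i)/2 with x_i = i*h
y = jacobi_tridiag(8)
print(y[4])  # ≈ -10/81 ≈ -0.1235
```

This is O(n) work and O(n) storage per sweep, versus O(n^2) for a dense matrix-vector multiply.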
48 IV-48 Gauss-Seidel Method
Utilize new solutions as soon as they are available.
(1)  6x1 - 2x2 + x3 = 11
(2)  -2x1 + 7x2 + 2x3 = 5
(3)  x1 + 2x2 - 5x3 = -1
=> Jacobi method:
x1^(k+1) = (1/6)(11 - (-2 x2^(k) + x3^(k)))
x2^(k+1) = (1/7)(5 - (-2 x1^(k) + 2 x3^(k)))
x3^(k+1) = (1/(-5))(-1 - (x1^(k) + 2 x2^(k)))
=> Gauss-Seidel method:
x1^(k+1) = (1/6)(11 - (-2 x2^(k) + x3^(k)))
x2^(k+1) = (1/7)(5 - (-2 x1^(k+1) + 2 x3^(k)))
x3^(k+1) = (1/(-5))(-1 - (x1^(k+1) + 2 x2^(k+1)))
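The only code change from Jacobi is updating x in place, so later components read the already-updated earlier ones. A sketch, again tested on the example system with solution (2, 1, 1):

```python
def gauss_seidel(A, b, tol=1e-4, max_iter=1000):
    """Gauss-Seidel: update x in place so new values are used immediately.
    Returns (solution, iterations used)."""
    n = len(b)
    x = [0.0] * n
    for it in range(1, max_iter + 1):
        diff = 0.0
        for i in range(n):
            new = (b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
            diff = max(diff, abs(new - x[i]))
            x[i] = new                       # reused by components i+1, ..., n this sweep
        if diff < tol:
            return x, it
    return x, max_iter

A = [[6, -2, 1], [-2, 7, 2], [1, 2, -5]]
b = [11, 5, -1]
x, iters = gauss_seidel(A, b)
print(x, iters)  # reaches (2, 1, 1); typically in fewer sweeps than Jacobi needs
```

The in-place update is what makes Gauss-Seidel inherently more sequential than Jacobi, which is the trade-off the parallel discussion above is about.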
49 IV-49 Use epsilon = ||x^(k+1) - x^(k)|| as the stopping test.
Gauss-Seidel typically converges faster than Jacobi's method.
Solving Systems of Equations 3 Scenarios for Solutions There are three general situations we may find ourselves in when attempting to solve systems of equations: 1 The system could have one unique solution.
More informationLU Factorization a 11 a 1 a 1n A = a 1 a a n (b) a n1 a n a nn L = l l 1 l ln1 ln 1 75 U = u 11 u 1 u 1n 0 u u n 0 u n...
.. Factorizations Reading: Trefethen and Bau (1997), Lecture 0 Solve the n n linear system by Gaussian elimination Ax = b (1) { Gaussian elimination is a direct method The solution is found after a nite
More informationNumerical Analysis: Solving Systems of Linear Equations
Numerical Analysis: Solving Systems of Linear Equations Mirko Navara http://cmpfelkcvutcz/ navara/ Center for Machine Perception, Department of Cybernetics, FEE, CTU Karlovo náměstí, building G, office
More information1111: Linear Algebra I
1111: Linear Algebra I Dr. Vladimir Dotsenko (Vlad) Michaelmas Term 2015 Dr. Vladimir Dotsenko (Vlad) 1111: Linear Algebra I Michaelmas Term 2015 1 / 15 From equations to matrices For example, if we consider
More informationCME342 Parallel Methods in Numerical Analysis. Matrix Computation: Iterative Methods II. Sparse Matrix-vector Multiplication.
CME342 Parallel Methods in Numerical Analysis Matrix Computation: Iterative Methods II Outline: CG & its parallelization. Sparse Matrix-vector Multiplication. 1 Basic iterative methods: Ax = b r = b Ax
More informationOverview: Synchronous Computations
Overview: Synchronous Computations barriers: linear, tree-based and butterfly degrees of synchronization synchronous example 1: Jacobi Iterations serial and parallel code, performance analysis synchronous
More informationGauss-Seidel method. Dr. Motilal Panigrahi. Dr. Motilal Panigrahi, Nirma University
Gauss-Seidel method Dr. Motilal Panigrahi Solving system of linear equations We discussed Gaussian elimination with partial pivoting Gaussian elimination was an exact method or closed method Now we will
More informationChapter 7 Iterative Techniques in Matrix Algebra
Chapter 7 Iterative Techniques in Matrix Algebra Per-Olof Persson persson@berkeley.edu Department of Mathematics University of California, Berkeley Math 128B Numerical Analysis Vector Norms Definition
More informationSolving PDEs with CUDA Jonathan Cohen
Solving PDEs with CUDA Jonathan Cohen jocohen@nvidia.com NVIDIA Research PDEs (Partial Differential Equations) Big topic Some common strategies Focus on one type of PDE in this talk Poisson Equation Linear
More information. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in
Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2
More information4.2 Floating-Point Numbers
101 Approximation 4.2 Floating-Point Numbers 4.2 Floating-Point Numbers The number 3.1416 in scientific notation is 0.31416 10 1 or (as computer output) -0.31416E01..31416 10 1 exponent sign mantissa base
More informationScientific Computing WS 2018/2019. Lecture 9. Jürgen Fuhrmann Lecture 9 Slide 1
Scientific Computing WS 2018/2019 Lecture 9 Jürgen Fuhrmann juergen.fuhrmann@wias-berlin.de Lecture 9 Slide 1 Lecture 9 Slide 2 Simple iteration with preconditioning Idea: Aû = b iterative scheme û = û
More informationClass Notes: Solving Simultaneous Linear Equations by Gaussian Elimination. Consider a set of simultaneous linear equations:
METR/OCN 465: Computer Programming with Applications in Meteorology and Oceanography Dr Dave Dempsey Dept of Geosciences, SFSU Class Notes: Solving Simultaneous Linear Equations by Gaussian Elimination
More informationLecture 12 (Tue, Mar 5) Gaussian elimination and LU factorization (II)
Math 59 Lecture 2 (Tue Mar 5) Gaussian elimination and LU factorization (II) 2 Gaussian elimination - LU factorization For a general n n matrix A the Gaussian elimination produces an LU factorization if
More informationCHAPTER 6. Direct Methods for Solving Linear Systems
CHAPTER 6 Direct Methods for Solving Linear Systems. Introduction A direct method for approximating the solution of a system of n linear equations in n unknowns is one that gives the exact solution to
More informationPH1105 Lecture Notes on Linear Algebra.
PH05 Lecture Notes on Linear Algebra Joe Ó hógáin E-mail: johog@mathstcdie Main Text: Calculus for the Life Sciences by Bittenger, Brand and Quintanilla Other Text: Linear Algebra by Anton and Rorres Matrices
More informationEXAMPLES OF CLASSICAL ITERATIVE METHODS
EXAMPLES OF CLASSICAL ITERATIVE METHODS In these lecture notes we revisit a few classical fixpoint iterations for the solution of the linear systems of equations. We focus on the algebraic and algorithmic
More informationHomework 6 Solutions
Homeork 6 Solutions Igor Yanovsky (Math 151B TA) Section 114, Problem 1: For the boundary-value problem y (y ) y + log x, 1 x, y(1) 0, y() log, (1) rite the nonlinear system and formulas for Neton s method
More informationMath/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer Homework 3 Due: Tuesday, July 3, 2018
Math/Phys/Engr 428, Math 529/Phys 528 Numerical Methods - Summer 28. (Vector and Matrix Norms) Homework 3 Due: Tuesday, July 3, 28 Show that the l vector norm satisfies the three properties (a) x for x
More informationEBG # 3 Using Gaussian Elimination (Echelon Form) Gaussian Elimination: 0s below the main diagonal
EBG # 3 Using Gaussian Elimination (Echelon Form) Gaussian Elimination: 0s below the main diagonal [ x y Augmented matrix: 1 1 17 4 2 48 (Replacement) Replace a row by the sum of itself and a multiple
More information