Parallel Programming. Parallel Algorithms: Linear Systems Solvers
1 Parallel Programming Parallel algorithms Linear systems solvers
2 Terminology
    System of linear equations: solve Ax = b for x
    Special matrices: upper triangular, lower triangular, diagonally dominant, symmetric
    2010@FEUP Linear Systems Solvers 2
3 Upper Triangular: a[i,j] = 0 whenever i > j (all nonzeros on or above the diagonal)
4 Lower Triangular: a[i,j] = 0 whenever i < j (all nonzeros on or below the diagonal)
5 Diagonally Dominant: |a[i,i]| > Σ_{j≠i} |a[i,j]| for every row i
6 Symmetric: a[i,j] = a[j,i] for all i, j (A = A^T)
7 Back Substitution
    Used to solve an upper triangular system Tx = b for x
    Methodology: one element of x can be computed immediately
    Use this value to simplify the system, revealing another element that can be computed immediately
    Repeat
8 Back Substitution
    1x0 + 1x1 - 1x2 + 4x3 = 8
         -2x1 - 3x2 + 1x3 = 5
                2x2 - 3x3 = 0
                      2x3 = 4
9 Back Substitution
    1x0 + 1x1 - 1x2 + 4x3 = 8
         -2x1 - 3x2 + 1x3 = 5
                2x2 - 3x3 = 0
                      2x3 = 4        x3 = 2
10 Back Substitution
    1x0 + 1x1 - 1x2 = 0
         -2x1 - 3x2 = 3
                2x2 = 6
                2x3 = 4
11 Back Substitution
    1x0 + 1x1 - 1x2 = 0
         -2x1 - 3x2 = 3
                2x2 = 6              x2 = 3
                2x3 = 4
12 Back Substitution
    1x0 + 1x1 = 3
         -2x1 = 12
          2x2 = 6
          2x3 = 4
13 Back Substitution
    1x0 + 1x1 = 3
         -2x1 = 12                   x1 = -6
          2x2 = 6
          2x3 = 4
14 Back Substitution
    1x0 = 9
   -2x1 = 12
    2x2 = 6
    2x3 = 4
15 Back Substitution
    1x0 = 9                          x0 = 9
   -2x1 = 12
    2x2 = 6
    2x3 = 4
16 Pseudocode
    for i = n-1 down to 0 do
        x[i] = b[i] / a[i,i]
        for j = 0 to i-1 do
            b[j] = b[j] - x[i] * a[j,i]
        endfor
    endfor
    Time complexity: Θ(n^2)
17 Data Dependence Diagram
    We cannot execute the outer loop in parallel
    We can execute the inner loop in parallel
18 Row-oriented Algorithm
    Associate a primitive task with each row of A and the corresponding elements of x and b
    During iteration i, task i must compute x_i and broadcast its value
    During iteration i, the task associated with row j computes the new value of b_j
    Agglomerate using a rowwise interleaved striped decomposition
19 Interleaved Decompositions
    Rowwise interleaved striped decomposition
    Columnwise interleaved striped decomposition
    The process with rank r takes the rows (columns) i for which i % p == r
20 Complexity Analysis
    Each process performs an average of n / (2p) iterations of loop j for each iteration of loop i
    A total of n iterations of loop i in all
    Computational complexity: Θ(n^2 / p)
    One broadcast per iteration of loop i
    Communication complexity: Θ(n log p)
21 Gaussian Elimination
    Used to solve Ax = b when A is dense
    Reduces Ax = b to an upper triangular system Tx = c
    Back substitution can then solve Tx = c for x
22 Gaussian Elimination
     4x0 +  6x1 + 2x2 - 2x3 = 8
     2x0        + 5x2 - 2x3 = 4
    -4x0 -  3x1 - 5x2 + 4x3 = 1
     8x0 + 18x1 - 2x2 + 3x3 = 40
23 Gaussian Elimination
     4x0 + 6x1 + 2x2 - 2x3 = 8
          -3x1 + 4x2 - 1x3 = 0
          +3x1 - 3x2 + 2x3 = 9
          +6x1 - 6x2 + 7x3 = 24
24 Gaussian Elimination
     4x0 + 6x1 + 2x2 - 2x3 = 8
          -3x1 + 4x2 - 1x3 = 0
                 1x2 + 1x3 = 9
                 2x2 + 5x3 = 24
25 Gaussian Elimination
     4x0 + 6x1 + 2x2 - 2x3 = 8
          -3x1 + 4x2 - 1x3 = 0
                 1x2 + 1x3 = 9
                       3x3 = 6
26 Iteration of Gaussian Elimination
    (Figure: during iteration i, rows above row i hold elements that will not be changed; row i is the pivot row; the entries in columns 0..i-1 of the rows below have already been driven to 0; the elements below and to the right of position (i, i) are the ones that will be changed.)
27 Numerical Stability Issues
    If the pivot element is close to zero, significant roundoff errors can result
    Gaussian elimination with partial pivoting eliminates this problem
    In step i we search rows i through n-1 for the row whose column-i element has the largest absolute value
    Swap (pivot) this row with row i
28 Gauss elimination algorithm
    for i = 0 to n-1                          // initialize pivot array
        loc[i] = i
    for i = 0 to n-1                          // for each row
        magnitude = 0                         // find pivot
        for j = i to n-1
            if |a[loc[j], i]| > magnitude
                magnitude = |a[loc[j], i]|
                pivot = j
        exchange(loc[i], loc[pivot])
        for j = i+1 to n-1                    // reduce each row below i
            factor = a[loc[j], i] / a[loc[i], i]
            for k = i+1 to n                  // reduce each element after i
                a[loc[j], k] = a[loc[j], k] - a[loc[i], k] * factor
    for i = n-1 downto 0                      // back substitution
        x[i] = a[loc[i], n] / a[loc[i], i]
        for j = 0 to i-1
            a[loc[j], n] = a[loc[j], n] - x[i] * a[loc[j], i]
29 Row-oriented Parallel Algorithm
    Associate a primitive task with each row of A and the corresponding elements of x and b
    A kind of reduction is needed to find the identity of the pivot row (called a tournament)
    Tournament: we want to determine the identity of the row with the largest value, rather than the largest value itself
    Could be done with two all-reductions
    MPI provides a simpler, faster mechanism
30 MPI_MAXLOC, MPI_MINLOC
    MPI provides the reduction operators MPI_MAXLOC and MPI_MINLOC
    It also provides datatypes representing a (value, index) pair
31 MPI (value, index) Datatypes
    MPI_2INT              two ints
    MPI_DOUBLE_INT        a double followed by an int
    MPI_FLOAT_INT         a float followed by an int
    MPI_LONG_INT          a long followed by an int
    MPI_LONG_DOUBLE_INT   a long double followed by an int
    MPI_SHORT_INT         a short followed by an int
32 Example Use of MPI_MAXLOC
    struct {
        double value;
        int index;
    } local, global;
    ...
    local.value = fabs(a[j][i]);
    local.index = j;
    ...
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE_INT, MPI_MAXLOC, MPI_COMM_WORLD);
33 Second Communication per Iteration
    (Figure: after the tournament identifies row picked, its elements a[picked][i] .. a[picked][k] are broadcast so that each task can reduce its own rows, updating a[j][i] .. a[j][k].)
34 Communication Complexity
    Complexity of the tournament: Θ(log p)
    Complexity of broadcasting the pivot row: Θ(n log p)
    A total of n - 1 iterations
    Overall communication complexity: Θ(n^2 log p)
35 Column-oriented Algorithm
    Associate a primitive task with each column of A and another primitive task with b
    During iteration i, the task controlling column i determines the pivot row and broadcasts its identity
    During iteration i, the task controlling column i must also broadcast column i to the other tasks
    Agglomerate tasks in an interleaved fashion to balance workloads
    Isoefficiency is the same as for the row-oriented algorithm
36 Comparison of the Two Algorithms
    Both algorithms divide the workload evenly
    Both algorithms do a broadcast each iteration
    Difference: identification of the pivot row
        The row-oriented algorithm does the search in parallel but requires an all-reduce step
        The column-oriented algorithm does the search sequentially but requires no communication
    The row-oriented algorithm is superior when n is relatively large and p relatively small
37 Problems with These Algorithms
    They break parallel execution into separate computation and communication phases
    Processes perform no computation during the broadcast steps
    The time spent doing broadcasts is large enough to ensure poor scalability
38 Pipelined, Row-Oriented Algorithm
    We want to overlap communication time with computation time
    We could do this if we knew in advance the row used to reduce all the other rows
    Let's pivot columns instead of rows!
    In iteration i we can then always use row i to reduce the other rows
39 Gauss elimination with column pivoting
    for i = 0 to n-1                          // initialize pivot array
        loc[i] = i
    for i = 0 to n-1                          // for each row
        magnitude = 0                         // find pivot inside row i
        for j = i to n-1
            if |a[i, loc[j]]| > magnitude
                magnitude = |a[i, loc[j]]|
                pivot = j
        exchange(loc[i], loc[pivot])
        for j = i+1 to n-1                    // for each row j below i
            factor = a[j, loc[i]] / a[i, loc[i]]
            for k = i to n-1                  // reduce each column element, including i
                a[j, loc[k]] = a[j, loc[k]] - a[i, loc[k]] * factor
            a[j, n] = a[j, n] - a[i, n] * factor   // right-hand-side column is not permuted
    for i = n-1 downto 0                      // back substitution
        x[loc[i]] = a[i, n] / a[i, loc[i]]
        for j = 0 to i-1
            a[j, n] = a[j, n] - x[loc[i]] * a[j, loc[i]]
40-48 Communication Pattern
    (Figure sequence, four processes 0..3 arranged in a ring: process 0 sends row 0 to process 1; while process 1 forwards row 0 onward, the processes that already hold row 0 reduce their own rows using it; row 0 continues around to processes 2 and 3. As soon as the owner of row 1 finishes its reductions with row 0, it starts row 1 circulating, so the broadcast of row 1 overlaps the reductions still being done with row 0.)
49 Analysis
    Total computation time: Θ(n^3 / p)
    Total message transmission time: Θ(n^2)
    When n is large enough, message transmission time is completely overlapped by computation time
    Message start-up time is not overlapped: Θ(n)
    Parallel overhead: Θ(n p)
50 Sparse Systems
    Gaussian elimination is not well suited for sparse systems
    The coefficient matrix gradually fills with nonzero elements
    Result: increased storage requirements and increased total operation count
51 Example of Fill
    (Figure: the nonzero pattern of a sparse matrix before and after elimination; zero entries become nonzero as elimination proceeds.)
52 Iterative Methods
    Iterative method: an algorithm that generates a series of approximations to the solution's value
    Require less storage than direct methods
    Since they avoid computations on zero elements, they can save a lot of computation
53 Jacobi Method
    x_i^(k+1) = (b_i - Σ_{j≠i} a_{i,j} · x_j^(k)) / a_{i,i}
    Values of the elements of vector x at iteration k+1 depend on the values of vector x at iteration k
    Gauss-Seidel method: use the latest available value of each x_i
54 Jacobi Method Iterations
    (Figure: starting from x^(0) = (0, 0), the iterates converge to the solution x = (2, 3).)
55 Rate of Convergence
    Even when the Jacobi and Gauss-Seidel methods converge on a solution, the rate of convergence is often too slow to make them practical
    We will move on to an iterative method with much faster convergence
56 Conjugate Gradient Method
    A is positive definite if for every nonzero vector x and its transpose x^T, the product x^T A x > 0
    If A is symmetric and positive definite, then the function
        q(x) = (1/2) x^T A x - x^T b + c
    has a unique minimizer that is the solution to Ax = b
    Conjugate gradient is an iterative method that solves Ax = b by minimizing q(x)
57 Conjugate Gradient Convergence
    (Figure: successive iterates plotted in the (x1, x2) plane.)
    Finds the value of the n-dimensional solution in at most n iterations
58 Conjugate Gradient Computations
    Matrix-vector multiplication
    Inner product (dot product)
    Matrix-vector multiplication has the higher time complexity
    Must modify the previously developed algorithm to account for sparse matrices
59 Algorithm
    b, x, d, g, t: size-n vectors (linear system Ax = b)
    for i = 0 to n-1                          // initializations
        d[i] = x[i] = 0
        g[i] = -b[i]
    for j = 1 to n                            // iterations
        d1 = Inner_Product(g, g)
        g = Matrix_Vector_Product(A, x)
        for i = 0 to n-1
            g[i] = g[i] - b[i]
        n1 = Inner_Product(g, g)
        if n1 < e break
        for i = 0 to n-1
            d[i] = -g[i] + (n1 / d1) * d[i]
        n2 = Inner_Product(d, g)
        t = Matrix_Vector_Product(A, d)
        d2 = Inner_Product(d, t)
        s = -n2 / d2
        for i = 0 to n-1
            x[i] = x[i] + s * d[i]
    In vector form:
        g_k = A x_{k-1} - b
        d_k = -g_k + (g_k^T g_k / g_{k-1}^T g_{k-1}) d_{k-1}
        s_k = -(d_k^T g_k) / (d_k^T A d_k)
        x_k = x_{k-1} + s_k d_k
60 Rowwise Block Striped Decomposition of a Symmetrically Banded Matrix
    (Figure: (a) the banded matrix; (b) its rowwise block striped decomposition.)
61 Representation of Vectors
    Replicate vectors:
        Need an all-gather step after the matrix-vector multiply
        Inner product has time complexity Θ(n)
    Block decomposition of vectors:
        Need an all-gather step before the matrix-vector multiply
        Inner product has time complexity Θ(n/p + log p)
62 Summary
    Solving systems of linear equations
        Direct methods
        Iterative methods
    Parallel designs for:
        Back substitution
        Gaussian elimination
        Conjugate gradient method
Math 47 (Numerical methods) Chapter 3 (second half). System of equations Overlap 3.5 3.8 of Bradie 3.5 LU factorization w/o pivoting. Motivation: ( ) A I Gaussian Elimination (U L ) where U is upper triangular
More informationMaths for Signals and Systems Linear Algebra for Engineering Applications
Maths for Signals and Systems Linear Algebra for Engineering Applications Lectures 1-2, Tuesday 11 th October 2016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE LONDON
More informationDense LU factorization and its error analysis
Dense LU factorization and its error analysis Laura Grigori INRIA and LJLL, UPMC February 2016 Plan Basis of floating point arithmetic and stability analysis Notation, results, proofs taken from [N.J.Higham,
More informationBoundary Value Problems - Solving 3-D Finite-Difference problems Jacob White
Introduction to Simulation - Lecture 2 Boundary Value Problems - Solving 3-D Finite-Difference problems Jacob White Thanks to Deepak Ramaswamy, Michal Rewienski, and Karen Veroy Outline Reminder about
More informationMATHEMATICS FOR COMPUTER VISION WEEK 2 LINEAR SYSTEMS. Dr Fabio Cuzzolin MSc in Computer Vision Oxford Brookes University Year
1 MATHEMATICS FOR COMPUTER VISION WEEK 2 LINEAR SYSTEMS Dr Fabio Cuzzolin MSc in Computer Vision Oxford Brookes University Year 2013-14 OUTLINE OF WEEK 2 Linear Systems and solutions Systems of linear
More informationScientific Computing
Scientific Computing Direct solution methods Martin van Gijzen Delft University of Technology October 3, 2018 1 Program October 3 Matrix norms LU decomposition Basic algorithm Cost Stability Pivoting Pivoting
More informationParallel Numerical Algorithms
Parallel Numerical Algorithms Chapter 13 Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign CS 554 / CSE 512 Michael T. Heath Parallel Numerical Algorithms
More informationProcess Model Formulation and Solution, 3E4
Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October
More informationThe Conjugate Gradient Method
The Conjugate Gradient Method Classical Iterations We have a problem, We assume that the matrix comes from a discretization of a PDE. The best and most popular model problem is, The matrix will be as large
More information5.1 Banded Storage. u = temperature. The five-point difference operator. uh (x, y + h) 2u h (x, y)+u h (x, y h) uh (x + h, y) 2u h (x, y)+u h (x h, y)
5.1 Banded Storage u = temperature u= u h temperature at gridpoints u h = 1 u= Laplace s equation u= h u = u h = grid size u=1 The five-point difference operator 1 u h =1 uh (x + h, y) 2u h (x, y)+u h
More informationApplied Linear Algebra in Geoscience Using MATLAB
Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in
More informationThe amount of work to construct each new guess from the previous one should be a small multiple of the number of nonzeros in A.
AMSC/CMSC 661 Scientific Computing II Spring 2005 Solution of Sparse Linear Systems Part 2: Iterative methods Dianne P. O Leary c 2005 Solving Sparse Linear Systems: Iterative methods The plan: Iterative
More informationComputational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science
Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationScalable Non-blocking Preconditioned Conjugate Gradient Methods
Scalable Non-blocking Preconditioned Conjugate Gradient Methods Paul Eller and William Gropp University of Illinois at Urbana-Champaign Department of Computer Science Supercomputing 16 Paul Eller and William
More informationNumerical Methods Lecture 2 Simultaneous Equations
Numerical Methods Lecture 2 Simultaneous Equations Topics: matrix operations solving systems of equations pages 58-62 are a repeat of matrix notes. New material begins on page 63. Matrix operations: Mathcad
More information6: Introduction to Elimination. Lecture
Math 2270 - Lecture 6: Introduction to Elimination Dylan Zwick Fall 2012 This lecture covers section 2.2 of the textbook. 1 Elimination for Equations Involving Two Vari ables We re going to start our discussion
More informationMAC1105-College Algebra. Chapter 5-Systems of Equations & Matrices
MAC05-College Algebra Chapter 5-Systems of Equations & Matrices 5. Systems of Equations in Two Variables Solving Systems of Two Linear Equations/ Two-Variable Linear Equations A system of equations is
More information1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )
Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical
More informationPower System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur. Lecture - 21 Power Flow VI
Power System Analysis Prof. A. K. Sinha Department of Electrical Engineering Indian Institute of Technology, Kharagpur Lecture - 21 Power Flow VI (Refer Slide Time: 00:57) Welcome to lesson 21. In this
More informationAn introduction to parallel algorithms
An introduction to parallel algorithms Knut Mørken Department of Informatics Centre of Mathematics for Applications University of Oslo Winter School on Parallel Computing Geilo January 20 25, 2008 1/26
More informationWe use the overhead arrow to denote a column vector, i.e., a number with a direction. For example, in three-space, we write
1 MATH FACTS 11 Vectors 111 Definition We use the overhead arrow to denote a column vector, ie, a number with a direction For example, in three-space, we write The elements of a vector have a graphical
More informationLinear Algebra. PHY 604: Computational Methods in Physics and Astrophysics II
Linear Algebra Numerical Linear Algebra We've now seen several places where solving linear systems comes into play Implicit ODE integration Cubic spline interpolation We'll see many more, including Solving
More informationLinear Equation Systems Direct Methods
Linear Equation Systems Direct Methods Content Solution of Systems of Linear Equations Systems of Equations Inverse of a Matrix Cramer s Rule Gauss Jordan Method Montante Method Solution of Systems of
More informationLab 1: Iterative Methods for Solving Linear Systems
Lab 1: Iterative Methods for Solving Linear Systems January 22, 2017 Introduction Many real world applications require the solution to very large and sparse linear systems where direct methods such as
More informationNumerical Methods Lecture 2 Simultaneous Equations
CGN 42 - Computer Methods Numerical Methods Lecture 2 Simultaneous Equations Topics: matrix operations solving systems of equations Matrix operations: Adding / subtracting Transpose Multiplication Adding
More informationGaussian Elimination without/with Pivoting and Cholesky Decomposition
Gaussian Elimination without/with Pivoting and Cholesky Decomposition Gaussian Elimination WITHOUT pivoting Notation: For a matrix A R n n we define for k {,,n} the leading principal submatrix a a k A
More information