An introduction to parallel algorithms

Knut Mørken, Department of Informatics and Centre of Mathematics for Applications, University of Oslo. Winter School on Parallel Computing, Geilo, January 20–25.

Outline
1 Introduction
2 Parallel addition
3 Computing the dot product in parallel
4 Parallel matrix multiplication
5 Solving linear systems of equations in parallel
6 Conclusion

Adding n numbers

Suppose we have n real numbers $(a_i)_{i=1}^{n}$ and want to compute their sum s. In mathematics,
$$s = \sum_{i=1}^{n} a_i.$$

On a computer:
s = 0;
for i = 1, 2, ..., n
    s = s + a_i;
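A minimal Python version of the sequential loop, added here for concreteness (the function name is mine):

```python
def sequential_sum(a):
    """Add the numbers one at a time: n - 1 additions, n - 1 time steps."""
    s = 0
    for ai in a:
        s = s + ai
    return s

print(sequential_sum([1, 2, 3, 4, 5]))  # 15
```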

Algorithm

An algorithm is a precise prescription of how to accomplish a task.

Two important issues determine the character of an algorithm:
Which operations are available to us?
In which order can the operations be performed? One at a time (sequentially), or several at once (in parallel).

Primitive operations

We will assume that the basic operations available to us are:
the four arithmetic operations (with two arguments),
comparison of numbers (if-tests),
elementary mathematical functions (trigonometric, exponential, logarithms, roots, ...).

We will measure computing time (complexity) by counting the number of time steps necessary to complete all the arithmetic operations of an algorithm.

Order of operations

In traditional (sequential) programming it is assumed that a computer can only perform one operation at a time.

Adding n numbers:
s = 0;
for i = 1, 2, ..., n
    s = s + a_i;

Mathematically, however, many problems exhibit considerable freedom in the order in which operations are performed.

Order of operations

We can add the n numbers in any order,
$$\sum_{i=1}^{n} a_i = a_{\pi_1} + a_{\pi_2} + \cdots + a_{\pi_n},$$
where $(\pi_1, \ldots, \pi_n)$ is any permutation of the integers $1, \ldots, n$. This can be exploited to speed up the addition if we have the possibility of performing several operations simultaneously.

Parallel addition

Suppose that n = 10. Then we have
$$
\begin{aligned}
s &= a_1 + a_2 + a_3 + a_4 + a_5 + a_6 + a_7 + a_8 + a_9 + a_{10} \\
  &= \underbrace{(a_1 + a_2)}_{a_1^1} + \underbrace{(a_3 + a_4)}_{a_2^1}
   + \underbrace{(a_5 + a_6)}_{a_3^1} + \underbrace{(a_7 + a_8)}_{a_4^1}
   + \underbrace{(a_9 + a_{10})}_{a_5^1} \\
  &= a_1^1 + a_2^1 + a_3^1 + a_4^1 + a_5^1 \\
  &= \underbrace{(a_1^1 + a_2^1)}_{a_1^2} + \underbrace{(a_3^1 + a_4^1)}_{a_2^2}
   + \underbrace{a_5^1}_{a_3^2} \\
  &= a_1^2 + a_2^2 + a_3^2 \\
  &= \underbrace{(a_1^2 + a_2^2)}_{a_1^3} + \underbrace{a_3^2}_{a_2^3} \\
  &= a_1^3 + a_2^3.
\end{aligned}
$$

Parallel addition

This means that if we have 5 computing units at our disposal, we can add the 10 numbers in 4 time steps. In general, this technique allows us to add n numbers in $\lceil \log_2 n \rceil$ time steps if we have $\lfloor n/2 \rfloor$ computing units.

Adding n numbers (pairwise)
n_0 = n; a^0 = a;
for j = 1, 2, ..., ceil(log_2 n)
    parallel: a^j_{i/2} = a^{j-1}_{i-1} + a^{j-1}_{i},   i = 2, 4, 6, ..., 2*floor(n_{j-1}/2);
    if n_{j-1} % 2 > 0 then a^j_{ceil(n_{j-1}/2)} = a^{j-1}_{n_{j-1}};
    n_j = ceil(n_{j-1}/2);
Here n_j denotes the number of partial sums left after step j, and a^j holds those partial sums.
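The scheme above can be sketched in plain Python. This is a sequential simulation in which each pass of the while loop corresponds to one parallel time step, since the additions within a pass are independent (function and variable names are mine):

```python
def pairwise_sum(a):
    """Tree reduction: all additions within one pass are mutually independent,
    so each pass could be executed as a single parallel time step."""
    level = list(a)
    steps = 0
    while len(level) > 1:
        nxt = [level[i] + level[i + 1] for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:      # a leftover element is carried to the next level
            nxt.append(level[-1])
        level = nxt
        steps += 1
    return level[0], steps

print(pairwise_sum(range(1, 11)))    # (55, 4): 10 numbers in 4 parallel time steps
```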

Parallel addition

There is a recursive version of the previous algorithm where each function call can be given to a new processor.

Adding n numbers (recursive version)
sum(a, i, j) {
    if i < j then
        m = (i + j)//2;
        return(sum(a, i, m) + sum(a, m + 1, j));
    else
        return(a_i);
}
The total is obtained as sum(a, 1, n).
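A runnable translation of the recursive pseudocode, using 0-based Python indexing; the two recursive calls are independent of each other, which is what allows them to be handed to different processors:

```python
def recursive_sum(a, i, j):
    """Sum a[i..j] (inclusive). The two halves are independent subproblems."""
    if i == j:
        return a[i]
    m = (i + j) // 2
    return recursive_sum(a, i, m) + recursive_sum(a, m + 1, j)

a = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(recursive_sum(a, 0, len(a) - 1))  # 55
```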

Parallel computing in practice

Problem: Add 1000 integers, each with 5 digits.
Resources: This group of people.

Method
1 While more than one number:
  1 I share out the numbers evenly, at least two numbers each
  2 You add your numbers
  3 You pass your results back to me

Dot product

If $a = (a_1, a_2, \ldots, a_n)$ and $b = (b_1, b_2, \ldots, b_n)$, then
$$a \cdot b = \sum_{i=1}^{n} a_i b_i.$$

Provided we have n processors, this can be computed in $\log_2 n + 1$ time steps:

Inner product
1 Compute the n products in parallel.
2 Compute the sum.
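A sketch of this two-phase scheme, reusing the pairwise_sum function from the parallel-addition sketch above; the products form one parallel time step and the pairwise summation the remaining $\log_2 n$ steps:

```python
def parallel_style_dot(a, b):
    """Phase 1: the n products are mutually independent (one parallel step).
    Phase 2: pairwise summation of the products (ceil(log2 n) further steps)."""
    products = [x * y for x, y in zip(a, b)]
    total, steps = pairwise_sum(products)
    return total, steps + 1

print(parallel_style_dot([1, 2, 3, 4], [5, 6, 7, 8]))  # (70, 3)
```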

Matrix multiplication

Let $A \in \mathbb{R}^{n,n}$ and $B \in \mathbb{R}^{n,n}$. Then the product $AB$ is a matrix in $\mathbb{R}^{n,n}$ which is given by
$$
AB = \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_n \end{pmatrix}
     \begin{pmatrix} b_1 & b_2 & \cdots & b_n \end{pmatrix}
   = \begin{pmatrix}
       a_1 \cdot b_1 & a_1 \cdot b_2 & \cdots & a_1 \cdot b_n \\
       a_2 \cdot b_1 & a_2 \cdot b_2 & \cdots & a_2 \cdot b_n \\
       \vdots        & \vdots        &        & \vdots        \\
       a_n \cdot b_1 & a_n \cdot b_2 & \cdots & a_n \cdot b_n
     \end{pmatrix},
$$
where $a_i$ denotes row i of A and $b_j$ denotes column j of B.
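For reference, a direct Python implementation of this definition, where entry (i, j) is the dot product of row i of A with column j of B (names are mine; no parallelism yet):

```python
def matmul(A, B):
    """C[i][j] is the dot product of row i of A with column j of B."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))  # [[19, 22], [43, 50]]
```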

Parallel matrix multiplication

Recall that one dot product of length n can be computed in $\log_2 n + 1$ time steps, provided we have n processors at our disposal.

Since all the $n^2$ dot products in the matrix product are independent, we can also compute AB in $\log_2 n + 1$ time steps, provided we may use $n^3$ processors.

Parallel matrix multiplication

Alternative algorithm: Give each of the $n^2$ dot products in
$$
\begin{pmatrix}
  a_1 \cdot b_1 & a_1 \cdot b_2 & \cdots & a_1 \cdot b_n \\
  a_2 \cdot b_1 & a_2 \cdot b_2 & \cdots & a_2 \cdot b_n \\
  \vdots        & \vdots        &        & \vdots        \\
  a_n \cdot b_1 & a_n \cdot b_2 & \cdots & a_n \cdot b_n
\end{pmatrix}
$$
to different processors. Accumulation of a dot product on one processor requires n time steps, provided we have $n^2$ processors.
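One way to make the independence of the $n^2$ dot products concrete is to hand each one to a worker from a pool. The sketch below uses Python's concurrent.futures purely as an illustration of the task decomposition, not as a claim about actual speed-up (the helper names are mine):

```python
from concurrent.futures import ThreadPoolExecutor

def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def parallel_matmul(A, B, workers=4):
    """Dispatch the n*n independent dot products to a pool of workers."""
    n = len(A)
    cols = [[B[k][j] for k in range(n)] for j in range(n)]  # columns of B
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [[pool.submit(dot, A[i], cols[j]) for j in range(n)]
                   for i in range(n)]
        return [[futures[i][j].result() for j in range(n)] for i in range(n)]

print(parallel_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```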

Strassen's algorithm (sequential)

The product of the two block matrices
$$
\begin{pmatrix} C_{1,1} & C_{1,2} \\ C_{2,1} & C_{2,2} \end{pmatrix}
=
\begin{pmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{pmatrix}
\begin{pmatrix} B_{1,1} & B_{1,2} \\ B_{2,1} & B_{2,2} \end{pmatrix}
$$
can be computed by first calculating
$$
\begin{aligned}
H_1 &= A_{1,1}(B_{1,2} - B_{2,2}), &
H_2 &= (A_{1,1} + A_{1,2})B_{2,2}, \\
H_3 &= (A_{2,1} + A_{2,2})B_{1,1}, &
H_4 &= A_{2,2}(B_{2,1} - B_{1,1}), \\
H_5 &= (A_{1,1} + A_{2,2})(B_{1,1} + B_{2,2}), &
H_6 &= (A_{1,2} - A_{2,2})(B_{2,1} + B_{2,2}), \\
H_7 &= (A_{1,1} - A_{2,1})(B_{1,1} + B_{1,2}).
\end{aligned}
$$
Then
$$
\begin{aligned}
C_{1,1} &= H_4 + H_5 + H_6 - H_2, &
C_{1,2} &= H_1 + H_2, \\
C_{2,1} &= H_3 + H_4, &
C_{2,2} &= H_1 + H_5 - H_3 - H_7.
\end{aligned}
$$
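A compact recursive sketch of Strassen's scheme, assuming NumPy is available and that n is a power of two (the cutoff below which ordinary multiplication is used is an arbitrary choice of mine). The seven products H1, ..., H7 are mutually independent, so in a parallel setting they could be computed by different processors:

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Recursive Strassen multiplication for n-by-n matrices, n a power of two."""
    n = A.shape[0]
    if n <= cutoff:                      # fall back to ordinary multiplication
        return A @ B
    m = n // 2
    A11, A12, A21, A22 = A[:m, :m], A[:m, m:], A[m:, :m], A[m:, m:]
    B11, B12, B21, B22 = B[:m, :m], B[:m, m:], B[m:, :m], B[m:, m:]
    H1 = strassen(A11, B12 - B22, cutoff)      # the seven products are independent
    H2 = strassen(A11 + A12, B22, cutoff)
    H3 = strassen(A21 + A22, B11, cutoff)
    H4 = strassen(A22, B21 - B11, cutoff)
    H5 = strassen(A11 + A22, B11 + B22, cutoff)
    H6 = strassen(A12 - A22, B21 + B22, cutoff)
    H7 = strassen(A11 - A21, B11 + B12, cutoff)
    return np.block([[H4 + H5 + H6 - H2, H1 + H2],
                     [H3 + H4,           H1 + H5 - H3 - H7]])

A = np.random.rand(128, 128)
B = np.random.rand(128, 128)
print(np.allclose(strassen(A, B), A @ B))  # True
```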

Strassen's algorithm

Strassen's algorithm may be applied recursively to multiply together any two matrices.

Theorem (Complexity of Strassen's algorithm). If A and B are n × n matrices, then the (sequential) complexity of Strassen's algorithm is $O(n^{\log_2 7}) \approx O(n^{2.81})$.

More elaborate algorithms exist with even lower complexity, such as the Coppersmith–Winograd algorithm with complexity $O(n^{2.376})$.
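The exponent comes from the standard divide-and-conquer recurrence: each call makes seven half-size multiplications plus a fixed number of half-size additions and subtractions.

```latex
T(n) = 7\,T(n/2) + c\,n^2
\quad\Longrightarrow\quad
T(n) = O\!\left(n^{\log_2 7}\right) \approx O\!\left(n^{2.81}\right).
```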

Complexity of matrix algorithms

Theorem (Equivalence of matrix operations). Matrix multiplication, computation of determinants, matrix inversion, and solution of a linear system of equations all have the same computational complexity (sequential and parallel).

Solving linear systems of equations

The standard Gaussian elimination algorithm is a bit cumbersome to parallelise, so we consider a classical iterative algorithm instead.

Solving linear systems of equations

Suppose we have the linear system of equations
$$
\begin{aligned}
a_{1,1}x_1 + a_{1,2}x_2 + \cdots + a_{1,n}x_n &= b_1, \\
a_{2,1}x_1 + a_{2,2}x_2 + \cdots + a_{2,n}x_n &= b_2, \\
&\;\;\vdots \\
a_{n,1}x_1 + a_{n,2}x_2 + \cdots + a_{n,n}x_n &= b_n.
\end{aligned}
$$
If $a_{i,i} \neq 0$ for $i = 1, \ldots, n$, then we can solve equation i for $x_i$,
$$
\begin{aligned}
x_1 &= (b_1 - a_{1,2}x_2 - a_{1,3}x_3 - \cdots - a_{1,n}x_n)/a_{1,1}, \\
x_2 &= (b_2 - a_{2,1}x_1 - a_{2,3}x_3 - \cdots - a_{2,n}x_n)/a_{2,2}, \\
&\;\;\vdots \\
x_n &= (b_n - a_{n,1}x_1 - a_{n,2}x_2 - \cdots - a_{n,n-1}x_{n-1})/a_{n,n}.
\end{aligned}
$$

Solving linear systems of equations

$$
\begin{aligned}
x_1 &= (b_1 - a_{1,2}x_2 - a_{1,3}x_3 - \cdots - a_{1,n}x_n)/a_{1,1}, \\
x_2 &= (b_2 - a_{2,1}x_1 - a_{2,3}x_3 - \cdots - a_{2,n}x_n)/a_{2,2}, \\
&\;\;\vdots \\
x_n &= (b_n - a_{n,1}x_1 - a_{n,2}x_2 - \cdots - a_{n,n-1}x_{n-1})/a_{n,n}.
\end{aligned}
$$

If we choose an initial estimate $x^0$ for the solution, we can use these equations to compute a new estimate $x^1$, then $x^2$, etc. This is called Jacobi's method.

Under suitable conditions on the coefficient matrix these iterations will converge to the true solution.

Note that if the calculations are performed sequentially, we may make use of the new values of $x_1, x_2, \ldots, x_{i-1}$ when we compute $x_i$; this is called Gauss-Seidel iteration and converges faster than Jacobi iteration.
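A small NumPy sketch of Jacobi's method (function and variable names are mine); note that every component of the new estimate depends only on the previous estimate, which is exactly what makes the method easy to parallelise:

```python
import numpy as np

def jacobi(A, b, x0, iterations=50):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} a_ij x_j) / a_ii,
    using only the previous iterate, so all n updates are independent."""
    D = np.diag(A)                  # diagonal entries a_ii
    R = A - np.diagflat(D)          # off-diagonal part of A
    x = x0.astype(float)
    for _ in range(iterations):
        x = (b - R @ x) / D         # in parallel: one component per processor
    return x

A = np.array([[4.0, 1.0], [2.0, 5.0]])   # diagonally dominant, so Jacobi converges
b = np.array([1.0, 2.0])
print(jacobi(A, b, np.zeros(2)))          # approximately [0.1667, 0.3333]
```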

Jacobi's method in parallel

Jacobi's method is perfect for parallel implementation:
$$
\begin{aligned}
x_1 &= (b_1 - a_{1,2}x_2 - a_{1,3}x_3 - \cdots - a_{1,n}x_n)/a_{1,1}, \\
x_2 &= (b_2 - a_{2,1}x_1 - a_{2,3}x_3 - \cdots - a_{2,n}x_n)/a_{2,2}, \\
&\;\;\vdots \\
x_n &= (b_n - a_{n,1}x_1 - a_{n,2}x_2 - \cdots - a_{n,n-1}x_{n-1})/a_{n,n}.
\end{aligned}
$$
Each of n processors is given the task of computing one $x_i$. This requires n time steps per iteration.

Hybrid Jacobi/Gauss-Seidel

Suppose the system has dimension 100 000 (so 100 blocks of 1000 equations) and we only have 1000 processors. Assume that an initial solution $x^0$ is given.

First iteration:
1 For i = 1, 2, ..., 100:
  1 Set $j_1 = 1000(i - 1) + 1$ and $j_2 = 1000i$
  2 Give equations $j_1, \ldots, j_2$ to the processors
  3 Compute $x^1_{j_1}, \ldots, x^1_{j_2}$ from $x^1_1, \ldots, x^1_{j_1 - 1}, x^0_{j_1}, \ldots, x^0_n$
Repeat until convergence.
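A sketch of one sweep of this hybrid in NumPy (names and the small test system are mine): within a block every update uses the same current estimate, so the block's updates are independent Jacobi steps, while later blocks see the already-updated earlier blocks, which is the Gauss-Seidel part:

```python
import numpy as np

def hybrid_sweep(A, b, x, block=1000):
    """One sweep: Jacobi within each block (parallel), Gauss-Seidel between blocks."""
    x = x.copy()
    n = len(b)
    D = np.diag(A)
    for j1 in range(0, n, block):
        j2 = min(j1 + block, n)
        rows = slice(j1, j2)
        # all rows in the block use the same x, so they are independent tasks
        x[rows] = (b[rows] - A[rows] @ x + D[rows] * x[rows]) / D[rows]
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 5.0, 2.0], [0.0, 2.0, 6.0]])
b = np.array([1.0, 2.0, 3.0])
x = np.zeros(3)
for _ in range(30):
    x = hybrid_sweep(A, b, x, block=2)
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```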

Conclusion

The number of time steps in many (mathematical) operations can be reduced considerably if multiple operations can be performed simultaneously.

Often algorithms other than the traditional sequential ones are the most efficient.

The actual choice of algorithm is usually highly dependent on the type of processor and how memory is organised.

Parallel computing in practice

Problem: Add 1000 integers, each with 5 digits.
Resources: This group of people.

Method
1 While more than one number:
  1 I share out the numbers evenly, at least two numbers each
  2 You add your numbers
  3 You pass your results back to me

Parallel computing in practice

Method (intelligent)
Assumption: You are arranged in a strict hierarchy
1 I share out the numbers evenly
2 While more than one active computer:
  1 You add your numbers
  2 You pass your result to someone more important than you
3 I receive the result from my top assistant

The intelligence and communication skills of our processors are important, not just their computational skills.
