CS 4424 Matrix multiplication


1 CS 4424 Matrix multiplication

2 Reminder: matrix multiplication

Matrix-matrix product. Starting from
$$A = \begin{pmatrix} a_{1,1} & \cdots & a_{1,n} \\ \vdots & & \vdots \\ a_{n,1} & \cdots & a_{n,n} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{1,1} & \cdots & b_{1,n} \\ \vdots & & \vdots \\ b_{n,1} & \cdots & b_{n,n} \end{pmatrix},$$
we get AB by multiplying A by all columns of B (or all rows of A by B). Explicitly, the entry $(i, j)$ of AB is
$$a_{i,1} b_{1,j} + \cdots + a_{i,n} b_{n,j}.$$

3 2 × 2 matrix multiplication

With
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{pmatrix},$$
we get
$$AB = \begin{pmatrix} a_{1,1}b_{1,1} + a_{1,2}b_{2,1} & a_{1,1}b_{1,2} + a_{1,2}b_{2,2} \\ a_{2,1}b_{1,1} + a_{2,2}b_{2,1} & a_{2,1}b_{1,2} + a_{2,2}b_{2,2} \end{pmatrix}.$$

4 Naive algorithm

for i = 1, ..., n
  for j = 1, ..., n
    c_{i,j} = 0
    for k = 1, ..., n
      c_{i,j} = c_{i,j} + a_{i,k} b_{k,j}

Total: $n^3$ multiplications, $n^3 - n^2$ additions.
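The triple loop can be written out directly (a sketch in Python, using a list-of-lists representation; the function name is illustrative):

```python
def matmul_naive(A, B):
    """Naive O(n^3) product of two n x n matrices given as lists of lists."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matmul_naive([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```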

5 Main results

1. One can multiply matrices of size n using:
   - $O(n^3)$ operations (naive algorithm)
   - $O(n^{\log_2 7})$ operations using Strassen's algorithm (1969)
   - (... many improvements)
   - $O(n^{2.39})$ operations using Coppersmith and Winograd's algorithm (1990).
   We let ω be any number such that matrix multiplication can be done in $O(n^\omega)$ operations, so that ω ≤ 2.39.
2. One can invert matrices (and do many other things) in $O(n^\omega)$.

6 Practical aspects

Many of these algorithms offer no interest for practical computations. One uses the naive algorithm (or a couple of variants of it) for sizes up to ≈ 100. For large sizes, the algorithms of Strassen (ω = 2.81) and Pan (ω = 2.77) are sometimes used. None of the others is useful (the crossover threshold is too high). For matrices with floating-point (double) entries, optimizing data access is more important.

7 Strassen's algorithm

Similar to Karatsuba's algorithm:
- find an improvement for a base case: 2 × 2 matrices
- use it recursively.

For the 2 × 2 case, given
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{pmatrix},$$
we:
- compute 7 linear combinations of the $a_{i,j}$ and $b_{i,j}$
- multiply them pairwise
- recombine the results.

8 Formulas

$$q_1 = (a_{1,1} - a_{1,2})\, b_{2,2}$$
$$q_2 = (a_{2,1} - a_{2,2})\, b_{1,1}$$
$$q_3 = a_{2,2}\, (b_{1,1} + b_{2,1})$$
$$q_4 = a_{1,1}\, (b_{1,2} + b_{2,2})$$
$$q_5 = (a_{1,1} + a_{2,2})(b_{2,2} - b_{1,1})$$
$$q_6 = (a_{1,1} + a_{2,1})(b_{1,1} + b_{1,2})$$
$$q_7 = (a_{1,2} + a_{2,2})(b_{2,1} + b_{2,2})$$
and
$$c_{1,1} = q_1 - q_3 - q_5 + q_7$$
$$c_{1,2} = q_4 - q_1$$
$$c_{2,1} = q_2 + q_3$$
$$c_{2,2} = -q_2 - q_4 + q_5 + q_6$$
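The transcription dropped the signs in the recombination; the sign pattern used below was checked against the definition of the 2 × 2 product on random entries (a quick sanity-check sketch):

```python
import random

def strassen_2x2(a, b):
    """One 2x2 product using the 7 multiplications q1..q7."""
    (a11, a12), (a21, a22) = a
    (b11, b12), (b21, b22) = b
    q1 = (a11 - a12) * b22
    q2 = (a21 - a22) * b11
    q3 = a22 * (b11 + b21)
    q4 = a11 * (b12 + b22)
    q5 = (a11 + a22) * (b22 - b11)
    q6 = (a11 + a21) * (b11 + b12)
    q7 = (a12 + a22) * (b21 + b22)
    return [[q1 - q3 - q5 + q7, q4 - q1],
            [q2 + q3, -q2 - q4 + q5 + q6]]

# Check against the naive definition on random integer entries.
for _ in range(1000):
    a = [[random.randint(-9, 9), random.randint(-9, 9)] for _ in range(2)]
    b = [[random.randint(-9, 9), random.randint(-9, 9)] for _ in range(2)]
    naive = [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
             for i in range(2)]
    assert strassen_2x2(a, b) == naive
```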

9 n × n matrices

Suppose that we have to multiply A and B of size n, with n = 2^k. We break them into blocks:
$$A = \begin{pmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{pmatrix} \quad\text{and}\quad B = \begin{pmatrix} B_{1,1} & B_{1,2} \\ B_{2,1} & B_{2,2} \end{pmatrix},$$
where the $A_{i,j}$ and $B_{i,j}$ have size n/2 × n/2. The formulas we used for the 2 × 2 case still work. They allow us to multiply A and B using:
- 7 products in size n/2
- $O(n^2)$ extra operations.
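The block recursion can be sketched as follows, assuming n is a power of two (illustrative helper names, not tuned for performance; the recombination signs were verified numerically against the naive product):

```python
def add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[x - y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

def strassen(A, B):
    """Multiply two n x n matrices (n a power of 2) with 7 recursive products."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # Extract the (r, c) half-size block of M.
    blk = lambda M, r, c: [row[c * h:(c + 1) * h] for row in M[r * h:(r + 1) * h]]
    A11, A12, A21, A22 = blk(A, 0, 0), blk(A, 0, 1), blk(A, 1, 0), blk(A, 1, 1)
    B11, B12, B21, B22 = blk(B, 0, 0), blk(B, 0, 1), blk(B, 1, 0), blk(B, 1, 1)
    q1 = strassen(sub(A11, A12), B22)
    q2 = strassen(sub(A21, A22), B11)
    q3 = strassen(A22, add(B11, B21))
    q4 = strassen(A11, add(B12, B22))
    q5 = strassen(add(A11, A22), sub(B22, B11))
    q6 = strassen(add(A11, A21), add(B11, B12))
    q7 = strassen(add(A12, A22), add(B21, B22))
    C11 = add(sub(sub(q1, q3), q5), q7)   # q1 - q3 - q5 + q7
    C12 = sub(q4, q1)                     # q4 - q1
    C21 = add(q2, q3)                     # q2 + q3
    C22 = sub(add(q5, q6), add(q2, q4))   # -q2 - q4 + q5 + q6
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bottom = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bottom
```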

10 Complexity analysis

Let MM(n) be the cost of multiplication in size n. Then we have
$$MM(n) \le 7\,MM(n/2) + \lambda n^2,$$
and so
$$MM(n) \le C\, n^{\log(7)/\log(2)} \approx C\, n^{2.81}.$$
Proof: master theorem.

11 Beyond Strassen

More generally: if you find an algorithm that does k multiplications in size n, then you can take ω = log(k)/log(n).
- n = 2: k = 7 is optimal
- n = 3: k = 23 is known; k = 21 would improve ω
- n = 4: k = 49
- n = 5: k = 100 is known; k = 91 would improve ω

Remark: for a given n and k, many attempts have been made to find algorithms by computer search.
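The exponents log(k)/log(n) for these (n, k) pairs can be tabulated quickly:

```python
from math import log

def omega(n, k):
    """Exponent obtained from an algorithm doing k multiplications in size n."""
    return log(k) / log(n)

for n, k in [(2, 7), (3, 23), (3, 21), (4, 49), (5, 100), (5, 91)]:
    print(f"n={n}, k={k}: omega <= {omega(n, k):.3f}")
```

The printout shows, e.g., that (3, 21) and (5, 91) would both beat Strassen's exponent log(7)/log(2) ≈ 2.807, while (3, 23) and (5, 100) do not.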

12 Rectangular matrices

13 Notation

Let's write ⟨n, m, p⟩ for the number of multiplications it takes to multiply matrices (n, m) × (m, p).

Prop. (we've seen that before) If ⟨n, n, n⟩ = k then we can take
$$\omega = \frac{\log(k)}{\log(n)}.$$

Prop. If ⟨n, m, p⟩ = k then we can take
$$\omega = \frac{3\log(k)}{\log(mnp)}.$$

14 Steps of the proof

1. (block matrices) ⟨mm′, nn′, pp′⟩ ≤ ⟨m, n, p⟩ · ⟨m′, n′, p′⟩.
2. (permutations) ⟨m, n, p⟩ = ⟨n, p, m⟩ = ⟨p, m, n⟩.
3. (conclusion)
$$\langle mnp, mnp, mnp \rangle \le \langle m, n, p\rangle \cdot \langle n, p, m\rangle \cdot \langle p, m, n\rangle = \langle m, n, p\rangle^3 \le k^3,$$
so we can take
$$\omega = \frac{\log(k^3)}{\log(mnp)} = \frac{3\log(k)}{\log(mnp)}.$$

15 Step 1: block matrices

Suppose we have to multiply A of size (mm′, nn′) by B of size (nn′, pp′). We can decompose them into blocks:
$$A = \begin{pmatrix} A_{1,1} & \cdots & A_{1,n} \\ \vdots & & \vdots \\ A_{m,1} & \cdots & A_{m,n} \end{pmatrix}, \quad B = \begin{pmatrix} B_{1,1} & \cdots & B_{1,p} \\ \vdots & & \vdots \\ B_{n,1} & \cdots & B_{n,p} \end{pmatrix},$$
where:
- each $A_{i,j}$ is a matrix of size (m′, n′)
- each $B_{i,j}$ is a matrix of size (n′, p′).

16 Step 1: block matrices

Their product is
$$C = \begin{pmatrix} C_{1,1} & \cdots & C_{1,p} \\ \vdots & & \vdots \\ C_{m,1} & \cdots & C_{m,p} \end{pmatrix},$$
where each $C_{i,j}$ is a block of size (m′, p′). To compute AB, we apply the algorithm in size (m, n, p); each of the products is done on blocks of size (m′, n′, p′), so the total number of multiplications is ⟨m, n, p⟩ · ⟨m′, n′, p′⟩.

17 Step 2: easy permutations

The transpose of a matrix is obtained by switching rows and columns.

Prop. $(AB)^t = B^t A^t$.

Consequence (with A of size (m, n) and B of size (n, p)):
$$\langle m, n, p \rangle = \langle p, n, m \rangle.$$

18 Polynomial notation

19 Multiplication algorithms

We have seen several algorithms for multiplying things:
- polynomials (Karatsuba, FFT)
- power series
- matrices

which all have the same structure:
1. compute combinations of the inputs
2. multiply them pairwise
3. recombine the products to get the result.

20 Polynomial notation

We want to describe the multiplication
$$(a_0 + a_1 X)(b_0 + b_1 X) = a_0 b_0 + (a_1 b_0 + a_0 b_1)X + a_1 b_1 X^2.$$
We can describe this operation using a polynomial in the variables $(A_0, A_1)$, $(B_0, B_1)$, $(C_0, C_1, C_2)$:
$$P_{poly2} = A_0 B_0 C_0 + A_0 B_1 C_1 + A_1 B_0 C_1 + A_1 B_1 C_2.$$
Translation:
- compute $A_0 B_0$ and add it to $C_0$
- compute $A_0 B_1$ and add it to $C_1$
- compute $A_1 B_0$ and add it to $C_1$
- compute $A_1 B_1$ and add it to $C_2$

21 Polynomial notation for Karatsuba

We can rewrite the polynomial $P_{poly2}$ as
$$P_{poly2} = A_0 B_0 (C_0 - C_1) + (A_0 + A_1)(B_0 + B_1) C_1 + A_1 B_1 (C_2 - C_1).$$
This is the same polynomial, just written differently. Now, the translation is:
- compute $A_0 B_0$, add the result to $C_0$, subtract it from $C_1$
- compute $(A_0 + A_1)(B_0 + B_1)$, add the result to $C_1$
- compute $A_1 B_1$, add the result to $C_2$, subtract it from $C_1$.
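This translation can be executed term by term (a sketch; the function name is illustrative):

```python
def karatsuba_deg1(a, b):
    """(a[0] + a[1] X)(b[0] + b[1] X) with 3 multiplications, following the
    rewritten polynomial term by term."""
    c = [0, 0, 0]
    p = a[0] * b[0]                    # A0 B0 (C0 - C1)
    c[0] += p
    c[1] -= p
    p = (a[0] + a[1]) * (b[0] + b[1])  # (A0 + A1)(B0 + B1) C1
    c[1] += p
    p = a[1] * b[1]                    # A1 B1 (C2 - C1)
    c[2] += p
    c[1] -= p
    return c

print(karatsuba_deg1([2, 3], [4, 5]))  # → [8, 22, 15]
```

Indeed, (2 + 3X)(4 + 5X) = 8 + 22X + 15X², computed with three multiplications instead of four.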

22 Polynomial notation for matrices

For the multiplication of 2 × 2 matrices, we have
$$P_{mat2} = A_{1,1}B_{1,1}C_{1,1} + A_{1,2}B_{2,1}C_{1,1} + A_{1,1}B_{1,2}C_{1,2} + A_{1,2}B_{2,2}C_{1,2} + A_{2,1}B_{1,1}C_{2,1} + A_{2,2}B_{2,1}C_{2,1} + A_{2,1}B_{1,2}C_{2,2} + A_{2,2}B_{2,2}C_{2,2}.$$
Translation: compute $A_{1,1}B_{1,1}$ and add it to $C_{1,1}$, compute $A_{1,2}B_{2,1}$ and add it to $C_{1,1}$, ...

23 Polynomial notation for Strassen's algorithm

We can rewrite $P_{mat2}$ as
$$P_{mat2} = (A_{1,1} - A_{1,2})B_{2,2}(C_{1,1} - C_{1,2}) + (A_{2,1} - A_{2,2})B_{1,1}(C_{2,1} - C_{2,2}) + A_{2,2}(B_{1,1} + B_{2,1})(-C_{1,1} + C_{2,1}) + A_{1,1}(B_{1,2} + B_{2,2})(C_{1,2} - C_{2,2}) + \cdots$$
Translation: compute $(A_{1,1} - A_{1,2})B_{2,2}$, add it to $C_{1,1}$ and subtract it from $C_{1,2}$; ...

24 Polynomial notation for (1, 2) × (2, 3)

Let's compute
$$\begin{pmatrix} A_{1,1} & A_{1,2} \end{pmatrix} \begin{pmatrix} B_{1,1} & B_{1,2} & B_{1,3} \\ B_{2,1} & B_{2,2} & B_{2,3} \end{pmatrix} = \begin{pmatrix} C_{1,1} & C_{1,2} & C_{1,3} \end{pmatrix}.$$
This gives
$$P_{mat123} = A_{1,1}B_{1,1}C_{1,1} + A_{1,2}B_{2,1}C_{1,1} + A_{1,1}B_{1,2}C_{1,2} + A_{1,2}B_{2,2}C_{1,2} + A_{1,1}B_{1,3}C_{1,3} + A_{1,2}B_{2,3}C_{1,3}.$$

25 Polynomial notation for (2, 3) × (3, 1)

Let's compute
$$\begin{pmatrix} A_{1,1} & A_{1,2} & A_{1,3} \\ A_{2,1} & A_{2,2} & A_{2,3} \end{pmatrix} \begin{pmatrix} B_{1,1} \\ B_{2,1} \\ B_{3,1} \end{pmatrix} = \begin{pmatrix} C_{1,1} \\ C_{2,1} \end{pmatrix}.$$
This gives
$$P_{mat231} = A_{1,1}B_{1,1}C_{1,1} + A_{1,2}B_{2,1}C_{1,1} + A_{1,3}B_{3,1}C_{1,1} + A_{2,1}B_{1,1}C_{2,1} + A_{2,2}B_{2,1}C_{2,1} + A_{2,3}B_{3,1}C_{2,1}.$$

26 Comparison

$$P_{mat123} = A_{1,1}B_{1,1}C_{1,1} + A_{1,2}B_{2,1}C_{1,1} + A_{1,1}B_{1,2}C_{1,2} + A_{1,2}B_{2,2}C_{1,2} + A_{1,1}B_{1,3}C_{1,3} + A_{1,2}B_{2,3}C_{1,3},$$
$$P_{mat231} = A_{1,1}B_{1,1}C_{1,1} + A_{1,2}B_{2,1}C_{1,1} + A_{1,3}B_{3,1}C_{1,1} + A_{2,1}B_{1,1}C_{2,1} + A_{2,2}B_{2,1}C_{2,1} + A_{2,3}B_{3,1}C_{2,1}.$$
Conclusion:
- up to replacing $A_{i,j}$ by $C_{j,i}$, $B_{i,j}$ by $A_{i,j}$ and $C_{i,j}$ by $B_{j,i}$, these are the same polynomials
- so an algorithm for $P_{mat231}$ gives an algorithm for $P_{mat123}$
- so ⟨2, 3, 1⟩ = ⟨1, 2, 3⟩
- true in general.

27 Approximate algorithms: an example with power series

28 Modular multiplication

Reminder: to multiply two polynomials A, B modulo a polynomial P (with deg(P) = d):
- compute C = AB
- return D = C rem P.

Alternative solution, when all the roots $r_1, \dots, r_d$ of P are known:
- compute the values of A and B at $r_1, \dots, r_d$
- return the polynomial E such that $E(r_i) = A(r_i)B(r_i)$ and deg(E) < d.

Prop: D = E.

29 Remark: FFT multiplication

Given A, B:
- evaluate A and B at roots of unity of high enough order n, by FFT
- multiply the values
- do an inverse FFT to get C = AB.

In general, this will return $AB \ \mathrm{rem}\ (X^n - 1)$. When n is large enough, there is no reduction, and we get AB.

30 Interpolation

Prop. Given two distinct elements $r_0, r_1$ in k and two values $v_0, v_1$, the unique polynomial P such that $P(r_0) = v_0$, $P(r_1) = v_1$, deg(P) < 2 is
$$P = v_0 \frac{X - r_1}{r_0 - r_1} + v_1 \frac{X - r_0}{r_1 - r_0}.$$
Remark: there is a similar formula with more points, but we won't need it.

31 Two similar situations

1. Computing modulo $P = X^2$:
$$(a_0 + a_1 X)(b_0 + b_1 X) \bmod X^2 = a_0 b_0 + (a_0 b_1 + a_1 b_0)X$$
- naive: 3 multiplications
- no improvement possible over the naive algorithm.

2. Computing modulo $P = X^2 - 1 = (X - 1)(X + 1)$:
$$(a_0 + a_1 X)(b_0 + b_1 X) \bmod (X^2 - 1) = a_0 b_0 + a_1 b_1 + (a_0 b_1 + a_1 b_0)X$$
- naive: 4 multiplications
- Karatsuba: 3 multiplications
- evaluation / interpolation: 2 multiplications:
$$C = (a_0 + a_1)(b_0 + b_1)\frac{X + 1}{2} - (a_0 - a_1)(b_0 - b_1)\frac{X - 1}{2}.$$
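The two-multiplication evaluation / interpolation scheme of case 2 can be sketched as follows (exact arithmetic via `fractions`, so the divisions by 2 are exact; the function name is illustrative):

```python
from fractions import Fraction

def mul_mod_x2_minus_1(a, b):
    """(a[0] + a[1] X)(b[0] + b[1] X) mod X^2 - 1 with 2 multiplications:
    evaluate at the roots 1 and -1 of X^2 - 1, multiply, interpolate."""
    v_plus = (a[0] + a[1]) * (b[0] + b[1])    # value at X = 1
    v_minus = (a[0] - a[1]) * (b[0] - b[1])   # value at X = -1
    return [Fraction(v_plus + v_minus, 2),    # a0*b0 + a1*b1
            Fraction(v_plus - v_minus, 2)]    # a0*b1 + a1*b0

print(mul_mod_x2_minus_1([2, 3], [4, 5]))  # a0*b0 + a1*b1 = 23, a0*b1 + a1*b0 = 22
```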

32 More generally

Computing modulo $P = X^2 - r^2 = (X - r)(X + r)$:
$$(a_0 + a_1 X)(b_0 + b_1 X) \ \mathrm{rem}\ (X^2 - r^2) = a_0 b_0 + r^2 a_1 b_1 + (a_0 b_1 + a_1 b_0)X$$
- naive: 4 multiplications
- Karatsuba: 3 multiplications
- evaluation / interpolation: 2 multiplications:
$$C = (a_0 + a_1 r)(b_0 + b_1 r)\frac{X + r}{2r} - (a_0 - a_1 r)(b_0 - b_1 r)\frac{X - r}{2r}.$$

33 An approximate solution

Let's suppose we can compute with real coefficients, and take $r = 10^{-10}$, so we are computing modulo $X^2 - 10^{-20}$. Multiplying two polynomials $a_0 + a_1 X$ and $b_0 + b_1 X$ modulo $X^2 - 10^{-20}$ gives
$$a_0 b_0 + 10^{-20} a_1 b_1 + (a_0 b_1 + a_1 b_0)X,$$
which is (of course) very close to the product rem $X^2$. We can get the former using two multiplications, but the latter requires three.
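The numerical effect can be reproduced as follows; the concrete coefficients here are chosen for illustration (the slide's own example did not survive transcription):

```python
def approx_mul_mod_x2(a, b, r):
    """(a[0] + a[1] X)(b[0] + b[1] X) rem (X^2 - r^2), with 2 multiplications."""
    v_plus = (a[0] + a[1] * r) * (b[0] + b[1] * r)    # value at X = r
    v_minus = (a[0] - a[1] * r) * (b[0] - b[1] * r)   # value at X = -r
    return [(v_plus + v_minus) / 2, (v_plus - v_minus) / (2 * r)]

# With r = 1e-10 the result is numerically very close to the product rem X^2.
a, b = [3, 2], [-4, 5]
print(approx_mul_mod_x2(a, b, 1e-10))            # close to [-12, 7]
print([a[0] * b[0], a[0] * b[1] + a[1] * b[0]])  # → [-12, 7]
```

Note the roundoff: the division by 2r amplifies the floating-point error of the cancellation, which is why the next slides replace the numerical r by a formal variable ε.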

34 Making this formal

We introduce a new variable ε. Then, to compute $(a_0 + a_1 X)(b_0 + b_1 X)$ rem $X^2$, we do the following:
- we compute
$$C_\varepsilon = (a_0 + a_1 X)(b_0 + b_1 X) \ \mathrm{rem}\ (X^2 - \varepsilon^2) = (a_0 b_0 + a_1 b_1 \varepsilon^2) + (a_0 b_1 + a_1 b_0)X.$$
We do this by evaluation / interpolation:
$$C_\varepsilon = (a_0 + a_1\varepsilon)(b_0 + b_1\varepsilon)\frac{X + \varepsilon}{2\varepsilon} - (a_0 - a_1\varepsilon)(b_0 - b_1\varepsilon)\frac{X - \varepsilon}{2\varepsilon}.$$
- we replace ε by 0.

35 Using the polynomial notation

Multiplication modulo $X^2$ is represented by
$$P = A_0 B_0 C_0 + A_0 B_1 C_1 + A_1 B_0 C_1.$$
Multiplication modulo $X^2 - \varepsilon^2$ is
$$P_\varepsilon = A_0 B_0 C_0 + \varepsilon^2 A_1 B_1 C_0 + A_0 B_1 C_1 + A_1 B_0 C_1 = P + \varepsilon^2 Q.$$
The evaluation / interpolation algorithm shows
$$P_\varepsilon = (A_0 + A_1\varepsilon)(B_0 + B_1\varepsilon)\frac{\varepsilon C_0 + C_1}{2\varepsilon} + (A_0 - A_1\varepsilon)(B_0 - B_1\varepsilon)\frac{\varepsilon C_0 - C_1}{2\varepsilon}.$$

36 Summary

We get
$$(A_0 + A_1\varepsilon)(B_0 + B_1\varepsilon)\frac{\varepsilon C_0 + C_1}{2\varepsilon} + (A_0 - A_1\varepsilon)(B_0 - B_1\varepsilon)\frac{\varepsilon C_0 - C_1}{2\varepsilon} = P + \varepsilon^2 Q,$$
or
$$(A_0 + A_1\varepsilon)(B_0 + B_1\varepsilon)\frac{\varepsilon C_0 + C_1}{2} + (A_0 - A_1\varepsilon)(B_0 - B_1\varepsilon)\frac{\varepsilon C_0 - C_1}{2} = \varepsilon P + \varepsilon^3 Q.$$
The left-hand side gives us an algorithm, using 2 multiplications with coefficients involving ε, that computes ε times what we want plus a higher-order term Q that we can discard afterwards.
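The formal manipulation can be mimicked with exact polynomial arithmetic in ε: compute the two products as polynomials in ε, recombine, and read off the appropriate coefficient of ε (a sketch, assuming integer inputs; function names are illustrative):

```python
def pmul(p, q):
    """Product of two polynomials in eps, given as coefficient lists."""
    r = [0] * (len(p) + len(q) - 1)
    for i, x in enumerate(p):
        for j, y in enumerate(q):
            r[i + j] += x * y
    return r

def mul_mod_x2_exact(a, b):
    """(a[0] + a[1] X)(b[0] + b[1] X) rem X^2, exactly, from the two
    eps-products; assumes integer coefficients."""
    p_plus = pmul([a[0], a[1]], [b[0], b[1]])     # (a0 + a1 eps)(b0 + b1 eps)
    p_minus = pmul([a[0], -a[1]], [b[0], -b[1]])  # (a0 - a1 eps)(b0 - b1 eps)
    s = [x + y for x, y in zip(p_plus, p_minus)]
    d = [x - y for x, y in zip(p_plus, p_minus)]
    c0 = s[0] // 2   # constant term in eps; the eps^2 part (a1*b1) is discarded
    c1 = d[1] // 2   # dividing by 2*eps = halving and shifting down one degree
    return [c0, c1]

print(mul_mod_x2_exact([3, 2], [-4, 5]))  # → [-12, 7]
```

Only two of the `pmul` calls involve the inputs; discarding the higher-order coefficient is what "replacing ε by 0" means operationally.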

37 Approximate algorithms for matrix multiplication

38 Multiplication of partial matrices

We consider the product AB = C, with
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & 0 \end{pmatrix}, \quad B = \begin{pmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{pmatrix}, \quad C = \begin{pmatrix} c_{1,1} & c_{1,2} \\ c_{2,1} & c_{2,2} \end{pmatrix}.$$
The naive algorithm uses 6 multiplications:
$$P = A_{1,1}B_{1,1}C_{1,1} + A_{1,2}B_{2,1}C_{1,1} + A_{1,1}B_{1,2}C_{1,2} + A_{1,2}B_{2,2}C_{1,2} + A_{2,1}B_{1,1}C_{2,1} + A_{2,1}B_{1,2}C_{2,2}.$$
It is optimal.

39 An approximate algorithm

The algorithm described by
$$P_\varepsilon = (A_{1,2} + \varepsilon A_{1,1})(B_{1,2} + \varepsilon B_{2,2})C_{1,2} + (A_{2,1} + \varepsilon A_{1,1})B_{1,1}(C_{1,1} + \varepsilon C_{2,1}) - A_{1,2}B_{1,2}(C_{1,1} + C_{1,2} + \varepsilon C_{2,2}) - A_{2,1}(B_{1,1} + B_{1,2} + \varepsilon B_{2,1})C_{1,1} + (A_{1,2} + A_{2,1})(B_{1,2} + \varepsilon B_{2,1})(C_{1,1} + \varepsilon C_{2,2})$$
uses 5 multiplications and gives us $P_\varepsilon = \varepsilon P + \varepsilon^2 Q$, for some higher-order term Q.
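The five products and their recombination can be checked numerically (a sketch; the sign pattern used below was verified against the naive product, since the transcription dropped the signs):

```python
def approx_partial_product(A, B, eps):
    """Approximate product for A = [[a11, a12], [a21, 0]] with 5 multiplications.
    Returns a 2x2 matrix equal to A*B up to O(eps) terms."""
    (a11, a12), (a21, _) = A
    (b11, b12), (b21, b22) = B
    p1 = (a12 + eps * a11) * (b12 + eps * b22)
    p2 = (a21 + eps * a11) * b11
    p3 = a12 * b12
    p4 = a21 * (b11 + b12 + eps * b21)
    p5 = (a12 + a21) * (b12 + eps * b21)
    # The C-part of each trilinear term routes the products to the outputs;
    # the combination computes eps * C plus higher order, hence the divisions.
    return [[(p2 - p3 - p4 + p5) / eps, (p1 - p3) / eps],
            [p2, p5 - p3]]

A = [[3, 2], [-1, 0]]
B = [[4, 5], [6, 7]]
print(approx_partial_product(A, B, 1e-8))  # close to [[24, 29], [-4, -5]]
```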

40 Rectangular matrix multiplication

We can use this trick to compute AB = C with
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & a_{2,2} \\ a_{3,1} & a_{3,2} \end{pmatrix}, \quad B = \begin{pmatrix} b_{1,1} & b_{1,2} \\ b_{2,1} & b_{2,2} \end{pmatrix}, \quad C = \begin{pmatrix} c_{1,1} & c_{1,2} \\ c_{2,1} & c_{2,2} \\ c_{3,1} & c_{3,2} \end{pmatrix}.$$
This gives us an algorithm for ⟨3, 2, 2⟩ using 10 multiplications with coefficients involving ε, that computes ε times what we want plus a higher-order term Q that we can discard afterwards.

41 Towards an algorithm for square matrices

As we did for exact algorithms, we can apply permutations. This gives algorithms for ⟨3, 2, 2⟩, ⟨2, 3, 2⟩ and ⟨2, 2, 3⟩, using 10 multiplications with coefficients involving ε, that compute ε times what we want plus higher-order terms that we can discard afterwards.

42 Formalization

An algorithm using k multiplications with coefficients involving ε, that computes $\varepsilon^{r-1}$ times ⟨m, n, p⟩ plus higher-order terms, is called an approximate algorithm of order r for ⟨m, n, p⟩.

Prop. Given an approximate algorithm of order r for ⟨m, n, p⟩, involving k multiplications, we can deduce a (normal) algorithm for ⟨m, n, p⟩ involving k·M(r) multiplications.

Proof: all operations are done using power series addition and multiplication modulo $\varepsilon^r$.

43 Block products

Prop. Suppose we have:
- an algorithm of order r for ⟨m, n, p⟩
- an algorithm of order r′ for ⟨m′, n′, p′⟩.

Then we can deduce an algorithm of order r + r′ − 1 for ⟨mm′, nn′, pp′⟩.

Proof. We are doing a product ⟨m, n, p⟩ whose entries are blocks of size (m′, n′, p′).
1. We recursively compute products of the form $\varepsilon^{r'-1}\langle m', n', p'\rangle + \varepsilon^{r'}(\cdots)$.
2. We plug them into our approximate algorithm for ⟨m, n, p⟩. This gives
$$\varepsilon^{r-1}\varepsilon^{r'-1}\langle mm', nn', pp'\rangle + \varepsilon^{r+r'-1}(\cdots).$$

44 Using block multiplication

We have approximate algorithms of order 2 for ⟨3, 2, 2⟩, ⟨2, 3, 2⟩ and ⟨2, 2, 3⟩, with 10 multiplications.
- This gives us an approximate algorithm of order 4 for ⟨12, 12, 12⟩, with 1000 multiplications,
- and thus an approximate algorithm of order 7 for ⟨12², 12², 12²⟩, with 1000² multiplications,
- and thus an approximate algorithm of order 10 for ⟨12³, 12³, 12³⟩, with 1000³ multiplications,
- and thus an approximate algorithm of order 3N + 1 for ⟨12^N, 12^N, 12^N⟩, with 1000^N multiplications,
- and thus a (normal) algorithm for ⟨12^N, 12^N, 12^N⟩, with 1000^N · M(3N + 1) multiplications.

45 Conclusion

Each of these algorithms gives us a value for ω:
$$\omega_N = \frac{\log(1000^N\,M(3N+1))}{\log(12^N)} = \frac{\log(1000^N)}{\log(12^N)} + \frac{\log(M(3N+1))}{\log(12^N)} = \frac{3\log(10)}{\log(12)} + \frac{\log(M(3N+1))}{\log(12^N)},$$
so that, as N grows,
$$\omega_N \to \frac{3\log(10)}{\log(12)} \approx 2.78.$$
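The limit can be checked numerically. Here M(r), the cost of power series multiplication modulo ε^r, is taken to be r² purely for illustration; any polynomial bound makes the correction term vanish as N grows:

```python
from math import log

def omega_N(N, M=lambda r: r * r):
    """Exponent from the order-(3N+1) algorithm in size 12^N with 1000^N products."""
    return log(1000 ** N * M(3 * N + 1)) / log(12 ** N)

for N in (1, 5, 20, 100):
    print(N, omega_N(N))

print(3 * log(10) / log(12))  # the limit, about 2.78
```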

46 In practice

(Table comparing, for sizes 12^i, the ratio of this algorithm's cost to the naive algorithm and to Strassen's; the numerical values did not survive transcription.)

47 Swiss cheese matrices

48 Using sparsity

We consider matrices with many zero entries (we'll call the zero patterns motives).

Theorem. Consider two motives A and B, of sizes (k, m) and (m, n). Suppose that you can compute A·B using l products (possibly using approximate algorithms). Let f be the number of products in the naive product A·B. Then we can take
$$\omega \le \frac{3\log l}{\log f}.$$

Simple examples (full 2 × 2 matrices, f = 8):
- l = 8 gives ω = 3 for the naive algorithm
- l = 7 gives ω = 2.81 for Strassen.

49 Better examples

1. Back to the previous example (the 2 × 2 product with one zero entry): we had ω ≤ 3 log(10)/log(12) ≈ 2.78. Now, l = 5 and f = 6, so ω ≤ 3 log(5)/log(6) ≈ 2.70.
2. The product of two larger motives (the pictures did not survive transcription) can be done with l = 17, whereas f = 26, so ω ≤ 3 log(17)/log(26) ≈ 2.61.
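The resulting bounds for the (l, f) pairs mentioned on these slides (a quick computation):

```python
from math import log

def swiss_bound(l, f):
    """omega <= 3 log(l) / log(f), from l products vs. f naive products."""
    return 3 * log(l) / log(f)

print(swiss_bound(7, 8))    # full 2x2, Strassen: about 2.81
print(swiss_bound(10, 12))  # the <3, 2, 2> approximate algorithm: about 2.78
print(swiss_bound(5, 6))    # the partial 2x2 product: about 2.70
print(swiss_bound(17, 26))  # the larger motive: about 2.61
```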

50 Tensor powers

We write $A^{(s)}$ and $B^{(s)}$ for the s-th tensor powers of A and B. On the previous example,
$$A = \begin{pmatrix} a_{1,1} & a_{1,2} \\ a_{2,1} & 0 \end{pmatrix},$$
the powers $A^{(1)}$, $A^{(2)}$ and $A^{(3)}$ are the successive Kronecker powers of this motive (I am not writing the actual coefficients).

51 Rows and columns

Each column of $A^{(s)}$ is indexed by an s-tuple μ of the form μ = (μ_1, ..., μ_s), with 1 ≤ μ_i ≤ m. Example: for s = 3 and m = 2, the columns are indexed by (1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), (2,1,2), (2,2,1), (2,2,2). Same thing for the rows of $B^{(s)}$.

52 Counting zeros

Prop. Let $k_1, \dots, k_m$ be the numbers of non-zero entries in the columns 1, ..., m of A. Let $C_\mu$ be a column in $A^{(s)}$, with μ = (μ_1, ..., μ_s). Then the number of non-zero entries in $C_\mu$ is
$$K(\mu) = k_{\mu_1} \cdots k_{\mu_s}.$$
If we let $\sigma_i$ be the number of $\mu_j$'s that are equal to i, we have
$$K(\mu) = k_1^{\sigma_1} \cdots k_m^{\sigma_m}.$$
Example: μ = (2, 1, 2) ⇒ σ = (1, 2) ⇒ $K(\mu) = k_1^1 k_2^2 = 2 \cdot 1^2 = 2$.

Same thing for the rows of $B^{(s)}$; we write $N(\mu) = n_1^{\sigma_1} \cdots n_m^{\sigma_m}$.

53 Cleaning

We fix $\sigma_1, \dots, \sigma_m$. There are
$$M_\sigma = \frac{s!}{\sigma_1!\,\sigma_2!\cdots\sigma_m!}$$
choices of μ = (μ_1, ..., μ_s) for the columns of $A^{(s)}$. For all these columns,
$$K(\mu) = K_\sigma = k_1^{\sigma_1} \cdots k_m^{\sigma_m}.$$
Same thing for the rows of $B^{(s)}$, with
$$N(\mu) = N_\sigma = n_1^{\sigma_1} \cdots n_m^{\sigma_m}.$$
We remove all other columns of $A^{(s)}$ and rows of $B^{(s)}$, and we call $A_\sigma$ and $B_\sigma$ what is left.

54 Cleaning

Example: σ = (2, 1). We find the columns / rows (1, 1, 2), (1, 2, 1) and (2, 1, 1), with $K_\sigma = 4$, $M_\sigma = 3$, $N_\sigma = 8$.
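The counts $K_\sigma$, $M_\sigma$, $N_\sigma$ for this example can be reproduced (a sketch; here k = (2, 1) are the column non-zero counts of the running motive A and n = (2, 2) the row counts of the full matrix B):

```python
from math import factorial

def multinomial(sigma):
    """Multinomial coefficient s! / (sigma_1! ... sigma_m!)."""
    r = factorial(sum(sigma))
    for s_i in sigma:
        r //= factorial(s_i)
    return r

def counts(sigma, k, n):
    """(K_sigma, M_sigma, N_sigma) for the exponent vector sigma."""
    K = N = 1
    for s_i, k_i, n_i in zip(sigma, k, n):
        K *= k_i ** s_i
        N *= n_i ** s_i
    return K, multinomial(sigma), N

# A = [[a11, a12], [a21, 0]]: column non-zero counts k = (2, 1);
# B a full 2x2 matrix: row non-zero counts n = (2, 2).
print(counts((2, 1), (2, 1), (2, 2)))  # → (4, 3, 8)
```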

55 From sparse to full

Let U and V be any matrices of sizes $(K_\sigma, M_\sigma)$ and $(M_\sigma, N_\sigma)$.

Prop. One can find G and Q such that
$$U = G A_\sigma, \quad V = B_\sigma Q,$$
and thus $UV = G (A_\sigma B_\sigma) Q$.

So: if we can compute A·B with l products, then $\langle K_\sigma, M_\sigma, N_\sigma\rangle \le l^s$, and thus
$$\omega \le \frac{3\log(l^s)}{\log(K_\sigma M_\sigma N_\sigma)}.$$

56 Asymptotically

Remember that
$$K_\sigma M_\sigma N_\sigma = \frac{s!}{\sigma_1!\,\sigma_2!\cdots\sigma_m!}\,(k_1 n_1)^{\sigma_1}\cdots(k_m n_m)^{\sigma_m}.$$
This is one of the terms in the expansion of
$$(k_1 n_1 + \cdots + k_m n_m)^s.$$
There are $\binom{s+m-1}{m-1}$ terms in the expansion. So there is at least one choice of σ such that
$$K_\sigma M_\sigma N_\sigma \ge \frac{(k_1 n_1 + \cdots + k_m n_m)^s}{\binom{s+m-1}{m-1}}.$$

57 Asymptotically

This gives
$$\log(K_\sigma M_\sigma N_\sigma) \ge s\log(k_1 n_1 + \cdots + k_m n_m) - \log\binom{s+m-1}{m-1},$$
or
$$\frac{1}{s}\log(K_\sigma M_\sigma N_\sigma) \ge \log(k_1 n_1 + \cdots + k_m n_m) - \frac{1}{s}\log\binom{s+m-1}{m-1}.$$
Since
$$\binom{s+m-1}{m-1} \le (s+m-1)^{m-1},$$
we get
$$\frac{1}{s}\log(K_\sigma M_\sigma N_\sigma) \ge \log(k_1 n_1 + \cdots + k_m n_m) - \frac{m-1}{s}\log(s+m-1).$$

58 Conclusion

Let f be the number of operations in the naive product A·B. Then $f = k_1 n_1 + \cdots + k_m n_m$. So we get
$$\omega \le \frac{3\log(l^s)}{s\log f - (m-1)\log(s+m-1)}.$$
With s → ∞:
$$\omega \le \frac{3\log l}{\log f}.$$


Multiplying matrices by diagonal matrices is faster than usual matrix multiplication. 7-6 Multiplying matrices by diagonal matrices is faster than usual matrix multiplication. The following equations generalize to matrices of any size. Multiplying a matrix from the left by a diagonal matrix

More information

M. Matrices and Linear Algebra

M. Matrices and Linear Algebra M. Matrices and Linear Algebra. Matrix algebra. In section D we calculated the determinants of square arrays of numbers. Such arrays are important in mathematics and its applications; they are called matrices.

More information

Week 15-16: Combinatorial Design

Week 15-16: Combinatorial Design Week 15-16: Combinatorial Design May 8, 2017 A combinatorial design, or simply a design, is an arrangement of the objects of a set into subsets satisfying certain prescribed properties. The area of combinatorial

More information

Polynomial evaluation and interpolation on special sets of points

Polynomial evaluation and interpolation on special sets of points Polynomial evaluation and interpolation on special sets of points Alin Bostan and Éric Schost Laboratoire STIX, École polytechnique, 91128 Palaiseau, France Abstract We give complexity estimates for the

More information

Matrix Arithmetic. j=1

Matrix Arithmetic. j=1 An m n matrix is an array A = Matrix Arithmetic a 11 a 12 a 1n a 21 a 22 a 2n a m1 a m2 a mn of real numbers a ij An m n matrix has m rows and n columns a ij is the entry in the i-th row and j-th column

More information

I = i 0,

I = i 0, Special Types of Matrices Certain matrices, such as the identity matrix 0 0 0 0 0 0 I = 0 0 0, 0 0 0 have a special shape, which endows the matrix with helpful properties The identity matrix is an example

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

How to find good starting tensors for matrix multiplication

How to find good starting tensors for matrix multiplication How to find good starting tensors for matrix multiplication Markus Bläser Saarland University Matrix multiplication z,... z,n..... z n,... z n,n = x,... x,n..... x n,... x n,n y,... y,n..... y n,... y

More information

Fast Convolution; Strassen s Method

Fast Convolution; Strassen s Method Fast Convolution; Strassen s Method 1 Fast Convolution reduction to subquadratic time polynomial evaluation at complex roots of unity interpolation via evaluation at complex roots of unity 2 The Master

More information

The Master Theorem for solving recurrences. Algorithms and Data Structures Strassen s Algorithm. Tutorials. The Master Theorem (cont d)

The Master Theorem for solving recurrences. Algorithms and Data Structures Strassen s Algorithm. Tutorials. The Master Theorem (cont d) The Master Theorem for solving recurrences lgorithms and Data Structures Strassen s lgorithm 23rd September, 2014 Theorem Let n 0 N, k N 0 and a, b R with a > 0 and b > 1, and let T : N R satisfy the following

More information

CSE 548: Analysis of Algorithms. Lecture 4 ( Divide-and-Conquer Algorithms: Polynomial Multiplication )

CSE 548: Analysis of Algorithms. Lecture 4 ( Divide-and-Conquer Algorithms: Polynomial Multiplication ) CSE 548: Analysis of Algorithms Lecture 4 ( Divide-and-Conquer Algorithms: Polynomial Multiplication ) Rezaul A. Chowdhury Department of Computer Science SUNY Stony Brook Spring 2015 Coefficient Representation

More information

CMPSCI611: Three Divide-and-Conquer Examples Lecture 2

CMPSCI611: Three Divide-and-Conquer Examples Lecture 2 CMPSCI611: Three Divide-and-Conquer Examples Lecture 2 Last lecture we presented and analyzed Mergesort, a simple divide-and-conquer algorithm. We then stated and proved the Master Theorem, which gives

More information

Dot Products, Transposes, and Orthogonal Projections

Dot Products, Transposes, and Orthogonal Projections Dot Products, Transposes, and Orthogonal Projections David Jekel November 13, 2015 Properties of Dot Products Recall that the dot product or standard inner product on R n is given by x y = x 1 y 1 + +

More information

LINEAR SYSTEMS (11) Intensive Computation

LINEAR SYSTEMS (11) Intensive Computation LINEAR SYSTEMS () Intensive Computation 27-8 prof. Annalisa Massini Viviana Arrigoni EXACT METHODS:. GAUSSIAN ELIMINATION. 2. CHOLESKY DECOMPOSITION. ITERATIVE METHODS:. JACOBI. 2. GAUSS-SEIDEL 2 CHOLESKY

More information

L. Vandenberghe EE133A (Spring 2017) 3. Matrices. notation and terminology. matrix operations. linear and affine functions.

L. Vandenberghe EE133A (Spring 2017) 3. Matrices. notation and terminology. matrix operations. linear and affine functions. L Vandenberghe EE133A (Spring 2017) 3 Matrices notation and terminology matrix operations linear and affine functions complexity 3-1 Matrix a rectangular array of numbers, for example A = 0 1 23 01 13

More information

Process Model Formulation and Solution, 3E4

Process Model Formulation and Solution, 3E4 Process Model Formulation and Solution, 3E4 Section B: Linear Algebraic Equations Instructor: Kevin Dunn dunnkg@mcmasterca Department of Chemical Engineering Course notes: Dr Benoît Chachuat 06 October

More information

Things we can already do with matrices. Unit II - Matrix arithmetic. Defining the matrix product. Things that fail in matrix arithmetic

Things we can already do with matrices. Unit II - Matrix arithmetic. Defining the matrix product. Things that fail in matrix arithmetic Unit II - Matrix arithmetic matrix multiplication matrix inverses elementary matrices finding the inverse of a matrix determinants Unit II - Matrix arithmetic 1 Things we can already do with matrices equality

More information

Topic 15 Notes Jeremy Orloff

Topic 15 Notes Jeremy Orloff Topic 5 Notes Jeremy Orloff 5 Transpose, Inverse, Determinant 5. Goals. Know the definition and be able to compute the inverse of any square matrix using row operations. 2. Know the properties of inverses.

More information

Computational Methods. Systems of Linear Equations

Computational Methods. Systems of Linear Equations Computational Methods Systems of Linear Equations Manfred Huber 2010 1 Systems of Equations Often a system model contains multiple variables (parameters) and contains multiple equations Multiple equations

More information

Fast and Small: Multiplying Polynomials without Extra Space

Fast and Small: Multiplying Polynomials without Extra Space Fast and Small: Multiplying Polynomials without Extra Space Daniel S. Roche Symbolic Computation Group School of Computer Science University of Waterloo CECM Day SFU, Vancouver, 24 July 2009 Preliminaries

More information

Matrices and Vectors

Matrices and Vectors Matrices and Vectors James K. Peterson Department of Biological Sciences and Department of Mathematical Sciences Clemson University November 11, 2013 Outline 1 Matrices and Vectors 2 Vector Details 3 Matrix

More information

Chapter 5 Divide and Conquer

Chapter 5 Divide and Conquer CMPT 705: Design and Analysis of Algorithms Spring 008 Chapter 5 Divide and Conquer Lecturer: Binay Bhattacharya Scribe: Chris Nell 5.1 Introduction Given a problem P with input size n, P (n), we define

More information

A Review of Matrix Analysis

A Review of Matrix Analysis Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value

More information

CPSC 518 Introduction to Computer Algebra Asymptotically Fast Integer Multiplication

CPSC 518 Introduction to Computer Algebra Asymptotically Fast Integer Multiplication CPSC 518 Introduction to Computer Algebra Asymptotically Fast Integer Multiplication 1 Introduction We have now seen that the Fast Fourier Transform can be applied to perform polynomial multiplication

More information

Notes on vectors and matrices

Notes on vectors and matrices Notes on vectors and matrices EE103 Winter Quarter 2001-02 L Vandenberghe 1 Terminology and notation Matrices, vectors, and scalars A matrix is a rectangular array of numbers (also called scalars), written

More information

Chapter 1 Divide and Conquer Algorithm Theory WS 2016/17 Fabian Kuhn

Chapter 1 Divide and Conquer Algorithm Theory WS 2016/17 Fabian Kuhn Chapter 1 Divide and Conquer Algorithm Theory WS 2016/17 Fabian Kuhn Formulation of the D&C principle Divide-and-conquer method for solving a problem instance of size n: 1. Divide n c: Solve the problem

More information

A FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic

A FIRST COURSE IN LINEAR ALGEBRA. An Open Text by Ken Kuttler. Matrix Arithmetic A FIRST COURSE IN LINEAR ALGEBRA An Open Text by Ken Kuttler Matrix Arithmetic Lecture Notes by Karen Seyffarth Adapted by LYRYX SERVICE COURSE SOLUTION Attribution-NonCommercial-ShareAlike (CC BY-NC-SA)

More information

Chapter 2. Divide-and-conquer. 2.1 Strassen s algorithm

Chapter 2. Divide-and-conquer. 2.1 Strassen s algorithm Chapter 2 Divide-and-conquer This chapter revisits the divide-and-conquer paradigms and explains how to solve recurrences, in particular, with the use of the master theorem. We first illustrate the concept

More information

Fast algorithms for polynomials and matrices Part 2: polynomial multiplication

Fast algorithms for polynomials and matrices Part 2: polynomial multiplication Fast algorithms for polynomials and matrices Part 2: polynomial multiplication by Grégoire Lecerf Computer Science Laboratory & CNRS École polytechnique 91128 Palaiseau Cedex France 1 Notation In this

More information

Math Lecture 26 : The Properties of Determinants

Math Lecture 26 : The Properties of Determinants Math 2270 - Lecture 26 : The Properties of Determinants Dylan Zwick Fall 202 The lecture covers section 5. from the textbook. The determinant of a square matrix is a number that tells you quite a bit about

More information

1 Last time: determinants

1 Last time: determinants 1 Last time: determinants Let n be a positive integer If A is an n n matrix, then its determinant is the number det A = Π(X, A)( 1) inv(x) X S n where S n is the set of n n permutation matrices Π(X, A)

More information

4 Elementary matrices, continued

4 Elementary matrices, continued 4 Elementary matrices, continued We have identified 3 types of row operations and their corresponding elementary matrices. To repeat the recipe: These matrices are constructed by performing the given row

More information

Linear Equations and Matrix

Linear Equations and Matrix 1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear

More information

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4

Linear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4 Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix

More information

ECON 331 Homework #2 - Solution. In a closed model the vector of external demand is zero, so the matrix equation writes:

ECON 331 Homework #2 - Solution. In a closed model the vector of external demand is zero, so the matrix equation writes: ECON 33 Homework #2 - Solution. (Leontief model) (a) (i) The matrix of input-output A and the vector of level of production X are, respectively:.2.3.2 x A =.5.2.3 and X = y.3.5.5 z In a closed model the

More information

Computational complexity and some Graph Theory

Computational complexity and some Graph Theory Graph Theory Lars Hellström January 22, 2014 Contents of todays lecture An important concern when choosing the algorithm to use for something (after basic requirements such as correctness and stability)

More information

Mathematics 13: Lecture 10

Mathematics 13: Lecture 10 Mathematics 13: Lecture 10 Matrices Dan Sloughter Furman University January 25, 2008 Dan Sloughter (Furman University) Mathematics 13: Lecture 10 January 25, 2008 1 / 19 Matrices Recall: A matrix is a

More information

CPSC 518 Introduction to Computer Algebra Schönhage and Strassen s Algorithm for Integer Multiplication

CPSC 518 Introduction to Computer Algebra Schönhage and Strassen s Algorithm for Integer Multiplication CPSC 518 Introduction to Computer Algebra Schönhage and Strassen s Algorithm for Integer Multiplication March, 2006 1 Introduction We have now seen that the Fast Fourier Transform can be applied to perform

More information

Inverses and Determinants

Inverses and Determinants Engineering Mathematics 1 Fall 017 Inverses and Determinants I begin finding the inverse of a matrix; namely 1 4 The inverse, if it exists, will be of the form where AA 1 I; which works out to ( 1 4 A

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Roberto s Notes on Linear Algebra Chapter 0: Eigenvalues and diagonalization Section Eigenvalues and eigenvectors What you need to know already: Basic properties of linear transformations. Linear systems

More information

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016

Linear Algebra Notes. Lecture Notes, University of Toronto, Fall 2016 Linear Algebra Notes Lecture Notes, University of Toronto, Fall 2016 (Ctd ) 11 Isomorphisms 1 Linear maps Definition 11 An invertible linear map T : V W is called a linear isomorphism from V to W Etymology:

More information

The divide-and-conquer strategy solves a problem by: 1. Breaking it into subproblems that are themselves smaller instances of the same type of problem

The divide-and-conquer strategy solves a problem by: 1. Breaking it into subproblems that are themselves smaller instances of the same type of problem Chapter 2. Divide-and-conquer algorithms The divide-and-conquer strategy solves a problem by: 1. Breaking it into subproblems that are themselves smaller instances of the same type of problem. 2. Recursively

More information

CSCI Honor seminar in algorithms Homework 2 Solution

CSCI Honor seminar in algorithms Homework 2 Solution CSCI 493.55 Honor seminar in algorithms Homework 2 Solution Saad Mneimneh Visiting Professor Hunter College of CUNY Problem 1: Rabin-Karp string matching Consider a binary string s of length n and another

More information

CS 4424 GCD, XGCD

CS 4424 GCD, XGCD CS 4424 GCD, XGCD eschost@uwo.ca GCD of polynomials First definition Let A and B be in k[x]. k[x] is the ring of polynomials with coefficients in k A Greatest Common Divisor of A and B is a polynomial

More information

Tutorials. Algorithms and Data Structures Strassen s Algorithm. The Master Theorem for solving recurrences. The Master Theorem (cont d)

Tutorials. Algorithms and Data Structures Strassen s Algorithm. The Master Theorem for solving recurrences. The Master Theorem (cont d) DS 2018/19 Lecture 4 slide 3 DS 2018/19 Lecture 4 slide 4 Tutorials lgorithms and Data Structures Strassen s lgorithm Start in week week 3 Tutorial allocations are linked from the course webpage http://www.inf.ed.ac.uk/teaching/courses/ads/

More information

CS100: DISCRETE STRUCTURES. Lecture 3 Matrices Ch 3 Pages:

CS100: DISCRETE STRUCTURES. Lecture 3 Matrices Ch 3 Pages: CS100: DISCRETE STRUCTURES Lecture 3 Matrices Ch 3 Pages: 246-262 Matrices 2 Introduction DEFINITION 1: A matrix is a rectangular array of numbers. A matrix with m rows and n columns is called an m x n

More information

MATH2210 Notebook 2 Spring 2018

MATH2210 Notebook 2 Spring 2018 MATH2210 Notebook 2 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2009 2018 by Jenny A. Baglivo. All Rights Reserved. 2 MATH2210 Notebook 2 3 2.1 Matrices and Their Operations................................

More information

CSE 421. Dynamic Programming Shortest Paths with Negative Weights Yin Tat Lee

CSE 421. Dynamic Programming Shortest Paths with Negative Weights Yin Tat Lee CSE 421 Dynamic Programming Shortest Paths with Negative Weights Yin Tat Lee 1 Shortest Paths with Neg Edge Weights Given a weighted directed graph G = V, E and a source vertex s, where the weight of edge

More information
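The seven products q1, ..., q7 on the last slide can be sanity-checked directly. Below is a minimal Python sketch of one Strassen step on 2 x 2 matrices, using 7 multiplications instead of the naive 8. Note: some minus signs were lost in the transcription of the slide, so the signs below are reconstructed; they are verified against the ordinary 2 x 2 product.

```python
def strassen_2x2(A, B):
    """One step of Strassen's scheme on 2x2 matrices: 7 multiplications
    instead of the naive 8. The products q1..q7 follow the slide; the
    minus signs are reconstructed (some were dropped in transcription)."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    # 7 linear combinations of the a_ij and b_ij, multiplied pairwise
    # (these are the only multiplications performed)
    q1 = (a11 - a12) * b22
    q2 = (a21 - a22) * b11
    q3 = a22 * (b11 + b21)
    q4 = a11 * (b12 + b22)
    q5 = (a11 + a22) * (b22 - b11)
    q6 = (a11 + a21) * (b11 + b12)
    q7 = (a12 + a22) * (b21 + b22)
    # recombination uses only additions and subtractions
    return ((q1 - q3 - q5 + q7, q4 - q1),
            (q2 + q3,           q5 + q6 - q2 - q4))

def naive_2x2(A, B):
    """Ordinary 2x2 product (8 multiplications), used as a reference."""
    (a11, a12), (a21, a22) = A
    (b11, b12), (b21, b22) = B
    return ((a11 * b11 + a12 * b21, a11 * b12 + a12 * b22),
            (a21 * b11 + a22 * b21, a21 * b12 + a22 * b22))
```

Applied recursively to (n/2) x (n/2) blocks instead of scalars, the same identities give the recurrence T(n) = 7 T(n/2) + O(n^2), whose solution is the O(n^(log2 7)) = O(n^2.81) bound from the slides.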