(Group-theoretic) Fast Matrix Multiplication

1 (Group-theoretic) Fast Matrix Multiplication. Ivo Hedtke, Data Structures and Efficient Algorithms Group (Prof. Dr. M. Müller-Hannemann), Martin-Luther-University Halle-Wittenberg, Institute of Computer Science. 30th of January, 2013; updated on 11th March, 2013.

2 Outline: 1. Fast Matrix Multiplication: An Overview. 2. Group-theoretic Fast Matrix Multiplication: An Introduction. 3. Properties of the Triple Product Property and the Search for Triple Product Property Triples: A Summary.

3 Part 1: Fast Matrix Multiplication. An Overview.

4 Naive Matrix Multiplication: Row times Column. For A ∈ R^{m×p}, B ∈ R^{p×n} and C := AB ∈ R^{m×n} we have c_{i,j} = (AB)_{i,j} = Σ_{l=1}^{p} a_{i,l} b_{l,j}. [Diagram: row i of A combined with column j of B yields entry c_{i,j} of C.]

5 What does "fast" mean? [Diagram: "fast" matrix multiplication can refer to complexity (asymptotic runtime, complexity of other problems) or to time (small matrices, sparse matrices, special types of matrices, shape/form/layout, parallel computation, computer architecture).]

6 Naive Matrix Multiplication: Row times Column.

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < M; j++) {
            double aux = 0.0;
            for (int k = 0; k < P; k++) {
                aux += A[i][k] * B[k][j];
            }
            Result[i][j] = aux;
        }
    }

A ∈ R^{m×p}, B ∈ R^{p×n}, C := AB ∈ R^{m×n}, with c_{i,j} = (AB)_{i,j} = Σ_{l=1}^{p} a_{i,l} b_{l,j}.

7 Naive Matrix Multiplication: Performance. For n×n matrices the naive algorithm needs n³ − n² scalar additions and n³ scalar multiplications, i.e. 2n³ − n² FLOPs in total. FLOP = Floating Point Operation; FLOPs = Floating Point Operations; FLOPS = Floating Point Operations per Second. [Plot: measured MFLOPS over the matrix dimension N.]

8 RAM Model (random-access machine) [1]: Instructions are executed one after another, with no concurrent operations. Every instruction takes the same amount of time, at least up to small constant factors. There is an unbounded amount of available memory. Memory stores words of size O(log n) bits, where n is the input size. Any desired memory location can be accessed in unit time. For numerical and geometric algorithms it is sometimes also assumed that words can represent real numbers accurately and that exact arithmetic on arbitrary real numbers can be done in constant time. [1] D. Ajwani and H. Meyerhenke: Realistic Computer Models. In: M. Müller-Hannemann and S. Schirra: Algorithm Engineering: Bridging the Gap between Algorithm Theory and Practice, LNCS 5971.

9 Real Architecture [2]: memory hierarchy by size and access speed: Registers (< 1 KB, 1 ns), Caches (< 256 MB, 10 ns), Main Memory (< 8 GB, 5-70 ns), Hard Disk (> 20 GB, 10 ms). [2] D. Ajwani and H. Meyerhenke: Realistic Computer Models. In: M. Müller-Hannemann and S. Schirra: Algorithm Engineering: Bridging the Gap between Algorithm Theory and Practice, LNCS 5971.

10 Loop Unrolling & Tiling. saxpy = Single-precision real Alpha X Plus Y: α·x + y (BLAS Level 1). Unrolled loop from the reference BLAS:

    DO 50 I = MP1,N,4
      SY(I)   = SY(I)   + SA*SX(I)
      SY(I+1) = SY(I+1) + SA*SX(I+1)
      SY(I+2) = SY(I+2) + SA*SX(I+2)
      SY(I+3) = SY(I+3) + SA*SX(I+3)
    50 CONTINUE

The C language uses row-major storage; a double uses 64 bit per number; assume our cache block size is 128 byte, i.e. a cache block holds 16 numbers.
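The slide only names the techniques; as an illustration (not from the talk), here is a minimal sketch in C of cache blocking (tiling) applied to the naive triple loop. The function name matmul_tiled and the tile size BS are assumptions for the example; with BS = 16 doubles, one tile row occupies exactly the 128-byte cache block assumed above.

    /* Minimal sketch of a tiled (cache-blocked) matrix multiplication C = A*B.
       Assumes square n x n matrices in row-major order; BS is an illustrative
       tile size, not a tuned value. */
    #define BS 16   /* 16 doubles = 128 bytes, one cache block in the example above */

    void matmul_tiled(int n, const double *A, const double *B, double *C)
    {
        for (int i = 0; i < n * n; i++)
            C[i] = 0.0;

        for (int ii = 0; ii < n; ii += BS)
            for (int kk = 0; kk < n; kk += BS)
                for (int jj = 0; jj < n; jj += BS)
                    /* multiply the (ii,kk) tile of A with the (kk,jj) tile of B */
                    for (int i = ii; i < ii + BS && i < n; i++)
                        for (int k = kk; k < kk + BS && k < n; k++) {
                            double aik = A[i * n + k];
                            for (int j = jj; j < jj + BS && j < n; j++)
                                C[i * n + j] += aik * B[k * n + j];
                        }
    }

The three tile loops keep one tile of A, B and C hot in cache while the inner loops reuse it, which is the effect autotuners such as ATLAS (next slide) search for automatically.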

11 ATLAS = Automatically Tuned Linear Algebra Software. Excerpt of an ATLAS tuning run:

    make[5]: Leaving directory /tune/blas/gemm
    SUCCESSFUL FINISH FOR /xummsearch
    BEST USER CASE 241, NB=56: MFLOP

    hedtke$ /Test -O
    TIME TEST FOR METHODS OF MATRIX MULTIPLICATION
    C = A*(8A), where A is a (n x n) random matrix

Methods compared (with time in seconds): NaivStandard (about 156 MFLOPS), NaivOnArray, NaivLoopUnrollingFour, StrassenNaiv (needs 19 GB RAM), WinogradOriginal, and BLAS (about 8 GFLOPS).

12 Strassen's algorithm [3]. For 2×2 block matrices A = [[a_{11}, a_{12}], [a_{21}, a_{22}]] and B = [[b_{11}, b_{12}], [b_{21}, b_{22}]] compute

    h_1 := (a_{11} + a_{22})(b_{11} + b_{22})
    h_2 := (a_{21} + a_{22}) b_{11}
    h_3 := a_{11} (b_{12} - b_{22})
    h_4 := a_{22} (b_{21} - b_{11})
    h_5 := (a_{11} + a_{12}) b_{22}
    h_6 := (a_{21} - a_{11})(b_{11} + b_{12})
    h_7 := (a_{12} - a_{22})(b_{21} + b_{22})

and combine them as

    AB = [ h_1 + h_4 - h_5 + h_7    h_3 + h_5
           h_2 + h_4                h_1 - h_2 + h_3 + h_6 ].

[Photo: Volker Strassen at his Knuth Prize lecture, Wikimedia Commons.]
[3] Volker Strassen: Gaussian Elimination is not Optimal, Numer. Math. 13 (1969), 354-356.
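As a concrete illustration (not part of the slides), here is a minimal sketch in C of one Strassen step, written for scalar 2×2 matrices; the function name strassen_2x2 is an assumption, and in a full implementation the entries would be (n/2 × n/2) blocks with the seven products computed recursively.

    /* One Strassen step for 2x2 matrices: 7 multiplications instead of 8.
       Purely illustrative; a real implementation recurses on matrix blocks. */
    void strassen_2x2(const double a[2][2], const double b[2][2], double c[2][2])
    {
        double h1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
        double h2 = (a[1][0] + a[1][1]) *  b[0][0];
        double h3 =  a[0][0]            * (b[0][1] - b[1][1]);
        double h4 =  a[1][1]            * (b[1][0] - b[0][0]);
        double h5 = (a[0][0] + a[0][1]) *  b[1][1];
        double h6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
        double h7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);

        c[0][0] = h1 + h4 - h5 + h7;
        c[0][1] = h3 + h5;
        c[1][0] = h2 + h4;
        c[1][1] = h1 - h2 + h3 + h6;
    }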

13 The Exponent of Matrix Multiplication. M(n) := number of field operations in characteristic 0 required to multiply two n×n matrices; ω := inf { r ∈ R : M(n) = O(n^r) }. [Diagram from Victor Pan: How to Multiply Matrices Faster, LNCS 179, 1984: the best exponent announced over time, starting near 2.8 with [ST69] and decreasing through [P79], [P80], [P80b], [S80], [R82] and [CW].]
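A standard consequence, not spelled out on the slide: applying Strassen's seven-multiplication scheme recursively to matrices of size n = 2^k gives

    M(n) \le 7\, M(n/2) + O(n^2) \quad\Longrightarrow\quad M(n) = O(n^{\log_2 7}),

so ω ≤ log_2 7 ≈ 2.807, the first improvement over the trivial bound ω ≤ 3 shown in the diagram.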

14 Coppersmith, Winograd and Williams. Don Coppersmith and Shmuel Winograd [4]: ω < 2.376. Virginia Vassilevska Williams [5]: ω < 2.3727. [4] Don Coppersmith and Shmuel Winograd, Matrix multiplication via arithmetic progressions, J. Symbolic Comput. 9 (1990), 251-280. [5] Virginia Vassilevska Williams, Multiplying matrices faster than Coppersmith-Winograd, Proceedings of the 44th Symposium on Theory of Computing, STOC '12.

15 Part 2: Group-theoretic Fast Matrix Multiplication. An Introduction.

16 Cohn and Umans. Henry Cohn and Christopher Umans [6]. [6] Henry Cohn and Christopher Umans, A Group-theoretic Approach to Fast Matrix Multiplication, Proceedings of the 44th Annual Symposium on Foundations of Computer Science, October 2003, Cambridge, MA, IEEE Computer Society (2003).

17 Basics: algebraic complexity theory.

Matrix multiplication as a bilinear map: ⟨n,p,m⟩ := ⟨n,p,m⟩_C, where ⟨n,p,m⟩_k : k^{n×p} × k^{p×m} → k^{n×m}, (A,B) ↦ AB.

Bilinear complexity: let k be a field, U, V and W finite-dimensional k-vector spaces, and η : U × V → W a k-bilinear map. A k-bilinear algorithm of length r for η consists of f_i ∈ U*, g_i ∈ V*, w_i ∈ W (1 ≤ i ≤ r) such that η(u,v) = Σ_{i=1}^{r} f_i(u) g_i(v) w_i for all u ∈ U, v ∈ V; we write it as (f_1,g_1,w_1; ...; f_r,g_r,w_r). R(η) := minimal length of all k-bilinear algorithms for η. For a k-algebra A, R(A) := rank of its k-bilinear multiplication map. For example, Strassen's algorithm is a bilinear algorithm of length 7 for ⟨2,2,2⟩, so R(⟨2,2,2⟩) ≤ 7.

Restriction of a bilinear map: for bilinear maps φ : U × V → W and φ' : U' × V' → W', φ is a restriction of φ' if there are linear maps σ : U → U', τ : V → V', ζ : W' → W with φ = ζ ∘ φ' ∘ (σ × τ); we write φ ≤ φ' if φ is a restriction of φ'. Fact: if φ ≤ φ', then R(φ) ≤ R(φ').

18 TPP basics.

⟨n,p,m⟩ := ⟨n,p,m⟩_C, where ⟨n,p,m⟩_k : k^{n×p} × k^{p×m} → k^{n×m}, (A,B) ↦ AB. R(η) = rank of a bilinear map, R(A) = rank of an algebra, R(n,p,m) := R(⟨n,p,m⟩), R(n) := R(n,n,n), R(G) := R(C[G]).

Embedding into the group algebra: index A by S × T and B by T × U for subsets S, T, U of a group G. Then (AB)_{s,u} = Σ_{t ∈ T} A_{s,t} B_{t,u} is the coefficient of s^{-1}u in the product (Σ_{s ∈ S, t ∈ T} A_{s,t} s^{-1}t)(Σ_{t̂ ∈ T, û ∈ U} B_{t̂,û} t̂^{-1}û) in C[G], provided S, T and U fulfill the TPP.

Quotient set: Q(X) := {x y^{-1} : x, y ∈ X}. TPP: S, T, U ⊆ G fulfill the TPP if for all s ∈ Q(S), t ∈ Q(T), u ∈ Q(U): stu = 1 implies s = t = u = 1.

G realizes ⟨n,p,m⟩ if there are S, T, U ⊆ G with |S| = n, |T| = p, |U| = m such that S, T and U fulfill the TPP. In this case we call (S,T,U) a TPP triple of G; its size is npm. TPP (subset) capacity: β(G) = max{ npm : G realizes ⟨n,p,m⟩ }. With {d_i} the character degrees of G, the r-character capacity is D_r(G) = Σ_i d_i^r.
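To make the definition concrete, here is a minimal sketch (not from the talk) that tests the TPP directly from the definition for subsets of the cyclic group Z_n. The group is written additively, so products s t u = 1 become sums s + t + u ≡ 0 (mod n); the function name has_tpp, the modulus n and the example subsets are purely illustrative.

    /* Naive TPP test over Z_n (additive notation: x y^{-1} becomes x - y). */
    #include <stdio.h>

    /* returns 1 iff the subsets S, T, U of Z_n fulfill the TPP */
    int has_tpp(int n, const int *S, int ns, const int *T, int nt, const int *U, int nu)
    {
        for (int a = 0; a < ns; a++) for (int b = 0; b < ns; b++)   /* s runs over Q(S) */
        for (int c = 0; c < nt; c++) for (int d = 0; d < nt; d++)   /* t runs over Q(T) */
        for (int e = 0; e < nu; e++) for (int f = 0; f < nu; f++) { /* u runs over Q(U) */
            int s = ((S[a] - S[b]) % n + n) % n;
            int t = ((T[c] - T[d]) % n + n) % n;
            int u = ((U[e] - U[f]) % n + n) % n;
            /* stu = 1 (here: s + t + u = 0 mod n) must force s = t = u = 1 (here: 0) */
            if ((s + t + u) % n == 0 && (s != 0 || t != 0 || u != 0))
                return 0;
        }
        return 1;
    }

    int main(void)
    {
        int S[] = {0, 1}, T[] = {0, 2}, U[] = {0};   /* hypothetical subsets of Z_7 */
        printf("TPP fulfilled: %d\n", has_tpp(7, S, 2, T, 2, U, 1));
        return 0;
    }

For this toy triple the test succeeds, so Z_7 realizes ⟨2,2,1⟩; the searches on the later slides apply tests of this kind (in faster, characterization-based form) to non-abelian groups.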

19 Part 3: Properties of the Triple Product Property and the Search for Triple Product Property Triples. A Summary.

20 Properties of the TPP / TPP triples. If S, T and U fulfill the TPP, then:

[Murthy, arXiv preprint] |S| + |T| + |U| ≤ |G| + 2.
[Hedtke, arXiv preprint, v2] |Q(X) ∩ Q(Y)| = 1 for all X ≠ Y ∈ {S, T, U}.
[Hedtke, arXiv preprint, v2] |Q(S)| + |Q(T)| + |Q(U)| ≤ |G| + 2.
[Hedtke, arXiv preprint, v2] If S, T and U fulfill the TPP, then there exists a triple S', T', U' with |S'| = |S|, |T'| = |T|, |U'| = |U| and |S' ∩ T'| = |T' ∩ U'| = |S' ∩ U'| = 1 which also fulfills the TPP.

Let G be a group. If (S,T,U) is a TPP triple of G, then (dSa, dTb, dUc) is a TPP triple for all a, b, c, d ∈ G, too. Basic TPP triple: 1 ∈ S ∩ T ∩ U.

[P. M. Neumann, A note on the triple product property for subsets of finite groups, LMS J. Comput. Math. 14 (2011)]

21 New Characterizations of the TPP
[me & S. Murthy, Search and test algorithms for triple product property triples, Groups Compl. Crypt., Vol. 4, Issue 1 (2012)]

Theorem. Three subsets S, T and U of G form a TPP triple (S,T,U) iff (i) 1 ∈ S ∩ T ∩ U, (ii) Q(T) ∩ Q(U) = {1} and (iii) Q(S) ∩ Q(T)Q(U) = {1}.

More generally: (S_1,S_2,S_3) is a TPP triple if and only if for all π ∈ Sym(3): 1 ∈ S_1 ∩ S_2 ∩ S_3, Q(S_{π(2)}) ∩ Q(S_{π(3)}) = {1}, and Q(S_{π(1)}) ∩ Q(S_{π(2)})Q(S_{π(3)}) = {1}.

Theorem. Let G be a group and (S,T,U) a basic TPP triple of subsets of G such that either (i) two members, say S and T, are subgroups of G which permute, or (ii) one member, say S, is a normal subgroup of G. Then |S| |T| |U| ≤ |G|.
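The characterization suggests a faster test than the six-fold loop sketched earlier: precompute the quotient sets once and check the intersection conditions. Below is a minimal sketch in C, again over the cyclic group Z_n in additive notation (so Q(X) = {x - y} and the product set Q(T)Q(U) becomes the sum set); the names has_tpp_fast, quotient_set and contains are illustrative, and n is assumed small enough that a byte array of length n fits in memory. It checks conditions (i)-(iii) exactly as stated in the first theorem above.

    /* TPP test via the characterization: (i) 0 in S, T, U (additive identity),
       (ii) Q(T) and Q(U) meet only in 0, (iii) Q(S) meets Q(T)+Q(U) only in 0. */
    #include <stdlib.h>
    #include <string.h>

    static void quotient_set(int n, const int *X, int nx, char *q)  /* q[g] = 1 iff g in Q(X) */
    {
        memset(q, 0, n);
        for (int a = 0; a < nx; a++)
            for (int b = 0; b < nx; b++)
                q[((X[a] - X[b]) % n + n) % n] = 1;
    }

    static int contains(const int *X, int nx, int g)
    {
        for (int i = 0; i < nx; i++) if (X[i] == g) return 1;
        return 0;
    }

    int has_tpp_fast(int n, const int *S, int ns, const int *T, int nt, const int *U, int nu)
    {
        if (!contains(S, ns, 0) || !contains(T, nt, 0) || !contains(U, nu, 0))
            return 0;                                    /* condition (i) */
        char *qs = malloc(n), *qt = malloc(n), *qu = malloc(n);
        quotient_set(n, S, ns, qs);
        quotient_set(n, T, nt, qt);
        quotient_set(n, U, nu, qu);
        int ok = 1;
        for (int g = 1; g < n && ok; g++)                /* condition (ii) */
            if (qt[g] && qu[g]) ok = 0;
        for (int t = 0; t < n && ok; t++) if (qt[t])     /* condition (iii) */
            for (int u = 0; u < n && ok; u++) if (qu[u]) {
                int g = (t + u) % n;
                if (g != 0 && qs[g]) ok = 0;
            }
        free(qs); free(qt); free(qu);
        return ok;
    }

Precomputing Q(S), Q(T), Q(U) replaces the six nested loops over set elements by loops over group elements, which is the kind of saving the search algorithms on the following slides rely on.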

22 New Characterizations of the TPP

Let C be a finite set and 𝒞 = {C_1, ..., C_k} a partition of it. A set X ⊆ C is called a subtransversal for 𝒞 with support supp_𝒞(X) = 𝒯 ⊆ 𝒞 if for all C_i ∈ 𝒞: |X ∩ C_i| = 1 if C_i ∈ 𝒯, and |X ∩ C_i| = 0 otherwise. Special case: if 𝒞 is the set of cosets of a subgroup S of a group G, then any subtransversal T for S\G will be called a subtransversal for S in G.

Theorem. Let G be a group, S a subgroup of G, and T, U subsets of G.
1. If (S,T,U) is a TPP triple of G, then T and U are subtransversals for S in G such that supp_{S\G}(T) ∩ supp_{S\G}(U) = {S}. (*)
2. If T and U are also subgroups of G, and T and U are subtransversals for S in G satisfying (*), then (S,T,U) is a TPP triple of G.

23 Search for TPP triples.

24 Theoretical insights obtained from experiments

Conjecture 7.4. For any group G, β_g(G) ≤ D_3(G).

Conjecture 7.5. Let D_{2n} denote the dihedral group of order 2n. (i) If n is a multiple of 3, then β(D_{2n}) = β_g(D_{2n}) = (8/3) n. (ii) If n is not a multiple of 3, then β(D_{2n}) < (8/3) n and β_g(D_{2n}) = 2n.

Conjecture 7.6. If G is a group with a cyclic subgroup of index 2, then β(G) ≤ (4/3) |G|.

Theorem. There is no group G that realizes ⟨3,3,3⟩ such that R(G) < 23, or that realizes ⟨4,4,4⟩ such that R(G) < 49.

25 5×5 matrices (technical report in preparation)

Theorem. There is no group G that realizes ⟨5,5,5⟩ such that R(G) < 100.

Suppose G realizes ⟨5,5,5⟩. If G has a subgroup H of index 2, then H realizes ⟨3,3,3⟩. If G realizes ⟨5,5,5⟩ and |G| ≤ 72, then G has no abelian subgroups of index 2. If G is a group with R(G) < 100 that realizes ⟨5,5,5⟩, then G is non-abelian and 45 ≤ |G| ≤ 72. No group G of order 64 fulfills R(G) < 100 and realizes ⟨5,5,5⟩.

Search candidates: the final list contains ten groups of order 48 and two of order 54. Needed: a better search algorithm for ⟨k,k,k⟩ TPP triples.
