Math 304 (Spring 2010) - Lecture 2


Math 304 (Spring 2010) - Lecture 2
Emre Mengi
Department of Mathematics, Koç University
emengi@ku.edu.tr
Lecture 2 - Floating Point Operation Count, p.1/10

Efficiency of an algorithm is determined by the total # of additions, subtractions, multiplications, and divisions required.

Crudeness in floating point operation (flop) count:
- Time required for data transfers is ignored.
- All of the operations +, -, *, / are considered of the same computational difficulty. In reality, multiplications and divisions are more expensive.

Inner (or dot) product: Let f : R^n -> R be defined as

    f(x) = a_1 x_1 + a_2 x_2 + ... + a_n x_n = a^T x,

where a = [a_1 ... a_n]^T in R^n and x = [x_1 ... x_n]^T in R^n.

Pseudocode to compute f(x):

    f <- 0
    for j = 1, n do
        f <- f + a_j x_j        (2 flops)
    Return f

Total flop count: 2 flops per iteration for j = 1, ..., n

    Total # of flops = sum_{j=1}^{n} 2 = 2n
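The inner-product pseudocode above can be sketched in Python, counting flops alongside the result; the function name `dot_with_flops` is illustrative, not from the lecture.

```python
def dot_with_flops(a, x):
    """Compute f(x) = a^T x, returning (value, flop count)."""
    assert len(a) == len(x)
    f = 0.0
    flops = 0
    for aj, xj in zip(a, x):
        f = f + aj * xj   # one multiplication + one addition = 2 flops
        flops += 2
    return f, flops

# For n = 3 the count is 2n = 6 flops.
value, flops = dot_with_flops([1.0, 2.0, 3.0], [4.0, 5.0, 6.0])
print(value, flops)  # 32.0 6
```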

Matrix-vector product: Let g : R^n -> R^m be defined as

    g(x) = Ax = x_1 A_1 + x_2 A_2 + ... + x_n A_n,

where A = [A_1 ... A_n] is an m x n real matrix with columns A_1, ..., A_n in R^m and x = [x_1 ... x_n]^T in R^n.

e.g.

    [ 2  1 -2 ] [  2 ]       [ 2 ]       [  1 ]       [ -2 ]   [  0 ]
    [ 1  0 -1 ] [ -2 ]  =  2 [ 1 ]  - 2  [  0 ]  + 1  [ -1 ] = [  1 ]
    [ 3 -1  2 ] [  1 ]       [ 3 ]       [ -1 ]       [  2 ]   [ 10 ]

Pseudocode to compute g(x) = Ax:

Given an m x n real matrix A and x in R^n.

    g <- 0   (where g in R^m)
    for j = 1, n do
        g <- g + x_j A_j        (2m flops)
    Return g

Above, g + x_j A_j requires m additions and m multiplications for each j.

Total flop count: 2m flops per iteration for j = 1, ..., n

    Total # of flops = sum_{j=1}^{n} 2m = 2mn
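The column-oriented pseudocode above might look as follows in Python, with A stored as a list of rows; the helper name `matvec_columns` is illustrative.

```python
def matvec_columns(A, x):
    """Compute g = Ax via the linear-combination-of-columns view."""
    m, n = len(A), len(A[0])
    assert len(x) == n
    g = [0.0] * m
    for j in range(n):                       # accumulate column A_j scaled by x_j
        for i in range(m):
            g[i] = g[i] + x[j] * A[i][j]     # 2 flops, so 2m flops per column
    return g

# The example from the slide:
A = [[2, 1, -2],
     [1, 0, -1],
     [3, -1, 2]]
x = [2, -2, 1]
print(matvec_columns(A, x))  # [0.0, 1.0, 10.0]
```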

Inner product view of the matrix-vector product g(x) = Ax:

           [ Ā_1 x ]   [ a_11 x_1 + a_12 x_2 + ... + a_1n x_n ]
    g(x) = [ Ā_2 x ] = [ a_21 x_1 + a_22 x_2 + ... + a_2n x_n ]
           [   :   ]   [                  :                   ]
           [ Ā_m x ]   [ a_m1 x_1 + a_m2 x_2 + ... + a_mn x_n ]

where Ā_1, ..., Ā_m are the rows of A and a_ij is the entry of A at the ith row and jth column.

e.g.

    [ 2  1 -2 ] [  2 ]   [ (2)(2) + (1)(-2) + (-2)(1) ]   [  0 ]
    [ 1  0 -1 ] [ -2 ] = [ (1)(2) + (0)(-2) + (-1)(1) ] = [  1 ]
    [ 3 -1  2 ] [  1 ]   [ (3)(2) + (-1)(-2) + (2)(1) ]   [ 10 ]

Pseudocode to compute g(x) = Ax exploiting the inner-product view:

Given an m x n real matrix A and x in R^n.

    g <- 0   (where g in R^m)
    for i = 1, m do
        for j = 1, n do
            g_i <- g_i + a_ij x_j        (2 flops)
    Return g

Total flop count: 2 flops per iteration for each j = 1, ..., n and i = 1, ..., m

    Total # of flops = sum_{i=1}^{m} sum_{j=1}^{n} 2 = sum_{i=1}^{m} 2n = 2mn
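The row-oriented (inner-product) pseudocode can be sketched the same way; the name `matvec_rows` is illustrative. Both orderings cost the same 2mn flops, differing only in the order in which the entries of A are visited.

```python
def matvec_rows(A, x):
    """Compute g = Ax entry by entry: g_i is the dot product of row i with x."""
    m, n = len(A), len(A[0])
    assert len(x) == n
    g = [0.0] * m
    for i in range(m):
        for j in range(n):
            g[i] = g[i] + A[i][j] * x[j]   # 2 flops; 2mn flops in total
    return g

# Same example as before, same result:
A = [[2, 1, -2], [1, 0, -1], [3, -1, 2]]
print(matvec_rows(A, [2, -2, 1]))  # [0.0, 1.0, 10.0]
```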

Matrix-matrix product: Given an n x p matrix A and a p x m matrix X, the product B = AX is an n x m matrix defined by

    b_ij = Ā_i X_j = sum_{k=1}^{p} a_ik x_kj,

where Ā_i is the ith row of A, X_j is the jth column of X, and b_ij, a_ij, x_ij denote the (i, j)-entries of B, A and X, respectively.

e.g.

    [ 2  1 ] [ -1  1 ]   [ 2(-1) + 1(1)   2(1) + 1(-2) ]   [ -1  0 ]
    [ 1  0 ] [  1 -2 ] = [ 1(-1) + 0(1)   1(1) + 0(-2) ] = [ -1  1 ]

Pseudocode to compute the product B = AX:

Given n x p and p x m matrices A and X.

    B <- 0
    for i = 1, n do
        for j = 1, m do
            for k = 1, p do
                b_ij <- b_ij + a_ik x_kj        (2 flops)
    Return B

Total flop count: 2 flops per iteration for each k = 1, ..., p, j = 1, ..., m and i = 1, ..., n

    Total # of flops = sum_{i=1}^{n} sum_{j=1}^{m} sum_{k=1}^{p} 2 = 2nmp
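The triple loop above can be sketched in Python as follows; the name `matmul` is illustrative, and matrices are stored as lists of rows.

```python
def matmul(A, X):
    """Compute B = AX for an n x p matrix A and a p x m matrix X."""
    n, p = len(A), len(A[0])
    assert len(X) == p
    m = len(X[0])
    B = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            for k in range(p):
                B[i][j] = B[i][j] + A[i][k] * X[k][j]   # 2 flops; 2nmp total
    return B

# The 2 x 2 example from the slide:
A = [[2, 1], [1, 0]]
X = [[-1, 1], [1, -2]]
print(matmul(A, X))  # [[-1.0, 0.0], [-1.0, 1.0]]
```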

Big-O notation

- The inner product a^T x requires 2n = O(n) flops (a linear # of flops).
- The matrix-vector product Ax for a square matrix A (with m = n) requires 2n^2 = O(n^2) flops (a quadratic # of flops).
- The matrix-matrix product AX for square n x n matrices A and X (with m = n = p) requires 2n^3 = O(n^3) flops (a cubic # of flops).

The notation g(n) = O(f(n)) means that, asymptotically, f(n) scaled up to a constant grows at least as fast as g(n); i.e. g(n) = O(f(n)) if there exist n_0 and c such that

    g(n) <= c f(n) for all n >= n_0.

Examples:
- 2n = O(n), as well as 2n = O(n^2) and 2n = O(n^3).
- n^2 = O(n^2), as well as n^2 = O(n^3), but n^2 is not O(n).
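The linear/quadratic/cubic growth of the three counts can be sanity-checked with a small (illustrative, not from the lecture) script: doubling n should multiply the flop counts by 2, 4, and 8, respectively.

```python
def flops_dot(n):    return 2 * n        # inner product a^T x
def flops_matvec(n): return 2 * n * n    # Ax for square n x n A
def flops_matmul(n): return 2 * n ** 3   # AX for square n x n A, X

for n in (100, 200):
    print(n, flops_dot(n), flops_matvec(n), flops_matmul(n))

# Growth ratios between n = 200 and n = 100:
print(flops_dot(200) // flops_dot(100),        # 2 (linear)
      flops_matvec(200) // flops_matvec(100),  # 4 (quadratic)
      flops_matmul(200) // flops_matmul(100))  # 8 (cubic)
```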