Practical Linear Algebra. Class Notes Spring 2009


Practical Linear Algebra
Class Notes, Spring 2009

Robert van de Geijn
Maggie Myers
Department of Computer Sciences
The University of Texas at Austin
Austin, TX 78712

Draft January 8, 2009

Copyright 2009 Robert van de Geijn and Maggie Myers


Contents

1 Introduction
  1.1 A Motivating Example: Predicting the Weather
  1.2 Vectors
    1.2.1 Equality
    1.2.2 Copy (copy)
    1.2.3 Scaling (scal)
    1.2.4 Vector addition (add)
    1.2.5 Scaled vector addition (axpy)
    1.2.6 Dot product (dot)
    1.2.7 Vector length (norm)
    1.2.8 A Simple Linear Algebra Package: SLAP
  1.3 A Bit of History
  1.4 Matrices and Matrix-Vector Multiplication
    1.4.1 First: Linear transformations
    1.4.2 From linear transformation to matrix-vector multiplication
    1.4.3 Special Matrices
    1.4.4 Computing the matrix-vector multiply
    1.4.5 Cost of matrix-vector multiplication
    1.4.6 Scaling and adding matrices
    1.4.7 Partitioning matrices and vectors into submatrices (blocks) and subvectors
  1.5 A High Level Representation of Algorithms
    1.5.1 Matrix-vector multiplication
    1.5.2 Transpose matrix-vector multiplication
    1.5.3 Triangular matrix-vector multiplication
    1.5.4 Symmetric matrix-vector multiplication
  1.6 Representing Algorithms in Code
  1.7 Outer Product and Rank-1 Update
  1.8 A growing library
    1.8.1 General matrix-vector multiplication (gemv)
    1.8.2 Triangular matrix-vector multiplication (trmv)
    1.8.3 Symmetric matrix-vector multiplication (symv)
    1.8.4 Rank-1 update (ger)
    1.8.5 Symmetric rank-1 update (syr)
  1.9 Answers

2 Matrix-Matrix Multiplication
  2.1 Motivating Example: Rotations
  2.2 Composing Linear Transformations
  2.3 Special Cases of Matrix-Matrix Multiplication
  2.4 Properties of Matrix-Matrix Multiplication
  2.5 Multiplying partitioned matrices
  2.6 Summing it all up
  2.7 Additional Exercises

3 Gaussian Elimination
  3.1 Solving a System of Linear Equations via Gaussian Elimination (GE, Take 1)
  3.2 Matrix Notation (GE, Take 2)
  3.3 Towards Gauss Transforms (GE, Take 3)
  3.4 Gauss Transforms (GE, Take 4)
  3.5 Gauss Transforms Continued (GE, Take 5)
  3.6 Toward the LU Factorization (GE, Take 6)
  3.7 Coding Up Gaussian Elimination

4 LU Factorization
  4.1 Gaussian Elimination Once Again
  4.2 LU factorization
  4.3 Forward Substitution = Solving a Unit Lower Triangular System
  4.4 Backward Substitution = Solving an Upper Triangular System
  4.5 Solving the Linear System
  4.6 When LU Factorization Breaks Down
  4.7 Permutations
  4.8 Back to When LU Factorization Breaks Down
  4.9 The Inverse of a Matrix
    4.9.1 First, some properties
    4.9.2 That's about all we will say about determinants
    4.9.3 Gauss-Jordan method
    4.9.4 Inverting a matrix using the LU factorization
    4.9.5 Inverting the LU factorization
    4.9.6 In practice, do not use inverted matrices!
    4.9.7 More about inverses

5 Vector Spaces: Theory and Practice
  5.1 Vector Spaces
  5.2 Why Should We Care?
    5.2.1 A systematic procedure (first try)
    5.2.2 A systematic procedure (second try)
  5.3 Linear Independence
  5.4 Bases
  5.5 Exercises
  5.6 The Answer to Life, The Universe, and Everything
  5.7 Answers

6 Orthogonality
  6.1 Orthogonal Vectors and Subspaces
  6.2 Motivating Example, Part I
  6.3 Solving a Linear Least-Squares Problem
  6.4 Motivating Example, Part II
  6.5 Computing an Orthonormal Basis
  6.6 Motivating Example, Part III
  6.7 What does this all mean?
  6.8 Exercises

7 The Singular Value Decomposition
  7.1 The Theorem
  7.2 Consequences of the SVD Theorem
  7.3 Projection onto a column space
  7.4 Low-rank Approximation of a Matrix

8 QR factorization
  8.1 Classical Gram-Schmidt process
  8.2 Modified Gram-Schmidt process
  8.3 Householder QR factorization
    8.3.1 Householder transformations (reflectors)
    8.3.2 Algorithms
    8.3.3 Forming Q
  8.4 Solving Linear Least-Squares Problems

9 Eigenvalues and Eigenvectors
  9.1 Motivating Example
  9.2 Problem Statement
  9.3 Eigenvalues and Eigenvectors of a Matrix


Chapter 1

Introduction

In this chapter, we give a motivating example and use it to introduce linear algebra notation.

1.1 A Motivating Example: Predicting the Weather

Let us assume that on any day, the following table tells us how the weather on that day predicts the weather the next day:

                            Today
                    sunny   cloudy   rainy
 Tomorrow  sunny     0.4     0.3      0.1
           cloudy    0.4     0.3      0.6
           rainy     0.2     0.4      0.3

Table 1.1: Table that predicts the weather.

This table is interpreted as follows: if today it is rainy, then the probability that it will be cloudy tomorrow is 0.6, etc.

Example 1.1 If today is cloudy, what is the probability that tomorrow is sunny? cloudy? rainy?

To answer this, we simply consult Table 1.1: the probabilities that it will be sunny, cloudy, and rainy are given by 0.3, 0.3, and 0.4, respectively.

Example 1.2 If today is cloudy, what is the probability that the day after tomorrow is sunny? cloudy? rainy?

Now this gets a bit more difficult. Let us focus on the question of what the probability is that it is sunny the day after tomorrow. We don't know what the weather will be tomorrow. What we do know is:

- The probability that it will be sunny the day after tomorrow and sunny tomorrow is 0.4 x 0.3.
- The probability that it will be sunny the day after tomorrow and cloudy tomorrow is 0.3 x 0.3.
- The probability that it will be sunny the day after tomorrow and rainy tomorrow is 0.1 x 0.4.

Thus, the probability that it will be sunny the day after tomorrow, if it is cloudy today, is

    0.4 x 0.3 + 0.3 x 0.3 + 0.1 x 0.4 = 0.25.

Exercise 1.3 Work out the probabilities that it will be cloudy/rainy the day after tomorrow.

Example 1.4 If today is cloudy, what is the probability that a week from today it is sunny? cloudy? rainy?

We will not answer this question now. We insert it to make the point that things can get messy. As is usually the case when we are trying to answer quantitative questions, it helps to introduce some notation.

- Let chi_s^(k) denote the probability that it will be sunny k days from now.
- Let chi_c^(k) denote the probability that it will be cloudy k days from now.
- Let chi_r^(k) denote the probability that it will be rainy k days from now.

Here, chi is the Greek lower case letter chi, pronounced [kai] in English.

Now, Table 1.1, Example 1.2, and Exercise 1.3 motivate the equations

    chi_s^(k+1) = 0.4 chi_s^(k) + 0.3 chi_c^(k) + 0.1 chi_r^(k)
    chi_c^(k+1) = 0.4 chi_s^(k) + 0.3 chi_c^(k) + 0.6 chi_r^(k)        (1.1)
    chi_r^(k+1) = 0.2 chi_s^(k) + 0.4 chi_c^(k) + 0.3 chi_r^(k).

The probabilities that denote what the weather may be on day k and the table that summarizes the probabilities are often represented as a state vector, x^(k), and a transition matrix, P, respectively:

    x^(k) = ( chi_s^(k) )            ( 0.4  0.3  0.1 )
            ( chi_c^(k) )   and  P = ( 0.4  0.3  0.6 ).
            ( chi_r^(k) )            ( 0.2  0.4  0.3 )

The transition from day k to day k+1 is then written as the matrix-vector product (multiplication)

    ( chi_s^(k+1) )   ( 0.4  0.3  0.1 ) ( chi_s^(k) )
    ( chi_c^(k+1) ) = ( 0.4  0.3  0.6 ) ( chi_c^(k) ),
    ( chi_r^(k+1) )   ( 0.2  0.4  0.3 ) ( chi_r^(k) )

or x^(k+1) = P x^(k), which is simply a more compact way of writing the equations in (1.1).

Assume again that today is cloudy, so that the probabilities that it is sunny, cloudy, or rainy today are 0, 1, and 0, respectively:

    x^(0) = ( chi_s^(0) )   ( 0 )
            ( chi_c^(0) ) = ( 1 ).
            ( chi_r^(0) )   ( 0 )

Then the vector of probabilities for tomorrow's weather, x^(1), is given by

    ( chi_s^(1) )   ( 0.4  0.3  0.1 ) ( 0 )   ( 0.3 )
    ( chi_c^(1) ) = ( 0.4  0.3  0.6 ) ( 1 ) = ( 0.3 ).
    ( chi_r^(1) )   ( 0.2  0.4  0.3 ) ( 0 )   ( 0.4 )

The vector of probabilities for the day after tomorrow, x^(2), is given by

    ( chi_s^(2) )   ( 0.4  0.3  0.1 ) ( 0.3 )   ( 0.25 )
    ( chi_c^(2) ) = ( 0.4  0.3  0.6 ) ( 0.3 ) = ( 0.45 ).
    ( chi_r^(2) )   ( 0.2  0.4  0.3 ) ( 0.4 )   ( 0.30 )

Repeating this process, we can find the probabilities for the weather for the next seven days, under the assumption that today is cloudy:

    [Table of x^(k) for k = 0, 1, ..., 7; the numerical values were lost in transcription.]

Exercise 1.5 Follow the instructions for this problem given on the class wiki. For the example described in this section,

1. Recreate the above table by programming it up with Matlab or Octave, starting with the assumption that today is cloudy (a sketch is given below).
2. Create two similar tables starting with the assumption that today is sunny and rainy, respectively.
3. Compare how x^(7) differs depending on today's weather.
4. What do you notice if you compute x^(k) starting with today being sunny/cloudy/rainy and you let k get large?
5. What does x^(infinity) represent?

Alternatively, (1.5) and (1.6) could have been stated as

    ( chi_s^(2) )       ( chi_s^(0) )
    ( chi_c^(2) ) =  Q  ( chi_c^(0) ),        (1.7)
    ( chi_r^(2) )       ( chi_r^(0) )

where Q is the transition matrix that tells us how the weather today predicts the weather the day after tomorrow.
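For Exercise 1.5, a minimal Octave/Matlab sketch along these lines (assuming the transition probabilities of Table 1.1; the variable names are ours):

    % Evolve the weather state vector for seven days, starting cloudy.
    P = [ 0.4  0.3  0.1
          0.4  0.3  0.6
          0.2  0.4  0.3 ];

    x = [ 0 1 0 ]';          % today is cloudy: (sunny; cloudy; rainy)

    probs = x;               % collect x^(0), x^(1), ..., x^(7) as columns
    for k = 1:7
      x = P * x;             % x^(k) = P * x^(k-1)
      probs = [ probs, x ];
    end
    disp( probs )

Starting instead from [ 1 0 0 ]' or [ 0 0 1 ]' answers parts 2 through 4 of the exercise.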

Exercise 1.6 Given Table 1.1, create the following table, which predicts the weather the day after tomorrow given the weather today:

                                  Today
                          sunny   cloudy   rainy
 Day after     sunny
 Tomorrow      cloudy
               rainy

This then tells us the entries in Q in (1.7).
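Continuing the sketch above, one hedged way to check your table for Exercise 1.6: the two-day transition matrix is the product of P with itself, and its columns, like those of P, should each sum to one.

    Q = P * P;           % two-day transition matrix, so that x^(2) = Q * x^(0)
    disp( Q )
    disp( sum( Q ) )     % each column of Q should sum to one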

1.2 Vectors

We will call a one-dimensional array of numbers a real-valued (column) vector:

    x = ( chi_0     )
        ( chi_1     )
        ( ...       )
        ( chi_{n-1} ),

where chi_i is in R for 0 <= i < n. The set of all such vectors is denoted by R^n.

A vector has a direction and a length:

- Its direction can be visualized by drawing an arrow from the origin to the point (chi_0, chi_1, ..., chi_{n-1}).
- Its length is given by the Euclidean length of this arrow: the length of x in R^n is given by

      ||x||_2 = sqrt( chi_0^2 + chi_1^2 + ... + chi_{n-1}^2 ),

  which is often called the two-norm.

A number of operations with vectors will be encountered in the course of this book. In the remainder of this section, let x, y, z be in R^n and alpha in R, with

    x = ( chi_0, ..., chi_{n-1} )^T,  y = ( psi_0, ..., psi_{n-1} )^T,  and  z = ( zeta_0, ..., zeta_{n-1} )^T.

1.2.1 Equality

Two vectors are equal if all their components are element-wise equal: x = y if and only if, for 0 <= i < n, chi_i = psi_i.

1.2.2 Copy (copy)

The copy operation assigns the content of one vector to another. In our mathematical notation we will denote this by the symbol := (pronounce: becomes). An algorithm and M-script for the copy operation are given in Figure 1.1.

Remark 1.7 Unfortunately, M-script starts indexing at one, which explains the difference in indexing between the algorithm and the M-script implementation.

Cost: Copying one vector to another requires 2n memory operations (memops): the vector x of length n must be read, requiring n memops, and the vector y must be written, which accounts for the other n memops.

    for i = 0, ..., n-1            for i=1:n
      psi_i := chi_i                 y(i) = x(i);
    endfor                         end

Figure 1.1: Algorithm and M-script for the vector copy operation y := x.

    for i = 0, ..., n-1            for i=1:n
      psi_i := alpha chi_i           y(i) = alpha * x(i);
    endfor                         end

Figure 1.2: Algorithm and M-script for the vector scale operation y := alpha x.

    for i = 0, ..., n-1            for i=1:n
      zeta_i := chi_i + psi_i        z(i) = x(i) + y(i);
    endfor                         end

Figure 1.3: Algorithm and M-script for the vector addition operation z := x + y.

    for i = 0, ..., n-1                  for i=1:n
      zeta_i := alpha chi_i + psi_i        z(i) = alpha * x(i) + y(i);
    endfor                               end

Figure 1.4: Algorithm and M-script for scaled vector addition z := alpha x + y.

    alpha := 0                           alpha = 0;
    for i = 0, ..., n-1                  for i=1:n
      alpha := chi_i psi_i + alpha         alpha = x(i) * y(i) + alpha;
    endfor                               end

Figure 1.5: Algorithm and M-script for the dot product alpha := x^T y.

1.2.3 Scaling (scal)

Multiplying vector x by scalar alpha yields a new vector, alpha x, in the same direction as x but scaled by a factor alpha. Scaling a vector by alpha means each of its components, chi_i, is scaled by alpha:

    alpha x = alpha ( chi_0     )   ( alpha chi_0     )
                    ( chi_1     ) = ( alpha chi_1     )
                    ( ...       )   ( ...             )
                    ( chi_{n-1} )   ( alpha chi_{n-1} ).

An algorithm and M-script for the scal operation are given in Figure 1.2.

Cost: On a computer, real numbers are stored as floating point numbers, and real arithmetic is approximated with floating point arithmetic. Thus, we count floating point computations (flops): a multiplication or addition each cost one flop. Scaling a vector requires n flops and 2n memops.

1.2.4 Vector addition (add)

The vector addition x + y is defined by

    x + y = ( chi_0     )   ( psi_0     )   ( chi_0 + psi_0         )
            ( chi_1     ) + ( psi_1     ) = ( chi_1 + psi_1         )
            ( ...       )   ( ...       )   ( ...                   )
            ( chi_{n-1} )   ( psi_{n-1} )   ( chi_{n-1} + psi_{n-1} ).

In other words, the vectors are added element-wise, yielding a new vector of the same length. An algorithm and M-script for the add operation are given in Figure 1.3.

Cost: Vector addition requires 3n memops (x and y are read and the resulting vector is written) and n flops.

Exercise 1.8 Let x, y be in R^n. Show that vector addition commutes: x + y = y + x.

1.2.5 Scaled vector addition (axpy)

One of the most commonly encountered operations when implementing more complex linear algebra operations is the scaled vector addition y := alpha x + y:

    alpha x + y = alpha ( chi_0     )   ( psi_0     )   ( alpha chi_0 + psi_0         )
                        ( chi_1     ) + ( psi_1     ) = ( alpha chi_1 + psi_1         )
                        ( ...       )   ( ...       )   ( ...                         )
                        ( chi_{n-1} )   ( psi_{n-1} )   ( alpha chi_{n-1} + psi_{n-1} ).

It is often referred to as the axpy operation, which stands for alpha times x plus y. An algorithm and M-script for the axpy operation are given in Figure 1.4. We emphasize that typically it is used in situations where the output vector overwrites the input vector y.

Cost: The axpy operation requires 3n memops and 2n flops. Notice that by combining the scaling and vector addition into one operation, there is the opportunity to reduce the number of memops that are incurred separately by the scal and add operations.

1.2.6 Dot product (dot)

The other commonly encountered operation is the dot (inner) product. It is defined by

    x^T y = sum_{i=0}^{n-1} chi_i psi_i = chi_0 psi_0 + chi_1 psi_1 + ... + chi_{n-1} psi_{n-1}.

We have already encountered the dot product in, for example, (1.1) (the right-hand side of each equation is a dot product) and (1.6). An algorithm and M-script for the dot operation are given in Figure 1.5.

Cost: A dot product requires approximately 2n memops and 2n flops.

Exercise 1.9 Let x, y be in R^n. Show that x^T y = y^T x.

1.2.7 Vector length (norm)

The Euclidean length of a vector x (the two-norm) is given by ||x||_2 = sqrt( sum_{i=0}^{n-1} chi_i^2 ). Clearly ||x||_2 = sqrt( x^T x ), so that the dot operation can be used to compute this length.

Cost: If computed with a dot product, it requires approximately 2n memops and 2n flops.

Remark 1.10 In practice the two-norm is typically not computed via the dot operation, since it can cause overflow or underflow in floating point arithmetic. Instead, the vector is first scaled to prevent such unpleasant side effects.

1.2.8 A Simple Linear Algebra Package: SLAP

As part of this course, we will demonstrate how linear algebra libraries are naturally layered by incrementally building a small library in M-script. For example, the axpy operation can be implemented as a function as illustrated in Figure 1.7. A couple of comments:

- In this text, we distinguish between a column vector and a row vector (a vector written as a row of numbers or, equivalently, a column vector that has been transposed).
- In M-script, every variable is viewed as a multi-dimensional array. For our purposes, it suffices to think of all arrays as being two dimensional. A scalar is the special case where the array has one row and one column. A vector is the case where the array consists of one row or one column. If variable x consists of one column, then x(i)
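As a concrete illustration of Remark 1.10, here is a minimal sketch of a scaled two-norm in M-script. The name SLAP_Norm matches the naming scheme of Figure 1.6 below; the scaling strategy shown is one simple choice, not necessarily what an optimized library does.

    function [ alpha ] = SLAP_Norm( x )
    % Compute alpha = ||x||_2 while avoiding overflow/underflow:
    % scale by the entry of largest magnitude before squaring.
      m = max( abs( x ) );
      if m == 0
        alpha = 0;
      else
        alpha = m * sqrt( sum( ( x / m ) .^ 2 ) );
      end
    end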

equals the ith component of that column. If variable x consists of one row, then x(i) equals the ith component of that row. Thus, in this sense, M-script is blind to whether a vector is a row or a column vector.

It may seem a bit strange that we are creating functions for the mentioned operations. For example, M-script can compute the inner product of column vectors x and y via the expression x' * y, and thus there is no need for the function. The reason we are writing explicit functions is to make the experience of building the library closer to how one would do it in practice with a language like the C programming language.

We expect the reader to be able to catch on without excessive explanation: M-script is a very simple and intuitive language. When in doubt, try help <topic> or doc <topic> at the prompt.

A list of functions for performing common vector-vector operations is given in Figure 1.6.

 Operation        Abbrev.  Definition        Function                       Approx. cost
                                                                            flops  memops
 Vector-vector operations
 Copy             copy     y := x            y = SLAP_Copy( x )             0      2n
 Vector scaling   scal     y := alpha x      y = SLAP_Scal( alpha, x )      n      2n
 Adding           add      z := x + y        z = SLAP_Add( x, y )           n      3n
 Scaled addition  axpy     z := alpha x + y  z = SLAP_Axpy( alpha, x, y )   2n     3n
 Dot product      dot      alpha := x^T y    alpha = SLAP_Dot( x, y )       2n     2n
 Length           norm     alpha := ||x||_2  alpha = SLAP_Norm( x )         2n     n

Figure 1.6: A summary of the more commonly encountered vector-vector routines in our library.

Exercise 1.11 Start building your library by implementing the functions in Figure 1.6. See directions on the class wiki page.

1.3 A Bit of History

The functions in Figure 1.6 are very similar in functionality to Fortran routines known as the level-1 Basic Linear Algebra Subprograms (BLAS) that are commonly used in scientific libraries. These were first proposed in the 1970s and were used in the development of one of the first linear algebra libraries, LINPACK. See:
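As a model for Exercise 1.11, a minimal sketch of one of these routines follows; error checking in the style of Figure 1.7 is omitted for brevity, and whether your version should check dimensions is up to you.

    function [ alpha ] = SLAP_Dot( x, y )
    % Compute alpha = x^T y, where x and y are vectors
    % (row or column) of the same length.
      n = length( x );
      alpha = 0;
      for i=1:n
        alpha = x(i) * y(i) + alpha;
      end
    end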

C. Lawson, R. Hanson, D. Kincaid, and F. Krogh, Basic Linear Algebra Subprograms for Fortran Usage, ACM Trans. on Math. Soft., 5 (3), 1979.

J. J. Dongarra, J. R. Bunch, C. B. Moler, and G. W. Stewart, LINPACK Users' Guide, SIAM, Philadelphia, 1979.

    function [ z ] = SLAP_Axpy( alpha, x, y )
    % Compute z = alpha * x + y
    %
    % If y is a column/row vector, then z is a matching vector.
    % This is because the operation is almost always used to
    % overwrite y:  y = SLAP_Axpy( alpha, x, y )

    [ m_x, n_x ] = size( x );
    [ m_y, n_y ] = size( y );

    if n_x == 1
      n = m_x;            % x is a column
    else
      n = n_x;            % x is a row
    end

    if ( n_y == 1 ) & ( m_y ~= n )
      error( 'axpy: incompatible lengths' );
    elseif ( m_y == 1 ) & ( n_y ~= n )
      error( 'axpy: incompatible lengths' );
    end

    z = y;
    for i=1:n
      z(i) = alpha * x(i) + z(i);
    end

Figure 1.7: Left: A routine for computing axpy. Right: Testing the axpy function in Octave. [The inputs and outputs of the test session were lost in transcription.]

1.4 Matrices and Matrix-Vector Multiplication

To understand what a matrix is in the context of linear algebra, we have to start with linear transformations, after which we can discuss how matrices and matrix-vector multiplication represent such transformations.

1.4.1 First: Linear transformations

Definition 1.12 (Linear transformation) A function L : R^n -> R^m is said to be a linear transformation if for all x, y in R^n and alpha in R:

- L(alpha x) = alpha L(x). (Transforming a scaled vector is the same as scaling the transformed vector.)
- L(x + y) = L(x) + L(y). (The transformation is distributive.)

Example 1.13 A rotation is a linear transformation. For example, let R_theta : R^2 -> R^2 be the transformation that rotates a vector through angle theta. Let x, y be in R^2 and alpha in R. Figure 1.8 illustrates that scaling a vector first by alpha and then rotating it yields the same result as rotating the vector first and then scaling it. Figure 1.10 illustrates that rotating x + y yields the same vector as adding the vectors after rotating them first.

Lemma 1.14 Let x, y be in R^n, alpha, beta in R, and L : R^n -> R^m be a linear transformation. Then

    L(alpha x + beta y) = alpha L(x) + beta L(y).

Proof: L(alpha x + beta y) = L(alpha x) + L(beta y) = alpha L(x) + beta L(y).

Lemma 1.15 Let {x_0, x_1, ..., x_{n-1}} be vectors in R^n and let L : R^n -> R^m be a linear transformation. Then

    L(x_0 + x_1 + ... + x_{n-1}) = L(x_0) + L(x_1) + ... + L(x_{n-1}).        (1.8)

Figure 1.8: An illustration that, for a rotation R_theta : R^2 -> R^2, one can scale the input first and then apply the transformation, or one can transform the input first and then scale the result: R_theta(2x) = 2 R_theta(x). [Panels (a)-(f), "Scale then rotate" versus "Rotate then scale", omitted.]

Figure 1.9: A generalization of Figure 1.8: R_theta(alpha x) = alpha R_theta(x).

While it is tempting to say that this is simply obvious, we are going to prove this rigorously. Whenever one tries to prove a result for a general n, where n is a natural number, one often uses a proof by induction. We are going to give the proof first, and then we will try to explain it.

Proof: Proof by induction on n.

Base case: n = 1. For this case, we must show that L(x_0) = L(x_0). This is trivially true.

Inductive step: Inductive Hypothesis (IH): Assume that the result is true for n = k, where k >= 1:

    L(x_0 + x_1 + ... + x_{k-1}) = L(x_0) + L(x_1) + ... + L(x_{k-1}).

We will show that the result is then also true for n = k + 1:

    L(x_0 + x_1 + ... + x_k) = L(x_0) + L(x_1) + ... + L(x_k).

Assume that k >= 1. Then

      L(x_0 + x_1 + ... + x_k)
    =    < expose extra term >
      L(x_0 + x_1 + ... + x_{k-1} + x_k)
    =    < associativity of vector addition >
      L((x_0 + x_1 + ... + x_{k-1}) + x_k)
    =    < Lemma 1.14 >
      L(x_0 + x_1 + ... + x_{k-1}) + L(x_k)
    =    < Inductive Hypothesis >
      L(x_0) + L(x_1) + ... + L(x_{k-1}) + L(x_k).

By the Principle of Mathematical Induction the result holds for all n.

Figure 1.10: An illustration that, for a rotation R_theta : R^2 -> R^2, one can add vectors first and then apply the transformation, or one can transform the vectors first and then add the results: R_theta(x + y) = R_theta(x) + R_theta(y). [Panels (a)-(f), "Add then rotate" versus "Rotate then add", omitted.]

The idea is as follows:

- The base case shows that the result is true for n = 1: L(x_0) = L(x_0).
- The inductive step shows that if the result is true for n = 1, then the result is true for n = 2, so that L(x_0 + x_1) = L(x_0) + L(x_1). Since the result is indeed true for n = 1 (as proven by the base case), we now know that the result is also true for n = 2.
- The inductive step also implies that if the result is true for n = 2, then it is also true for n = 3. Since we just reasoned that it is true for n = 2, we now know it is also true for n = 3: L(x_0 + x_1 + x_2) = L(x_0) + L(x_1) + L(x_2).
- And so forth.

Remark 1.16 The Principle of Mathematical Induction says that if one can show that a property holds for n = n_b, and one can show that if it holds for n = k, where k >= n_b, then it also holds for n = k + 1, then one can conclude that the property holds for all n >= n_b. In the above example (and quite often) n_b = 1.

Example 1.17 Show that sum_{i=0}^{n-1} i = n(n-1)/2.

Proof by induction:

Base case: n = 1. For this case, we must show that sum_{i=0}^{0} i = 1(1-1)/2:

    sum_{i=0}^{0} i  =  < definition of summation >  0  =  < arithmetic >  1(1-1)/2.

This proves the base case.

Inductive step: Inductive Hypothesis (IH): Assume that the result is true for n = k, where k >= 1:

    sum_{i=0}^{k-1} i = k(k-1)/2.

We will show that the result is then also true for n = k + 1:

    sum_{i=0}^{k} i = (k+1)((k+1)-1)/2 = (k+1)k/2.

Assume that k >= 1. Then

      sum_{i=0}^{k} i
    =    < split off last term >
      sum_{i=0}^{k-1} i + k
    =    < Inductive Hypothesis >
      k(k-1)/2 + k
    =    < algebra >
      (k^2 - k)/2 + 2k/2
    =    < algebra >
      (k^2 + k)/2
    =    < arithmetic >
      (k+1)k/2.

This proves the inductive step. By the Principle of Mathematical Induction the result holds for all n >= 1.

As we become more proficient, we will start combining steps. For now, we give lots of detail to make sure everyone stays on board.

Exercise 1.18 Use mathematical induction to prove that sum_{i=0}^{n-1} i^2 = (n-1)n(2n-1)/6.

Theorem 1.19 Let {x_0, x_1, ..., x_{n-1}} be vectors in R^n, {alpha_0, alpha_1, ..., alpha_{n-1}} be scalars in R, and L : R^n -> R^m be a linear transformation. Then

    L(alpha_0 x_0 + alpha_1 x_1 + ... + alpha_{n-1} x_{n-1}) = alpha_0 L(x_0) + alpha_1 L(x_1) + ... + alpha_{n-1} L(x_{n-1}).        (1.9)

Proof:

      L(alpha_0 x_0 + alpha_1 x_1 + ... + alpha_{n-1} x_{n-1})
    =    < Lemma 1.15 >
      L(alpha_0 x_0) + L(alpha_1 x_1) + ... + L(alpha_{n-1} x_{n-1})
    =    < definition of linear transformation >
      alpha_0 L(x_0) + alpha_1 L(x_1) + ... + alpha_{n-1} L(x_{n-1}).

Example 1.20 The transformation F( (chi_0, chi_1)^T ) = (chi_0 + chi_1, chi_0)^T is a linear transformation.

The way we prove this is to pick arbitrary alpha in R, x = (chi_0, chi_1)^T, and y = (psi_0, psi_1)^T, for which we then show that F(alpha x) = alpha F(x) and F(x + y) = F(x) + F(y):

- Show F(alpha x) = alpha F(x):

      F(alpha x) = F( (alpha chi_0, alpha chi_1)^T ) = (alpha chi_0 + alpha chi_1, alpha chi_0)^T
                 = alpha (chi_0 + chi_1, chi_0)^T = alpha F( (chi_0, chi_1)^T ) = alpha F(x).

- Show F(x + y) = F(x) + F(y):

      F(x + y) = F( (chi_0 + psi_0, chi_1 + psi_1)^T ) = ((chi_0 + psi_0) + (chi_1 + psi_1), chi_0 + psi_0)^T
               = ((chi_0 + chi_1) + (psi_0 + psi_1), chi_0 + psi_0)^T
               = (chi_0 + chi_1, chi_0)^T + (psi_0 + psi_1, psi_0)^T
               = F( (chi_0, chi_1)^T ) + F( (psi_0, psi_1)^T ) = F(x) + F(y).

Example 1.21 The transformation F( (chi, psi)^T ) = (chi + psi, chi + 1)^T is not a linear transformation. Recall that in order for it to be a linear transformation, F(alpha x) = alpha F(x). This means that F(0) = 0 (pick alpha = 0). Here 0 denotes the vector of all zeroes. Notice that for this particular F, F(0) = (0, 1)^T, which is not the zero vector, and thus this cannot be a linear transformation.

Remark 1.22 Always check what happens if you plug in the zero vector. However, if F(0) = 0, this does not mean that the transformation is a linear transformation. It merely means it might be.

Example 1.23 The transformation F( (chi, psi)^T ) = (chi psi, chi)^T is not a linear transformation.

The first check should be whether F(0) = 0. The answer in this case is yes. However,

    F( 2 (1, 1)^T ) = F( (2, 2)^T ) = (4, 2)^T,   while   2 F( (1, 1)^T ) = 2 (1, 1)^T = (2, 2)^T.

Hence, there is a vector x in R^2 and alpha in R such that F(alpha x) is not equal to alpha F(x). This means that F is not a linear transformation.

Exercise 1.24 For each of the following, determine whether it is a linear transformation or not:

    F( (chi_0, chi_1)^T ) = ...        F( (chi_0, chi_1)^T ) = ...

[The two specific transformations were lost in transcription.]

1.4.2 From linear transformation to matrix-vector multiplication

Now we are ready to link linear transformations to matrices and matrix-vector multiplication. The definitions of vector scaling and addition mean that any x in R^n can be written as

    x = ( chi_0     )         ( 1 )         ( 0 )                   ( 0 )
        ( chi_1     ) = chi_0 ( 0 ) + chi_1 ( 1 ) + ... + chi_{n-1} ( 0 )  =  sum_{j=0}^{n-1} chi_j e_j,
        ( ...       )         (...)         (...)                   (...)
        ( chi_{n-1} )         ( 0 )         ( 0 )                   ( 1 )

where e_j denotes the vector with a one in position j and zeroes everywhere else.

Definition 1.25 The vectors e_j in R^n mentioned above are called the unit basis vectors, and the notation e_j is reserved for these vectors.

Exercise 1.26 Let x, e_i be in R^n. Show that e_i^T x = x^T e_i = chi_i (the ith element of x).

Now, let L : R^n -> R^m be a linear transformation. Then, given x in R^n, the result of y = L(x) is a vector in R^m. But then

    y = L(x) = L( sum_{j=0}^{n-1} chi_j e_j ) = sum_{j=0}^{n-1} chi_j L(e_j) = sum_{j=0}^{n-1} chi_j a_j,

where we let a_j = L(e_j). The linear transformation L is completely described by these vectors {a_0, ..., a_{n-1}}: given any vector x, L(x) = sum_{j=0}^{n-1} chi_j a_j. By arranging these vectors as the columns of a two-dimensional array, the matrix A, we arrive at the observation that the matrix is simply a representation of the corresponding linear transformation L. If we let

    A = ( alpha_{0,0}     alpha_{0,1}     ...  alpha_{0,n-1}   )
        ( alpha_{1,0}     alpha_{1,1}     ...  alpha_{1,n-1}   )   =  ( a_0  a_1  ...  a_{n-1} ),
        ( ...                                                  )
        ( alpha_{m-1,0}   alpha_{m-1,1}   ...  alpha_{m-1,n-1} )

so that alpha_{i,j} equals the ith component of vector a_j, then

    Ax = L(x) = chi_0 a_0 + chi_1 a_1 + ... + chi_{n-1} a_{n-1}

       = ( alpha_{0,0} chi_0 + alpha_{0,1} chi_1 + ... + alpha_{0,n-1} chi_{n-1}       )
         ( alpha_{1,0} chi_0 + alpha_{1,1} chi_1 + ... + alpha_{1,n-1} chi_{n-1}       )
         ( ...                                                                          )
         ( alpha_{m-1,0} chi_0 + alpha_{m-1,1} chi_1 + ... + alpha_{m-1,n-1} chi_{n-1} ).

Definition 1.27 (R^{m x n}) The set of all m x n real valued matrices is denoted by R^{m x n}. Thus, A in R^{m x n} means that A is a real valued m x n matrix.

Remark 1.28 We adopt the convention that Roman lowercase letters are used for vector variables and Greek lowercase letters for scalars. Also, the elements of a vector are denoted by the Greek letter that corresponds to the Roman letter used to denote the vector. A table of corresponding letters is given in Figure 1.11.

 Matrix  Vector  Scalar   LaTeX     Code     Note
 A       a       alpha    \alpha    alpha
 B       b       beta     \beta     beta
 C       c       gamma    \gamma    gamma
 D       d       delta    \delta    delta
 E       e       epsilon  \epsilon  epsilon  e_j = jth unit basis vector
 F       f       phi      \phi      phi
 G       g       xi       \xi       xi
 H       h       eta      \eta      eta
 I                                           Used for identity matrix
 K       k       kappa    \kappa    kappa
 L       l       lambda   \lambda   lambda
 M       m       mu       \mu       mu       m = row dimension
 N       n       nu       \nu       nu       nu is shared with V; n = column dimension
 P       p       pi       \pi       pi
 Q       q       theta    \theta    theta
 R       r       rho      \rho      rho
 S       s       sigma    \sigma    sigma
 T       t       tau      \tau      tau
 U       u       upsilon  \upsilon  upsilon
 V       v       nu       \nu       nu       nu shared with N
 W       w       omega    \omega    omega
 X       x       chi      \chi      chi
 Y       y       psi      \psi      psi
 Z       z       zeta     \zeta     zeta

Figure 1.11: Correspondence between letters used for matrices (uppercase Roman) / vectors (lowercase Roman) and the symbols used to denote their scalar entries (lowercase Greek letters).

Definition 1.29 (Matrix-vector multiplication)

    ( alpha_{0,0}     ...  alpha_{0,n-1}   ) ( chi_0     )   ( alpha_{0,0} chi_0 + ... + alpha_{0,n-1} chi_{n-1}     )
    ( alpha_{1,0}     ...  alpha_{1,n-1}   ) ( chi_1     ) = ( alpha_{1,0} chi_0 + ... + alpha_{1,n-1} chi_{n-1}     )
    ( ...                                  ) ( ...       )   ( ...                                                   )
    ( alpha_{m-1,0}   ...  alpha_{m-1,n-1} ) ( chi_{n-1} )   ( alpha_{m-1,0} chi_0 + ... + alpha_{m-1,n-1} chi_{n-1} ).

Example 1.30 Compute Ax when A = ( 3  1  2 ; 1  0  1 ; 3  2  1 ) and x = ( 1, 0, 0 )^T.

Answer: (3, 1, 3)^T, the first column of the matrix!

It is not hard to see that if e_j is the jth unit basis vector as defined in Definition 1.25, then

    A e_j = ( a_0  a_1  ...  a_j  ...  a_{n-1} ) e_j = 0 a_0 + 0 a_1 + ... + 1 a_j + ... + 0 a_{n-1} = a_j.

Also, given a vector x, the dot product e_i^T x equals the ith entry in x, chi_i:

    e_i^T x = ( 0  ...  0  1  0  ...  0 ) ( chi_0, ..., chi_i, ..., chi_{n-1} )^T
            = 0 chi_0 + ... + 1 chi_i + ... + 0 chi_{n-1} = chi_i.

Figure 1.12: Computing the matrix that represents the rotation R_theta. (a)-(b) Trigonometry tells us the coefficients of R_theta(e_0) and R_theta(e_1). This then motivates the matrix that is given in (c), so that the rotation can be written as a matrix-vector multiplication.

Example 1.31 For A as in Example 1.30, e_1^T (A e_2) picks out alpha_{1,2}, the (1,2) element of the matrix. We notice that alpha_{i,j} = e_i^T (A e_j). Later, we will see that e_i^T A equals the ith row of matrix A and that

    alpha_{i,j} = e_i^T A e_j = e_i^T (A e_j) = (e_i^T A) e_j.

Example 1.32 Recall that a rotation R_theta : R^2 -> R^2 is a linear transformation. We now show how to compute the matrix, Q, that represents this rotation.

Given that the transformation is from R^2 to R^2, we know that the matrix will be a 2 x 2 matrix: it will take vectors of size two as input and will produce vectors of size two. We have learned that the first column of the matrix Q will equal R_theta(e_0) and the second column will equal R_theta(e_1). In Figure 1.12 we motivate that

    R_theta(e_0) = ( cos(theta) )        R_theta(e_1) = ( -sin(theta) )
                   ( sin(theta) )  and                  (  cos(theta) ).

We conclude that

    Q = ( cos(theta)  -sin(theta) )
        ( sin(theta)   cos(theta) ).

This means that a vector x = (chi_0, chi_1)^T is transformed into

    R_theta(x) = Qx = ( cos(theta)  -sin(theta) ) ( chi_0 ) = ( cos(theta) chi_0 - sin(theta) chi_1 )
                      ( sin(theta)   cos(theta) ) ( chi_1 )   ( sin(theta) chi_0 + cos(theta) chi_1 ).

This is a formula you may have seen in a precalculus or physics course when discussing change of coordinates, except with the coordinates chi_0 and chi_1 replaced by x and y, respectively.

Example 1.33 The transformation F( (chi_0, chi_1)^T ) = (chi_0 + chi_1, chi_0)^T was shown in Example 1.20 to be a linear transformation. Let us give an alternative proof of this: we will compute a possible matrix, A, that represents this linear transformation. We will then show that F(x) = Ax, which then means that F is a linear transformation.

To compute a possible matrix, consider:

    F( (1, 0)^T ) = (1, 1)^T   and   F( (0, 1)^T ) = (1, 0)^T.

Thus, if F is a linear transformation, then F(x) = Ax, where

    A = ( 1  1 )
        ( 1  0 ).

Now,

    Ax = ( 1  1 ) ( chi_0 ) = ( chi_0 + chi_1 ) = F( (chi_0, chi_1)^T ) = F(x),
         ( 1  0 ) ( chi_1 )   ( chi_0         )

which finishes the proof that F is a linear transformation.
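A quick way to see the rotation matrix Q of Example 1.32 in action is the following Octave/Matlab sketch (the variable names are ours):

    theta = pi / 4;                        % rotate through 45 degrees
    Q = [ cos(theta), -sin(theta)
          sin(theta),  cos(theta) ];

    x = [ 1; 0 ];                          % x = e_0
    disp( Q * x )                          % first column of Q: ( cos(theta); sin(theta) )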

Exercise 1.34 Show that the transformation in Example 1.21 is not a linear transformation by computing a possible matrix that represents it, and then showing that it does not represent it.

Exercise 1.35 Show that the transformation in Example 1.23 is not a linear transformation by computing a possible matrix that represents it, and then showing that it does not represent it.

Exercise 1.36 For each of the transformations in Exercise 1.24, compute a possible matrix that represents it and use it to show whether the transformation is linear.

1.4.3 Special Matrices

The identity matrix. Let L_I : R^n -> R^n be the function defined for every x in R^n as L_I(x) = x. Clearly L_I(alpha x + beta y) = alpha x + beta y = alpha L_I(x) + beta L_I(y), so that we recognize it as a linear transformation. We will denote the matrix that represents L_I by the letter I and call it the identity matrix. By the definition of a matrix, the jth column of I is given by L_I(e_j) = e_j. Thus, the identity matrix is given by

    I = ( 1  0  ...  0 )
        ( 0  1  ...  0 )
        ( ...          )
        ( 0  0  ...  1 ).

Notice that clearly Ix = x, and a column partitioning of I yields I = ( e_0  e_1  ...  e_{n-1} ). In M-script, an n x n identity matrix is generated by the command eye( n ).

Diagonal matrices.

Definition 1.37 (Diagonal matrix) A matrix A in R^{n x n} is said to be diagonal if alpha_{i,j} = 0 for all i not equal to j.

Example 1.38 Let A = ( 3  0  0 ; 0 -1  0 ; 0  0  2 ) and x = ( 2, 1, -2 )^T. Then

    Ax = ( 3  0  0 ) (  2 )   (  6 )
         ( 0 -1  0 ) (  1 ) = ( -1 )
         ( 0  0  2 ) ( -2 )   ( -4 ).

We notice that a diagonal matrix scales individual components of a vector by the corresponding diagonal element of the matrix.

Exercise 1.39 Let D = ( 2  0  0 ; 0  3  0 ; 0  0 -1 ). What linear transformation, L, does this matrix represent? In particular, answer the following questions:

- L : R^? -> R^??. Give ? and ??.
- A linear transformation can be described by how it transforms the unit basis vectors: L(e_0) = ?, L(e_1) = ?, L(e_2) = ?.
- L( (chi_0, chi_1, chi_2)^T ) = ?

In M-script, an n x n diagonal matrix with indicated components results from the function call diag( x ):

Example 1.40

    > x = [ 1 2 3 ]
    > A = diag( x )
    A =

       1   0   0
       0   2   0
       0   0   3

Here x can be either a row or a column vector. When A is already a matrix, this same function returns the diagonal of A as a column vector:

    > y = diag( A )
    y =

       1
       2
       3

In linear algebra an element-wise vector-vector product is not a meaningful operation: when x, y are in R^n the product xy has no meaning. However, in M-script, if x and y are both column vectors (or matrices, for that matter), the operation x .* y is an element-wise multiplication. Thus, diag( x ) * y can be alternatively written as x .* y.

Triangular matrices.

Definition 1.41 (Triangular matrix) A matrix A in R^{n x n} is said to be

- lower triangular if alpha_{i,j} = 0 for all i < j;
- strictly lower triangular if alpha_{i,j} = 0 for all i <= j;
- unit lower triangular if alpha_{i,j} = 0 for all i < j and alpha_{i,j} = 1 if i = j;
- upper triangular if alpha_{i,j} = 0 for all i > j;
- strictly upper triangular if alpha_{i,j} = 0 for all i >= j;
- unit upper triangular if alpha_{i,j} = 0 for all i > j and alpha_{i,j} = 1 if i = j.

If a matrix is either lower or upper triangular, it is said to be triangular.

Exercise 1.42 Give examples for each of the triangular matrices in Definition 1.41.

Exercise 1.43 Show that a matrix that is both lower and upper triangular is in fact a diagonal matrix.

In M-script, given an n x n matrix A, its lower and upper triangular parts can be extracted by the calls tril( A ) and triu( A ), respectively. Its strictly lower and strictly upper triangular parts can be extracted by the calls tril( A, -1 ) and triu( A, 1 ), respectively.
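To see the equivalence claimed above, a quick Octave/Matlab check (the example values are our own):

    x = [ 1 2 3 ]';
    y = [ 4 5 6 ]';
    disp( diag( x ) * y )     % matrix-vector product with a diagonal matrix
    disp( x .* y )            % element-wise product: same result, (4; 10; 18)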

Exercise 1.44 Add the functions trilu( A ) and triuu( A ) to your SLAP library. These functions return the lower and upper triangular part of A, respectively, with the diagonal set to ones. Thus, for example,

    > A = [ 2 -1  0
            1  2 -1
            0  1  2 ];
    > trilu( A )
    ans =

       1   0   0
       1   1   0
       0   1   1

Hint: use the tril and eye functions. You will also want to use the size function to extract the dimensions of A, to pass in to eye. A sketch is given below.

Transpose matrix.

Definition 1.45 (Transpose matrix) Let A be in R^{m x n} and B in R^{n x m}. Then B is said to be the transpose of A if, for 0 <= i < m and 0 <= j < n, beta_{j,i} = alpha_{i,j}. The transpose of a matrix A is denoted by A^T. (We have already used ^T to indicate a row vector, which is consistent with the above definition.)

Rather than supplying our own function for transposing a matrix or vector, we will rely on the M-script operation: B = A' creates a matrix that equals the transpose of matrix A. There is a reason for this: rarely do we explicitly transpose a matrix. Instead, we will try to always write our routines so that the operation is computed as if the matrix is transposed, but without performing the transpose, so that we do not pay the overhead in time and memory associated with the transpose.

Example 1.46 Let A = ( 1  2  3 ; 3  1  0 ) and x = ( 1, 2, 4 )^T. Then

    A^T = ( 1  3 )
          ( 2  1 )   and   x^T = ( 1  2  4 ).
          ( 3  0 )
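Returning to Exercise 1.44, one possible realization of trilu following the hint (a sketch; triuu is analogous, and your own version may differ):

    function [ B ] = trilu( A )
    % Return the lower triangular part of A with the diagonal set to ones.
      [ m, n ] = size( A );
      B = tril( A, -1 ) + eye( m, n );
    end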

Remark 1.47 Clearly, (A^T)^T = A.

Symmetric matrices.

Definition 1.48 (Symmetric matrix) A matrix A in R^{n x n} is said to be symmetric if A = A^T.

Exercise 1.49 Show that a triangular matrix that is also symmetric is in fact a diagonal matrix.

1.4.4 Computing the matrix-vector multiply

The output of a matrix-vector multiply is itself a vector: if A is in R^{m x n} and x is in R^n, then y = Ax is a vector of length m (y in R^m). We now discuss different ways in which to compute this operation.

Via dot products: If one looks at a typical row in

    ( alpha_{0,0} chi_0 + ... + alpha_{0,n-1} chi_{n-1}     )
    ( ...                                                   )
    ( alpha_{i,0} chi_0 + ... + alpha_{i,n-1} chi_{n-1}     )
    ( ...                                                   )
    ( alpha_{m-1,0} chi_0 + ... + alpha_{m-1,n-1} chi_{n-1} ),

one notices that alpha_{i,0} chi_0 + alpha_{i,1} chi_1 + ... + alpha_{i,n-1} chi_{n-1} is just the dot product of the vectors

    a~_i = ( alpha_{i,0}, alpha_{i,1}, ..., alpha_{i,n-1} )^T   and   x = ( chi_0, chi_1, ..., chi_{n-1} )^T;

in other words, the dot product of the ith row of A, viewed as a column vector, with the vector x, which one can visualize as

    ( psi_0     )   ( alpha_{0,0}    alpha_{0,1}    ...  alpha_{0,n-1}   ) ( chi_0     )
    ( ...       )   ( ...                                                ) ( chi_1     )
    ( psi_i     ) = ( alpha_{i,0}    alpha_{i,1}    ...  alpha_{i,n-1}   ) ( ...       ).
    ( ...       )   ( ...                                                ) ( chi_{n-1} )
    ( psi_{m-1} )   ( alpha_{m-1,0}  alpha_{m-1,1}  ...  alpha_{m-1,n-1} )

Example 1.50 Let A = ( 1  2  0 ; 0  1  3 ; 3  0  1 ) and x = ( 1, 2, 3 )^T. Then

    Ax = ( (1)(1) + (2)(2) + (0)(3) )   (  5 )
         ( (0)(1) + (1)(2) + (3)(3) ) = ( 11 )
         ( (3)(1) + (0)(2) + (1)(3) )   (  6 ).

The algorithm for computing y := Ax + y is given in Figure 1.13 (left). To its right is the M-script function that implements the matrix-vector multiplication. If initially y = 0, then these compute y := Ax.

Now, let us revisit the fact that the matrix-vector multiply can be computed as dot products of the rows of A with the vector x. Think of the matrix A as individual rows:

    A = ( a~_0^T     )
        ( a~_1^T     )
        ( ...        )
        ( a~_{m-1}^T ),

where a~_i is the column vector which, when transposed, becomes the ith row of the matrix. Then

    Ax = ( a~_0^T     )       ( a~_0^T x     )
         ( a~_1^T     )  x  = ( a~_1^T x     )
         ( ...        )       ( ...          )
         ( a~_{m-1}^T )       ( a~_{m-1}^T x ),

    for i = 0, ..., m-1                    for i=1:m
      for j = 0, ..., n-1                    for j=1:n
        psi_i := psi_i + alpha_{i,j} chi_j     y(i) = y(i) + A(i,j) * x(j);
      endfor                                 end
    endfor                                 end

Figure 1.13: Algorithm and M-script for computing y := Ax + y.

    for i = 0, ..., m-1                    for i=1:m
      psi_i := psi_i + a~_i^T x              y(i) = y(i) + FLA_Dot( A(i,:), x );
    endfor                                 end

Figure 1.14: Algorithm and M-script for computing y := Ax + y via dot products. A(i,:) equals the ith row of array A.

    for j = 0, ..., n-1                    for j=1:n
      for i = 0, ..., m-1                    for i=1:m
        psi_i := psi_i + alpha_{i,j} chi_j     y(i) = y(i) + A(i,j) * x(j);
      endfor                                 end
    endfor                                 end

Figure 1.15: Algorithm and M-script for computing y := Ax + y. This is exactly the algorithm in Figure 1.13, except with the order of the loops interchanged.

    for j = 0, ..., n-1                    for j=1:n
      y := chi_j a_j + y                     y = FLA_Axpy( x(j), A(:,j), y );
    endfor                                 end

Figure 1.16: Algorithm and M-script for computing y := Ax + y via axpy operations. A(:,j) equals the jth column of array A.

which is exactly what we reasoned before. An algorithm and corresponding M-script implementation that exploit this insight are given in Figure 1.14.

Via axpy operations: Next, we note that, by definition,

    ( alpha_{0,0} chi_0 + ... + alpha_{0,n-1} chi_{n-1}     )         ( alpha_{0,0}   )                   ( alpha_{0,n-1}   )
    ( ...                                                   ) = chi_0 ( ...           ) + ... + chi_{n-1} ( ...             ).
    ( alpha_{m-1,0} chi_0 + ... + alpha_{m-1,n-1} chi_{n-1} )         ( alpha_{m-1,0} )                   ( alpha_{m-1,n-1} )

This suggests the alternative algorithm and M-script for computing y := Ax + y given in Figure 1.15, which are exactly the algorithm and M-script given in Figure 1.13 (left) but with the two loops interchanged.

If we let a_j denote the vector that equals the jth column of A, then A = ( a_0  a_1  ...  a_{n-1} ) and

    Ax = chi_0 a_0 + chi_1 a_1 + ... + chi_{n-1} a_{n-1}.

This suggests the algorithm and M-script for computing y := Ax + y given in Figure 1.16. It illustrates how matrix-vector multiplication can be expressed in terms of axpy operations.

Example 1.51 Let A = ( 1  2  0 ; 0  1  3 ; 3  0  1 ) and x = ( 1, 2, 3 )^T, as in Example 1.50. Then

    Ax = 1 ( 1 ) + 2 ( 2 ) + 3 ( 0 )   (  5 )
           ( 0 )     ( 1 )     ( 3 ) = ( 11 ),
           ( 3 )     ( 0 )     ( 1 )   (  6 )

the same result as before.
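In M-script, one can verify that the dot-based and axpy-based orderings produce the same result (a throwaway check using our example values):

    A = [ 1 2 0; 0 1 3; 3 0 1 ];
    x = [ 1 2 3 ]';
    y = zeros( 3, 1 );
    for i=1:3                       % dot products with the rows
      y(i) = A(i,:) * x;
    end
    z = zeros( 3, 1 );
    for j=1:3                       % axpy operations with the columns
      z = x(j) * A(:,j) + z;
    end
    disp( [ y, z ] )                % both columns equal A*x = (5; 11; 6)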

1.4.5 Cost of matrix-vector multiplication

Computing y := Ax + y, where A is in R^{m x n}, requires mn multiplies and mn adds, for a total of 2mn floating point operations (flops). This count is the same regardless of the order of the loops (i.e., regardless of whether the matrix-vector multiply is organized by computing dot operations with the rows or axpy operations with the columns).

1.4.6 Scaling and adding matrices

Theorem 1.52 Let L_A : R^n -> R^m be a linear transformation, let alpha be in R, and, for all x in R^n, define the function L_B : R^n -> R^m by L_B(x) = alpha L_A(x). Then L_B is a linear transformation.

Proof: Let x, y be in R^n and beta, gamma in R. Then

    L_B(beta x + gamma y) = alpha L_A(beta x + gamma y) = alpha (beta L_A(x) + gamma L_A(y))
                          = alpha beta L_A(x) + alpha gamma L_A(y) = beta (alpha L_A(x)) + gamma (alpha L_A(y))
                          = beta L_B(x) + gamma L_B(y).

Hence L_B is a linear transformation.

Now, let A be the matrix that represents L_A. Then, for all x in R^n, alpha(Ax) = alpha L_A(x) = L_B(x). Since L_B is a linear transformation, there should be a matrix B such that, for all x in R^n, Bx = L_B(x) = alpha(Ax). One way to find how that matrix relates to alpha and A is to recall that b_j = Be_j, the jth column of B. Thus,

    b_j = Be_j = alpha(Ae_j) = alpha a_j,

where a_j equals the jth column of A. We conclude that B is computed from A by scaling each column by alpha. But that simply means that each element of B is scaled by alpha. This motivates the following definition.

Definition 1.53 (Scaling a matrix) If A is in R^{m x n} and alpha in R, then

    alpha ( alpha_{0,0}    ...  alpha_{0,n-1}   )   ( alpha alpha_{0,0}    ...  alpha alpha_{0,n-1}   )
          ( ...                                 ) = ( ...                                             ).
          ( alpha_{m-1,0}  ...  alpha_{m-1,n-1} )   ( alpha alpha_{m-1,0}  ...  alpha alpha_{m-1,n-1} )

An alternative motivation for this definition is to consider

    alpha(Ax) = alpha ( alpha_{0,0} chi_0 + ... + alpha_{0,n-1} chi_{n-1}     )   ( alpha alpha_{0,0} chi_0 + ... + alpha alpha_{0,n-1} chi_{n-1}     )
                      ( ...                                                   ) = ( ...                                                               )
                      ( alpha_{m-1,0} chi_0 + ... + alpha_{m-1,n-1} chi_{n-1} )   ( alpha alpha_{m-1,0} chi_0 + ... + alpha alpha_{m-1,n-1} chi_{n-1} )

              = ( alpha alpha_{0,0}    ...  alpha alpha_{0,n-1}   ) ( chi_0     )
                ( ...                                             ) ( ...       )  =  (alpha A)x.
                ( alpha alpha_{m-1,0}  ...  alpha alpha_{m-1,n-1} ) ( chi_{n-1} )

Remark 1.54 Since, by design, alpha(Ax) = (alpha A)x, we can drop the parentheses and write alpha Ax, which also equals A(alpha x) since L(x) = Ax is a linear transformation.

Theorem 1.55 Let L_A : R^n -> R^m and L_B : R^n -> R^m both be linear transformations and, for all x in R^n, define the function L_C : R^n -> R^m by L_C(x) = L_A(x) + L_B(x). Then L_C is a linear transformation.

Exercise 1.56 Prove Theorem 1.55.

Now, let A, B, and C be the matrices that represent L_A, L_B, and L_C from Theorem 1.55, respectively. Then, for all x in R^n, Cx = L_C(x) = L_A(x) + L_B(x). One way to find how matrix C relates to matrices A and B is to exploit the fact that

    c_j = Ce_j = L_C(e_j) = L_A(e_j) + L_B(e_j) = Ae_j + Be_j = a_j + b_j,

where a_j, b_j, and c_j equal the jth columns of A, B, and C, respectively. Thus, the jth column of C equals the sum of the corresponding columns of A and B. But that simply means that each element of C equals the sum of the corresponding elements of A and B:

Definition 1.57 (Matrix addition) If A, B are in R^{m x n}, then

    ( alpha_{0,0}    ...  alpha_{0,n-1}   )   ( beta_{0,0}    ...  beta_{0,n-1}   )   ( alpha_{0,0} + beta_{0,0}        ...  alpha_{0,n-1} + beta_{0,n-1}     )
    ( ...                                 ) + ( ...                               ) = ( ...                                                                   ).
    ( alpha_{m-1,0}  ...  alpha_{m-1,n-1} )   ( beta_{m-1,0}  ...  beta_{m-1,n-1} )   ( alpha_{m-1,0} + beta_{m-1,0}    ...  alpha_{m-1,n-1} + beta_{m-1,n-1} )

Exercise 1.58 (Note: I have changed this question from before.) Give a motivation for matrix addition by considering a linear transformation L_C(x) = L_A(x) + L_B(x).

1.4.7 Partitioning matrices and vectors into submatrices (blocks) and subvectors

Theorem 1.59 Let A be in R^{m x n}, x in R^n, and y in R^m. Let

    m = m_0 + m_1 + ... + m_{M-1},  with  m_i >= 0  for i = 0, ..., M-1, and
    n = n_0 + n_1 + ... + n_{N-1},  with  n_j >= 0  for j = 0, ..., N-1.

Partition

    A = ( A_{0,0}    A_{0,1}    ...  A_{0,N-1}   )        ( x_0     )            ( y_0     )
        ( A_{1,0}    A_{1,1}    ...  A_{1,N-1}   )    x = ( x_1     )    and y = ( y_1     ),
        ( ...                                    ),       ( ...     ),           ( ...     )
        ( A_{M-1,0}  A_{M-1,1}  ...  A_{M-1,N-1} )        ( x_{N-1} )            ( y_{M-1} )

with A_{i,j} in R^{m_i x n_j}, x_j in R^{n_j}, and y_i in R^{m_i}. Then y = Ax implies y_i = sum_{j=0}^{N-1} A_{i,j} x_j.

This theorem is intuitively true, but messy to prove carefully, and therefore we will not give its proof, relying on examples instead.

Remark 1.60 If one partitions matrix A, vector x, and vector y into blocks, and one makes sure the dimensions match up, then blocked matrix-vector multiplication proceeds exactly as does a regular matrix-vector multiplication, except that individual multiplications of scalars commute while (in general) individual multiplications with matrix and vector blocks (submatrices and subvectors) do not.

Example 1.61 Consider the partitioning

    A = ( A_00    a_01  A_02   )        ( x_0  )            ( y_0  )
        ( a_10^T  a_11  a_12^T ),   x = ( chi_1 ),   and y = ( psi_1 ),        (1.10), (1.11)
        ( A_20    a_21  A_22   )        ( x_2  )            ( y_2  )

where y_0, y_2 are in R^2. Then

    y = ( y_0   )   ( A_00 x_0 + a_01 chi_1 + A_02 x_2       )
        ( psi_1 ) = ( a_10^T x_0 + a_11 chi_1 + a_12^T x_2   ),
        ( y_2   )   ( A_20 x_0 + a_21 chi_1 + A_22 x_2       )

where a_11 denotes the scalar alpha_11. [The numerical instance worked in the original example was lost in transcription.]
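A small Octave/Matlab check of Theorem 1.59 for a 2 x 2 blocking (the matrix and the partition sizes are our choice):

    A = [ 1 2 0 1
          0 1 3 2
          3 0 1 1
          1 1 0 2 ];
    x = [ 1 2 3 4 ]';

    % Partition A into 2x2 blocks; x and y into two subvectors of length 2.
    A11 = A(1:2,1:2); A12 = A(1:2,3:4);
    A21 = A(3:4,1:2); A22 = A(3:4,3:4);
    x1 = x(1:2); x2 = x(3:4);

    y1 = A11 * x1 + A12 * x2;      % y_i = sum_j A_{i,j} x_j
    y2 = A21 * x1 + A22 * x2;
    disp( [ [ y1; y2 ], A * x ] )  % the two columns match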

Remark 1.62 The labeling of the submatrices and subvectors in (1.10) and (1.11) was carefully chosen to convey information. The letters that are used convey information about the shapes: for a_01 and a_21, the use of a lowercase Roman letter indicates they are column vectors, while the ^T superscripts in a_10^T and a_12^T indicate that they are row vectors. The symbols alpha_11 and chi_1 indicate these are scalars. We will use these conventions consistently to enhance readability.

Algorithm: y := Mvmult_unb_var1( A, x, y )

  Partition  A -> ( A_T ),  y -> ( y_T ),  where A_T is 0 x n and y_T is 0 x 1.
                  ( A_B )        ( y_B )
  while m(A_T) < m(A) do
    Repartition
      ( A_T ) -> ( A_0   ),  ( y_T ) -> ( y_0   ),  where a_1^T is a row and psi_1 a scalar.
      ( A_B )    ( a_1^T )   ( y_B )    ( psi_1 )
                 ( A_2   )               ( y_2   )
    psi_1 := a_1^T x + psi_1
    Continue with
      ( A_T ) <- ( A_0   ),  ( y_T ) <- ( y_0   ).
      ( A_B )    ( a_1^T )   ( y_B )    ( psi_1 )
                 ( A_2   )               ( y_2   )
  endwhile

Algorithm: y := Mvmult_unb_var2( A, x, y )

  Partition  A -> ( A_L | A_R ),  x -> ( x_T ),  where A_L is m x 0 and x_T is 0 x 1.
                                       ( x_B )
  while m(x_T) < m(x) do
    Repartition
      ( A_L | A_R ) -> ( A_0 | a_1 | A_2 ),  ( x_T ) -> ( x_0   ),  where a_1 is a column and chi_1 a scalar.
                                             ( x_B )    ( chi_1 )
                                                        ( x_2   )
    y := chi_1 a_1 + y
    Continue with
      ( A_L | A_R ) <- ( A_0 | a_1 | A_2 ),  ( x_T ) <- ( x_0   ).
                                             ( x_B )    ( chi_1 )
                                                        ( x_2   )
  endwhile

Figure 1.17: The algorithms in Figures 1.14 and 1.16, represented in FLAME notation.

1.5 A High Level Representation of Algorithms

It is our experience that many, if not most, errors in coding are related to indexing mistakes. We now discuss how matrix algorithms can be described without excessive indexing.

1.5.1 Matrix-vector multiplication

In Figure 1.17, we present algorithms for computing y := Ax + y, already given in Figures 1.14 and 1.16, using a notation that is meant to hide indexing. Let us discuss the algorithm on the left in detail. We compare and contrast the partitioning of matrix A and vector y in Figure 1.14,

    A = ( a~_0^T     )            ( psi_0     )
        ( ...        )   and  y = ( ...       ),
        ( a~_{m-1}^T )            ( psi_{m-1} )

with the partitioning in Figure 1.17 (left),

    A = ( A_T ) = ( A_0   )            ( y_T )   ( y_0   )
        ( A_B )   ( a_1^T )   and  y = ( y_B ) = ( psi_1 ).
                  ( A_2   )                      ( y_2   )

In a nutshell, during the ith iteration (starting with i = 0),

    A_0 = ( a~_0^T; ...; a~_{i-1}^T ),  a_1^T = a~_i^T,  A_2 = ( a~_{i+1}^T; ...; a~_{m-1}^T ),

and

    y_0 = ( psi_0; ...; psi_{i-1} ),  psi_1 = psi_i,  y_2 = ( psi_{i+1}; ...; psi_{m-1} ).

Thus, psi_1 := a_1^T x + psi_1 in Figure 1.17 is the same as the computation psi_i := a~_i^T x + psi_i in Figure 1.14. Thus, ( A_T; A_B ) and ( y_T; y_B ) represent two sets of rows of A and elements of y, respectively:

- those with which the algorithm is finished (A_T and y_T), and
- those with which the algorithm will compute in the future (A_B and y_B).

Here the subscripts T and B stand for Top and Bottom, respectively.

The Repartition step exposes the top row of A_B and the top element of y_B so that the algorithm can update that element with psi_1 := a_1^T x + psi_1. The Continue with step adds a_1^T and psi_1 to A_T and y_T, respectively, since these parts of A and y are now done. The initial Partition step, where A_T is 0 x n and y_T is 0 x 1, initializes A_T and y_T to be empty, since initially the algorithm is finished with no rows of A and no elements of y.

With this explanation, we believe that both algorithms in Figure 1.17 become intuitive. The subscripts L and R in the algorithm on the right denote the Left and Right parts of matrix A, respectively.

Algorithm: y := Mvmult_t_unb_var1( A, x, y )

  Partition  A -> ( A_L | A_R ),  y -> ( y_T; y_B ),  where A_L is m x 0 and y_T is 0 x 1.
  while m(y_T) < m(y) do
    Repartition  ( A_L | A_R ) -> ( A_0 | a_1 | A_2 ),  ( y_T; y_B ) -> ( y_0; psi_1; y_2 ),
      where a_1 is a column and psi_1 a scalar.
    psi_1 := a_1^T x + psi_1
    Continue with  ( A_L | A_R ) <- ( A_0 | a_1 | A_2 ),  ( y_T; y_B ) <- ( y_0; psi_1; y_2 ).
  endwhile

Algorithm: y := Mvmult_t_unb_var2( A, x, y )

  Partition  A -> ( A_T; A_B ),  x -> ( x_T; x_B ),  where A_T is 0 x n and x_T is 0 x 1.
  while m(A_T) < m(A) do
    Repartition  ( A_T; A_B ) -> ( A_0; a_1^T; A_2 ),  ( x_T; x_B ) -> ( x_0; chi_1; x_2 ),
      where a_1^T is a row and chi_1 a scalar.
    y := chi_1 a_1 + y
    Continue with  ( A_T; A_B ) <- ( A_0; a_1^T; A_2 ),  ( x_T; x_B ) <- ( x_0; chi_1; x_2 ).
  endwhile

Figure 1.18: Algorithms for computing y := A^T x + y. Notice that matrix A is not explicitly transposed. Be sure to compare and contrast with the algorithms in Figure 1.17.

1.5.2 Transpose matrix-vector multiplication

Example 1.63 Let A = ( 1  2  0 ; 0  1  3 ; 3  0  1 ) and x = ( 1, 2, 3 )^T. Then

    A^T x = ( 1  0  3 ) ( 1 )   ( 10 )
            ( 2  1  0 ) ( 2 ) = (  4 ).
            ( 0  3  1 ) ( 3 )   (  9 )

The thing to notice is that what was a column in A becomes a row in A^T.

The above example motivates the observation that if

    A = ( A_T ) = ( A_0   )
        ( A_B )   ( a_1^T )
                  ( A_2   ),

then

    A^T = ( A_T^T | A_B^T ) = ( A_0^T | a_1 | A_2^T ).

Moreover, if A = ( A_L | A_R ) = ( A_0 | a_1 | A_2 ), then

    A^T = ( A_L^T )   ( A_0^T )
          ( A_R^T ) = ( a_1^T ).
                      ( A_2^T )

This motivates the algorithms for computing y := A^T x + y in Figure 1.18.

Algorithm: y := Mvmult_unb_var1B( A, x, y )

  Partition  A -> ( A_TL  A_TR ),  x -> ( x_T ),  y -> ( y_T ),
                  ( A_BL  A_BR )        ( x_B )        ( y_B )
  where A_TL is 0 x 0 and x_T, y_T are 0 x 1.
  while m(A_TL) < m(A) do
    Repartition
      ( A_TL  A_TR ) -> ( A_00    a_01      A_02   ),  ( x_T ) -> ( x_0   ),  ( y_T ) -> ( y_0   ),
      ( A_BL  A_BR )    ( a_10^T  alpha_11  a_12^T )   ( x_B )    ( chi_1 )   ( y_B )    ( psi_1 )
                        ( A_20    a_21      A_22   )              ( x_2   )              ( y_2   )
      where alpha_11, chi_1, and psi_1 are scalars.
    psi_1 := a_10^T x_0 + alpha_11 chi_1 + a_12^T x_2 + psi_1
    Continue with the corresponding 3 x 3 to 2 x 2 regrouping.
  endwhile

Algorithm: y := Mvmult_unb_var2B( A, x, y )

  The same partitioning and loop, but with the update replaced by

    y_0   := chi_1 a_01 + y_0
    psi_1 := chi_1 alpha_11 + psi_1
    y_2   := chi_1 a_21 + y_2

Figure 1.19: The algorithms in Figure 1.17, exposing more submatrices and vectors. Works for square matrices, and prepares for the discussion on multiplying with triangular and symmetric matrices.

1.5.3 Triangular matrix-vector multiplication

Theorem 1.64 Let U be an upper triangular matrix. Partition

    U = ( U_TL  U_TR )   ( U_00    u_01        U_02   )
        ( U_BL  U_BR ) = ( u_10^T  upsilon_11  u_12^T ),
                         ( U_20    u_21        U_22   )

where U_TL and U_00 are square matrices. Then

- U_BL = 0, u_10^T = 0, U_20 = 0, and u_21 = 0, where 0 indicates a matrix or vector of zeroes of the appropriate dimensions; and
- U_TL, U_00, U_22, and U_BR are upper triangular matrices.

Rather than proving the above theorem, we look at an example, after which we will declare it "intuitively obvious":

Example 1.65 Consider the 5 x 5 upper triangular matrix

    U = ( U_00    u_01        U_02   )   ( 1  2 | 1 | 0  2 )
        ( u_10^T  upsilon_11  u_12^T ) = ( 0  1 | 2 | 1  0 )
        ( U_20    u_21        U_22   )   ( 0  0 | 2 | 1  2 )
                                         ( 0  0 | 0 | 1  1 )
                                         ( 0  0 | 0 | 0  2 ),

where U_00 is 2 x 2. We notice that u_10^T = 0, U_20 = 0, and u_21 = 0.

A similar theorem holds for lower triangular matrices:

Theorem 1.66 Let L be a lower triangular matrix. Partition

    L = ( L_TL  L_TR )   ( L_00    l_01       L_02   )
        ( L_BL  L_BR ) = ( l_10^T  lambda_11  l_12^T ),
                         ( L_20    l_21       L_22   )

where L_TL and L_00 are square matrices. Then

- L_TR = 0, l_01 = 0, L_02 = 0, and l_12^T = 0, where 0 indicates a matrix or vector of zeroes of the appropriate dimensions; and
- L_TL, L_00, L_22, and L_BR are lower triangular matrices.

Let us consider y := Ux, where U is an n x n upper triangular matrix. The purpose of the game is to take advantage of the zeroes below the diagonal. What we are going to do first is to take the matrix-vector multiplication algorithms in Figure 1.17 and change them into algorithms that partition the matrix in the same way U will be partitioned. As a consequence, both x and y need to be partitioned to make sure that the dimensions match. The result is given in Figure 1.19.

Comparing the algorithms on the left in Figures 1.17 and 1.19, we notice that the single update psi_1 := a_1^T x + psi_1 now becomes

    psi_1 := ( a_10^T  alpha_11  a_12^T ) ( x_0; chi_1; x_2 ) + psi_1 = a_10^T x_0 + alpha_11 chi_1 + a_12^T x_2 + psi_1.

Comparing the algorithms on the right in Figures 1.17 and 1.19, we notice that the single update y := chi_1 a_1 + y now becomes

    ( y_0   )          ( a_01     )   ( y_0   )
    ( psi_1 ) := chi_1 ( alpha_11 ) + ( psi_1 ),
    ( y_2   )          ( a_21     )   ( y_2   )

or, the separate computations y_0 := chi_1 a_01 + y_0, psi_1 := chi_1 alpha_11 + psi_1, and y_2 := chi_1 a_21 + y_2.

Now we are ready to change the algorithms in Figure 1.19 into algorithms that exploit the upper triangular structure of matrix U, as indicated in Figure 1.20. An index-based sketch of the same idea is given below.

Exercise 1.67 Modify the algorithms in Figure 1.19 to compute y := Lx, where L is a lower triangular matrix.

Exercise 1.68 Modify the algorithms in Figure 1.20 so that x is overwritten with x := Ux, without using the vector y.

Exercise 1.69 Reason why the algorithm you developed for Exercise 1.67 cannot be trivially changed so that x := Lx without requiring y. What is the solution?

Exercise 1.70 Develop algorithms for computing x := U^T x and x := L^T x, where U and L are respectively upper triangular and lower triangular, without explicitly transposing matrices U and L.
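In plain M-script (with indices rather than FLAME notation), exploiting the upper triangular structure amounts to starting the inner loop at the diagonal. A minimal sketch, with a function name of our choosing:

    function [ y ] = trmv_upper( U, x )
    % Compute y = U * x, where U is n x n upper triangular,
    % touching only the entries on and above the diagonal.
      n = length( x );
      y = zeros( n, 1 );
      for i=1:n
        for j=i:n                   % skip the zeroes below the diagonal
          y(i) = y(i) + U(i,j) * x(j);
        end
      end
    end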

Algorithm: y := Trmv_un_unb_var1( U, x, y )

  Partition  U -> ( U_TL  U_TR ),  x -> ( x_T ),  y -> ( y_T ),
                  ( 0     U_BR )        ( x_B )        ( y_B )
  where U_TL is 0 x 0 and x_T, y_T are 0 x 1.
  while m(U_TL) < m(U) do
    Repartition
      ( U_TL  U_TR ) -> ( U_00  u_01        U_02   ),  ( x_T ) -> ( x_0   ),  ( y_T ) -> ( y_0   ),
      ( 0     U_BR )    ( 0     upsilon_11  u_12^T )   ( x_B )    ( chi_1 )   ( y_B )    ( psi_1 )
                        ( 0     0           U_22   )              ( x_2   )              ( y_2   )
      where upsilon_11, chi_1, and psi_1 are scalars.
    psi_1 := upsilon_11 chi_1 + u_12^T x_2 + psi_1
    Continue with the corresponding regrouping.
  endwhile

Algorithm: y := Trmv_un_unb_var2( U, x, y )

  The same partitioning and loop, but with the update replaced by

    y_0   := chi_1 u_01 + y_0
    psi_1 := chi_1 upsilon_11 + psi_1

  (since u_21 = 0, the update of y_2 disappears).

Figure 1.20: The algorithms in Figure 1.19, modified to take advantage of the upper triangular structure of U. They compute y := Ux if y = 0 upon entry.

Cost: Let us analyze the algorithm in Figure 1.20 (left). The cost is in the update psi_1 := upsilon_11 chi_1 + u_12^T x_2 + psi_1, which is typically computed in two steps: psi_1 := upsilon_11 chi_1 + psi_1 followed by the dot product psi_1 := u_12^T x_2 + psi_1. Now, during the first iteration, u_12 and x_2 are of length 0, so that iteration requires 2 flops for the first step. During the ith iteration (starting with i = 0), u_12 and x_2 are of length i, so that the cost of that iteration is 2 flops for the first step and 2i flops for the second. Thus, if A is an n x n matrix, then the total cost is given by

    flops = sum_{i=0}^{n-1} [2 + 2i] = 2n + 2 sum_{i=0}^{n-1} i = 2n + n(n-1) = n^2 + n.

Exercise 1.71 Compute the cost, in flops, of the algorithm in Figure 1.20 (right).

Algorithm: y := Symv_u_unb_var1( A, x, y )  (and var2)

  Partition  A -> ( A_TL  A_TR ),  x -> ( x_T ),  y -> ( y_T ),
                  ( A_BL  A_BR )        ( x_B )        ( y_B )
  where A_TL is 0 x 0 and x_T, y_T are 0 x 1, and repartition as in Figure 1.19,
  where alpha_11, chi_1, and psi_1 are scalars. The updates become:

    Variant 1:  psi_1 := a_01^T x_0 + alpha_11 chi_1 + a_12^T x_2 + psi_1     (a_01 replaces a_10)

    Variant 2:  y_0   := chi_1 a_01 + y_0
                psi_1 := chi_1 alpha_11 + psi_1
                y_2   := chi_1 a_12 + y_2                                     (a_12 replaces a_21)

Figure 1.21: The algorithms in Figure 1.19, modified to compute y := Ax + y where A is symmetric and only the upper triangular part of A is stored.

1.5.4 Symmetric matrix-vector multiplication

Theorem 1.72 Let A be a symmetric matrix. Partition

    A = ( A_TL  A_TR )   ( A_00    a_01      A_02   )
        ( A_BL  A_BR ) = ( a_10^T  alpha_11  a_12^T ),
                         ( A_20    a_21      A_22   )

where A_TL and A_00 are square matrices. Then

- A_BL = A_TR^T, a_10 = a_01, A_20 = A_02^T, and a_21 = a_12;
- A_TL, A_00, A_22, and A_BR are symmetric matrices.

Again, we don't prove this, giving an example instead:

Example 1.73 Consider the symmetric matrix

    A = ( A_00    a_01      A_02   )   ( 1  2 | 0 | 1 )
        ( a_10^T  alpha_11  a_12^T ) = ( 2  1 | 3 | 0 )
        ( A_20    a_21      A_22   )   ( 0  3 | 2 | 1 )
                                       ( 1  0 | 1 | 2 ),

where A_00 is 2 x 2. We notice that a_10^T = a_01^T, A_20 = A_02^T, and a_21 = a_12.

Since the upper and lower triangular parts of a symmetric matrix are simply transposes of each other, it is only necessary to store half the matrix: only the upper triangular part or only the lower triangular part. We now discuss how to modify the algorithms in Figure 1.19 to compute y := Ax + y when A is symmetric and only stored in the upper triangular part of the matrix. The change is simple: a_10 and a_21 are not stored, and thus

- for the algorithm in Figure 1.19 (left), the update psi_1 := a_10^T x_0 + alpha_11 chi_1 + a_12^T x_2 + psi_1 must be changed to psi_1 := a_01^T x_0 + alpha_11 chi_1 + a_12^T x_2 + psi_1, as noted in Figure 1.21 (left);
- for the algorithm in Figure 1.19 (right), the update y_2 := chi_1 a_21 + y_2 must be changed to y_2 := chi_1 a_12 + y_2, as noted in Figure 1.21 (right).

Exercise 1.74 Modify the algorithms in Figure 1.19 to compute y := Ax + y, where A is symmetric and stored in the lower triangular part of the matrix.

1.6 Representing Algorithms in Code

We believe that the notation used in Figure 1.17 has a number of advantages:

- It is very visual.
- It facilitates comparing and contrasting different algorithms for different (related) operations.
- The opportunity for introducing indexing errors is greatly reduced.

The problem now is that errors can still be introduced as the algorithms are translated to M-script or some other language. For this, we created a few routines that provide an Application Programming Interface (API). [The remainder of this section was lost in transcription.]
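For flavor, here is a sketch of what such API-based code looks like using the FLAME@lab M-script API distributed with the FLAME project. The routine names (FLA_Part_2x1, FLA_Repart_2x1_to_3x1, FLA_Cont_with_3x1_to_2x1) come from that API, but the exact signatures here are from memory and should be checked against the FLAME@lab distribution; treat this as an assumption-laden sketch, not the authors' code.

    function [ y_out ] = Mvmult_unb_var1( A, x, y )
    % Sketch of Figure 1.17 (left) coded with the FLAME@lab API.
      [ AT, ...
        AB ] = FLA_Part_2x1( A, 0, 'FLA_TOP' );
      [ yT, ...
        yB ] = FLA_Part_2x1( y, 0, 'FLA_TOP' );
      while ( size( AT, 1 ) < size( A, 1 ) )
        [ A0, ...
          a1t, ...
          A2 ] = FLA_Repart_2x1_to_3x1( AT, AB, 1, 'FLA_BOTTOM' );
        [ y0, ...
          psi1, ...
          y2 ] = FLA_Repart_2x1_to_3x1( yT, yB, 1, 'FLA_BOTTOM' );
        psi1 = a1t * x + psi1;            % psi_1 := a_1^T x + psi_1
        [ AT, ...
          AB ] = FLA_Cont_with_3x1_to_2x1( A0, a1t, A2, 'FLA_TOP' );
        [ yT, ...
          yB ] = FLA_Cont_with_3x1_to_2x1( y0, psi1, y2, 'FLA_TOP' );
      end
      y_out = [ yT
                yB ];
    end

Notice how the code mirrors the FLAME notation of Figure 1.17 line by line, which is exactly the point: the translation step where indexing errors creep in is eliminated.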


MATH 320, WEEK 7: Matrices, Matrix Operations

MATH 320, WEEK 7: Matrices, Matrix Operations MATH 320, WEEK 7: Matrices, Matrix Operations 1 Matrices We have introduced ourselves to the notion of the grid-like coefficient matrix as a short-hand coefficient place-keeper for performing Gaussian

More information

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012

Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 Instructions Preliminary/Qualifying Exam in Numerical Analysis (Math 502a) Spring 2012 The exam consists of four problems, each having multiple parts. You should attempt to solve all four problems. 1.

More information

Vector Spaces, Orthogonality, and Linear Least Squares

Vector Spaces, Orthogonality, and Linear Least Squares Week Vector Spaces, Orthogonality, and Linear Least Squares. Opening Remarks.. Visualizing Planes, Lines, and Solutions Consider the following system of linear equations from the opener for Week 9: χ χ

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

CSSS/STAT/SOC 321 Case-Based Social Statistics I. Levels of Measurement

CSSS/STAT/SOC 321 Case-Based Social Statistics I. Levels of Measurement CSSS/STAT/SOC 321 Case-Based Social Statistics I Levels of Measurement Christopher Adolph Department of Political Science and Center for Statistics and the Social Sciences University of Washington, Seattle

More information

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization

Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,

More information

Orthogonality. 6.1 Orthogonal Vectors and Subspaces. Chapter 6

Orthogonality. 6.1 Orthogonal Vectors and Subspaces. Chapter 6 Chapter 6 Orthogonality 6.1 Orthogonal Vectors and Subspaces Recall that if nonzero vectors x, y R n are linearly independent then the subspace of all vectors αx + βy, α, β R (the space spanned by x and

More information

AP Physics 1 Summer Assignment. Directions: Find the following. Final answers should be in scientific notation. 2.)

AP Physics 1 Summer Assignment. Directions: Find the following. Final answers should be in scientific notation. 2.) AP Physics 1 Summer Assignment DUE THE FOURTH DAY OF SCHOOL- 2018 Purpose: The purpose of this packet is to make sure that we all have a common starting point and understanding of some of the basic concepts

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course

More information

Elementary Linear Algebra

Elementary Linear Algebra Matrices J MUSCAT Elementary Linear Algebra Matrices Definition Dr J Muscat 2002 A matrix is a rectangular array of numbers, arranged in rows and columns a a 2 a 3 a n a 2 a 22 a 23 a 2n A = a m a mn We

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2

Final Review Sheet. B = (1, 1 + 3x, 1 + x 2 ) then 2 + 3x + 6x 2 Final Review Sheet The final will cover Sections Chapters 1,2,3 and 4, as well as sections 5.1-5.4, 6.1-6.2 and 7.1-7.3 from chapters 5,6 and 7. This is essentially all material covered this term. Watch

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences)

AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) AMS526: Numerical Analysis I (Numerical Linear Algebra for Computational and Data Sciences) Lecture 1: Course Overview; Matrix Multiplication Xiangmin Jiao Stony Brook University Xiangmin Jiao Numerical

More information

Fundamentals of Engineering Analysis (650163)

Fundamentals of Engineering Analysis (650163) Philadelphia University Faculty of Engineering Communications and Electronics Engineering Fundamentals of Engineering Analysis (6563) Part Dr. Omar R Daoud Matrices: Introduction DEFINITION A matrix is

More information

14.2 QR Factorization with Column Pivoting

14.2 QR Factorization with Column Pivoting page 531 Chapter 14 Special Topics Background Material Needed Vector and Matrix Norms (Section 25) Rounding Errors in Basic Floating Point Operations (Section 33 37) Forward Elimination and Back Substitution

More information

Matrix Algebra: Vectors

Matrix Algebra: Vectors A Matrix Algebra: Vectors A Appendix A: MATRIX ALGEBRA: VECTORS A 2 A MOTIVATION Matrix notation was invented primarily to express linear algebra relations in compact form Compactness enhances visualization

More information

Notes on Solving Linear Least-Squares Problems

Notes on Solving Linear Least-Squares Problems Notes on Solving Linear Least-Squares Problems Robert A. van de Geijn The University of Texas at Austin Austin, TX 7871 October 1, 14 NOTE: I have not thoroughly proof-read these notes!!! 1 Motivation

More information

A Review of Matrix Analysis

A Review of Matrix Analysis Matrix Notation Part Matrix Operations Matrices are simply rectangular arrays of quantities Each quantity in the array is called an element of the matrix and an element can be either a numerical value

More information

Roundoff Error. Monday, August 29, 11

Roundoff Error. Monday, August 29, 11 Roundoff Error A round-off error (rounding error), is the difference between the calculated approximation of a number and its exact mathematical value. Numerical analysis specifically tries to estimate

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

1 Integration of Rational Functions Using Partial Fractions

1 Integration of Rational Functions Using Partial Fractions MTH Fall 008 Essex County College Division of Mathematics Handout Version 4 September 8, 008 Integration of Rational Functions Using Partial Fractions In the past it was far more usual to simplify or combine

More information

14 Singular Value Decomposition

14 Singular Value Decomposition 14 Singular Value Decomposition For any high-dimensional data analysis, one s first thought should often be: can I use an SVD? The singular value decomposition is an invaluable analysis tool for dealing

More information

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim

Introduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its

More information

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations

Chapter 1: Systems of linear equations and matrices. Section 1.1: Introduction to systems of linear equations Chapter 1: Systems of linear equations and matrices Section 1.1: Introduction to systems of linear equations Definition: A linear equation in n variables can be expressed in the form a 1 x 1 + a 2 x 2

More information

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra.

DS-GA 1002 Lecture notes 0 Fall Linear Algebra. These notes provide a review of basic concepts in linear algebra. DS-GA 1002 Lecture notes 0 Fall 2016 Linear Algebra These notes provide a review of basic concepts in linear algebra. 1 Vector spaces You are no doubt familiar with vectors in R 2 or R 3, i.e. [ ] 1.1

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

Next topics: Solving systems of linear equations

Next topics: Solving systems of linear equations Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:

More information

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220)

1 Vectors. Notes for Bindel, Spring 2017 Numerical Analysis (CS 4220) Notes for 2017-01-30 Most of mathematics is best learned by doing. Linear algebra is no exception. You have had a previous class in which you learned the basics of linear algebra, and you will have plenty

More information

Notation, Matrices, and Matrix Mathematics

Notation, Matrices, and Matrix Mathematics Geographic Information Analysis, Second Edition. David O Sullivan and David J. Unwin. 010 John Wiley & Sons, Inc. Published 010 by John Wiley & Sons, Inc. Appendix A Notation, Matrices, and Matrix Mathematics

More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

A primer on matrices

A primer on matrices A primer on matrices Stephen Boyd August 4, 2007 These notes describe the notation of matrices, the mechanics of matrix manipulation, and how to use matrices to formulate and solve sets of simultaneous

More information

Matrix & Linear Algebra

Matrix & Linear Algebra Matrix & Linear Algebra Jamie Monogan University of Georgia For more information: http://monogan.myweb.uga.edu/teaching/mm/ Jamie Monogan (UGA) Matrix & Linear Algebra 1 / 84 Vectors Vectors Vector: A

More information

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le

Direct Methods for Solving Linear Systems. Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le Direct Methods for Solving Linear Systems Simon Fraser University Surrey Campus MACM 316 Spring 2005 Instructor: Ha Le 1 Overview General Linear Systems Gaussian Elimination Triangular Systems The LU Factorization

More information

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic

Applied Mathematics 205. Unit II: Numerical Linear Algebra. Lecturer: Dr. David Knezevic Applied Mathematics 205 Unit II: Numerical Linear Algebra Lecturer: Dr. David Knezevic Unit II: Numerical Linear Algebra Chapter II.3: QR Factorization, SVD 2 / 66 QR Factorization 3 / 66 QR Factorization

More information

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations.

Linear Algebra. The analysis of many models in the social sciences reduces to the study of systems of equations. POLI 7 - Mathematical and Statistical Foundations Prof S Saiegh Fall Lecture Notes - Class 4 October 4, Linear Algebra The analysis of many models in the social sciences reduces to the study of systems

More information

Applied Numerical Linear Algebra. Lecture 8

Applied Numerical Linear Algebra. Lecture 8 Applied Numerical Linear Algebra. Lecture 8 1/ 45 Perturbation Theory for the Least Squares Problem When A is not square, we define its condition number with respect to the 2-norm to be k 2 (A) σ max (A)/σ

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Decompositions, numerical aspects Gerard Sleijpen and Martin van Gijzen September 27, 2017 1 Delft University of Technology Program Lecture 2 LU-decomposition Basic algorithm Cost

More information

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects

Program Lecture 2. Numerical Linear Algebra. Gaussian elimination (2) Gaussian elimination. Decompositions, numerical aspects Numerical Linear Algebra Decompositions, numerical aspects Program Lecture 2 LU-decomposition Basic algorithm Cost Stability Pivoting Cholesky decomposition Sparse matrices and reorderings Gerard Sleijpen

More information

8.3 Householder QR factorization

8.3 Householder QR factorization 83 ouseholder QR factorization 23 83 ouseholder QR factorization A fundamental problem to avoid in numerical codes is the situation where one starts with large values and one ends up with small values

More information

ELEMENTARY LINEAR ALGEBRA

ELEMENTARY LINEAR ALGEBRA ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,

More information

Systems of Linear Equations and Matrices

Systems of Linear Equations and Matrices Chapter 1 Systems of Linear Equations and Matrices System of linear algebraic equations and their solution constitute one of the major topics studied in the course known as linear algebra. In the first

More information

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product

BLAS: Basic Linear Algebra Subroutines Analysis of the Matrix-Vector-Product Analysis of Matrix-Matrix Product Level-1 BLAS: SAXPY BLAS-Notation: S single precision (D for double, C for complex) A α scalar X vector P plus operation Y vector SAXPY: y = αx + y Vectorization of SAXPY (αx + y) by pipelining: page 8

More information

Review of Matrices and Block Structures

Review of Matrices and Block Structures CHAPTER 2 Review of Matrices and Block Structures Numerical linear algebra lies at the heart of modern scientific computing and computational science. Today it is not uncommon to perform numerical computations

More information

Linear Algebra Methods for Data Mining

Linear Algebra Methods for Data Mining Linear Algebra Methods for Data Mining Saara Hyvönen, Saara.Hyvonen@cs.helsinki.fi Spring 2007 1. Basic Linear Algebra Linear Algebra Methods for Data Mining, Spring 2007, University of Helsinki Example

More information

Lecture 2: Linear Algebra

Lecture 2: Linear Algebra Lecture 2: Linear Algebra Rajat Mittal IIT Kanpur We will start with the basics of linear algebra that will be needed throughout this course That means, we will learn about vector spaces, linear independence,

More information

G1110 & 852G1 Numerical Linear Algebra

G1110 & 852G1 Numerical Linear Algebra The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the

More information

This appendix provides a very basic introduction to linear algebra concepts.

This appendix provides a very basic introduction to linear algebra concepts. APPENDIX Basic Linear Algebra Concepts This appendix provides a very basic introduction to linear algebra concepts. Some of these concepts are intentionally presented here in a somewhat simplified (not

More information

Review of matrices. Let m, n IN. A rectangle of numbers written like A =

Review of matrices. Let m, n IN. A rectangle of numbers written like A = Review of matrices Let m, n IN. A rectangle of numbers written like a 11 a 12... a 1n a 21 a 22... a 2n A =...... a m1 a m2... a mn where each a ij IR is called a matrix with m rows and n columns or an

More information

AM205: Assignment 2. i=1

AM205: Assignment 2. i=1 AM05: Assignment Question 1 [10 points] (a) [4 points] For p 1, the p-norm for a vector x R n is defined as: ( n ) 1/p x p x i p ( ) i=1 This definition is in fact meaningful for p < 1 as well, although

More information

Homework 2 Foundations of Computational Math 2 Spring 2019

Homework 2 Foundations of Computational Math 2 Spring 2019 Homework 2 Foundations of Computational Math 2 Spring 2019 Problem 2.1 (2.1.a) Suppose (v 1,λ 1 )and(v 2,λ 2 ) are eigenpairs for a matrix A C n n. Show that if λ 1 λ 2 then v 1 and v 2 are linearly independent.

More information

MATH 3511 Lecture 1. Solving Linear Systems 1

MATH 3511 Lecture 1. Solving Linear Systems 1 MATH 3511 Lecture 1 Solving Linear Systems 1 Dmitriy Leykekhman Spring 2012 Goals Review of basic linear algebra Solution of simple linear systems Gaussian elimination D Leykekhman - MATH 3511 Introduction

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education

MTH Linear Algebra. Study Guide. Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education MTH 3 Linear Algebra Study Guide Dr. Tony Yee Department of Mathematics and Information Technology The Hong Kong Institute of Education June 3, ii Contents Table of Contents iii Matrix Algebra. Real Life

More information

Matrix Algebra for Engineers Jeffrey R. Chasnov

Matrix Algebra for Engineers Jeffrey R. Chasnov Matrix Algebra for Engineers Jeffrey R. Chasnov The Hong Kong University of Science and Technology The Hong Kong University of Science and Technology Department of Mathematics Clear Water Bay, Kowloon

More information

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in

. =. a i1 x 1 + a i2 x 2 + a in x n = b i. a 11 a 12 a 1n a 21 a 22 a 1n. i1 a i2 a in Vectors and Matrices Continued Remember that our goal is to write a system of algebraic equations as a matrix equation. Suppose we have the n linear algebraic equations a x + a 2 x 2 + a n x n = b a 2

More information

MATH 612 Computational methods for equation solving and function minimization Week # 6

MATH 612 Computational methods for equation solving and function minimization Week # 6 MATH 612 Computational methods for equation solving and function minimization Week # 6 F.J.S. Spring 2014 University of Delaware FJS MATH 612 1 / 58 Plan for this week Discuss any problems you couldn t

More information

Image Registration Lecture 2: Vectors and Matrices

Image Registration Lecture 2: Vectors and Matrices Image Registration Lecture 2: Vectors and Matrices Prof. Charlene Tsai Lecture Overview Vectors Matrices Basics Orthogonal matrices Singular Value Decomposition (SVD) 2 1 Preliminary Comments Some of this

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

These variables have specific names and I will be using these names. You need to do this as well.

These variables have specific names and I will be using these names. You need to do this as well. Greek Letters In Physics, we use variables to denote a variety of unknowns and concepts. Many of these variables are letters of the Greek alphabet. If you are not familiar with these letters, you should

More information

Knowledge Discovery and Data Mining 1 (VO) ( )

Knowledge Discovery and Data Mining 1 (VO) ( ) Knowledge Discovery and Data Mining 1 (VO) (707.003) Review of Linear Algebra Denis Helic KTI, TU Graz Oct 9, 2014 Denis Helic (KTI, TU Graz) KDDM1 Oct 9, 2014 1 / 74 Big picture: KDDM Probability Theory

More information

Linear Algebra Review

Linear Algebra Review Chapter 1 Linear Algebra Review It is assumed that you have had a course in linear algebra, and are familiar with matrix multiplication, eigenvectors, etc. I will review some of these terms here, but quite

More information

MATH2210 Notebook 2 Spring 2018

MATH2210 Notebook 2 Spring 2018 MATH2210 Notebook 2 Spring 2018 prepared by Professor Jenny Baglivo c Copyright 2009 2018 by Jenny A. Baglivo. All Rights Reserved. 2 MATH2210 Notebook 2 3 2.1 Matrices and Their Operations................................

More information

Review of Linear Algebra

Review of Linear Algebra Review of Linear Algebra Definitions An m n (read "m by n") matrix, is a rectangular array of entries, where m is the number of rows and n the number of columns. 2 Definitions (Con t) A is square if m=

More information

Conceptual Questions for Review

Conceptual Questions for Review Conceptual Questions for Review Chapter 1 1.1 Which vectors are linear combinations of v = (3, 1) and w = (4, 3)? 1.2 Compare the dot product of v = (3, 1) and w = (4, 3) to the product of their lengths.

More information

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.]

[Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty.] Math 43 Review Notes [Disclaimer: This is not a complete list of everything you need to know, just some of the topics that gave people difficulty Dot Product If v (v, v, v 3 and w (w, w, w 3, then the

More information

a11 a A = : a 21 a 22

a11 a A = : a 21 a 22 Matrices The study of linear systems is facilitated by introducing matrices. Matrix theory provides a convenient language and notation to express many of the ideas concisely, and complicated formulas are

More information

Solving Dense Linear Systems I

Solving Dense Linear Systems I Solving Dense Linear Systems I Solving Ax = b is an important numerical method Triangular system: [ l11 l 21 if l 11, l 22 0, ] [ ] [ ] x1 b1 = l 22 x 2 b 2 x 1 = b 1 /l 11 x 2 = (b 2 l 21 x 1 )/l 22 Chih-Jen

More information