From Matrix-Vector Multiplication to Matrix-Matrix Multiplication
There are a LOT of programming assignments this week. They are meant to help clarify "slicing and dicing". They show that the right abstractions in the mathematics, when reflected in how we program, allow one to implement algorithms very quickly. They help you understand special properties of matrices. Practice as much as you think will benefit your understanding of the material. There is no need to do them all!

4.1 Opening Remarks

4.1.1 Predicting the Weather

View at edX

The following table tells us how the weather for any day (e.g., today) predicts the weather for the next day (e.g., tomorrow):

                      Today
  Tomorrow    sunny   cloudy   rainy
  sunny        0.4     0.3      0.1
  cloudy       0.4     0.3      0.6
  rainy        0.2     0.4      0.3

This table is interpreted as follows: If today is rainy, then the probability that it will be cloudy tomorrow is 0.6, etc.
Homework 4.1.1.1 If today is cloudy, what is the probability that tomorrow is sunny? cloudy? rainy?

View at edX

Homework 4.1.1.2 If today is sunny, what is the probability that the day after tomorrow is sunny? cloudy? rainy?

Try this! If today is cloudy, what is the probability that a week from today it is sunny? cloudy? rainy? Think about this for at most two minutes, and then look at the answer.

When things get messy, it helps to introduce some notation.

View at edX

Let $\chi_s^{(k)}$ denote the probability that it will be sunny $k$ days from now (on day $k$).
Let $\chi_c^{(k)}$ denote the probability that it will be cloudy $k$ days from now.
Let $\chi_r^{(k)}$ denote the probability that it will be rainy $k$ days from now.

The discussion so far motivates the equations

$\chi_s^{(k+1)} = 0.4\,\chi_s^{(k)} + 0.3\,\chi_c^{(k)} + 0.1\,\chi_r^{(k)}$
$\chi_c^{(k+1)} = 0.4\,\chi_s^{(k)} + 0.3\,\chi_c^{(k)} + 0.6\,\chi_r^{(k)}$
$\chi_r^{(k+1)} = 0.2\,\chi_s^{(k)} + 0.4\,\chi_c^{(k)} + 0.3\,\chi_r^{(k)}$

The probabilities that denote what the weather may be on day $k$ and the table that summarizes the probabilities are often represented as a (state) vector, $x^{(k)}$, and (transition) matrix, $P$, respectively:

$x^{(k)} = \begin{pmatrix} \chi_s^{(k)} \\ \chi_c^{(k)} \\ \chi_r^{(k)} \end{pmatrix} \quad\text{and}\quad P = \begin{pmatrix} 0.4 & 0.3 & 0.1 \\ 0.4 & 0.3 & 0.6 \\ 0.2 & 0.4 & 0.3 \end{pmatrix}.$

The transition from day $k$ to day $k+1$ is then written as the matrix-vector product (multiplication)

$\begin{pmatrix} \chi_s^{(k+1)} \\ \chi_c^{(k+1)} \\ \chi_r^{(k+1)} \end{pmatrix} = \begin{pmatrix} 0.4 & 0.3 & 0.1 \\ 0.4 & 0.3 & 0.6 \\ 0.2 & 0.4 & 0.3 \end{pmatrix} \begin{pmatrix} \chi_s^{(k)} \\ \chi_c^{(k)} \\ \chi_r^{(k)} \end{pmatrix},$

or $x^{(k+1)} = P x^{(k)}$, which is simply a more compact representation (way of writing) of the system of linear equations.

What this demonstrates is that matrix-vector multiplication can also be used to compactly write a set of simultaneous linear equations.
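The transition $x^{(k+1)} = P x^{(k)}$ takes only a few lines of NumPy. This is a minimal sketch (the course itself uses FlamePy; the array `P` below is transcribed from the table above):

```python
import numpy as np

# Transition matrix from the weather table: entry (i, j) is the probability
# of weather i tomorrow given weather j today (order: sunny, cloudy, rainy).
P = np.array([[0.4, 0.3, 0.1],
              [0.4, 0.3, 0.6],
              [0.2, 0.4, 0.3]])

# Today is cloudy: the state vector is the unit basis vector e_1.
x0 = np.array([0.0, 1.0, 0.0])

# One day's transition is a single matrix-vector product.
x1 = P @ x0
print(x1)  # the second column of P: [0.3 0.3 0.4]
```

Note that multiplying by a unit basis vector simply extracts a column of $P$, which is the point made below.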
Assume again that today is cloudy, so that the probability that it is sunny, cloudy, or rainy today is 0, 1, and 0, respectively:

$x^{(0)} = \begin{pmatrix} \chi_s^{(0)} \\ \chi_c^{(0)} \\ \chi_r^{(0)} \end{pmatrix} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix}.$

(If we KNOW today is cloudy, then the probability that it is sunny today is zero, etc.)

Ah! Our friend the unit basis vector reappears!

Then the vector of probabilities for tomorrow's weather, $x^{(1)}$, is given by

$\begin{pmatrix} \chi_s^{(1)} \\ \chi_c^{(1)} \\ \chi_r^{(1)} \end{pmatrix} = \begin{pmatrix} 0.4 & 0.3 & 0.1 \\ 0.4 & 0.3 & 0.6 \\ 0.2 & 0.4 & 0.3 \end{pmatrix} \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} = \begin{pmatrix} 0.3 \\ 0.3 \\ 0.4 \end{pmatrix}.$

Ah! $P e_1 = p_1$, where $p_1$ is the second column in matrix $P$. You should not be surprised!

The vector of probabilities for the day after tomorrow, $x^{(2)}$, is given by

$\begin{pmatrix} \chi_s^{(2)} \\ \chi_c^{(2)} \\ \chi_r^{(2)} \end{pmatrix} = \begin{pmatrix} 0.4 & 0.3 & 0.1 \\ 0.4 & 0.3 & 0.6 \\ 0.2 & 0.4 & 0.3 \end{pmatrix} \begin{pmatrix} 0.3 \\ 0.3 \\ 0.4 \end{pmatrix} = \begin{pmatrix} 0.25 \\ 0.45 \\ 0.30 \end{pmatrix}.$

Repeating this process (preferably using Python rather than by hand), we can find the probabilities for the weather for the next seven days, under the assumption that today is cloudy:

[Table: $x^{(k)}$ for $k = 0, \ldots, 7$.]

View at edX

Homework 4.1.1.3 Follow the instructions in the above video.

View at edX

We could build a table that tells us how to predict the weather for the day after tomorrow from the weather today:
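The week-ahead question from the "Try this!" above amounts to applying the transition repeatedly. A short sketch (again ours, not the course's FlamePy code) that produces the table of $x^{(k)}$ for $k = 0, \ldots, 7$:

```python
import numpy as np

P = np.array([[0.4, 0.3, 0.1],
              [0.4, 0.3, 0.6],
              [0.2, 0.4, 0.3]])

# Start from "today is cloudy" and repeatedly apply the transition matrix.
x = np.array([0.0, 1.0, 0.0])
history = [x]
for k in range(7):
    x = P @ x          # x^(k+1) = P x^(k)
    history.append(x)

# Print the table of state vectors x^(0), ..., x^(7).
for k, xk in enumerate(history):
    print(k, np.round(xk, 4))
```

Running this shows the distribution settling toward a steady state well before day 7, which previews the Markov-chain enrichment at the end of the week.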
                       Today
  Day after
  Tomorrow    sunny   cloudy   rainy
  sunny
  cloudy
  rainy

One way you can do this is to observe that

$\begin{pmatrix} \chi_s^{(2)} \\ \chi_c^{(2)} \\ \chi_r^{(2)} \end{pmatrix} = P \begin{pmatrix} \chi_s^{(1)} \\ \chi_c^{(1)} \\ \chi_r^{(1)} \end{pmatrix} = P \left( P \begin{pmatrix} \chi_s^{(0)} \\ \chi_c^{(0)} \\ \chi_r^{(0)} \end{pmatrix} \right) = Q \begin{pmatrix} \chi_s^{(0)} \\ \chi_c^{(0)} \\ \chi_r^{(0)} \end{pmatrix},$

where $Q$ is the transition matrix that tells us how the weather today predicts the weather the day after tomorrow. (Well, actually, we don't yet know that applying a matrix to a vector twice is a linear transformation... We'll learn that later this week.)

Now, just like $P$ is simply the matrix of values from the original table that showed how the weather tomorrow is predicted from today's weather, $Q$ is the matrix of values for the above table.

Homework 4.1.1.4 Given

                      Today
  Tomorrow    sunny   cloudy   rainy
  sunny        0.4     0.3      0.1
  cloudy       0.4     0.3      0.6
  rainy        0.2     0.4      0.3

fill in the following table, which predicts the weather the day after tomorrow given the weather today:

                       Today
  Day after
  Tomorrow    sunny   cloudy   rainy
  sunny
  cloudy
  rainy

Now here is the hard part: Do so without using your knowledge about how to perform a matrix-matrix multiplication, since you won't learn about that until later this week. (May we suggest that you instead use MATLAB to perform the necessary calculations.)
4.1.2 Outline

4.1 Opening Remarks
  4.1.1 Predicting the Weather
  4.1.2 Outline
  4.1.3 What You Will Learn
4.2 Preparation
  4.2.1 Partitioned Matrix-Vector Multiplication
  4.2.2 Transposing a Partitioned Matrix
  4.2.3 Matrix-Vector Multiplication, Again
4.3 Matrix-Vector Multiplication with Special Matrices
  4.3.1 Transpose Matrix-Vector Multiplication
  4.3.2 Triangular Matrix-Vector Multiplication
  4.3.3 Symmetric Matrix-Vector Multiplication
4.4 Matrix-Matrix Multiplication (Product)
  4.4.1 Motivation
  4.4.2 From Composing Linear Transformations to Matrix-Matrix Multiplication
  4.4.3 Computing the Matrix-Matrix Product
  4.4.4 Special Shapes
  4.4.5 Cost
4.5 Enrichment
  4.5.1 Markov Chains: Their Application
4.6 Wrap Up
  4.6.1 Homework
  4.6.2 Summary
4.1.3 What You Will Learn

Upon completion of this unit, you should be able to

- Apply matrix-vector multiplication to predict the probability of future states in a Markov process.
- Make use of partitioning to perform matrix-vector multiplication.
- Transpose a partitioned matrix.
- Partition conformally, ensuring that the sizes of the matrices and vectors match so that matrix-vector multiplication works.
- Take advantage of special structure to perform matrix-vector multiplication with triangular and symmetric matrices.
- Express and implement various matrix-vector multiplication algorithms using the FLAME notation and FlamePy.
- Make connections between the composition of linear transformations and matrix-matrix multiplication.
- Compute a matrix-matrix multiplication.
- Recognize scalars and column/row vectors as special cases of matrices.
- Compute common vector-vector and matrix-vector operations as special cases of matrix-matrix multiplication.
- Compute an outer product $xy^T$ as a special case of matrix-matrix multiplication and recognize that
  - the rows of the resulting matrix are scalar multiples of $y^T$;
  - the columns of the resulting matrix are scalar multiples of $x$.
4.2 Preparation

4.2.1 Partitioned Matrix-Vector Multiplication

View at edX

Motivation

Consider

$A = \begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix}, \quad x = \begin{pmatrix} x_0 \\ \chi_1 \\ x_2 \end{pmatrix}, \quad\text{and}\quad y = \begin{pmatrix} y_0 \\ \psi_1 \\ y_2 \end{pmatrix},$

where $y_0, x_0 \in \mathbb{R}^2$. Then $y = Ax$ means that

$\begin{pmatrix} y_0 \\ \psi_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix} \begin{pmatrix} x_0 \\ \chi_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} A_{00} x_0 + a_{01} \chi_1 + A_{02} x_2 \\ a_{10}^T x_0 + \alpha_{11} \chi_1 + a_{12}^T x_2 \\ A_{20} x_0 + a_{21} \chi_1 + A_{22} x_2 \end{pmatrix}.$
Homework 4.2.1.1 Consider a matrix $A$ and a vector $x$, and partition these into submatrices (regions) as follows:

$A = \begin{pmatrix} A_{00} & a_{01} \\ a_{10}^T & \alpha_{11} \end{pmatrix} \quad\text{and}\quad x = \begin{pmatrix} x_0 \\ \chi_1 \end{pmatrix},$

where $A_{00} \in \mathbb{R}^{3 \times 3}$, $x_0 \in \mathbb{R}^3$, $\alpha_{11}$ is a scalar, and $\chi_1$ is a scalar. Show with lines how $A$ and $x$ are partitioned.

Homework 4.2.1.2 With the partitioning of matrices $A$ and $x$ in the above exercise, repeat the partitioned matrix-vector multiplication, similar to how this unit started.

Theory

Let $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$, and $y \in \mathbb{R}^m$. Partition

$A = \begin{pmatrix} A_{0,0} & A_{0,1} & \cdots & A_{0,N-1} \\ A_{1,0} & A_{1,1} & \cdots & A_{1,N-1} \\ \vdots & \vdots & & \vdots \\ A_{M-1,0} & A_{M-1,1} & \cdots & A_{M-1,N-1} \end{pmatrix}, \quad x = \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{pmatrix}, \quad\text{and}\quad y = \begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{M-1} \end{pmatrix},$

where

$m = m_0 + m_1 + \cdots + m_{M-1}$, with $m_i \geq 0$ for $i = 0, \ldots, M-1$;
$n = n_0 + n_1 + \cdots + n_{N-1}$, with $n_j \geq 0$ for $j = 0, \ldots, N-1$; and
$A_{i,j} \in \mathbb{R}^{m_i \times n_j}$, $x_j \in \mathbb{R}^{n_j}$, and $y_i \in \mathbb{R}^{m_i}$.

If $y = Ax$ then

$\begin{pmatrix} y_0 \\ y_1 \\ \vdots \\ y_{M-1} \end{pmatrix} = \begin{pmatrix} A_{0,0} & A_{0,1} & \cdots & A_{0,N-1} \\ A_{1,0} & A_{1,1} & \cdots & A_{1,N-1} \\ \vdots & \vdots & & \vdots \\ A_{M-1,0} & A_{M-1,1} & \cdots & A_{M-1,N-1} \end{pmatrix} \begin{pmatrix} x_0 \\ x_1 \\ \vdots \\ x_{N-1} \end{pmatrix} = \begin{pmatrix} A_{0,0} x_0 + A_{0,1} x_1 + \cdots + A_{0,N-1} x_{N-1} \\ A_{1,0} x_0 + A_{1,1} x_1 + \cdots + A_{1,N-1} x_{N-1} \\ \vdots \\ A_{M-1,0} x_0 + A_{M-1,1} x_1 + \cdots + A_{M-1,N-1} x_{N-1} \end{pmatrix}.$

In other words,

$y_i = \sum_{j=0}^{N-1} A_{i,j} x_j.$

This is intuitively true but messy to prove carefully. Therefore we will not give its proof, relying instead on the many examples we will encounter in subsequent units.

If one partitions matrix $A$, vector $x$, and vector $y$ into blocks, and one makes sure the dimensions match up, then blocked matrix-vector multiplication proceeds exactly as does a regular matrix-vector multiplication, except that individual multiplications of scalars commute while, in general, individual multiplications with matrix and vector blocks (submatrices and subvectors) do not.

The labeling of the submatrices and subvectors in this unit was carefully chosen to convey information. Consider

$A = \begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix}.$

The letters that are used convey information about the shapes. For example, for $a_{01}$ and $a_{21}$ the use of a lowercase Roman letter indicates they are column vectors, while the $T$s in $a_{10}^T$ and $a_{12}^T$ indicate that they are row vectors. Symbols $\alpha_{11}$ and $\chi_1$ indicate these are scalars. We will use these conventions consistently to enhance readability.

Notice that the partitioning of matrix $A$ and vectors $x$ and $y$ has to be "conformal". The simplest way to understand this is that matrix-vector multiplication only works if the sizes of the matrices and vectors being multiplied match. So, a partitioning of $A$, $x$, and $y$, when performing a given operation, is conformal if the suboperations with submatrices and subvectors that are encountered make sense.

4.2.2 Transposing a Partitioned Matrix

View at edX
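The rule $y_i = \sum_{j=0}^{N-1} A_{i,j} x_j$ can be checked directly with NumPy slicing. The sketch below (our own helper, not a FLAME routine) performs the blocked matrix-vector product and agrees with the ordinary one for any conformal partitioning:

```python
import numpy as np

def blocked_matvec(A, x, row_sizes, col_sizes):
    """y := A x computed block by block: y_i = sum_j A_{i,j} x_j.

    row_sizes gives the m_i and col_sizes the n_j of a conformal
    partitioning; they must sum to A's dimensions.
    """
    row_ends = np.cumsum(row_sizes)
    col_ends = np.cumsum(col_sizes)
    y = np.zeros(A.shape[0])
    r0 = 0
    for r1 in row_ends:
        c0 = 0
        for c1 in col_ends:
            # y_i += A_{i,j} x_j  (a small matrix-vector product per block)
            y[r0:r1] += A[r0:r1, c0:c1] @ x[c0:c1]
            c0 = c1
        r0 = r1
    return y

A = np.arange(20.0).reshape(4, 5)
x = np.arange(5.0)
print(blocked_matvec(A, x, [2, 2], [3, 2]))  # same as A @ x
```

Trying several partitionings of the same $A$ and $x$ is a good way to convince yourself that the result does not depend on where the block boundaries fall.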
Motivation

Consider a matrix partitioned into submatrices, together with its transpose. [Worked numerical example lost in transcription.] This example illustrates a general rule: When transposing a partitioned matrix (a matrix partitioned into submatrices), you transpose the matrix of blocks, and then you transpose each block.

Homework 4.2.2.1 Show, step-by-step, how to transpose [matrix lost in transcription].

Theory

Let $A \in \mathbb{R}^{m \times n}$ be partitioned as follows:

$A = \begin{pmatrix} A_{0,0} & A_{0,1} & \cdots & A_{0,N-1} \\ A_{1,0} & A_{1,1} & \cdots & A_{1,N-1} \\ \vdots & \vdots & & \vdots \\ A_{M-1,0} & A_{M-1,1} & \cdots & A_{M-1,N-1} \end{pmatrix},$

where $A_{i,j} \in \mathbb{R}^{m_i \times n_j}$. Then

$A^T = \begin{pmatrix} A_{0,0}^T & A_{1,0}^T & \cdots & A_{M-1,0}^T \\ A_{0,1}^T & A_{1,1}^T & \cdots & A_{M-1,1}^T \\ \vdots & \vdots & & \vdots \\ A_{0,N-1}^T & A_{1,N-1}^T & \cdots & A_{M-1,N-1}^T \end{pmatrix}.$

Transposing a partitioned matrix means that you view each submatrix as if it is a scalar, and you then transpose the matrix as if it is a matrix of scalars. But then you recognize that each of those "scalars" is actually a submatrix, and you also transpose that submatrix.
Special cases

We now discuss a number of special cases that you may encounter.

Each submatrix is a scalar. If

$A = \begin{pmatrix} \alpha_{0,0} & \alpha_{0,1} & \cdots & \alpha_{0,N-1} \\ \alpha_{1,0} & \alpha_{1,1} & \cdots & \alpha_{1,N-1} \\ \vdots & \vdots & & \vdots \\ \alpha_{M-1,0} & \alpha_{M-1,1} & \cdots & \alpha_{M-1,N-1} \end{pmatrix}$

then

$A^T = \begin{pmatrix} \alpha_{0,0} & \alpha_{1,0} & \cdots & \alpha_{M-1,0} \\ \alpha_{0,1} & \alpha_{1,1} & \cdots & \alpha_{M-1,1} \\ \vdots & \vdots & & \vdots \\ \alpha_{0,N-1} & \alpha_{1,N-1} & \cdots & \alpha_{M-1,N-1} \end{pmatrix}.$

This is because the transpose of a scalar is just that scalar.

The matrix is partitioned by rows. If

$A = \begin{pmatrix} \tilde a_0^T \\ \tilde a_1^T \\ \vdots \\ \tilde a_{m-1}^T \end{pmatrix},$

where each $\tilde a_i^T$ is a row of $A$, then

$A^T = \begin{pmatrix} \tilde a_0 & \tilde a_1 & \cdots & \tilde a_{m-1} \end{pmatrix}.$

This shows that rows of $A$, $\tilde a_i^T$, become columns of $A^T$: $\tilde a_i$.

The matrix is partitioned by columns. If

$A = \begin{pmatrix} a_0 & a_1 & \cdots & a_{n-1} \end{pmatrix},$

where each $a_j$ is a column of $A$, then

$A^T = \begin{pmatrix} a_0^T \\ a_1^T \\ \vdots \\ a_{n-1}^T \end{pmatrix}.$

This shows that columns of $A$, $a_j$, become rows of $A^T$: $a_j^T$.

$2 \times 2$ blocked partitioning. If

$A = \begin{pmatrix} A_{TL} & A_{TR} \\ A_{BL} & A_{BR} \end{pmatrix},$
then

$A^T = \begin{pmatrix} A_{TL}^T & A_{BL}^T \\ A_{TR}^T & A_{BR}^T \end{pmatrix}.$

$3 \times 3$ blocked partitioning. If

$A = \begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix},$

then

$A^T = \begin{pmatrix} A_{00}^T & a_{10} & A_{20}^T \\ a_{01}^T & \alpha_{11} & a_{21}^T \\ A_{02}^T & a_{12} & A_{22}^T \end{pmatrix}.$

Anyway, you get the idea!!!
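The $2 \times 2$ blocked rule is easy to verify experimentally: transpose the matrix of blocks, transpose each block, and compare against `A.T`. A short sketch (the block boundary `k, l` is our arbitrary choice):

```python
import numpy as np

# Partition A into 2x2 blocks [[ATL, ATR], [ABL, ABR]] at an arbitrary
# boundary, then assemble A^T from the transposed, diagonally-swapped blocks.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 7))
k, l = 2, 3
ATL, ATR = A[:k, :l], A[:k, l:]
ABL, ABR = A[k:, :l], A[k:, l:]

AT_blocked = np.block([[ATL.T, ABL.T],
                       [ATR.T, ABR.T]])
print(np.allclose(AT_blocked, A.T))  # True
```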
Homework 4.2.2.2 Transpose the following matrices: [matrices lost in transcription].

For any matrix $A \in \mathbb{R}^{m \times n}$, $\left(A^T\right)^T = A$.

4.2.3 Matrix-Vector Multiplication, Again

View at edX

Motivation

In the next few units, we will modify the matrix-vector multiplication algorithms from last week so that they can take advantage of matrices with special structure (e.g., triangular or symmetric matrices). Now, what makes a triangular or symmetric matrix special? For one thing, it is square. For another, it only requires one triangle of a matrix to be stored. It was for this reason that we ended up with algorithm skeletons that looked like the one in Figure 4.1 when we presented algorithms for "triangularizing" or "symmetrizing" a matrix.
Algorithm: [A] := ALGORITHM_SKELETON(A)
  Partition A → ( A_TL  A_TR )
                ( A_BL  A_BR )
    where A_TL is 0 × 0
  while m(A_TL) < m(A) do
    Repartition
      ( A_TL  A_TR )    ( A00    a01   A02   )
      ( A_BL  A_BR ) →  ( a10^T  α11   a12^T )
                        ( A20    a21   A22   )
      where α11 is 1 × 1
    ...
    Continue with
      ( A_TL  A_TR )    ( A00    a01   A02   )
      ( A_BL  A_BR ) ←  ( a10^T  α11   a12^T )
                        ( A20    a21   A22   )
  endwhile

Figure 4.1: Code skeleton for algorithms when matrices are triangular or symmetric.

Now, consider a typical partitioning of a matrix that is encountered in such an algorithm:

$\begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix},$

where each block collects entries of the matrix (in this case $6 \times 6$). If, for example, the matrix is lower triangular,

$\begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix} = \begin{pmatrix} A_{00} & 0 & 0 \\ a_{10}^T & \alpha_{11} & 0 \\ A_{20} & a_{21} & A_{22} \end{pmatrix},$
then $a_{01} = 0$, $A_{02} = 0$, and $a_{12}^T = 0$. (Remember: the 0 is a matrix or vector of appropriate size.) If instead the matrix is symmetric with only the lower triangular part stored, then $a_{01} = (a_{10}^T)^T = a_{10}$, $A_{02} = A_{20}^T$, and $a_{12}^T = a_{21}^T$.

The above observation leads us to express the matrix-vector multiplication algorithms for computing $y := Ax + y$ given in Figure 4.2.

Algorithm: [y] := MVMULT_N_UNB_VAR1B(A, x, y)
  Partition A → ( A_TL  A_TR ), x → ( x_T ), y → ( y_T )
                ( A_BL  A_BR )      ( x_B )      ( y_B )
    where A_TL is 0 × 0 and x_T, y_T are 0 × 1
  while m(A_TL) < m(A) do
    Repartition
      A → ( A00 a01 A02 / a10^T α11 a12^T / A20 a21 A22 ),
      x → ( x0 / χ1 / x2 ),  y → ( y0 / ψ1 / y2 )
    ψ1 := a10^T x0 + α11 χ1 + a12^T x2 + ψ1
    Continue with the boundaries moved past α11, χ1, and ψ1
  endwhile

Algorithm: [y] := MVMULT_N_UNB_VAR2B(A, x, y)
  Partition and repartition as in Variant 1B
  while m(A_TL) < m(A) do
    y0 := χ1 a01 + y0
    ψ1 := χ1 α11 + ψ1
    y2 := χ1 a21 + y2
    Continue with the boundaries moved past α11, χ1, and ψ1
  endwhile

Figure 4.2: Alternative algorithms for matrix-vector multiplication.

Note: For the left algorithm, what was previously the current row in matrix $A$, $a_1^T$, is now viewed as consisting of three parts:

$a_1^T = \begin{pmatrix} a_{10}^T & \alpha_{11} & a_{12}^T \end{pmatrix},$

while the vector $x$ is now also partitioned into three parts:

$x = \begin{pmatrix} x_0 \\ \chi_1 \\ x_2 \end{pmatrix}.$

As we saw in the first week, the partitioned dot product becomes

$a_1^T x = \begin{pmatrix} a_{10}^T & \alpha_{11} & a_{12}^T \end{pmatrix} \begin{pmatrix} x_0 \\ \chi_1 \\ x_2 \end{pmatrix} = a_{10}^T x_0 + \alpha_{11} \chi_1 + a_{12}^T x_2,$
which explains why the update $\psi_1 := a_1^T x + \psi_1$ is now $\psi_1 := a_{10}^T x_0 + \alpha_{11} \chi_1 + a_{12}^T x_2 + \psi_1$.

Similarly, for the algorithm on the right, based on the matrix-vector multiplication algorithm that uses the AXPY operations, we note that $y := \chi_1 a_1 + y$ is replaced by

$\begin{pmatrix} y_0 \\ \psi_1 \\ y_2 \end{pmatrix} := \chi_1 \begin{pmatrix} a_{01} \\ \alpha_{11} \\ a_{21} \end{pmatrix} + \begin{pmatrix} y_0 \\ \psi_1 \\ y_2 \end{pmatrix},$

which equals

$\begin{pmatrix} \chi_1 a_{01} + y_0 \\ \chi_1 \alpha_{11} + \psi_1 \\ \chi_1 a_{21} + y_2 \end{pmatrix}.$

This explains the updates

y0 := χ1 a01 + y0
ψ1 := χ1 α11 + ψ1
y2 := χ1 a21 + y2.

Now, for matrix-vector multiplication $y := Ax + y$, it is not beneficial to break the computation up in this way. Typically, a dot product is more efficient than multiple operations with the subvectors. Similarly, typically one AXPY is more efficient than multiple AXPYs. But the observations in this unit lay the foundation for modifying the algorithms to take advantage of special structure in the matrix, later this week.

Homework 4.2.3.1 Implement routines
  [ y_out ] = Mvmult_n_unb_var1B( A, x, y ); and
  [ y_out ] = Mvmult_n_unb_var2B( A, x, y )
that compute $y := Ax + y$ via the algorithms in Figure 4.2.

4.3 Matrix-Vector Multiplication with Special Matrices

4.3.1 Transpose Matrix-Vector Multiplication

View at edX
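The two variants above can be sketched in plain Python with NumPy slicing standing in for the FLAME repartitioning (these are our illustrative functions, not the FlamePy routines; as in the notes, $A$ is assumed square so that $\alpha_{11}$ sits on the diagonal):

```python
import numpy as np

def mvmult_n_unb_var1b(A, x, y):
    """y := A x + y, dot-product based, exposing the three-way split
    (a10^T, alpha11, a12^T) of the current row."""
    n = A.shape[0]
    y = y.copy()
    for i in range(n):
        a10t, alpha11, a12t = A[i, :i], A[i, i], A[i, i+1:]
        x0, chi1, x2 = x[:i], x[i], x[i+1:]
        # psi1 := a10^T x0 + alpha11 chi1 + a12^T x2 + psi1
        y[i] += a10t @ x0 + alpha11 * chi1 + a12t @ x2
    return y

def mvmult_n_unb_var2b(A, x, y):
    """y := A x + y, AXPY based, exposing the three-way split
    (a01, alpha11, a21) of the current column."""
    n = A.shape[0]
    y = y.copy()
    for j in range(n):
        y[:j]   += x[j] * A[:j, j]     # y0  := chi1 a01 + y0
        y[j]    += x[j] * A[j, j]      # psi1 := chi1 alpha11 + psi1
        y[j+1:] += x[j] * A[j+1:, j]   # y2  := chi1 a21 + y2
    return y
```

Splitting the work like this is not faster for a general matrix, exactly as the text says; its value appears below, where the split lets us skip the parts that are zero or not stored.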
Algorithm: [y] := MVMULT_T_UNB_VAR1(A, x, y)
  Partition A → ( A_L  A_R ), y → ( y_T )
                                  ( y_B )
    where A_L is m × 0 and y_T is 0 × 1
  while m(y_T) < m(y) do
    Repartition
      ( A_L  A_R ) → ( A0  a1  A2 ),  y → ( y0 / ψ1 / y2 )
    ψ1 := a1^T x + ψ1
    Continue with the boundaries moved past a1 and ψ1
  endwhile

Algorithm: [y] := MVMULT_T_UNB_VAR2(A, x, y)
  Partition A → ( A_T ), x → ( x_T )
                ( A_B )      ( x_B )
    where A_T is 0 × n and x_T is 0 × 1
  while m(A_T) < m(A) do
    Repartition
      ( A_T / A_B ) → ( A0 / a1^T / A2 ),  x → ( x0 / χ1 / x2 )
    y := χ1 a1 + y
    Continue with the boundaries moved past a1^T and χ1
  endwhile

Figure 4.3: Algorithms for computing $y := A^T x + y$.

Motivation

[Numerical example lost in transcription.] The thing to notice is that what was a column in $A$ becomes a row in $A^T$.

Algorithms

Let us consider how to compute $y := A^T x + y$.

It would be possible to explicitly transpose matrix $A$ into a new matrix $B$ (using, for example, the transpose function you wrote in Week 3) and to then compute $y := Bx + y$. This approach has at least two drawbacks:

- You will need space for the matrix $B$. Computational scientists tend to push the limits of available memory, and hence are always hesitant to use large amounts of space that aren't absolutely necessary.
- Transposing $A$ into $B$ takes time. A matrix-vector multiplication requires $2mn$ flops. Transposing a matrix requires $2mn$ memops ($mn$ reads from memory and $mn$ writes to memory). Memory operations are very slow relative to floating point operations, so you will spend all your time transposing the matrix.

Now, the motivation for this unit suggests that we can simply use columns of $A$ for the dot products in the dot-product based algorithm for $y := Ax + y$. This suggests the algorithm, in FLAME notation, in Figure 4.3 (left). Alternatively, one can exploit the
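The key point, sketched below in plain Python (our own illustrative functions): $y := A^T x + y$ can be computed without ever forming $A^T$, by reading $A$ by columns (dot products) or by rows (AXPYs).

```python
import numpy as np

def mvmult_t_unb_var1(A, x, y):
    """y := A^T x + y via dot products with the *columns* of A.
    Column j of A is row j of A^T, so no explicit transpose is needed."""
    y = y.copy()
    for j in range(A.shape[1]):
        y[j] += A[:, j] @ x
    return y

def mvmult_t_unb_var2(A, x, y):
    """y := A^T x + y via AXPY operations with the *rows* of A.
    Row i of A is column i of A^T."""
    y = y.copy()
    for i in range(A.shape[0]):
        y += x[i] * A[i, :]
    return y
```

This avoids both drawbacks listed above: no workspace for a transposed copy, and no $2mn$ memops spent moving data before the real computation starts.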
fact that columns in $A$ become rows of $A^T$ to change the algorithm for computing $y := Ax + y$ that is based on AXPY operations into an algorithm for computing $y := A^T x + y$, as shown in Figure 4.3 (right).

Implementation

Homework 4.3.1.1 Implement the routines
  [ y_out ] = Mvmult_t_unb_var1( A, x, y ); and
  [ y_out ] = Mvmult_t_unb_var2( A, x, y )
that compute $y := A^T x + y$ via the algorithms in Figure 4.3.

Homework 4.3.1.2 Implementations achieve better performance (finish faster) if one accesses data consecutively in memory. Now, most scientific computing codes store matrices in "column-major order", which means that the first column of a matrix is stored consecutively in memory, then the second column, and so forth. This means that an algorithm that accesses a matrix by columns tends to be faster than an algorithm that accesses a matrix by rows. That, in turn, means that when one is presented with more than one algorithm, one should pick the algorithm that accesses the matrix by columns. Our FLAME notation makes it easy to recognize algorithms that access the matrix by columns.

- For the matrix-vector multiplication $y := Ax + y$, would you recommend the algorithm that uses dot products or the algorithm that uses AXPY operations?
- For the matrix-vector multiplication $y := A^T x + y$, would you recommend the algorithm that uses dot products or the algorithm that uses AXPY operations?

The point of this last exercise is to make you aware of the fact that knowing more than one algorithm can give you a performance edge. (Useful if you pay $30 million for a supercomputer and you want to get the most out of its use.)

4.3.2 Triangular Matrix-Vector Multiplication

View at edX

Motivation

Let $U \in \mathbb{R}^{n \times n}$ be an upper triangular matrix and $x \in \mathbb{R}^n$ be a vector. Consider

$Ux = \begin{pmatrix} U_{00} & u_{01} & U_{02} \\ u_{10}^T & \upsilon_{11} & u_{12}^T \\ U_{20} & u_{21} & U_{22} \end{pmatrix} \begin{pmatrix} x_0 \\ \chi_1 \\ x_2 \end{pmatrix},$

[numerical instance lost in transcription]
where the $\star$s indicate components of the result that aren't important in our discussion right now. We notice that $u_{10}^T = 0$ (a vector of two zeroes) and hence we need not compute with it.

Theory

If

$U = \begin{pmatrix} U_{TL} & U_{TR} \\ U_{BL} & U_{BR} \end{pmatrix} = \begin{pmatrix} U_{00} & u_{01} & U_{02} \\ u_{10}^T & \upsilon_{11} & u_{12}^T \\ U_{20} & u_{21} & U_{22} \end{pmatrix},$

where $U_{TL}$ and $U_{00}$ are square matrices, then

- $U_{BL} = 0$, $u_{10}^T = 0$, $U_{20} = 0$, and $u_{21} = 0$, where 0 indicates a matrix or vector of the appropriate dimensions; and
- $U_{TL}$ and $U_{BR}$ are upper triangular matrices.

We will just state this as "intuitively obvious".

Similarly, if

$L = \begin{pmatrix} L_{TL} & L_{TR} \\ L_{BL} & L_{BR} \end{pmatrix} = \begin{pmatrix} L_{00} & l_{01} & L_{02} \\ l_{10}^T & \lambda_{11} & l_{12}^T \\ L_{20} & l_{21} & L_{22} \end{pmatrix},$

where $L_{TL}$ and $L_{00}$ are square matrices, then

- $L_{TR} = 0$, $l_{01} = 0$, $L_{02} = 0$, and $l_{12}^T = 0$, where 0 indicates a matrix or vector of the appropriate dimensions; and
- $L_{TL}$ and $L_{BR}$ are lower triangular matrices.

Algorithms

Let us start by focusing on $y := Ux + y$, where $U$ is upper triangular. The algorithms from the previous section can be restated as in Figure 4.4, replacing $A$ by $U$. Now, notice the parts in gray. Since $u_{10}^T = 0$ and $u_{21} = 0$, those computations need not be performed! Bingo, we have two algorithms that take advantage of the zeroes below the diagonal. We probably should explain the names of the routines:

TRMVP_UN_UNB_VAR1: Triangular matrix-vector multiply plus $y$, with upper triangular matrix that is not transposed, unblocked variant 1.

(Yes, a bit convoluted, but such is life.)

Homework 4.3.2.1 Write routines
  [ y_out ] = Trmvp_un_unb_var1( U, x, y ); and
  [ y_out ] = Trmvp_un_unb_var2( U, x, y )
that implement the algorithms in Figure 4.4 that compute $y := Ux + y$.

Homework 4.3.2.2 Modify the algorithms in Figure 4.5 so that they compute $y := Lx + y$, where $L$ is a lower triangular matrix. (Just strike out the parts that evaluate to zero. We suggest you do this homework in conjunction with the next one.)

Homework 4.3.2.3 Write the functions
  [ y_out ] = Trmvp_ln_unb_var1( L, x, y ); and
  [ y_out ] = Trmvp_ln_unb_var2( L, x, y )
that implement the algorithms for computing $y := Lx + y$ from Homework 4.3.2.2.
Algorithm: [y] := TRMVP_UN_UNB_VAR1(U, x, y)
  Partition U → ( U_TL  U_TR ), x → ( x_T ), y → ( y_T )
                ( U_BL  U_BR )      ( x_B )      ( y_B )
    where U_TL is 0 × 0 and x_T, y_T are 0 × 1
  while m(U_TL) < m(U) do
    Repartition
      U → ( U00 u01 U02 / u10^T υ11 u12^T / U20 u21 U22 ),
      x → ( x0 / χ1 / x2 ),  y → ( y0 / ψ1 / y2 )
    ψ1 := [u10^T x0 +] υ11 χ1 + u12^T x2 + ψ1     (the bracketed, gray part is skipped: u10^T = 0)
    Continue with the boundaries moved past υ11, χ1, and ψ1
  endwhile

Algorithm: [y] := TRMVP_UN_UNB_VAR2(U, x, y)
  Partition and repartition as in Variant 1
  while m(U_TL) < m(U) do
    y0 := χ1 u01 + y0
    ψ1 := χ1 υ11 + ψ1
    [y2 := χ1 u21 + y2]                            (the bracketed, gray part is skipped: u21 = 0)
    Continue with the boundaries moved past υ11, χ1, and ψ1
  endwhile

Figure 4.4: Algorithms for computing $y := Ux + y$, where $U$ is upper triangular.

Homework 4.3.2.4 Modify the algorithms in Figure 4.6 to compute $x := Ux$, where $U$ is an upper triangular matrix. You may not use $y$. You have to overwrite $x$ without using workspace. Hint: Think carefully about the order in which elements of $x$ are computed and overwritten. You may want to do this exercise hand-in-hand with the implementation in the next homework.

Homework 4.3.2.5 Write routines
  [ x_out ] = Trmv_un_unb_var1( U, x ); and
  [ x_out ] = Trmv_un_unb_var2( U, x )
that implement the algorithms for computing $x := Ux$ from Homework 4.3.2.4.

Homework 4.3.2.6 Modify the algorithms in Figure 4.7 to compute $x := Lx$, where $L$ is a lower triangular matrix. You may not use $y$. You have to overwrite $x$ without using workspace. Hint: Think carefully about the order in which elements of $x$ are computed and overwritten. This question is VERY tricky. You may want to do this exercise hand-in-hand with the implementation in the next homework.
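Striking out the gray parts of Figure 4.4 gives code that simply never touches the strictly lower triangle. A plain-Python sketch of both variants (our own functions, not the FlamePy routines):

```python
import numpy as np

def trmvp_un_unb_var1(U, x, y):
    """y := U x + y with U upper triangular, dot based.
    The u10^T x0 term of the general update is skipped: u10^T = 0."""
    n = U.shape[0]
    y = y.copy()
    for i in range(n):
        # psi1 := upsilon11 chi1 + u12^T x2 + psi1
        y[i] += U[i, i] * x[i] + U[i, i+1:] @ x[i+1:]
    return y

def trmvp_un_unb_var2(U, x, y):
    """Same operation, AXPY based.
    The chi1 u21 update is skipped: u21 = 0."""
    n = U.shape[0]
    y = y.copy()
    for j in range(n):
        y[:j] += x[j] * U[:j, j]   # y0  := chi1 u01 + y0
        y[j]  += x[j] * U[j, j]    # psi1 := chi1 upsilon11 + psi1
    return y
```

Because neither function ever reads below the diagonal, whatever happens to be stored there is irrelevant, which is exactly what makes one-triangle storage possible.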
Algorithm: [y] := TRMVP_LN_UNB_VAR1(L, x, y)
  Partition L → ( L_TL  L_TR ), x → ( x_T ), y → ( y_T )
                ( L_BL  L_BR )      ( x_B )      ( y_B )
    where L_TL is 0 × 0 and x_T, y_T are 0 × 1
  while m(L_TL) < m(L) do
    Repartition
      L → ( L00 l01 L02 / l10^T λ11 l12^T / L20 l21 L22 ),
      x → ( x0 / χ1 / x2 ),  y → ( y0 / ψ1 / y2 )
    ψ1 := l10^T x0 + λ11 χ1 + l12^T x2 + ψ1
    Continue with the boundaries moved past λ11, χ1, and ψ1
  endwhile

Algorithm: [y] := TRMVP_LN_UNB_VAR2(L, x, y)
  Partition and repartition as in Variant 1
  while m(L_TL) < m(L) do
    y0 := χ1 l01 + y0
    ψ1 := χ1 λ11 + ψ1
    y2 := χ1 l21 + y2
    Continue with the boundaries moved past λ11, χ1, and ψ1
  endwhile

Figure 4.5: Algorithms to be used in Homework 4.3.2.2.

Homework 4.3.2.7 Write routines
  [ x_out ] = Trmv_ln_unb_var1( L, x ); and
  [ x_out ] = Trmv_ln_unb_var2( L, x )
that implement the algorithms from Homework 4.3.2.6 for computing $x := Lx$.

Homework 4.3.2.8 Develop algorithms for computing $y := U^T x + y$ and $y := L^T x + y$, where $U$ and $L$ are respectively upper triangular and lower triangular. Do not explicitly transpose matrices $U$ and $L$. Write routines
  [ y_out ] = Trmvp_ut_unb_var1( U, x, y ); and
  [ y_out ] = Trmvp_ut_unb_var2( U, x, y );
  [ y_out ] = Trmvp_lt_unb_var1( L, x, y ); and
  [ y_out ] = Trmvp_lt_unb_var2( L, x, y )
that implement these algorithms.
Figure 4.6: Algorithms to be used in Homework 4.3.2.4. (The listings are those of Figure 4.4, repeated so that the appropriate parts can be struck out.)

Homework 4.3.2.9 Develop algorithms for computing $x := U^T x$ and $x := L^T x$, where $U$ and $L$ are respectively upper triangular and lower triangular. Do not explicitly transpose matrices $U$ and $L$. Write routines
  [ x_out ] = Trmv_ut_unb_var1( U, x ); and
  [ x_out ] = Trmv_ut_unb_var2( U, x );
  [ x_out ] = Trmv_lt_unb_var1( L, x ); and
  [ x_out ] = Trmv_lt_unb_var2( L, x )
that implement these algorithms.

Cost

Let us analyze the algorithms for computing $y := Ux + y$. (The analysis of all the other algorithms is very similar.)

For the dot-product based algorithm, the cost is in the update $\psi_1 := \upsilon_{11} \chi_1 + u_{12}^T x_2 + \psi_1$, which is typically computed in two steps:

- $\psi_1 := \upsilon_{11} \chi_1 + \psi_1$; followed by
- a dot product, $\psi_1 := u_{12}^T x_2 + \psi_1$.
Figure 4.7: Algorithms to be used in Homework 4.3.2.6. (The listings are those of Figure 4.5, repeated so that the appropriate parts can be struck out.)

Now, during the first iteration, $u_{12}^T$ and $x_2$ are of length $n-1$, so that that iteration requires $2n$ flops. During the $k$th iteration (starting with $k = 0$), $u_{12}^T$ and $x_2$ are of length $n - k - 1$, so that the cost of that iteration is $2(n-k)$ flops. Thus, if $A$ is an $n \times n$ matrix, then the total cost is given by

$\sum_{k=0}^{n-1} 2(n-k) = 2 \left( n + (n-1) + \cdots + 1 \right) = 2 \sum_{k=1}^{n} k = 2\,\frac{n(n+1)}{2} = n(n+1) \text{ flops}.$

Recall that we proved in the second week that $\sum_{i=1}^{n} i = \frac{n(n+1)}{2}$.

Homework 4.3.2.10 Compute the cost, in flops, of the algorithm for computing $y := Lx + y$ that uses AXPYs.
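The flop count $n(n+1)$ can be double-checked mechanically by summing the per-iteration costs (a quick sanity-check script, nothing more):

```python
# Flop count for the dot-based y := Ux + y: iteration k performs one
# multiply-add with the diagonal element (2 flops) and a dot product with
# vectors of length n-k-1 (2(n-k-1) flops), for 2(n-k) flops in total.
def trmvp_flops(n):
    return sum(2 + 2 * (n - k - 1) for k in range(n))

for n in (1, 4, 10):
    assert trmvp_flops(n) == n * (n + 1)
print([trmvp_flops(n) for n in (1, 4, 10)])  # [2, 20, 110]
```

Note that $n(n+1) \approx n^2$ flops is half the $2n^2$ flops of a general matrix-vector multiplication, which is exactly the saving from skipping the zero triangle.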
Homework 4.3.2.11 As hinted at before: Implementations achieve better performance (finish faster) if one accesses data consecutively in memory. Now, most scientific computing codes store matrices in "column-major order", which means that the first column of a matrix is stored consecutively in memory, then the second column, and so forth. This means that an algorithm that accesses a matrix by columns tends to be faster than an algorithm that accesses a matrix by rows. That, in turn, means that when one is presented with more than one algorithm, one should pick the algorithm that accesses the matrix by columns. Our FLAME notation makes it easy to recognize algorithms that access the matrix by columns. For example, in this unit, if the algorithm accesses submatrix $a_{01}$ or $a_{21}$, then it accesses columns. If it accesses submatrix $a_{10}^T$ or $a_{12}^T$, then it accesses the matrix by rows.

For each of these, which algorithm accesses the matrix by columns:

- For $y := Ux + y$, TRMVP_UN_UNB_VAR1 or TRMVP_UN_UNB_VAR2? Does the better algorithm use a dot or an AXPY?
- For $y := Lx + y$, TRMVP_LN_UNB_VAR1 or TRMVP_LN_UNB_VAR2? Does the better algorithm use a dot or an AXPY?
- For $y := U^T x + y$, TRMVP_UT_UNB_VAR1 or TRMVP_UT_UNB_VAR2? Does the better algorithm use a dot or an AXPY?
- For $y := L^T x + y$, TRMVP_LT_UNB_VAR1 or TRMVP_LT_UNB_VAR2? Does the better algorithm use a dot or an AXPY?

4.3.3 Symmetric Matrix-Vector Multiplication

View at edX

Motivation

Consider

$\begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix}.$

[Numerical instance lost in transcription.] Here we purposely chose the matrix on the right to be symmetric. We notice that $a_{10} = a_{01}$, $A_{20} = A_{02}^T$, and $a_{12} = a_{21}$. A moment of reflection will convince you that this is a general principle, when $A_{00}$ is square. Moreover, notice that $A_{00}$ and $A_{22}$ are then symmetric as well.

Theory

Consider

$A = \begin{pmatrix} A_{TL} & A_{TR} \\ A_{BL} & A_{BR} \end{pmatrix} = \begin{pmatrix} A_{00} & a_{01} & A_{02} \\ a_{10}^T & \alpha_{11} & a_{12}^T \\ A_{20} & a_{21} & A_{22} \end{pmatrix},$

where $A_{TL}$ and $A_{00}$ are square matrices. If $A$ is symmetric, then

- $A_{TL}$, $A_{BR}$, $A_{00}$, and $A_{22}$ are symmetric;
- $a_{10} = a_{01}$ and $a_{12} = a_{21}$; and
Algorithm: [y] := SYMV_U_UNB_VAR1(A, x, y)
  Partition A → ( A_TL  A_TR ), x → ( x_T ), y → ( y_T )
                ( A_BL  A_BR )      ( x_B )      ( y_B )
    where A_TL is 0 × 0 and x_T, y_T are 0 × 1
  while m(A_TL) < m(A) do
    Repartition
      A → ( A00 a01 A02 / a10^T α11 a12^T / A20 a21 A22 ),
      x → ( x0 / χ1 / x2 ),  y → ( y0 / ψ1 / y2 )
    ψ1 := a01^T x0 + α11 χ1 + a12^T x2 + ψ1      (a10^T, which is not stored, is replaced by a01^T)
    Continue with the boundaries moved past α11, χ1, and ψ1
  endwhile

Algorithm: [y] := SYMV_U_UNB_VAR2(A, x, y)
  Partition and repartition as in Variant 1
  while m(A_TL) < m(A) do
    y0 := χ1 a01 + y0
    ψ1 := χ1 α11 + ψ1
    y2 := χ1 (a12^T)^T + y2                       (a21, which is not stored, is replaced by (a12^T)^T)
    Continue with the boundaries moved past α11, χ1, and ψ1
  endwhile

Figure 4.8: Algorithms for computing $y := Ax + y$ where $A$ is symmetric, where only the upper triangular part of $A$ is stored.

- $A_{20} = A_{02}^T$.

We will just state this as "intuitively obvious".

Algorithms

Consider computing $y := Ax + y$ where $A$ is a symmetric matrix. Since the upper and lower triangular parts of a symmetric matrix are simply the transpose of each other, it is only necessary to store half the matrix: only the upper triangular part or only the lower triangular part. In Figure 4.8 we repeat the algorithms for matrix-vector multiplication from an earlier unit, and annotate them for the case where $A$ is symmetric and only stored in the upper triangle. The change is simple: $a_{10}$ and $a_{21}$ are not stored and thus

- For the left algorithm, the update $\psi_1 := a_{10}^T x_0 + \alpha_{11} \chi_1 + a_{12}^T x_2 + \psi_1$ must be changed to $\psi_1 := a_{01}^T x_0 + \alpha_{11} \chi_1 + a_{12}^T x_2 + \psi_1$.
- For the algorithm on the right, the update $y_2 := \chi_1 a_{21} + y_2$ must be changed to $y_2 := \chi_1 (a_{12}^T)^T + y_2$ (or, more precisely, $y_2 := \chi_1 (a_{12}^T)^T + y_2$, since $a_{12}^T$ is the label for part of a row).
Algorithm: [y] := SYMV_L_UNB_VAR1(A, x, y)
  Partition A → ( A_TL  A_TR ), x → ( x_T ), y → ( y_T )
                ( A_BL  A_BR )      ( x_B )      ( y_B )
    where A_TL is 0 × 0 and x_T, y_T are 0 × 1
  while m(A_TL) < m(A) do
    Repartition
      A → ( A00 a01 A02 / a10^T α11 a12^T / A20 a21 A22 ),
      x → ( x0 / χ1 / x2 ),  y → ( y0 / ψ1 / y2 )
    ψ1 := a10^T x0 + α11 χ1 + a12^T x2 + ψ1
    Continue with the boundaries moved past α11, χ1, and ψ1
  endwhile

Algorithm: [y] := SYMV_L_UNB_VAR2(A, x, y)
  Partition and repartition as in Variant 1
  while m(A_TL) < m(A) do
    y0 := χ1 a01 + y0
    ψ1 := χ1 α11 + ψ1
    y2 := χ1 a21 + y2
    Continue with the boundaries moved past α11, χ1, and ψ1
  endwhile

Figure 4.9: Algorithms for Homework 4.3.3.2.

Homework 4.3.3.1 Write routines
  [ y_out ] = Symv_u_unb_var1( A, x, y ); and
  [ y_out ] = Symv_u_unb_var2( A, x, y )
that implement the algorithms in Figure 4.8.

Homework 4.3.3.2 Modify the algorithms in Figure 4.9 to compute $y := Ax + y$, where $A$ is symmetric and stored in the lower triangular part of the matrix. You may want to do this in conjunction with the next exercise.

Homework 4.3.3.3 Write routines
  [ y_out ] = Symv_l_unb_var1( A, x, y ); and
  [ y_out ] = Symv_l_unb_var2( A, x, y )
that implement the algorithms from the previous homework.
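The substitutions for the upper-storage case are easy to see in code. The sketch below (our own illustration, not the FlamePy routines) reads only the diagonal and the entries above it, so whatever is stored in the lower triangle is never touched:

```python
import numpy as np

def symv_u_unb_var1(A, x, y):
    """y := A x + y, A symmetric with only its upper triangle stored,
    dot based: a10^T is replaced by a01^T (the column above the diagonal)."""
    n = A.shape[0]
    y = y.copy()
    for i in range(n):
        y[i] += A[:i, i] @ x[:i]       # a01^T x0
        y[i] += A[i, i] * x[i]         # alpha11 chi1
        y[i] += A[i, i+1:] @ x[i+1:]   # a12^T x2
    return y

def symv_u_unb_var2(A, x, y):
    """Same operation, AXPY based: a21 is replaced by (a12^T)^T
    (the row to the right of the diagonal, viewed as a column)."""
    n = A.shape[0]
    y = y.copy()
    for j in range(n):
        y[:j]   += x[j] * A[:j, j]     # y0  := chi1 a01 + y0
        y[j]    += x[j] * A[j, j]      # psi1 := chi1 alpha11 + psi1
        y[j+1:] += x[j] * A[j, j+1:]   # y2  := chi1 (a12^T)^T + y2
    return y
```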
Homework 4.3.3.4 Challenge question! As hinted at before: Implementations achieve better performance (finish faster) if one accesses data consecutively in memory. Now, most scientific computing codes store matrices in column-major order, which means that the first column of a matrix is stored consecutively in memory, then the second column, and so forth. This means that an algorithm that accesses a matrix by columns tends to be faster than an algorithm that accesses a matrix by rows. That, in turn, means that when one is presented with more than one algorithm, one should pick the algorithm that accesses the matrix by columns. Our FLAME notation makes it easy to recognize algorithms that access the matrix by columns.

The problem with the algorithms in this unit is that all of them access both part of a row AND part of a column. So, your challenge is to devise an algorithm for computing y := Ax + y, where A is symmetric and only stored in one half of the matrix, that only accesses parts of columns. We will call these variant 3. Then, write routines
[ y_out ] = Symv_u_unb_var3( A, x, y ); and
[ y_out ] = Symv_l_unb_var3( A, x, y )

Hint: Let's focus on the case where only the lower triangular part of A is stored. If A is symmetric, then A = L + L̃^T, where L is the lower triangular part of A and L̃ is the strictly lower triangular part of A.
- Identify an algorithm for y := Lx + y that accesses matrix A by columns.
- Identify an algorithm for y := L̃^T x + y that accesses matrix A by columns.
- You now have two loops that together compute y := Ax + y = (L + L̃^T)x + y = Lx + L̃^T x + y.
- Can you merge the loops into one loop?

4.4 Matrix-Matrix Multiplication (Product)

4.4.1 Motivation

View at edx

The first unit of the week, in which we discussed a simple model for predicting the weather, finished with the following exercise:
Given the table from Unit 4.1.1 that tells us how today's weather predicts tomorrow's, fill in the following table, which predicts the weather the day after tomorrow given the weather today:

Day after Tomorrow (rows: sunny, cloudy, rainy) as a function of Today (columns: sunny, cloudy, rainy) — entries to be filled in.

Now here is the hard part: Do so without using your knowledge about how to perform a matrix-matrix multiplication, since you won't learn about that until later this week.

The entries in the table turn out to be the entries in the transition matrix Q that was described just above the exercise:

  ( χ_s^(2) ; χ_c^(2) ; χ_r^(2) ) = Q ( χ_s^(0) ; χ_c^(0) ; χ_r^(0) ),  i.e.,  x^(2) = Q x^(0).

Now, those of you who remembered from, for example, some other course that

  x^(2) = P x^(1) = P ( P x^(0) ) = ( P P ) x^(0)

would recognize that Q = P P. And, if you then remembered how to perform a matrix-matrix multiplication (or you did P * P in Python), you would have deduced that

  Q = ( 0.30 0.25 0.25
        0.40 0.45 0.40
        0.30 0.30 0.35 ).
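The deduction Q = P P can be checked directly in Python. A sketch using NumPy rather than the course's laff routines, with P taken from the weather model of Unit 4.1.1:

```python
import numpy as np

# Transition matrix from Unit 4.1.1: column j holds the probabilities
# for tomorrow's weather given that today is in state j
# (states ordered sunny, cloudy, rainy).
P = np.array([[0.4, 0.3, 0.1],
              [0.4, 0.3, 0.6],
              [0.2, 0.4, 0.3]])

# The two-day transition matrix: x^(2) = P x^(1) = P (P x^(0)) = (P P) x^(0),
# so Q = P P.
Q = P @ P
print(Q)
```

Each column of Q sums to 1, as it must for a table of probabilities.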
These then become the entries in the table. If you knew all the above, well, GOOD FOR YOU! However, there are all kinds of issues that one really should discuss: How do you know such a matrix exists? Why is matrix-matrix multiplication defined this way? We answer that in the next few units.

4.4.2 From Composing Linear Transformations to Matrix-Matrix Multiplication

View at edx

Homework 4.4.2.1 Let L_A : R^k → R^m and L_B : R^n → R^k both be linear transformations and, for all x ∈ R^n, define the function L_C : R^n → R^m by L_C(x) = L_A(L_B(x)). L_C(x) is a linear transformation. Always/Sometimes/Never

Now, let linear transformations L_A, L_B, and L_C be represented by matrices A ∈ R^{m×k}, B ∈ R^{k×n}, and C ∈ R^{m×n}, respectively. (You know such matrices exist since L_A, L_B, and L_C are linear transformations.) Then Cx = L_C(x) = L_A(L_B(x)) = A(Bx).

The matrix-matrix multiplication (product) is defined as the matrix C such that, for all vectors x, Cx = A(Bx). The notation used to denote that matrix is C = A · B or, equivalently, C = AB. The operation AB is called a matrix-matrix multiplication or product.

If A is an m_A × n_A matrix, B is an m_B × n_B matrix, and C is an m_C × n_C matrix, then for C = AB to hold it must be the case that m_C = m_A, n_C = n_B, and n_A = m_B. Usually, the integers m and n are used for the sizes of C (C ∈ R^{m×n}) and k is used for the "other size": A ∈ R^{m×k} and B ∈ R^{k×n}.

Homework 4.4.2.2 Let A ∈ R^{m×n}. A^T A is well-defined. (By well-defined we mean that A^T A makes sense. In this particular case this means that the dimensions of A^T and A are such that A^T A can be computed.) Always/Sometimes/Never

Homework 4.4.2.3 Let A ∈ R^{m×n}. AA^T is well-defined. Always/Sometimes/Never

4.4.3 Computing the Matrix-Matrix Product

View at edx

The question now becomes how to compute C given matrices A and B. For this, we are going to use and abuse the unit basis vectors e_j. Consider the following. Let
- C ∈ R^{m×n}, A ∈ R^{m×k}, and B ∈ R^{k×n}; and
- C = AB; and
- L_C : R^n → R^m equal the linear transformation such that L_C(x) = Cx; and
- L_A : R^k → R^m equal the linear transformation such that L_A(x) = Ax; and
- L_B : R^n → R^k equal the linear transformation such that L_B(x) = Bx; and
- e_j denote the jth unit basis vector; and
- c_j denote the jth column of C; and
- b_j denote the jth column of B.

Then

  c_j = C e_j = L_C(e_j) = L_A(L_B(e_j)) = L_A(B e_j) = L_A(b_j) = A b_j.

From this we learn that: If C = AB then the jth column of C, c_j, equals A b_j, where b_j is the jth column of B.

Since by now you should be very comfortable with partitioning matrices by columns, we can summarize this as

  ( c_0 | c_1 | ... | c_{n-1} ) = C = AB = A ( b_0 | b_1 | ... | b_{n-1} ) = ( Ab_0 | Ab_1 | ... | Ab_{n-1} ).

Now, let's expose the elements of C, A, and B:

  C = ( γ_{0,0}   γ_{0,1}   ...  γ_{0,n-1}
        γ_{1,0}   γ_{1,1}   ...  γ_{1,n-1}
        ...
        γ_{m-1,0} γ_{m-1,1} ...  γ_{m-1,n-1} ),

  A = ( α_{0,0}   α_{0,1}   ...  α_{0,k-1}
        α_{1,0}   α_{1,1}   ...  α_{1,k-1}
        ...
        α_{m-1,0} α_{m-1,1} ...  α_{m-1,k-1} ),

  and B = ( β_{0,0}   β_{0,1}   ...  β_{0,n-1}
            β_{1,0}   β_{1,1}   ...  β_{1,n-1}
            ...
            β_{k-1,0} β_{k-1,1} ...  β_{k-1,n-1} ).

We are going to show that

  γ_{i,j} = Σ_{p=0}^{k-1} α_{i,p} β_{p,j},

which you may have learned in a high school algebra course.
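The observation that the jth column of C equals A b_j translates directly into code. A minimal NumPy sketch (the function name is made up for illustration; in practice one calls an optimized routine):

```python
import numpy as np

def gemm_by_columns(A, B):
    """Form C = A B one column at a time: column j of C equals A b_j,
    where b_j is column j of B (a matrix-vector product)."""
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must be conformal"
    C = np.zeros((m, n))
    for j in range(n):
        C[:, j] = A @ B[:, j]   # c_j := A b_j
    return C
```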
We reasoned that c_j = A b_j:

  ( γ_{0,j} ; γ_{1,j} ; ... ; γ_{i,j} ; ... ; γ_{m-1,j} )
  = ( α_{0,0}   α_{0,1}   ...  α_{0,k-1}
      α_{1,0}   α_{1,1}   ...  α_{1,k-1}
      ...
      α_{i,0}   α_{i,1}   ...  α_{i,k-1}
      ...
      α_{m-1,0} α_{m-1,1} ...  α_{m-1,k-1} ) ( β_{0,j} ; β_{1,j} ; ... ; β_{k-1,j} ).

Here we highlight the ith element of c_j, γ_{i,j}, and the ith row of A. We recall that the ith element of Ax equals the dot product of the ith row of A with the vector x. Thus, γ_{i,j} equals the dot product of the ith row of A with the vector b_j:

  γ_{i,j} = Σ_{p=0}^{k-1} α_{i,p} β_{p,j}.

Let A ∈ R^{m×k}, B ∈ R^{k×n}, and C ∈ R^{m×n}. Then the matrix-matrix multiplication (product) C = AB is computed by

  γ_{i,j} = Σ_{p=0}^{k-1} α_{i,p} β_{p,j} = α_{i,0}β_{0,j} + α_{i,1}β_{1,j} + ... + α_{i,k-1}β_{k-1,j}.

As a result of this definition Cx = (AB)x = A(Bx) and we can drop the parentheses, unless they are useful for clarity: Cx = ABx and C = AB.

Homework 4.4.3.1 Compute Q = P · P.

We emphasize that for matrix-matrix multiplication to be a legal operation, the row and column dimensions of the matrices must obey certain constraints. Whenever we talk about dimensions being conformal, we mean that the dimensions are such that the encountered matrix multiplications are valid operations.

Homework 4.4.3.2 For the given matrices A and B, compute AB and BA.

Homework 4.4.3.3 Let A ∈ R^{m×k} and B ∈ R^{k×n} and AB = BA. A and B are square matrices. Always/Sometimes/Never
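The element-wise formula γ_{i,j} = Σ_p α_{i,p} β_{p,j} likewise translates into a triple loop. A minimal sketch (illustrative only; real libraries implement this far more efficiently):

```python
import numpy as np

def gemm_by_dots(A, B):
    """Form C = A B entry by entry: gamma_{i,j} accumulates the dot
    product of row i of A with column j of B."""
    m, k = A.shape
    _, n = B.shape
    C = np.zeros((m, n))
    for i in range(m):
        for j in range(n):
            for p in range(k):
                # gamma_{i,j} += alpha_{i,p} * beta_{p,j}
                C[i, j] += A[i, p] * B[p, j]
    return C
```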
Homework 4.4.3.4 Let A ∈ R^{m×k} and B ∈ R^{k×n}. AB = BA. Always/Sometimes/Never

Homework 4.4.3.5 Let A, B ∈ R^{n×n}. AB = BA. Always/Sometimes/Never

Homework 4.4.3.6 A^2 is defined as AA. Similarly, for k > 0,

  A^k = AA···A  (k occurrences of A),

so that A^k = A A^{k-1}. Consistent with this, A^0 = I. A^k is well-defined only if A is a square matrix. True/False

Homework 4.4.3.7 Let A, B, C be matrices of appropriate sizes so that (AB)C is well defined. A(BC) is well defined. Always/Sometimes/Never

4.4.4 Special Shapes

View at edx

We now show that if one treats scalars, column vectors, and row vectors as special cases of matrices, then many (all?) operations we encountered previously become simply special cases of matrix-matrix multiplication. In the discussion below, consider C = AB where C ∈ R^{m×n}, A ∈ R^{m×k}, and B ∈ R^{k×n}.

m = n = k = 1 (scalar multiplication)

In this case, all three matrices are actually scalars:

  ( γ_{0,0} ) = ( α_{0,0} ) ( β_{0,0} ) = ( α_{0,0} β_{0,0} ),

so that matrix-matrix multiplication becomes scalar multiplication.

Homework 4.4.4.1 Let A = ( 4 ) and B = ( 3 ). Then AB = ____.

n = 1, k = 1 (SCAL: vector times scalar)
Now the matrices look like

  ( γ_{0,0} ; γ_{1,0} ; ... ; γ_{m-1,0} )
  = ( α_{0,0} ; α_{1,0} ; ... ; α_{m-1,0} ) ( β_{0,0} )
  = ( α_{0,0} β_{0,0} ; α_{1,0} β_{0,0} ; ... ; α_{m-1,0} β_{0,0} )
  = β_{0,0} ( α_{0,0} ; α_{1,0} ; ... ; α_{m-1,0} ).

In other words, C and A are (column) vectors, B is a scalar, and the matrix-matrix multiplication becomes the scaling of a vector.

Homework 4.4.4.2 For the given column vector A and scalar B, compute AB.

m = 1, k = 1 (SCAL: scalar times row vector)

Now the matrices look like

  ( γ_{0,0} γ_{0,1} ... γ_{0,n-1} ) = ( α_{0,0} ) ( β_{0,0} β_{0,1} ... β_{0,n-1} ) = ( α_{0,0}β_{0,0} α_{0,0}β_{0,1} ... α_{0,0}β_{0,n-1} ).

In other words, C and B are just row vectors and A is a scalar. The row vector C is computed by scaling the row vector B by the scalar A.

Homework 4.4.4.3 For the given scalar A and row vector B, compute AB.

m = 1, n = 1 (DOT: dot product with row and column)

The matrices look like

  ( γ_{0,0} ) = ( α_{0,0} α_{0,1} ... α_{0,k-1} ) ( β_{0,0} ; β_{1,0} ; ... ; β_{k-1,0} ) = ( Σ_{p=0}^{k-1} α_{0,p} β_{p,0} ).

In other words, C is a scalar that is computed by taking the dot product of the one row that is A and the one column that is B.

Homework 4.4.4.4 For the given row vector A and column vector B, compute AB.
k = 1 (outer product)

Now the matrices look like

  ( γ_{0,0}   γ_{0,1}   ...  γ_{0,n-1}
    γ_{1,0}   γ_{1,1}   ...  γ_{1,n-1}
    ...
    γ_{m-1,0} γ_{m-1,1} ...  γ_{m-1,n-1} )
  = ( α_{0,0} ; α_{1,0} ; ... ; α_{m-1,0} ) ( β_{0,0} β_{0,1} ... β_{0,n-1} )
  = ( α_{0,0}β_{0,0}   α_{0,0}β_{0,1}   ...  α_{0,0}β_{0,n-1}
      α_{1,0}β_{0,0}   α_{1,0}β_{0,1}   ...  α_{1,0}β_{0,n-1}
      ...
      α_{m-1,0}β_{0,0} α_{m-1,0}β_{0,1} ...  α_{m-1,0}β_{0,n-1} ).

In other words, C is the outer product of the column vector A and the row vector B.

Homework 4.4.4.5 For the given column vector A and row vector B, compute AB.
Homework 4.4.4.6 Let a and b be the given vectors and C = ab^T. Partition C by columns and by rows:

  C = ( c_0 | c_1 | c_2 )  and  C = ( c̃_0^T ; c̃_1^T ; c̃_2^T ).

Then answer True/False for each of the statements asserting that

- each column c_j equals β_j a (the vector a scaled by the jth element of b), so that C = ( β_0 a | β_1 a | β_2 a ); and
- each row c̃_i^T equals α_i b^T (the row vector b^T scaled by the ith element of a).

Homework 4.4.4.7 Fill in the boxes:
Homework 4.4.4.8 Fill in the boxes:

n = 1 (matrix-vector product)

Now the matrices look like

  ( γ_{0,0} ; γ_{1,0} ; ... ; γ_{m-1,0} )
  = ( α_{0,0}   α_{0,1}   ...  α_{0,k-1}
      α_{1,0}   α_{1,1}   ...  α_{1,k-1}
      ...
      α_{m-1,0} α_{m-1,1} ...  α_{m-1,k-1} ) ( β_{0,0} ; β_{1,0} ; ... ; β_{k-1,0} ).

We have studied this special case in great detail. To emphasize how it relates to how matrix-matrix multiplication is computed, note that the ith element of C, γ_{i,0}, equals the dot product of the ith row of A with the one column that is B.

m = 1 (row vector-matrix product)

Now the matrices look like

  ( γ_{0,0} γ_{0,1} ... γ_{0,n-1} )
  = ( α_{0,0} α_{0,1} ... α_{0,k-1} ) ( β_{0,0}   β_{0,1}   ...  β_{0,n-1}
                                        β_{1,0}   β_{1,1}   ...  β_{1,n-1}
                                        ...
                                        β_{k-1,0} β_{k-1,1} ...  β_{k-1,n-1} )
so that γ_{0,j} = Σ_{p=0}^{k-1} α_{0,p} β_{p,j}. To emphasize how this relates to how matrix-matrix multiplication is computed, note that γ_{0,j} equals the dot product of the one row that is A with the jth column of B.

Homework 4.4.4.9 For the given row vector A and matrix B, compute AB.

Homework 4.4.4.10 Let e_i ∈ R^m equal the ith unit basis vector and A ∈ R^{m×n}. Then e_i^T A = ǎ_i^T, the ith row of A. Always/Sometimes/Never

Homework 4.4.4.11 Get as much practice as you want with the MATLAB script in LAFF-2.0xM/Programming/Week04/PracticeGemm.

If you understand how to perform a matrix-matrix multiplication, then you know how to perform all other operations with matrices and vectors that we have encountered so far.

4.4.5 Cost

View at edx

Consider the matrix-matrix multiplication C = AB where C ∈ R^{m×n}, A ∈ R^{m×k}, and B ∈ R^{k×n}. Let us examine what the cost of this operation is:

- We argued that, by definition, the jth column of C, c_j, is computed by the matrix-vector multiplication Ab_j, where b_j is the jth column of B.
- Last week we learned that a matrix-vector multiplication of an m × k matrix times a vector of size k requires 2mk floating point operations (flops).
- C has n columns (since it is an m × n matrix).

Putting all these observations together yields a cost of n × (2mk) = 2mnk flops.
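The 2mnk count can be confirmed by brute force: instrument the triple-loop multiplication and count one multiply and one add per innermost update. A small sketch (function name made up for illustration):

```python
def count_gemm_flops(m, n, k):
    """Count the flops a straightforward C = A B performs:
    each of the m*n entries of C is a dot product of length k,
    costing k multiplies and k adds."""
    flops = 0
    for i in range(m):
        for j in range(n):
            for p in range(k):
                flops += 2   # one multiply, one add per update
    return flops

print(count_gemm_flops(3, 4, 5))   # equals 2 * 3 * 4 * 5
```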
Try this! Recall that the dot product of two vectors of size k requires (approximately) 2k flops. We learned in the previous units that if C = AB then γ_{i,j} equals the dot product of the ith row of A and the jth column of B. Use this to give an alternative justification that a matrix-matrix multiplication requires 2mnk flops.

4.5 Enrichment

4.5.1 Markov Chains: Their Application

Matrices have many real world applications. As we have seen this week, one noteworthy use is connected to Markov chains. There are many, many examples of the use of Markov chains. You can find a brief look at some significant applications in "The Five Greatest Applications of Markov Chains" by Philipp von Hilgers and Amy N. Langville.

4.6 Wrap Up

4.6.1 Homework

Homework 4.6.1.1 Let A ∈ R^{m×n} and x ∈ R^n. Then (Ax)^T = x^T A^T. Always/Sometimes/Never

Homework 4.6.1.2 Our laff library has a routine laff.gemv( trans, alpha, A, x, beta, y ) that has the following property:
- laff.gemv( 'No transpose', alpha, A, x, beta, y ) computes y := αAx + βy.
- laff.gemv( 'Transpose', alpha, A, x, beta, y ) computes y := αA^T x + βy.
The routine works regardless of whether x and/or y are column and/or row vectors.

Our library does NOT include a routine to compute y^T := x^T A. What call could you use to compute y^T := x^T A if y^T is stored in yt and x^T in xt?
- laff.gemv( 'No transpose', 1.0, A, xt, 0.0, yt )
- laff.gemv( 'No transpose', 1.0, A, xt, 1.0, yt )
- laff.gemv( 'Transpose', 1.0, A, xt, 1.0, yt )
- laff.gemv( 'Transpose', 1.0, A, xt, 0.0, yt )

Homework 4.6.1.3 Let A be the given matrix. Compute A^2 and A^3. For k > 1, A^k = ____.
Homework 4.6.1.4 Let A be the given matrix. Compute A^2 and A^3. For n ≥ 0, A^{2n} = ____. For n ≥ 0, A^{2n+1} = ____.

Homework 4.6.1.5 Let A be the given matrix. Compute A^2 and A^3. For n ≥ 0, A^{4n} = ____. For n ≥ 0, A^{4n+1} = ____.

Homework 4.6.1.6 Let A be a square matrix. If AA = 0 (the zero matrix), then A is a zero matrix. (AA is often written as A^2.) True/False

Homework 4.6.1.7 There exists a real valued matrix A such that A^2 = −I. (Recall: I is the identity matrix.) True/False

Homework 4.6.1.8 There exists a matrix A that is not diagonal such that A^2 = I. True/False

4.6.2 Summary

Partitioned matrix-vector multiplication:

  ( A_{0,0}   A_{0,1}   ...  A_{0,N-1}
    A_{1,0}   A_{1,1}   ...  A_{1,N-1}
    ...
    A_{M-1,0} A_{M-1,1} ...  A_{M-1,N-1} ) ( x_0 ; x_1 ; ... ; x_{N-1} )
  = ( A_{0,0} x_0 + A_{0,1} x_1 + ... + A_{0,N-1} x_{N-1}
      A_{1,0} x_0 + A_{1,1} x_1 + ... + A_{1,N-1} x_{N-1}
      ...
      A_{M-1,0} x_0 + A_{M-1,1} x_1 + ... + A_{M-1,N-1} x_{N-1} ).

Transposing a partitioned matrix:

  ( A_{0,0}   A_{0,1}   ...  A_{0,N-1}
    A_{1,0}   A_{1,1}   ...  A_{1,N-1}
    ...
    A_{M-1,0} A_{M-1,1} ...  A_{M-1,N-1} )^T
  = ( A_{0,0}^T   A_{1,0}^T   ...  A_{M-1,0}^T
      A_{0,1}^T   A_{1,1}^T   ...  A_{M-1,1}^T
      ...
      A_{0,N-1}^T A_{1,N-1}^T ...  A_{M-1,N-1}^T ).
Composing linear transformations: Let L_A : R^k → R^m and L_B : R^n → R^k both be linear transformations and, for all x ∈ R^n, define the function L_C : R^n → R^m by L_C(x) = L_A(L_B(x)). Then L_C is a linear transformation.

Matrix-matrix multiplication:

  AB = A ( b_0 | b_1 | ... | b_{n-1} ) = ( Ab_0 | Ab_1 | ... | Ab_{n-1} ).

If C ∈ R^{m×n} has entries γ_{i,j}, A ∈ R^{m×k} has entries α_{i,p}, and B ∈ R^{k×n} has entries β_{p,j}, then C = AB means that

  γ_{i,j} = Σ_{p=0}^{k-1} α_{i,p} β_{p,j}.

A table of matrix-matrix multiplications with matrices of special shape is given at the end of this week.

Outer product: Let x ∈ R^m and y ∈ R^n. Then the outer product of x and y is given by xy^T. Notice that this yields an m × n matrix:

  xy^T = ( χ_0 ; χ_1 ; ... ; χ_{m-1} ) ( ψ_0 ψ_1 ... ψ_{n-1} )
       = ( χ_0 ψ_0     χ_0 ψ_1     ...  χ_0 ψ_{n-1}
           χ_1 ψ_0     χ_1 ψ_1     ...  χ_1 ψ_{n-1}
           ...
           χ_{m-1} ψ_0 χ_{m-1} ψ_1 ...  χ_{m-1} ψ_{n-1} ).
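The outer product summary above can be illustrated with NumPy, whose np.outer computes xy^T directly:

```python
import numpy as np

# Outer product x y^T of x in R^m and y in R^n yields an m x n matrix
# whose (i, j) entry is chi_i * psi_j.
x = np.array([1.0, 2.0, 3.0])   # m = 3
y = np.array([4.0, 5.0])        # n = 2
C = np.outer(x, y)              # same as x.reshape(-1, 1) @ y.reshape(1, -1)
print(C)
```

Note that every column of C is a multiple of x and every row is a multiple of y^T, which is why an outer product always has rank (at most) one.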
Table of special shapes (for C = AB with C ∈ R^{m×n}, A ∈ R^{m×k}, B ∈ R^{k×n}):

  m = 1, n = 1, k = 1 : (1 × 1) = (1 × 1)(1 × 1) — scalar multiplication.
  n = 1, k = 1        : (m × 1) = (m × 1)(1 × 1) — vector times scalar (scalar times vector).
  m = 1, k = 1        : (1 × n) = (1 × 1)(1 × n) — scalar times row vector.
  m = 1, n = 1        : (1 × 1) = (1 × k)(k × 1) — dot product with row and column.
  k = 1               : (m × n) = (m × 1)(1 × n) — outer product.
  n = 1               : (m × 1) = (m × k)(k × 1) — matrix-vector multiplication.
  m = 1               : (1 × n) = (1 × k)(k × n) — row vector times matrix multiply.
More informationReview of Basic Concepts in Linear Algebra
Review of Basic Concepts in Linear Algebra Grady B Wright Department of Mathematics Boise State University September 7, 2017 Math 565 Linear Algebra Review September 7, 2017 1 / 40 Numerical Linear Algebra
More informationPhys 201. Matrices and Determinants
Phys 201 Matrices and Determinants 1 1.1 Matrices 1.2 Operations of matrices 1.3 Types of matrices 1.4 Properties of matrices 1.5 Determinants 1.6 Inverse of a 3 3 matrix 2 1.1 Matrices A 2 3 7 =! " 1
More informationNext topics: Solving systems of linear equations
Next topics: Solving systems of linear equations 1 Gaussian elimination (today) 2 Gaussian elimination with partial pivoting (Week 9) 3 The method of LU-decomposition (Week 10) 4 Iterative techniques:
More information15-780: LinearProgramming
15-780: LinearProgramming J. Zico Kolter February 1-3, 2016 1 Outline Introduction Some linear algebra review Linear programming Simplex algorithm Duality and dual simplex 2 Outline Introduction Some linear
More information1 Matrices and matrix algebra
1 Matrices and matrix algebra 1.1 Examples of matrices A matrix is a rectangular array of numbers and/or variables. For instance 4 2 0 3 1 A = 5 1.2 0.7 x 3 π 3 4 6 27 is a matrix with 3 rows and 5 columns
More informationLinear Algebra. Linear Equations and Matrices. Copyright 2005, W.R. Winfrey
Copyright 2005, W.R. Winfrey Topics Preliminaries Systems of Linear Equations Matrices Algebraic Properties of Matrix Operations Special Types of Matrices and Partitioned Matrices Matrix Transformations
More informationSection 29: What s an Inverse?
Section 29: What s an Inverse? Our investigations in the last section showed that all of the matrix operations had an identity element. The identity element for addition is, for obvious reasons, called
More informationMatrix Algebra 2.1 MATRIX OPERATIONS Pearson Education, Inc.
2 Matrix Algebra 2.1 MATRIX OPERATIONS MATRIX OPERATIONS m n If A is an matrixthat is, a matrix with m rows and n columnsthen the scalar entry in the ith row and jth column of A is denoted by a ij and
More informationREPRESENTATIONS FOR THE THEORY AND PRACTICE OF HIGH-PERFORMANCE DENSE LINEAR ALGEBRA ALGORITHMS. DRAFT October 23, 2007
REPRESENTATIONS FOR THE THEORY AND PRACTICE OF HIGH-PERFORMANCE DENSE LINEAR ALGEBRA ALGORITHMS ROBERT A VAN DE GEIJN DRAFT October 23, 2007 Abstract The Cholesky factorization operation is used to demonstrate
More informationLinear Algebra: A Constructive Approach
Chapter 2 Linear Algebra: A Constructive Approach In Section 14 we sketched a geometric interpretation of the simplex method In this chapter, we describe the basis of an algebraic interpretation that allows
More informationLinear Equations and Matrix
1/60 Chia-Ping Chen Professor Department of Computer Science and Engineering National Sun Yat-sen University Linear Algebra Gaussian Elimination 2/60 Alpha Go Linear algebra begins with a system of linear
More informationELEMENTARY LINEAR ALGEBRA
ELEMENTARY LINEAR ALGEBRA K R MATTHEWS DEPARTMENT OF MATHEMATICS UNIVERSITY OF QUEENSLAND First Printing, 99 Chapter LINEAR EQUATIONS Introduction to linear equations A linear equation in n unknowns x,
More information1 Matrices and Systems of Linear Equations. a 1n a 2n
March 31, 2013 16-1 16. Systems of Linear Equations 1 Matrices and Systems of Linear Equations An m n matrix is an array A = (a ij ) of the form a 11 a 21 a m1 a 1n a 2n... a mn where each a ij is a real
More informationComputing Neural Network Gradients
Computing Neural Network Gradients Kevin Clark 1 Introduction The purpose of these notes is to demonstrate how to quickly compute neural network gradients in a completely vectorized way. It is complementary
More informationMatrices. Chapter Definitions and Notations
Chapter 3 Matrices 3. Definitions and Notations Matrices are yet another mathematical object. Learning about matrices means learning what they are, how they are represented, the types of operations which
More informationMatrices. Chapter What is a Matrix? We review the basic matrix operations. An array of numbers a a 1n A = a m1...
Chapter Matrices We review the basic matrix operations What is a Matrix? An array of numbers a a n A = a m a mn with m rows and n columns is a m n matrix Element a ij in located in position (i, j The elements
More informationn n
Question 1: How is information organized in a matrix? When we use a matrix to organize data, all of the definitions we developed in Chapter 2 still apply. You ll recall that a matrix can have any number
More informationLinear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains
Linear Algebra: Linear Systems and Matrices - Quadratic Forms and Deniteness - Eigenvalues and Markov Chains Joshua Wilde, revised by Isabel Tecu, Takeshi Suzuki and María José Boccardi August 3, 3 Systems
More information[ Here 21 is the dot product of (3, 1, 2, 5) with (2, 3, 1, 2), and 31 is the dot product of
. Matrices A matrix is any rectangular array of numbers. For example 3 5 6 4 8 3 3 is 3 4 matrix, i.e. a rectangular array of numbers with three rows four columns. We usually use capital letters for matrices,
More informationa11 a A = : a 21 a 22
Matrices The study of linear systems is facilitated by introducing matrices. Matrix theory provides a convenient language and notation to express many of the ideas concisely, and complicated formulas are
More informationBasic matrix operations
Roberto s Notes on Linear Algebra Chapter 4: Matrix algebra Section 3 Basic matrix operations What you need to know already: What a matrix is. he basic special types of matrices What you can learn here:
More informationMath 291-2: Lecture Notes Northwestern University, Winter 2016
Math 291-2: Lecture Notes Northwestern University, Winter 2016 Written by Santiago Cañez These are lecture notes for Math 291-2, the second quarter of MENU: Intensive Linear Algebra and Multivariable Calculus,
More informationNumerical Methods I Solving Square Linear Systems: GEM and LU factorization
Numerical Methods I Solving Square Linear Systems: GEM and LU factorization Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 September 18th,
More informationIntroduction - Motivation. Many phenomena (physical, chemical, biological, etc.) are model by differential equations. f f(x + h) f(x) (x) = lim
Introduction - Motivation Many phenomena (physical, chemical, biological, etc.) are model by differential equations. Recall the definition of the derivative of f(x) f f(x + h) f(x) (x) = lim. h 0 h Its
More informationLecture 2: Linear regression
Lecture 2: Linear regression Roger Grosse 1 Introduction Let s ump right in and look at our first machine learning algorithm, linear regression. In regression, we are interested in predicting a scalar-valued
More informationn n matrices The system of m linear equations in n variables x 1, x 2,..., x n can be written as a matrix equation by Ax = b, or in full
n n matrices Matrices Definitions Diagonal, Identity, and zero matrices Addition Multiplication Transpose and inverse The system of m linear equations in n variables x 1, x 2,..., x n a 11 x 1 + a 12 x
More informationM. Matrices and Linear Algebra
M. Matrices and Linear Algebra. Matrix algebra. In section D we calculated the determinants of square arrays of numbers. Such arrays are important in mathematics and its applications; they are called matrices.
More informationModern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur
Modern Algebra Prof. Manindra Agrawal Department of Computer Science and Engineering Indian Institute of Technology, Kanpur Lecture 02 Groups: Subgroups and homomorphism (Refer Slide Time: 00:13) We looked
More informationMathematics for Graphics and Vision
Mathematics for Graphics and Vision Steven Mills March 3, 06 Contents Introduction 5 Scalars 6. Visualising Scalars........................ 6. Operations on Scalars...................... 6.3 A Note on
More informationMatrices MA1S1. Tristan McLoughlin. November 9, Anton & Rorres: Ch
Matrices MA1S1 Tristan McLoughlin November 9, 2014 Anton & Rorres: Ch 1.3-1.8 Basic matrix notation We have studied matrices as a tool for solving systems of linear equations but now we want to study them
More informationNotes on Solving Linear Least-Squares Problems
Notes on Solving Linear Least-Squares Problems Robert A. van de Geijn The University of Texas at Austin Austin, TX 7871 October 1, 14 NOTE: I have not thoroughly proof-read these notes!!! 1 Motivation
More informationGenerating Function Notes , Fall 2005, Prof. Peter Shor
Counting Change Generating Function Notes 80, Fall 00, Prof Peter Shor In this lecture, I m going to talk about generating functions We ve already seen an example of generating functions Recall when we
More information