Notes on vectors and matrices

EE103 Winter Quarter 2001-02
L. Vandenberghe

1  Terminology and notation

Matrices, vectors, and scalars

A matrix is a rectangular array of numbers (also called scalars), written between brackets, as in

    A = [ 0    1   -2.3   0.1 ]
        [ 1.3  4   -0.1   0   ]
        [ 4.1 -1    0     1.7 ]

An important attribute of a matrix is its size or dimensions, i.e., the numbers of rows and columns. The matrix A above, for example, has 3 rows and 4 columns, so its size is 3 × 4. (Size is always given as rows × columns.)

The entries or coefficients or elements of a matrix are the values in the array. The i, j entry is the value in the ith row and jth column, denoted by double subscripts: the i, j entry of a matrix C is denoted C_ij (which is a number). The positive integers i and j are called the (row and column, respectively) indices. For our example above, A_13 = -2.3 and A_32 = -1. The row index of the bottom left entry (which has value 4.1) is 3; its column index is 1.

A matrix with only one column, i.e., with size n × 1, is called a column vector or just a vector. Sometimes the size is specified by calling it an n-vector. The entries of a vector are denoted with just one subscript (since the other is 1), as in a_3. The entries are sometimes called the components of the vector. As an example,

    v = [ 1   ]
        [ -2  ]
        [ 3.3 ]
        [ 0.3 ]

is a 4-vector (or 4 × 1 matrix); its third component is v_3 = 3.3. We sometimes write a vector by listing its components between parentheses and separated by commas, as in v = (1, -2, 3.3, 0.3). It is important to keep in mind that this denotes the same column vector as defined above.

(Based on the Matrix Primer by S. Boyd, Stanford University.)
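These definitions are easy to mirror in code. Below is a minimal sketch in plain Python (the list-of-rows representation and the helper names `size` and `entry` are our own, not part of the notes); note that Python indexes from 0, while these notes index from 1.

```python
# Represent a matrix as a list of rows; each row is a list of numbers.
A = [[0.0, 1.0, -2.3, 0.1],
     [1.3, 4.0, -0.1, 0.0],
     [4.1, -1.0, 0.0, 1.7]]

def size(M):
    """Return (rows, columns) of a matrix stored as a list of rows."""
    return (len(M), len(M[0]))

def entry(M, i, j):
    """The i, j entry, using the 1-based convention of the notes."""
    return M[i - 1][j - 1]

print(size(A))         # (3, 4)
print(entry(A, 1, 3))  # A_13 = -2.3
print(entry(A, 3, 2))  # A_32 = -1.0
```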
A matrix with only one row, i.e., with size 1 × n, is sometimes called a row vector. As an example,

    w = [ -2.1  -3  0 ]

is a row vector (or 1 × 3 matrix); its third component is w_3 = 0.

Sometimes a 1 × 1 matrix is considered to be the same as a scalar, i.e., a number.

Two matrices are equal if they are the same size and all the corresponding entries (which are numbers) are equal.

The set of all m × n matrices is denoted R^{m×n}, and the set of all n-vectors is denoted R^n. In other words, A ∈ R^{m×n} means the same as "A is a matrix with size m × n", and x ∈ R^n means "x is an n-vector".

Notational conventions

Some authors try to use notation that helps the reader distinguish between matrices, vectors, and scalars (numbers). For example, Greek letters (α, β, ...) might be used for numbers, lower-case letters (a, x, f, ...) for vectors, and capital letters (A, F, ...) for matrices. Other notational conventions include matrices given in bold font (G), or vectors written with arrows above them. Unfortunately, there are about as many notational conventions as authors, so you should be prepared to figure out what things are (i.e., scalars, vectors, matrices) despite the author's notational scheme (if any exists).

Zero and identity matrices

The zero matrix (of size m × n) is the matrix with all entries equal to zero. Sometimes the zero matrix is written as 0_{m×n}, where the subscript denotes the size. But often, a zero matrix is denoted just 0, the same symbol used to denote the number 0. In this case you'll have to figure out the size of the zero matrix from the context. (More on this later.)

Note that zero matrices of different sizes are different matrices, even though we use the same symbol to denote them (i.e., 0). In programming we call this overloading: we say the symbol 0 is overloaded because it can mean different things depending on its context (i.e., the equation it appears in). When a zero matrix is a (row or column) vector, we call it a zero (row or column) vector.

An identity matrix is another common matrix. It is always square, i.e., has
the same number of rows as columns. Its diagonal entries, i.e., those with equal row and column index, are all equal to one, and its off-diagonal entries (those with unequal row and column indices) are zero. Identity matrices are denoted by the letter I. Sometimes a subscript denotes the size, as in I_4 or I_{2×2}. But more often the size must be determined from context (just like zero matrices). Formally, the identity matrix of size n is defined by

    I_ij = 1 if i = j,
           0 if i ≠ j.
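As an illustration (the helper names here are hypothetical, not part of the notes), zero and identity matrices and the equality test above can be written out directly:

```python
def zeros(m, n):
    """The m x n zero matrix: all entries equal to zero."""
    return [[0.0] * n for _ in range(m)]

def identity(n):
    """The n x n identity: I_ij = 1 if i = j, and 0 otherwise."""
    return [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

def equal(A, B):
    """Matrices are equal iff they have the same size and equal entries."""
    return len(A) == len(B) and all(ra == rb for ra, rb in zip(A, B))

print(identity(2))                      # [[1.0, 0.0], [0.0, 1.0]]
print(equal(zeros(2, 3), zeros(3, 2)))  # False: different sizes
```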
Perhaps more illuminating are the examples

    [ 1  0 ]    [ 1  0  0  0 ]
    [ 0  1 ],   [ 0  1  0  0 ]
                [ 0  0  1  0 ]
                [ 0  0  0  1 ]

which are the 2 × 2 and 4 × 4 identity matrices. (Remember that both are denoted with the same symbol I.) The importance of the identity matrix will become clear later.

2  Matrix operations

Matrices can be combined in various operations to form other matrices.

Matrix transpose

If A is an m × n matrix, its transpose, denoted A^T (or sometimes A'), is the n × m matrix given by (A^T)_ij = A_ji. In words, the rows and columns of A are transposed in A^T. For example,

    [ 0  4 ]T
    [ 7  0 ]   =  [ 0  7  3 ]
    [ 3  1 ]      [ 4  0  1 ]

If we transpose a matrix twice, we get back the original matrix: (A^T)^T = A.

Matrix addition

Two matrices of the same size can be added together, to form another matrix (of the same size), by adding the corresponding entries (which are numbers). Matrix addition is denoted by the symbol +. (Thus the symbol + is overloaded to mean scalar addition when scalars appear on its left and right hand sides, and matrix addition when matrices appear on its left and right hand sides.) For example,

    [ 0  4 ]   [ 1  2 ]   [ 1  6 ]
    [ 7  0 ] + [ 2  3 ] = [ 9  3 ]
    [ 3  1 ]   [ 0  4 ]   [ 3  5 ]

Note that (row or column) vectors of the same size can be added, but you cannot add together a row vector and a column vector (except if they are both scalars!). Matrix subtraction is similar. As an example,

    [ 1  6 ]       [ 0  6 ]
    [ 9  3 ] - I = [ 9  2 ]
Note that this gives an example where we have to figure out what size the identity matrix is. Since you can only add (or subtract) matrices of the same size, we conclude that I must refer to a 2 × 2 identity matrix.

Matrix addition is commutative, i.e., if A and B are matrices of the same size, then A + B = B + A. It's also associative, i.e., (A + B) + C = A + (B + C), so we write both as A + B + C. We always have A + 0 = 0 + A = A, i.e., adding the zero matrix to a matrix has no effect.

Scalar multiplication

Another operation is scalar multiplication: multiplying a matrix by a scalar (i.e., number), which is done by multiplying every entry of the matrix by the scalar. Scalar multiplication is usually denoted by juxtaposition, with the scalar on the left, as in

    (-2) [ 1  6 ]   [ -2   -12 ]
         [ 9  3 ] = [ -18   -6 ]
         [ 6  0 ]   [ -12    0 ]

Scalar multiplication obeys several laws you can figure out for yourself, e.g., if A is any matrix and α, β are any scalars, then (α + β)A = αA + βA. It's useful to identify the symbols appearing in this formula. The + symbol on the left is addition of scalars, while the + symbol on the right denotes matrix addition. Another simple property is (αβ)A = (α)(βA), where α and β are scalars and A is a matrix. On the left hand side we see scalar-scalar multiplication (αβ) and scalar-matrix multiplication; on the right we see two cases of scalar-matrix multiplication. Note that 0 · A = 0 (where the left-hand zero is the scalar zero, and the right-hand zero is a matrix zero of the same size as A).

Matrix multiplication

It's also possible to multiply two matrices using matrix multiplication. You can multiply two matrices A and B provided their dimensions are compatible, which means the number of columns of A equals the number of rows of B. Suppose A and B are compatible, i.e., A has size m × p and B has size p × n. The product matrix C = AB, which has size m × n, is defined by

    C_ij = sum_{k=1}^{p} A_ik B_kj = A_i1 B_1j + ... + A_ip B_pj,    i = 1,...,m,  j = 1,...,n.

This rule looks complicated, but there are several ways to remember it. To find the i, j entry of the product C = AB, you need
to know the ith row of A and the jth column of B. The summation above can be interpreted as moving left to right along the ith row of A while moving top to bottom down the jth column of B. As you go, you keep a running sum of the products of entries: one from A and one from B.
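The operations defined so far can be sketched directly from their entrywise definitions (plain Python with hypothetical helper names; a real program would use a library such as NumPy):

```python
def transpose(A):
    """(A^T)_ij = A_ji: rows and columns are interchanged."""
    return [list(col) for col in zip(*A)]

def add(A, B):
    """Entrywise sum of two matrices of the same size."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def matmul(A, B):
    """C = AB, with C_ij = A_i1 B_1j + ... + A_ip B_pj."""
    m, p, n = len(A), len(B), len(B[0])
    assert len(A[0]) == p, "dimensions must be compatible"
    C = [[0.0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for k in range(p):  # running sum along row i of A, column j of B
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[0, 4], [7, 0], [3, 1]]
print(transpose(A))                      # [[0, 7, 3], [4, 0, 1]]
print(add(A, [[1, 2], [2, 3], [0, 4]]))  # [[1, 6], [9, 3], [3, 5]]
print(matmul([[1, 6], [9, 3]], [[0, -1], [-1, 2]]))  # [[-6.0, 11.0], [-3.0, -3.0]]
```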
Now we can explain why the identity is called the identity: if A is any m × n matrix, then AI = A and IA = A, i.e., when you multiply a matrix by an identity matrix, it has no effect. (The identity matrices in the formulas AI = A and IA = A have different sizes. What are they?)

One very important fact about matrix multiplication is that it is (in general) not commutative: we don't (in general) have AB = BA. In fact, BA may not even make sense, or, if it makes sense, may be a different size than AB (so that equality in AB = BA is meaningless). For example, if A is 2 × 3 and B is 3 × 4, then AB makes sense (the dimensions are compatible) but BA doesn't even make sense (much less equal AB). Even when AB and BA both make sense and are the same size, i.e., when A and B are square, we don't (in general) have AB = BA. As a simple example, consider:

    [ 1  6 ] [  0  -1 ]   [ -6  11 ]       [  0  -1 ] [ 1  6 ]   [ -9  -3 ]
    [ 9  3 ] [ -1   2 ] = [ -3  -3 ],      [ -1   2 ] [ 9  3 ] = [ 17   0 ]

Matrix multiplication is associative, i.e., (AB)C = A(BC) (provided the products make sense). Therefore we write the product simply as ABC. Matrix multiplication is also associative with scalar multiplication, i.e., α(AB) = (αA)B, where α is a scalar and A and B are matrices (that can be multiplied). Matrix multiplication distributes across matrix addition: A(B + C) = AB + AC and (A + B)C = AC + BC.

Matrix-vector product

A very important and common case of matrix multiplication is y = Ax, where A is an m × n matrix, x is an n-vector, and y is an m-vector. We can think of matrix-vector multiplication (with an m × n matrix) as a function that transforms n-vectors into m-vectors. The formula is

    y_i = A_i1 x_1 + ... + A_in x_n,    i = 1,...,m.

Inner product

Another important special case of matrix multiplication occurs when v is a row vector and w is a column vector (of the same size). Then vw makes sense, and has size 1 × 1, i.e., is a scalar:

    vw = v_1 w_1 + ... + v_n w_n.

This occurs often in the form x^T y where x and y are both n-vectors. In this case the product (which is a number) is called the inner product or dot product of the vectors x and y. Other notation for the inner product
is <x, y> or x · y. The inner product of a vector with itself is the square of its (Euclidean) norm ||x||:

    ||x|| = sqrt(x_1^2 + x_2^2 + ... + x_n^2) = sqrt(x^T x).

Matrix powers

When a matrix A is square, then it makes sense to multiply A by itself, i.e., to form AA. We refer to this matrix as A^2. Similarly, k copies of A multiplied together is denoted A^k.
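The matrix-vector product, inner product, and norm can be sketched the same way (plain Python; the helper names are our own choice, not part of the notes):

```python
import math

def matvec(A, x):
    """y = Ax: y_i = A_i1 x_1 + ... + A_in x_n."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

def inner(x, y):
    """x^T y = x_1 y_1 + ... + x_n y_n (a scalar)."""
    return sum(a * b for a, b in zip(x, y))

def norm(x):
    """Euclidean norm: ||x|| = sqrt(x^T x)."""
    return math.sqrt(inner(x, x))

A = [[0, 1, -2.3, 0.1],
     [1.3, 4, -0.1, 0],
     [4.1, -1, 0, 1.7]]
x = [1, -2, 3.3, 0.3]
print(matvec(A, x))           # approximately [-9.56, -7.03, 6.61]
print(inner([3, 4], [3, 4]))  # 25
print(norm([3, 4]))           # 5.0
```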
(Non-integer powers, such as A^{1/2} (the matrix squareroot), are pretty tricky: they might not make sense, or be ambiguous, unless certain conditions on A hold. This is an advanced topic in linear algebra.) By convention we set A^0 = I.

Useful identities

We've already mentioned a handful of matrix identities that you could figure out yourself, e.g., A + 0 = A. Here we list a few others, that are not hard to derive, and quite useful.

- transpose of product: (AB)^T = B^T A^T
- transpose of sum: (A + B)^T = A^T + B^T
- products of powers: A^k A^l = A^{k+l} (for k, l ≥ 1 in general, and for all k, l if A is invertible)

Block matrices and submatrices

In some applications it's useful to form matrices whose entries are themselves matrices, as in

    [ A  B  C ],    [ F  I ]
                    [ 0  G ]

where A, B, C, F, and G are matrices (as are 0 and I). Such matrices are called block matrices; the entries A, B, etc. are called blocks and are sometimes named by indices. Thus, F is called the 1,1 block of the second matrix.

Of course the block matrices must have the right dimensions to be able to fit together: matrices in the same (block) row must have the same number of rows (i.e., the same "height"); matrices in the same (block) column must have the same number of columns (i.e., the same "width"). Thus in the examples above, A, B and C must have the same number of rows (e.g., they could be 2 × 3, 2 × 2, and 2 × 1). The second example is more interesting. Suppose that F is m × n. Then the identity matrix in the 1,2 block must have size m × m (since it must have the same number of rows as F). We also see that G must have m columns, say, dimensions p × m. That fixes the dimensions of the 0 matrix in the 2,1 block: it must be p × n.

You can also divide a larger matrix (or vector) into blocks. In this context the blocks are sometimes called submatrices of the big matrix. As a specific example, suppose that

    C = [ 2  2 ]    D = [ 0  2  3 ]
        [ 1  3 ],       [ 5  4  7 ]

Then we have

    [ D  C ] = [ 0  2  3  2  2 ]
               [ 5  4  7  1  3 ]
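Placing blocks side by side, as in the [D C] example, can be sketched with a hypothetical helper (the size check mirrors the "same height" rule above):

```python
def hstack(A, B):
    """Form the block matrix [A B] from blocks with equal numbers of rows."""
    assert len(A) == len(B), "blocks in the same block row need equal heights"
    return [ra + rb for ra, rb in zip(A, B)]

C = [[2, 2], [1, 3]]
D = [[0, 2, 3], [5, 4, 7]]
print(hstack(D, C))  # [[0, 2, 3, 2, 2], [5, 4, 7, 1, 3]]
```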
It's often useful to write an m × n matrix as a 1 × n block matrix of m-vectors (which are just its columns), or as an m × 1 block matrix of 1 × n row vectors (which are its rows).

Block matrices can be added and multiplied as if the entries were numbers, provided the corresponding entries have the right sizes (i.e., "conform") and you're careful about the order of multiplication. Thus we have

    [ A  B ] [ X ]   [ AX + BY ]
    [ C  D ] [ Y ] = [ CX + DY ]

provided the products AX, BY, CX, and DY make sense.

3  Cost of basic matrix operations

Complexity analysis via flop count

The cost of a numerical linear algebra algorithm is often expressed by giving the total number of floating-point operations or flops required to carry it out, as a function of various problem dimensions. We define a flop as one addition, subtraction, multiplication, or division of two floating-point numbers. (Some authors define a flop as one multiplication followed by one addition, so their flop counts are smaller by a factor up to two.) To evaluate the complexity of an algorithm, we count the total number of flops, express it as a function (usually a polynomial) of the dimensions of the matrices and vectors involved, and simplify the expression by ignoring all terms except the leading (i.e., highest order or dominant) terms.

As an example, suppose that a particular algorithm requires a total of

    m^3 + 3m^2 n + mn + 4mn^2 + 5m + 22

flops, where m and n are problem dimensions. We would normally simplify this flop count to

    m^3 + 3m^2 n + 4mn^2

flops, since these are the leading terms in the problem dimensions m and n. If in addition we assumed that m ≪ n, we would further simplify the flop count to 4mn^2.

Flop counts were originally popularized when floating-point operations were relatively slow, so counting the number gave a good estimate of the total computation time. This is no longer the case: issues such as cache boundaries and locality of reference can dramatically affect the computation time of a numerical algorithm. However, flop counts can still give us a good
rough estimate of the computation time of a numerical algorithm, and of how the time grows with increasing problem size. Since a flop count no longer accurately predicts the computation time of an algorithm, we usually pay most attention to its order or orders, i.e., its largest exponents, and ignore differences in flop counts smaller than a factor of two or so. For example, an algorithm with flop count 5n^2 is considered comparable to one with a flop count 4n^2, but faster than an algorithm with flop count (1/3)n^3.
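As a quick sanity check on dropping lower-order terms (the dimensions chosen here are our own, not from the notes), compare the full flop count from the example above with its leading-term simplification:

```python
def full_count(m, n):
    """The example flop count m^3 + 3m^2 n + mn + 4mn^2 + 5m + 22."""
    return m**3 + 3*m**2*n + m*n + 4*m*n**2 + 5*m + 22

def leading_terms(m, n):
    """Keep only the dominant terms m^3 + 3m^2 n + 4mn^2."""
    return m**3 + 3*m**2*n + 4*m*n**2

m, n = 100, 100
ratio = leading_terms(m, n) / full_count(m, n)
print(ratio)  # about 0.9987: the dropped terms barely matter
```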
Vector operations

To compute the inner product x^T y of two vectors x, y ∈ R^n we form the products x_i y_i, and then add them, which requires n multiplies and n - 1 additions, or 2n - 1 flops. As mentioned above, we keep only the leading term, and say that the inner product requires 2n flops, or even more approximately, order n flops. A scalar-vector multiplication αx, where α ∈ R and x ∈ R^n, costs n flops. The addition x + y of two vectors x, y ∈ R^n also costs n flops.

If the vectors x and y are sparse, i.e., have only a few nonzero terms, these basic operations can be carried out faster (assuming the vectors are stored using an appropriate data structure). For example, if x is a sparse vector with N nonzero entries, then the inner product x^T y can be computed in 2N flops.

Matrix-vector multiplication

A matrix-vector multiplication y = Ax where A ∈ R^{m×n} costs 2mn flops: we have to calculate m components of y, each of which is the product of a row of A with x, i.e., an inner product of two vectors in R^n.

Matrix-vector products can often be accelerated by taking advantage of structure in A. For example, if A is diagonal, then Ax can be computed in n flops, instead of 2n^2 flops for multiplication by a general n × n matrix. More generally, if A is sparse, with only N nonzero elements (out of mn), then 2N flops are needed to form Ax, since we can skip multiplications and additions with zero.

As a less obvious example, suppose the matrix A is represented (stored) in the factored form A = UV, where U ∈ R^{m×p}, V ∈ R^{p×n}, and p ≪ m, p ≪ n. Then we can compute Ax by first computing V x (which costs 2pn flops), and then computing U(V x) (which costs 2mp flops), so the total is 2p(m + n) flops. Since p ≪ min{m, n}, this is small compared to 2mn.

Matrix-matrix multiplication

The matrix-matrix product C = AB, where A ∈ R^{m×n} and B ∈ R^{n×p}, costs 2mnp flops. We have mp elements in C to calculate, each of which is an inner product of two vectors of length n. Again, we can often make substantial savings by taking advantage of structure in A and B. For
example, if A and B are sparse, we can accelerate the multiplication by skipping additions and multiplications with zero. If m = p and we know that C is symmetric, then we can calculate the matrix product in m^2 n flops, since we only have to compute the (1/2)m(m + 1) entries in the lower triangular part.

To form the product of several matrices, we can carry out the matrix-matrix multiplications in different ways, which have different flop counts in general. The simplest example is computing the product D = ABC, where A ∈ R^{m×n}, B ∈ R^{n×p}, and C ∈ R^{p×q}. Here we can compute D in two ways, using matrix-matrix multiplies. One method is to first form the product AB (2mnp flops), and then form D = (AB)C (2mpq flops), so the total is 2mp(n + q) flops. Alternatively, we can first form the product BC (2npq flops), and then form D = A(BC) (2mnq flops), with a total of 2nq(m + p) flops. The first method is better
when 2mp(n + q) < 2nq(m + p), i.e., when

    1/n + 1/q < 1/m + 1/p.

This assumes that no structure of the matrices is exploited in carrying out matrix-matrix products.

For products of more than three matrices, there are many ways to parse the product into matrix-matrix multiplications. Although it is not hard to develop an algorithm that determines the best parsing (i.e., the one with the fewest required flops) given the matrix dimensions, in most applications the best parsing is clear.
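The savings from the factored form A = UV discussed earlier can be made concrete by comparing flop counts (a sketch; the dimensions are our own choice):

```python
def flops_direct(m, n):
    """Matrix-vector product with a general m x n matrix: 2mn flops."""
    return 2 * m * n

def flops_factored(m, n, p):
    """Compute U(Vx): 2pn flops for Vx, then 2mp flops for U(Vx)."""
    return 2 * p * n + 2 * m * p  # equals 2p(m + n)

m, n, p = 1000, 1000, 10
print(flops_direct(m, n))       # 2000000
print(flops_factored(m, n, p))  # 40000: a factor of 50 fewer flops
```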
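The comparison of the two ways to form D = ABC can likewise be scripted (the example dimensions are our own):

```python
def flops_ab_first(m, n, p, q):
    """D = (AB)C: 2mnp flops for AB, then 2mpq flops for (AB)C."""
    return 2*m*n*p + 2*m*p*q  # equals 2mp(n + q)

def flops_bc_first(m, n, p, q):
    """D = A(BC): 2npq flops for BC, then 2mnq flops for A(BC)."""
    return 2*n*p*q + 2*m*n*q  # equals 2nq(m + p)

# A is 10 x 1000, B is 1000 x 10, C is 10 x 1000;
# here 1/n + 1/q = 0.002 < 1/m + 1/p = 0.2, so (AB)C should win.
m, n, p, q = 10, 1000, 10, 1000
print(flops_ab_first(m, n, p, q))  # 400000
print(flops_bc_first(m, n, p, q))  # 40000000: (AB)C is far cheaper here
```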