Lecture 5: Special Functions and Operations

Feedback on Assignment 2

Rotation Transformation

To rotate by an angle θ counterclockwise, set your transformation matrix A as
\[ A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}. \]
We have two ways to get the clockwise rotation matrix B for an angle θ:
1. Rotating clockwise by θ is the same as rotating counterclockwise by −θ:
\[ B = \begin{bmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{bmatrix}. \]
2. Based on 1., we have cos(−θ) = cos(θ) and sin(−θ) = −sin(θ). Then
\[ B = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}. \]

Review of MATLAB Operations

The operations * and .*

We've discussed these operations previously. However, we will be learning new ones shortly, so it is important to have a refresher on the key differences. For multiplication, we have two types: *, which corresponds to matrix multiplication, and .*, which corresponds to index (element-wise) multiplication. As an example, consider the two matrices
\[ A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad B = \begin{bmatrix} 3 & 5 \\ 2 & 1 \end{bmatrix}. \]
If we compute A*B in MATLAB, this performs matrix multiplication, i.e.
\[ AB = \begin{bmatrix} 2(3)+0(2) & 2(5)+0(1) \\ 0(3)+1(2) & 0(5)+1(1) \end{bmatrix} = \begin{bmatrix} 6 & 10 \\ 2 & 1 \end{bmatrix}, \]
which we can check by typing
>> A = [2, 0; 0, 1]; B = [3, 5; 2, 1]; A*B
ans =
     6    10
     2     1
which is what we got above by hand. However, A.*B computes index multiplication, i.e.
\[ \begin{bmatrix} 2(3) & 0(5) \\ 0(2) & 1(1) \end{bmatrix} = \begin{bmatrix} 6 & 0 \\ 0 & 1 \end{bmatrix}, \]
which can again be verified in MATLAB:
>> A.*B
ans =
     6     0
     0     1
This distinction is important, as the answers are different. Further, the operation A*B does not commute in general, while A.*B does. Matrix multiplication is the most common. However, there are some instances where you may want index multiplication, such as when checking against logic gates (0 or 1 corresponding to false or true).

The operations ^ and .^

We can now also discuss the difference between matrix powers (^) and index powers (.^). These are also important to know, and they behave analogously to the multiplication operations. You may need them for the next project, so it is good to keep them in mind. The matrix power repeats matrix multiplication the number of times you specify. So, for example, with B defined above, B^2 is
\[ B^2 = BB = \begin{bmatrix} 3(3)+5(2) & 3(5)+5(1) \\ 2(3)+1(2) & 2(5)+1(1) \end{bmatrix} = \begin{bmatrix} 19 & 20 \\ 8 & 11 \end{bmatrix}. \]
Again, this can be verified in MATLAB:
>> B^2
ans =
    19    20
     8    11
We can contrast this with index exponentiation. So, B.^2 is
\[ \begin{bmatrix} 3(3) & 5(5) \\ 2(2) & 1(1) \end{bmatrix} = \begin{bmatrix} 9 & 25 \\ 4 & 1 \end{bmatrix}, \]
which can be checked in MATLAB also:
>> B.^2
ans =
     9    25
     4     1

The operation inv() or ^-1

Finally, we can compute the matrix inverse (and higher-power inverses) of matrices. So, for example, we can type either
>> inv(B); B^-1
ans =
   -0.1429    0.7143
    0.2857   -0.4286
to compute the matrix inverse. This is again useful for solving equations such as Ax = b. In fact, we can also use higher-order negative exponents. So, for example, B^-2 will compute B^2 first, then take the inverse. You can check this in MATLAB also:
>> B^-2, Bsq = B*B; Bsq^-1
ans =
    0.2245   -0.4082
   -0.1633    0.3878
ans =
    0.2245   -0.4082
   -0.1633    0.3878
Clearly, they match. Thus, B^-k first computes B^k, then takes the inverse of that resulting matrix. This may be useful in one of the later projects.

Some New Operations

The operations exp() and expm()

We will now discuss a few new operations and their differences. Firstly, the function exp() computes the scalar exponential. This is a standard operation that shows up frequently. If applied to a matrix, it performs the operation on every element of the matrix. So, exp(B) is
\[ \exp(B) = \begin{bmatrix} e^{3} & e^{5} \\ e^{2} & e^{1} \end{bmatrix} = \begin{bmatrix} 20.0855 & 148.4132 \\ 7.3891 & 2.7183 \end{bmatrix}. \]
You can confirm this by typing
>> exp(B)
ans =
   20.0855  148.4132
    7.3891    2.7183
in MATLAB. However, we also have the function expm(), which computes the matrix exponential. The easiest way to see the difference is to look at the Taylor expansion. For example, if we wanted to compute the matrix exponential of B, we would compute the quantity
\[ \mathrm{expm}(B) = I + B + \frac{B^2}{2!} + \frac{B^3}{3!} + \cdots \]
Notice that this is quite different from taking the exponential of each scalar entry, since B^2 is not computed element-wise either. MATLAB doesn't compute it exactly this way, as the series can be slow to converge and there are faster computational tricks. But we can have MATLAB compute the quantity for us; try
>> expm(B)
ans =
  132.6494  153.3390
   61.3356   71.3138
As a proof of principle (again, MATLAB doesn't compute it this way), try typing
>> Bk = 0;
>> for k = 1:100
       Bk = Bk + (B^k)/factorial(k);
   end
>> disp(Bk)
  131.6494  153.3390
   61.3356   70.3138
Comparing the two, the off-diagonal entries agree, but the diagonal entries differ by exactly 1. That is because the loop starts at k = 1 and so omits the k = 0 term of the series, which is the identity matrix I; adding eye(2) to Bk reproduces expm(B). This comes up when solving systems of linear ODEs, so it's useful to keep in mind.

The operations log() and logm()

Similarly, we can define the inverses of the exp() and expm() operations, which are log() and logm(). It's important to know that these functions compute the natural logarithm; we will discuss base 10 briefly at the end. So, let's focus on log(). First, let's define the element-wise exponential of B, called eb, then check whether log(eb) gives us back B:
>> eb = exp(B); log(eb)
ans =
     3     5
     2     1
We see it does. We can do something similar for logm(). We will set emb as the matrix exponential of B, then check whether logm() gives us back B:
>> emb = expm(B); logm(emb)
ans =
    3.0000    5.0000
    2.0000    1.0000
And indeed, it does. Again, it is important to note that these are natural logarithms we are computing. More rarely, we may also be interested in computing scalar logarithms in base 10. This can be done with the MATLAB function log10(). This is also index-based, so if we apply it to matrix B, we have
\[ \log_{10}(B) = \begin{bmatrix} \log_{10}(3) & \log_{10}(5) \\ \log_{10}(2) & \log_{10}(1) \end{bmatrix} = \begin{bmatrix} 0.4771 & 0.6990 \\ 0.3010 & 0 \end{bmatrix}. \]
Check this in MATLAB by typing
>> log10(B)
ans =
    0.4771    0.6990
    0.3010         0
And we see it agrees. These functions are distinct, and it is important to remember when you need one over the other.

The operations sqrt() and sqrtm()

Our final pair of new operations are the MATLAB functions sqrt() and sqrtm(). Similar to before, the function sqrt() computes index (element-wise) square roots, while the function sqrtm() computes a matrix square root; we will discuss what that is shortly. We begin with an example of sqrt(). Since this computes index square roots, we can write it out. So, if we try it on our matrix B, we have
\[ \mathrm{sqrt}(B) = \begin{bmatrix} \sqrt{3} & \sqrt{5} \\ \sqrt{2} & \sqrt{1} \end{bmatrix} = \begin{bmatrix} 1.7321 & 2.2361 \\ 1.4142 & 1.0000 \end{bmatrix}. \]
And checking it in MATLAB:
>> sqrt(B)
ans =
    1.7321    2.2361
    1.4142    1.0000
Now, for sqrtm(), suppose we have a matrix A. Then the matrix square root computes a solution X to
\[ XX = A, \]
where X is our output. Try this in MATLAB on the matrix B:
>> sqrtm(B)
ans =
   1.5005 + 0.4007i   1.7380 - 0.8649i
   0.6952 - 0.3460i   0.8053 + 0.7467i
It is important to note two things:
1. The matrix needs to be square. It doesn't make sense to compute a matrix square root of a non-square matrix.
2. The result can be complex, as in the example above, even if the original matrix isn't. The matrix B doesn't even have any negative entries, yet its square root has an imaginary part. Don't be surprised; this is common. Matrix B would need special properties (for instance, no negative eigenvalues, which B does not satisfy) to have a real matrix square root.

Matrix Polynomials

The last topic to cover this week is matrix polynomials. Recall that a scalar polynomial is a function of the form
\[ \sum_{k=0}^{N} a_k x^k = a_0 + a_1 x + a_2 x^2 + \cdots + a_N x^N. \]
These come up frequently in applications, such as pricing in finance or interpolating data, for example. MATLAB has tools to solve polynomial equations, but we won't dive too much into that right now. We'd also like to talk about matrix polynomials, which are an extension of scalar polynomials. Suppose we have a matrix A; then a polynomial of A is of the form
\[ \sum_{k=0}^{N} a_k A^k = a_0 I + a_1 A + a_2 A^2 + \cdots + a_N A^N, \]
where A^k is the product of the matrix A with itself k times using matrix multiplication. Note: the first term involves the identity matrix; it is not a scalar. This is needed to define later theory,
such as the matrix Taylor expansion that was given previously. We can also talk about evaluating matrix polynomials, particularly in MATLAB. So, a fifth-order matrix polynomial, for example, might look like
\[ I + A + 3A^2 + 3A^3 + A^4 + 7A^5. \]
It's also interesting to ask whether these expressions can be evaluated and what the result might be. They are sums of matrices, so the result should be a matrix. So, if I asked you to compute
\[ I + 2B + B^2 \]
for our example matrix B, that would be straightforward by hand. It's also simple in MATLAB, since you can combine operations and functions:
>> eye(2) + 2*B + B^2
ans =
    26    30
    12    14
A more complex question, which we won't really dive into, is how to solve something that looks like
\[ I + 2X + X^2 = B. \]
In scalar theory, this is straightforward to do. With matrices, how would this be accomplished? If we go by how scalar polynomials work, there should be two solutions. Is that true? How would factoring work with matrices? This is really more of a thinking point; the theory and solutions to these questions are left for another course.
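As a small illustration of the thinking point (a sketch, not a general method): the left-hand side factors as I + 2X + X^2 = (I + X)^2, just as in the scalar case, so one candidate solution is X = sqrtm(B) − I. A minimal MATLAB sketch, using the matrix B from the lecture:

```matlab
% Sketch: one candidate solution of I + 2X + X^2 = B.
% Idea: I + 2X + X^2 = (I + X)^2, so set I + X = sqrtm(B).
B = [3, 5; 2, 1];
X = sqrtm(B) - eye(2);             % candidate solution (complex for this B)
residual = eye(2) + 2*X + X^2 - B; % should be the zero matrix
disp(norm(residual))               % near zero, up to rounding
% Mirroring the two scalar square roots, I + X = -sqrtm(B)
% gives a second candidate, X = -sqrtm(B) - eye(2).
```

Whether these are the only solutions, and how factoring works for matrix polynomials in general, is exactly the open thinking point above.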
Exercises

1. In the lecture notes, we use a matrix polynomial to check the value of expm(B):
\[ \mathrm{expm}(B) = I + B + \frac{B^2}{2!} + \frac{B^3}{3!} + \cdots. \]
There is another way to check the value of expm(B) for a given matrix B, using the eigen-decomposition. Please try the following commands:
>> B = [3, 5; 2, 1];
>> [U, E] = eig(B);
>> eb = U*expm(E)*inv(U)
Compare eb and expm(B).

2. Compute the matrix logarithm of B by using the eigen-decomposition:
>> B = [3, 5; 2, 1];
>> [U, E] = eig(B);
>> lb = U*logm(E)*inv(U)
Compare lb and logm(B).

3. Compute the matrix square root of B by using the eigen-decomposition:
>> B = [3, 5; 2, 1];
>> [U, E] = eig(B);
>> sb = U*sqrtm(E)*inv(U)
Compare sb and sqrtm(B).
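The eigen-decomposition pattern behind all three exercises can be justified in a line or two (assuming B is diagonalizable, which holds for the B above since it has two distinct eigenvalues):

```latex
% For a diagonalizable B = U E U^{-1}, every power satisfies
% B^k = (U E U^{-1})^k = U E^k U^{-1}, since the inner U^{-1}U pairs cancel.
% Substituting into the matrix exponential series:
\[
  \mathrm{expm}(B)
    = \sum_{k=0}^{\infty} \frac{B^{k}}{k!}
    = \sum_{k=0}^{\infty} \frac{U E^{k} U^{-1}}{k!}
    = U \left( \sum_{k=0}^{\infty} \frac{E^{k}}{k!} \right) U^{-1}
    = U \,\mathrm{expm}(E)\, U^{-1}.
\]
% The same term-by-term cancellation applies to the series defining
% logm() and sqrtm(), which is why the U * f(E) * inv(U) pattern
% works in all three exercises.
```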