Lecture 5: Special Functions and Operations


Feedback on Assignment 2

Rotation Transformation

To rotate by an angle θ counterclockwise, set your transformation matrix A to
\[
A = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}.
\]
There are two ways to get the clockwise rotation matrix B for the angle θ:

1. Rotating by θ clockwise is the same as rotating by -θ counterclockwise:
\[
B = \begin{bmatrix} \cos(-\theta) & -\sin(-\theta) \\ \sin(-\theta) & \cos(-\theta) \end{bmatrix}.
\]

2. Based on 1., and using cos(-θ) = cos(θ) and sin(-θ) = -sin(θ), we get
\[
B = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix}.
\]
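
As a quick sanity check of these formulas, here is a minimal MATLAB sketch; the angle and the test point are made up for illustration. Rotating the point (1, 0) by 90 degrees counterclockwise should give (0, 1), rotating it clockwise should give (0, -1), and the two rotations should undo each other.

>> theta = pi/2;                                           % a 90 degree rotation
>> A = [cos(theta), -sin(theta); sin(theta), cos(theta)];  % counterclockwise
>> B = [cos(theta), sin(theta); -sin(theta), cos(theta)];  % clockwise
>> v = [1; 0];
>> A*v        % approximately [0; 1]
>> B*v        % approximately [0; -1]
>> A*B        % approximately the 2-by-2 identity: the rotations cancel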

Review of MATLAB Operations

The operations * and .*

We have discussed these operations previously. However, we will be learning new ones shortly, so it is important to have a refresher on the key differences. For multiplication we have two types: *, which corresponds to matrix multiplication, and .*, which corresponds to index (element-wise) multiplication. As an example, consider the two matrices
\[
A = \begin{bmatrix} 2 & 0 \\ 0 & 1 \end{bmatrix}, \qquad
B = \begin{bmatrix} 3 & 5 \\ 2 & 1 \end{bmatrix}.
\]
If we compute A * B in MATLAB, this performs matrix multiplication, i.e.
\[
A B = \begin{bmatrix} 2(3) + 0(2) & 2(5) + 0(1) \\ 0(3) + 1(2) & 0(5) + 1(1) \end{bmatrix}
    = \begin{bmatrix} 6 & 10 \\ 2 & 1 \end{bmatrix},
\]
which we can check by typing

>> A = [2, 0; 0, 1]; B = [3, 5; 2, 1]; A*B

     6    10
     2     1

which is what we got above by hand. However, A .* B computes index multiplication, i.e.
\[
\begin{bmatrix} 2(3) & 0(5) \\ 0(2) & 1(1) \end{bmatrix}
    = \begin{bmatrix} 6 & 0 \\ 0 & 1 \end{bmatrix},
\]
which can again be verified in MATLAB:

>> A.*B

     6     0
     0     1

This distinction is important, as the answers are different. Further, the operation A * B does not commute in general, while A .* B does. Matrix multiplication is the most common. However, there are instances where you may want index multiplication, such as when checking against logic gates (0 or 1 corresponding to false or true).
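
As one small sketch of that last point, a matrix of 0s and 1s can be combined with .* to keep or discard entries; the mask matrix M below is made up for this illustration.

>> M = [1, 0; 0, 1];    % hypothetical 0/1 "logic" mask: 1 = keep, 0 = discard
>> B = [3, 5; 2, 1];
>> B.*M                 % gives [3 0; 0 1]: the diagonal is kept, the rest zeroed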

The operations ^ and .^

We can now also discuss the difference between matrix powers (^) and index powers (.^). These are also important to know, and they work analogously to the multiplication operations above. You may need them for the next project, so it is good to keep them in mind. For a matrix power, MATLAB performs matrix multiplication the number of times you specify. So, for example, with B defined above, B^2 is
\[
B^2 = B B = \begin{bmatrix} 3(3) + 5(2) & 3(5) + 5(1) \\ 2(3) + 1(2) & 2(5) + 1(1) \end{bmatrix}
          = \begin{bmatrix} 19 & 20 \\ 8 & 11 \end{bmatrix}.
\]
Again, this can be verified in MATLAB:

>> B^2

    19    20
     8    11

We can contrast this with index exponentiation: B.^2 squares each entry separately, i.e.
\[
\begin{bmatrix} 3(3) & 5(5) \\ 2(2) & 1(1) \end{bmatrix}
  = \begin{bmatrix} 9 & 25 \\ 4 & 1 \end{bmatrix},
\]
which can be checked in MATLAB as well:

>> B.^2

     9    25
     4     1

The operation inv() or ^-1

Finally, we can compute the matrix inverse (and higher negative powers) of a matrix. For example, we can type either of

>> inv(B); B^-1

   -0.1429    0.7143
    0.2857   -0.4286

to compute the matrix inverse. This is again useful for solving equations such as Ax = b. In fact, we can also use higher-order negative exponents. For example, B^-2 will compute B^2 first and then take the inverse. You can check this in MATLAB as well:

>> B^-2, Bsq = B*B; Bsq^-1

    0.2245   -0.4082
   -0.1633    0.3878

    0.2245   -0.4082
   -0.1633    0.3878

Clearly, they match. Thus, B^-k first computes B^k and then takes the inverse of the resulting matrix. This may be useful in one of the later projects.
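
To see why this is useful for a system Ax = b, here is a small sketch; the right-hand side b below is made up for illustration, and A is the matrix from the multiplication example above.

>> A = [2, 0; 0, 1];
>> b = [4; 3];          % made-up right-hand side
>> x = inv(A)*b         % gives [2; 3]
>> A*x                  % check: reproduces b, i.e. [4; 3]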

Some New Operations

The operations exp() and expm()

We will now discuss a few new operations and their differences. First, the function exp() computes the scalar exponential. This is a standard operation that shows up frequently. If applied to a matrix, it performs the operation on every element of the matrix. So exp(B) is
\[
\exp(B) = \begin{bmatrix} \exp(3) & \exp(5) \\ \exp(2) & \exp(1) \end{bmatrix}
        = \begin{bmatrix} 20.0855 & 148.4132 \\ 7.3891 & 2.7183 \end{bmatrix}.
\]
You can confirm this in MATLAB by typing

>> exp(B)

   20.0855  148.4132
    7.3891    2.7183

However, we also have the function expm(), which computes the matrix exponential. The easiest way to see the difference is to look at the Taylor expansion. If we wanted the matrix exponential of B, we would compute the quantity
\[
\operatorname{expm}(B) = I + B + \frac{B^2}{2!} + \frac{B^3}{3!} + \cdots
\]
Notice that this is quite different from taking the exponential of each scalar entry, since B^2 is not computed element-wise either. MATLAB doesn't compute it exactly this way, as the series can be slow to converge and there are faster computational tricks, but we can have MATLAB compute the quantity for us; try

>> expm(B)

  132.6494  153.3390
   61.3356   71.3138

To give a proof of principle (again, MATLAB doesn't compute it this way), try typing

>> Bk = 0;
>> for k = 1:100
       Bk = Bk + (B^k)/factorial(k);
   end
>> disp(Bk)

  131.6494  153.3390
   61.3356   70.3138

Comparing the two, we see that they agree except that the diagonal entries differ by exactly 1. That is because the loop starts at k = 1 and so leaves out the identity (the k = 0 term) of the series; adding eye(2) to Bk reproduces expm(B). The matrix exponential comes up when solving systems of linear ODEs, so it is useful to keep in mind.
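
As a brief illustration of that last remark: for a linear system x'(t) = M x(t), the solution can be written x(t) = expm(M*t)*x(0). Below is a minimal sketch; the matrix M, the time t, and the initial condition x0 are all made up for illustration.

>> M = [0, 1; -1, 0];   % made-up coefficient matrix for x'(t) = M*x(t)
>> x0 = [1; 0];         % made-up initial condition x(0)
>> t = 2;               % evaluate the solution at time t = 2
>> expm(M*t)*x0         % approximately [-0.4161; -0.9093], i.e. [cos(2); -sin(2)]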

The operations log() and logm()

Similarly, we can define the inverses of the exp() and expm() operations, which are log() and logm(). It is important to know that both correspond to the natural logarithm; we will discuss base 10 briefly at the end. Let's focus on log() first. Define the element-wise exponential of B, called eb, then check whether log(eb) gives us back B:

>> eb = exp(B); log(eb)

     3     5
     2     1

We see that it does. We can do something similar for logm(). We will set emb to the matrix exponential of B, then check whether logm() gives us back B:

>> emb = expm(B); logm(emb)

    3.0000    5.0000
    2.0000    1.0000

And indeed, it does. Again, it is important to note that these are natural logarithms. More rarely, we may also be interested in computing scalar logarithms in base 10. This can be done with the MATLAB function log10(). It is also index based, so applied to the matrix B we would have
\[
\operatorname{log10}(B) = \begin{bmatrix} \log_{10}(3) & \log_{10}(5) \\ \log_{10}(2) & \log_{10}(1) \end{bmatrix}
  = \begin{bmatrix} 0.4771 & 0.6990 \\ 0.3010 & 0 \end{bmatrix}.
\]
Check this in MATLAB by typing

>> log10(B)

    0.4771    0.6990
    0.3010         0

and we see it agrees. These functions are distinct, and it is important to remember when to use one over the other.
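
One caution worth keeping in mind, sketched below: the scalar and matrix versions are not interchangeable, so applying the element-wise log() to a matrix exponential does not undo it.

>> B = [3, 5; 2, 1];
>> logm(expm(B))        % recovers B (up to round-off), as above
>> log(expm(B))         % element-wise log of the matrix exponential: not B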

The operations sqrt() and sqrtm()

Our final new operations are the MATLAB functions sqrt() and sqrtm(). As before, the function sqrt() computes index (element-wise) square roots, while the function sqrtm() computes a matrix square root; we will discuss what that means shortly. We begin with an example of sqrt(). Since it computes index square roots, we can write it out. Trying it on our matrix B, we have
\[
\operatorname{sqrt}(B) = \begin{bmatrix} \sqrt{3} & \sqrt{5} \\ \sqrt{2} & \sqrt{1} \end{bmatrix}
  = \begin{bmatrix} 1.7321 & 2.2361 \\ 1.4142 & 1.0000 \end{bmatrix}.
\]
Checking it in MATLAB:

>> sqrt(B)

    1.7321    2.2361
    1.4142    1.0000

Now for sqrtm(): given a matrix A, the matrix square root computes a solution X to
\[
X X = A,
\]
where X is the output. Try this in MATLAB on the matrix B:

>> sqrtm(B)

   1.5005 + 0.4007i   1.7380 - 0.8649i
   0.6952 - 0.3460i   0.8053 + 0.7467i

It is important to note two things:

1. The matrix needs to be square. It doesn't make sense to compute a matrix square root of a non-square matrix.

2. The result can be complex, as in the example above, even if the original matrix is real. The matrix B doesn't even have any negative entries, yet its square root has an imaginary part. Don't be surprised by this; it is common. Matrix B would need special properties for its matrix square root to be real.
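
To convince yourself of this definition, one quick sketch is to multiply the output by itself and compare with the element-wise version.

>> B = [3, 5; 2, 1];
>> X = sqrtm(B);
>> X*X                  % recovers B up to round-off (tiny imaginary parts may remain)
>> sqrt(B).*sqrt(B)     % also recovers B, but entry by entry
>> sqrt(B)*sqrt(B)      % matrix product of the element-wise roots: not B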

Matrix Polynomials

The last topic to cover this week is matrix polynomials. Recall that a scalar polynomial is a function of the form
\[
\sum_{k=0}^{N} a_k x^k = a_0 + a_1 x + a_2 x^2 + \cdots + a_N x^N.
\]
These come up frequently in applications, such as pricing in finance or interpolating data. MATLAB has tools for solving polynomial equations, but we won't dive into those right now. We would, however, like to talk about matrix polynomials, which are an extension of scalar polynomials. Suppose we have a matrix A; then a polynomial in A has the form
\[
\sum_{k=0}^{N} a_k A^k = a_0 I + a_1 A + a_2 A^2 + \cdots + a_N A^N,
\]
where A^k is the product of the matrix A with itself k times using matrix multiplication.

Note: the first term is a multiple of the identity matrix, not a scalar. This is needed for later theory, such as the matrix Taylor expansion that was given previously. We can also talk about evaluating and solving polynomial equations, particularly in MATLAB. A fifth-order matrix polynomial, for example, might look like
\[
I + A + 3A^2 + 3A^3 + A^4 + 7A^5.
\]
It is also interesting to ask whether these polynomial equations can be solved and what their solutions might be. They produce a matrix, so the solution should be a matrix. If I asked you to compute the value of I + 2B + B^2 for our example matrix B, that would be straightforward by hand. It is also simple in MATLAB, since you can combine operations and functions:

>> eye(2) + 2*B + B^2

    26    30
    12    14

A more complex question, which we won't really dive into, is how to solve something that looks like
\[
I + 2X + X^2 = B.
\]
In scalar theory this is straightforward. With matrices, how would it be accomplished? If we go by how scalar polynomials work, there should be two solutions. Is that true? How would factoring work with matrices? This is really more of a thinking point; the theory and solutions to these questions are left for another course.
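
If you want to evaluate a general matrix polynomial without typing out every term, here is a small sketch based on the definition above; the coefficient vector a is made up for illustration, with a(k+1) holding the coefficient a_k.

>> B = [3, 5; 2, 1];
>> a = [1, 2, 1];                % coefficients a_0, a_1, a_2 for I + 2A + A^2
>> P = zeros(size(B));
>> for k = 0:length(a)-1
       P = P + a(k+1)*B^k;       % B^0 is the identity, so the first term is a_0*I
   end
>> disp(P)                       % gives [26 30; 12 14], matching eye(2) + 2*B + B^2

MATLAB also has a built-in function, polyvalm(), which evaluates a matrix polynomial from a coefficient vector listed from the highest power down.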

Exercises

1. In the lecture notes, we used a matrix polynomial (a truncated Taylor series) to check the value of expm(B):
\[
\operatorname{expm}(B) = I + B + \frac{B^2}{2!} + \frac{B^3}{3!} + \cdots
\]
There is another way to check the value of expm(B) for a given matrix B, by using the eigendecomposition. Please try the following commands:

>> B = [3, 5; 2, 1];
>> [U, E] = eig(B);
>> eb = U*expm(E)*inv(U)

Compare eb and expm(B).

2. Compute the matrix logarithm of B by using the eigendecomposition:

>> B = [3, 5; 2, 1];
>> [U, E] = eig(B);
>> lb = U*logm(E)*inv(U)

Compare lb and logm(B).

3. Compute the matrix square root of B by using the eigendecomposition:

>> B = [3, 5; 2, 1];
>> [U, E] = eig(B);
>> sb = U*sqrtm(E)*inv(U)

Compare sb and sqrtm(B).
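
One simple way to do the comparisons above, after running the commands in each exercise, is to look at the norm of the difference; if the two computations agree, each value should be close to zero (round-off level).

>> norm(eb - expm(B))   % exercise 1
>> norm(lb - logm(B))   % exercise 2
>> norm(sb - sqrtm(B))  % exercise 3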