Chapter I. Linear Equations



I.1 Solving Linear Equations

Prerequisites and Learning Goals

From your work in previous courses, you should be able to

- Write a system of linear equations using matrix notation.
- Use Gaussian elimination to bring a system of linear equations into upper triangular form and reduced row echelon form (rref).
- Determine whether a system of equations has a unique solution, infinitely many solutions or no solutions, and compute all solutions if they exist; express the set of all solutions in parametric form.
- Compute the inverse of a matrix when it exists, use the inverse to solve a system of equations, and describe for which systems of equations this is possible.
- Find the transpose of a matrix.
- Interpret a matrix as a linear transformation acting on vectors.

After completing this section, you should be able to

- Calculate the standard Euclidean norm, the 1-norm and the infinity norm of a vector.
- Calculate the Hilbert-Schmidt norm of a matrix.
- Define the matrix norm of a matrix; describe the connection between the matrix norm and how a matrix stretches the length of vectors; compute the matrix norm of a diagonal matrix.
- Define the condition number of a matrix and its relation to the matrix norm; use the condition number to estimate relative errors in the solution to a system of linear equations.
- Explain why a small condition number is desirable in practical computations.
- Use MATLAB/Octave to enter matrices and vectors, make larger matrices from smaller blocks, multiply matrices, compute the inverse and transpose, extract elements, rows, columns and submatrices, use rref() to find the reduced row echelon form of a matrix, solve linear equations using A\b, use rand() to generate random matrices, use tic() and toc() to time operations, and compute norms and condition numbers.
- Use MATLAB/Octave to test conjectures about norms, condition numbers, etc.

I.1.1 Review: Systems of linear equations

The first part of the course is about systems of linear equations. You will have studied such systems in a previous course, and should remember how to find solutions (when they exist) using Gaussian elimination.

Many practical problems can be solved by turning them into a system of linear equations. In this chapter we will study a few examples: the problem of finding a function that interpolates a collection of given points, and the approximate solution of differential equations.

In practical problems, the question of existence of solutions, although important, is not the end of the story. It turns out that some systems of equations, even though they may have a unique solution, are very sensitive to changes in the coefficients. This makes them very difficult to solve reliably. We will see some examples of such ill-conditioned systems, and learn how to recognize them using the condition number of a matrix.

Recall that a system of linear equations, like this system of 2 equations in 3 unknowns,

x_1 + 2x_2 + x_3 = 0
x_1 - 5x_2 + x_3 = 1,

can be written as a matrix equation

[1  2  1] [x_1]   [0]
[1 -5  1] [x_2] = [1].
          [x_3]

A general system of m linear equations in n unknowns can be written as Ax = b, where A is a given m x n (m rows, n columns) matrix, b is a given m-component vector, and x is the n-component vector of unknowns.

A system of linear equations may have no solutions, a unique solution, or infinitely many solutions. This is easy to see when there is only a single variable x, so that the equation has the form ax = b where a and b are given numbers. The solution is easy to find if a ≠ 0: x = b/a. If a = 0 then the equation reads 0x = b. In this case, the equation either has no solutions (when b ≠ 0) or infinitely many (when b = 0), since in this case every x is a solution.

To solve a general system Ax = b, form the augmented matrix [A b] and use Gaussian elimination to reduce the matrix to reduced row echelon form. This reduced matrix (which represents a system of linear equations that has exactly the same solutions as the original system) can be used to decide whether solutions exist, and to find them. If you don't remember this procedure, you should review it.

In the example above, the augmented matrix is

[1  2  1  0]
[1 -5  1  1].

The reduced row echelon form is

[1  0  1   2/7]
[0  1  0  -1/7],

which leads to a family of solutions (one for each value of the parameter s)

x = [2/7, -1/7, 0]^T + s [-1, 0, 1]^T.

I.1.2 Solving a non-singular system of n equations in n unknowns

Let's start with a system of equations where the number of equations is the same as the number of unknowns. Such a system can be written as a matrix equation Ax = b, where A is a square matrix, b is a given vector, and x is the vector of unknowns we are trying to find. When A is non-singular (invertible) there is a unique solution. It is given by x = A^(-1) b, where A^(-1) is the inverse matrix of A. Of course, computing A^(-1) is not the most efficient way to solve a system of equations.

For our first introduction to MATLAB/Octave, let's consider an example:

A = [1  1  1
     1  1 -1
     1 -1  1],    b = [3, 1, 1]^T.

First, we define the matrix A and the vector b in MATLAB/Octave. Here is the input (after the prompt symbol >) and the output (without a prompt symbol).

>A=[1 1 1; 1 1 -1; 1 -1 1]
A =
   1   1   1
   1   1  -1
   1  -1   1
>b=[3;1;1]

b =
   3
   1
   1

Notice that the entries on the same row are separated by spaces (or commas) while rows are separated by semicolons. In MATLAB/Octave, column vectors are n by 1 matrices and row vectors are 1 by n matrices. The semicolons in the definition of b make it a column vector. In MATLAB/Octave, X' denotes the transpose of X. Thus we get the same result if we define b as

>b=[3 1 1]'
b =
   3
   1
   1

The solution can be found by computing the inverse of A and multiplying:

>x = A^(-1)*b
x =
   1
   1
   1

However if A is a large matrix we don't want to actually calculate the inverse. The syntax for solving a system of equations efficiently is

>x = A\b
x =
   1
   1
   1

If you try this with a singular matrix A, MATLAB/Octave will complain and print a warning message. If you see the warning, the answer is not reliable! You can always check that x really is a solution by computing Ax.

>A*x
ans =
   3
   1
   1

As expected, the result is b.

By the way, you can check how much faster A\b is than A^(-1)*b by using the functions tic() and toc(). The function tic() starts the clock, and toc() stops the clock and prints the elapsed time. To try this out, let's make A and b really big with random entries.

A=rand(1000,1000);
b=rand(1000,1);

Here we are using the MATLAB/Octave command rand(m,n), which generates an m x n matrix with random entries chosen between 0 and 1. Each time rand is used it generates new numbers. Notice the semicolon ; at the end of the inputs. This suppresses the output. Without the semicolon, MATLAB/Octave would start writing the 1,000,000 random entries of A to our screen!

Now we are ready to time our calculations.

tic();A^(-1)*b;toc();
Elapsed time is 44 seconds.
tic();A\b;toc();
Elapsed time is 3.55 seconds.

So we see that A\b is quite a bit faster.
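To see how the advantage of A\b grows with the size of the system, one can time both methods for several sizes. This is only a rough sketch; the sizes in ns are arbitrary choices, and the measured times will of course depend on your machine.

>ns=[250 500 1000 2000];
>for n=ns
>  A=rand(n,n); b=rand(n,1);
>  tic(); A^(-1)*b; t1=toc();
>  tic(); A\b; t2=toc();
>  fprintf('n = %4d   inverse: %8.3f s   backslash: %8.3f s\n', n, t1, t2);
>end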

I.1.3 Reduced row echelon form

How can we solve Ax = b when A is singular, or not a square matrix (that is, the number of equations is different from the number of unknowns)? In your previous linear algebra course you learned how to use elementary row operations to transform the original system of equations to an upper triangular system. The upper triangular system obtained this way has exactly the same solutions as the original system. However, it is much easier to solve. In practice, the row operations are performed on the augmented matrix [A b].

If efficiency is not an issue, then additional row operations can be used to bring the system into reduced row echelon form. In this form, the pivot columns have a 1 in the pivot position and zeros elsewhere. For example, if A is a square non-singular matrix then the reduced row echelon form of [A b] is [I x], where I is the identity matrix and x is the solution.

In MATLAB/Octave you can compute the reduced row echelon form in one step using the function rref(). For the system we considered above we do this as follows. First define A and b as before. This time I'll suppress the output.

>A=[1 1 1; 1 1 -1; 1 -1 1];
>b=[3 1 1]';

In MATLAB/Octave, the square brackets [ ... ] can be used to construct larger matrices from smaller building blocks, provided the sizes match correctly. So we can define the augmented matrix C as

>C=[A b]
C =
   1   1   1   3
   1   1  -1   1
   1  -1   1   1

Now we compute the reduced row echelon form.

>rref(C)
ans =
   1   0   0   1
   0   1   0   1
   0   0   1   1
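If you want the solution as a vector rather than reading it off the screen, you can slice out the last column of the reduced matrix. A one-line sketch, relying on the fact noted above that for a non-singular A the reduced form is [I x]:

>R=rref(C);
>x=R(:,end)      % the last column of [I x] is the solution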

8 I Linear Equations The solution appears on the right. Now let s try to solve Ax = b with 2 3 A = b = This time the matrix A is singular and doesn t have an inverse. Recall that the determinant of a singular matrix is zero, so we can check by computing it. >A=[ 2 3; 4 5 6; 7 8 9]; >det(a) ans = However we can still try to solve the equation Ax = b using Gaussian elimination. >b=[ ] ; >rref([a b]) ans = Letting x 3 = s be a parameter, and proceeding as you learned in previous courses, we arrive at the general solution x = + s 2. On the other hand, if then >rref([ 2 3 ;4 5 6 ;7 8 9 ]) ans = 2 3 A = b =, tells us that there is no solution. 8
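Here is a self-contained version of this singular example that can be pasted into MATLAB/Octave. The right-hand side b = [1;1;1] is an assumption made for this sketch (chosen so that the system is consistent), and b = [1;1;0] is an assumed example of an inconsistent right-hand side; they are not necessarily the exact vectors used above.

>A=[1 2 3; 4 5 6; 7 8 9];
>b=[1; 1; 1];          % assumed consistent right-hand side
>rref([A b])           % last row is all zeros, so x3 is a free parameter
>                      % reading off: x = [-1; 1; 0] + s*[1; -2; 1]
>A*[-1; 1; 0]          % check: reproduces b
>A*[1; -2; 1]          % check: the direction vector lies in the null space
>rref([A [1; 1; 0]])   % assumed inconsistent b: a pivot appears in the
>                      % last column, so there is no solution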

I.1.4 Gaussian elimination steps using MATLAB/Octave

If C is a matrix in MATLAB/Octave, then C(1,2) is the entry in the 1st row and 2nd column. The whole first row can be extracted using C(1,:) while C(:,2) yields the second column. Finally we can pick out the submatrix of C consisting of rows 1-2 and columns 2-4 with the notation C(1:2,2:4).

Let's illustrate this by performing a few steps of Gaussian elimination on the augmented matrix from our first example. Start with

C=[1 1 1 3; 1 1 -1 1; 1 -1 1 1];

The first step in Gaussian elimination is to subtract the first row from the second.

>C(2,:)=C(2,:)-C(1,:)
C =
   1   1   1   3
   0   0  -2  -2
   1  -1   1   1

Next, we subtract the first row from the third.

>C(3,:)=C(3,:)-C(1,:)
C =
   1   1   1   3
   0   0  -2  -2
   0  -2   0  -2

To bring the system into upper triangular form, we need to swap the second and third rows. Here is the MATLAB/Octave code.

>temp=C(3,:);C(3,:)=C(2,:);C(2,:)=temp
C =
   1   1   1   3
   0  -2   0  -2
   0   0  -2  -2
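The same row exchange can also be written in a single command by indexing with a permuted list of row numbers, which avoids the temporary variable:

>C([2 3],:) = C([3 2],:);     % exchanges rows 2 and 3 in one step

(Running this on the matrix above, whose rows have already been swapped, would of course just swap them back.) Once the matrix is upper triangular, back substitution, or a call to rref(C), recovers the same solution as before.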

I.1.5 Norms for a vector

Norms are a way of measuring the size of a vector. They are important when we study how vectors change, or want to know how close one vector is to another. A vector may have many components, and it might happen that some are big and some are small. A norm is a way of capturing information about the size of a vector in a single number. There is more than one way to define a norm.

In your previous linear algebra course you probably encountered the most common norm, called the Euclidean norm (or the 2-norm). The word norm without qualification usually refers to this norm. What is the Euclidean norm of the vector

a = [-4, 3]^T ?

When you draw the vector as an arrow on the plane, this norm is the Euclidean distance between the tip and the tail. This leads to the formula

||a|| = sqrt((-4)^2 + 3^2) = 5.

This is the answer that MATLAB/Octave gives too:

> a=[-4 3]
a =
  -4   3
> norm(a)
ans = 5

The formula is easily generalized to n dimensions. If x = [x_1, x_2, ..., x_n]^T then

||x|| = sqrt(|x_1|^2 + |x_2|^2 + ... + |x_n|^2).

The absolute value signs in this formula, which might seem superfluous, are put in to make the formula correct when the components are complex numbers. So, for example,

|| [1, i]^T || = sqrt(|1|^2 + |i|^2) = sqrt(1 + 1) = sqrt(2).

Does MATLAB/Octave give this answer too?

There are situations where other ways of measuring the norm of a vector are more natural. Suppose that the tip and tail of the vector a = [-4, 3]^T are locations in a city where you can only walk along the streets and avenues.

If you defined the norm to be the shortest distance that you can walk to get from the tail to the tip, the answer would be

||a||_1 = |-4| + |3| = 7.

This norm is called the 1-norm and can be calculated in MATLAB/Octave by adding 1 as an extra argument to the norm function.

> norm(a,1)
ans = 7

The 1-norm is also easily generalized to n dimensions. If x = [x_1, x_2, ..., x_n]^T then

||x||_1 = |x_1| + |x_2| + ... + |x_n|.

Another norm that is often used measures the largest component in absolute value. This norm is called the infinity norm. For a = [-4, 3]^T we have

||a||_inf = max{ |-4|, |3| } = 4.

To compute this norm in MATLAB/Octave we use inf as the second argument in the norm function.

> norm(a,inf)
ans = 4

Here are three properties that the norms we have defined all have in common:

1. For every vector x and every number s, ||sx|| = |s| ||x||.

2. The only vector with norm zero is the zero vector; that is, ||x|| = 0 if and only if x = 0.

3. For all vectors x and y, ||x + y|| <= ||x|| + ||y||. This inequality is called the triangle inequality. It says that the length of the longest side of a triangle is smaller than the sum of the lengths of the two shorter sides.

What is the point of introducing many ways of measuring the length of a vector? Sometimes one of the non-standard norms has a natural meaning in the context of a given problem. For example, when we study stochastic matrices, we will see that multiplication of a vector by a stochastic matrix preserves the 1-norm of the vector. So in this situation it is natural to use 1-norms. However, in this course we will almost always use the standard Euclidean norm. If v is a vector then ||v|| (without any subscripts) will always denote the standard Euclidean norm.

I.1.6 Matrix norms

Just as for vectors, there are many ways to measure the size of a matrix A. For a start we could think of a matrix as a vector whose entries just happen to be written in a box, like

A = [1  2
     0  2],

rather than in a row, like

a = [1, 2, 0, 2].

Taking this point of view, we would define the norm of A to be sqrt(1 + 4 + 0 + 4) = 3. In fact, the norm computed in this way is sometimes used for matrices. It is called the Hilbert-Schmidt norm. For a general matrix A = [a_ij], the formula for the Hilbert-Schmidt norm is

||A||_HS = sqrt( sum over i and j of |a_ij|^2 ).

The Hilbert-Schmidt norm does measure the size of a matrix in some sense. It has the advantage of being easy to compute from the entries a_ij. But it is not closely tied to the action of A as a linear transformation.

When A is considered as a linear transformation or operator, acting on vectors, there is another norm that is more natural to use. Starting with a vector x, the matrix A transforms it to the vector Ax. We want to say that a matrix is big if it increases the size of vectors, in other words, if ||Ax|| is big compared to ||x||. So it is natural to consider the stretching ratio ||Ax|| / ||x||. Of course, this ratio depends on x, since some vectors get stretched more than others by A. Also, the ratio is not defined if x = 0. But in this case Ax = 0 too, so there is no stretching.
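Both of the norms just mentioned are available in MATLAB/Octave, which also settles the question asked above about the complex vector. A short check (norm(A,'fro') computes the Frobenius norm, which is simply another name for the Hilbert-Schmidt norm):

>norm([1; 1i])        % ans = 1.4142..., i.e. sqrt(2), as claimed above
>A=[1 2; 0 2];
>norm(A,'fro')        % ans = 3, the Hilbert-Schmidt norm computed by hand above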

13 I. Solving Linear Equations We now define the matrix norm of A to be the largest of these ratios, Ax A = max x: x = x. This norm measures the maximum factor by which A can stretch the length of a vector. It is sometimes called the operator norm. Since A is defined to be the maximum of a collection of stretching ratios, it must be bigger than or equal to any particular stretching ratio. In other words, for any non zero vector x we know A Ax / x, or Ax A x. This is how the matrix norm is often used in practice. If we know x and the matrix norm A, then we have an upper bound on the norm of Ax. In fact, the maximum of a collection of numbers is the smallest number that is larger than or equal to every number in the collection (draw a picture on the number line to see this), the matrix norm A is the smallest number that is bigger than Ax / x for every choice of non-zero x. Thus A is the smallest number C for which for every x. An equivalent definition for A is Ax C x A = max x: x = Ax. Why do these definitions give the same answer? The reason is that the quantity Ax / x does not change if we multiply x by a non-zero scalar (convince yourself!). So, when calculating the maximum over all non-zero vectors in the first expression for A, all the vectors pointing in the same direction will give the same value for Ax / x. This means that we need only pick one vector in any given direction, and might as well choose the unit vector. For this vector, the denominator is equal to one, so we can ignore it. Here is another way of saying this. Consider the image of the unit sphere under A. This is the set of vectors {Ax : x = } The length of the longest vector in this set is A. The picture [ below ] is a sketch of the unit sphere (circle) in two dimensions, and its image 2 under A =. This image is an ellipse. 2 A 3

The norm of the matrix is the distance from the origin to the point on the ellipse farthest from the origin. In this case this turns out to be ||A|| = sqrt(9/2 + sqrt(65)/2). It's hard to see how this expression can be obtained from the entries of the matrix. There is no easy formula.

However, if A is a diagonal matrix the norm is easy to compute. To see this, let's consider the diagonal matrix

A = [3  0  0
     0  2  0
     0  0  1].

If x = [x_1, x_2, x_3]^T then Ax = [3x_1, 2x_2, x_3]^T, so that

||Ax||^2 = 3^2 |x_1|^2 + 2^2 |x_2|^2 + |x_3|^2
        <= 3^2 ( |x_1|^2 + |x_2|^2 + |x_3|^2 )
         = 3^2 ||x||^2.

This implies that for any unit vector x, ||Ax|| <= 3, and taking the maximum over all unit vectors x yields ||A|| <= 3. On the other hand, the maximum of ||Ax|| over all unit vectors x is larger than the value of ||Ax|| for any particular unit vector. In particular, taking e_1 = [1, 0, 0]^T we see that

||A|| >= ||A e_1|| = 3.

Thus we conclude that ||A|| = 3. In general, the matrix norm of a diagonal matrix with diagonal entries λ_1, λ_2, ..., λ_n is the largest value of |λ_k|.

15 I. Solving Linear Equations The MATLAB/Octave code for a diagonal matrix with diagonal entries 3, 2 and is diag([3 2 ]) and the expression for the norm of A is norm(a). So for example >norm(diag([3 2 ])) ans = 3 I..7 Condition number Let s return to the situation where A is a square matrix and we are trying to solve Ax = b. If A is a matrix arising from a real world application (for example if A contains values measured in an experiment) then it will almost never happen that A is singular. After all, a tiny change in any of the entries of A can change a singular matrix to a non-singular one. What is much more likely to happen is that A is close to being singular. In this case A will still exist, but will have some enormous entries. This means that the solution x = A b will be very sensitive to the tiniest changes in b so that it might happen that round-off error in the computer completely destroys the accuracy of the answer. To check whether a system of linear equations is well-conditioned, we might therefore think of using A as a measure. But this isn t quite right, since we actually don t care if A is large, provided it stretches each vector about the same amount. For example, if we simply multiply each entry of A by 6 the size of A will go way up, by a factor of 6, but our ability to solve the system accurately is unchanged. The new solution is simply 6 times the old solution, that is, we have simply shifted the position of the decimal point. It turns out that for a square matrix A, the ratio of the largest stretching factor to the smallest stretching factor of A is a good measure of how well conditioned the system of equation Ax = b is. This ratio is called the condition number and is denoted cond(a). Let s first compute an expression for cond(a) in terms of matrix norms. Then we will explain why it measures the conditioning of a system of equations. We already know that the largest stretching factor for a matrix A is the matrix norm A. So let s look at the smallest streching factor. We might as well assume that A is invertible. Otherwise, there is a non-zero vector that A sends to zero, so that the smallest stretching factor is and the condition number is infinite. Ax min x x = min x = min y = max y = A. Ax A Ax y A y A y y 5

Here we used the fact that if x ranges over all non-zero vectors so does y = Ax, and that the minimum of a collection of positive numbers is one divided by the maximum of their reciprocals. Thus the smallest stretching factor for A is 1/||A^(-1)||. This leads to the following formula for the condition number of an invertible matrix:

cond(A) = ||A|| ||A^(-1)||.

In our applications we will use the condition number as a measure of how accurately we can solve the equations that come up.

Now, let us try to see why the condition number of A is a good measure of how well we can solve the equations Ax = b accurately. Starting with Ax = b we change the right side to b' = b + Δb. The new solution is x' = A^(-1)(b + Δb) = x + Δx, where x = A^(-1)b is the original solution and the change in the solutions is Δx = A^(-1)Δb. Now the absolute errors ||Δb|| and ||Δx|| are not very meaningful, since an absolute error ||Δb|| = 1 is not very large if ||b|| = 1,000,000, but is large if ||b|| = 1. What we really care about are the relative errors ||Δb|| / ||b|| and ||Δx|| / ||x||. Can we bound the relative error in the solution in terms of the relative error in the equation? The answer is yes. Beginning with

||Δx|| ||b|| = ||A^(-1)Δb|| ||Ax|| <= ||A^(-1)|| ||Δb|| ||A|| ||x||,

we can divide by ||b|| ||x|| to obtain

||Δx|| / ||x|| <= ||A|| ||A^(-1)|| ||Δb|| / ||b|| = cond(A) ||Δb|| / ||b||.

This inequality gives the real meaning of the condition number. If the condition number is near to 1 then the relative error of the solution is about the same as the relative error in the equation. However, a large condition number means that a small relative error in the equation can lead to a large relative error in the solution.

In MATLAB/Octave the condition number is computed using cond(A).

> A=[2 0; 0 .5];
> cond(A)
ans = 4
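The following short experiment illustrates both points numerically: that norm(A) bounds every stretching ratio, and that a large condition number lets a tiny relative change in b produce a much larger relative change in the solution. The 2x2 matrix below is the one whose norm sqrt(9/2 + sqrt(65)/2) was quoted above, and hilb(10), the 10x10 Hilbert matrix, is used here simply as a convenient example of a badly conditioned matrix; neither choice is prescribed by the text.

>A=[1 2; 0 2];
>norm(A)                                        % approximately 2.9208 = sqrt(9/2 + sqrt(65)/2)
>X=rand(2,100);                                 % 100 random vectors, one per column
>max( sqrt(sum((A*X).^2)) ./ sqrt(sum(X.^2)) )  % largest sampled stretching ratio; never exceeds norm(A)
>
>H=hilb(10);
>xtrue=ones(10,1);
>b=H*xtrue;                 % manufactured problem with known solution
>db=1e-10*rand(10,1);       % a tiny change in the right-hand side
>x=H\(b+db);
>norm(db)/norm(b)           % relative change in b: around 1e-10
>norm(x-xtrue)/norm(xtrue)  % relative change in the solution: many orders of magnitude larger
>cond(H)                    % roughly 1e13, which accounts for the amplification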

17 I. Solving Linear Equations I..8 Summary of MATLAB/Octave commands used in this section How to create a row vector [ ] square brackets are used to construct matrices and vectors. Create a row in the matrix by entering elements within brackets. Separate each element with a comma or space. For example, to create a row vector a with three columns (i.e. a -by-3 matrix), type a=[ ] or equivalently a=[,,] How to create a column vector or a matrix with more than one row ; when the semicolon is used inside square brackets, it terminates rows. For example, a=[;;] creates a column vector with three rows B=[ 2 3; 4 5 6] creates a 2 by 3 matrix when a matrix (or a vector) is followed by a single quote (or apostrophe) MATLAB flips rows with columns, that is, it generates the transpose. When the original matrix is a simple row vector, the apostrophe operator turns the vector into a column vector. For example, a=[ ] creates a column vector with three rows B=[ 2 3; 4 5 6] creates a 3 by 2 matrix where the first row is 4 How to use specialized matrix functions rand(n,m) returns a n-by-m matrix with random numbers between and. How to extract elements or submatrices from a matrix A(i,j) returns the entry of the matrix A in the i-th row and the j-th column A(i,:) returns a row vector containing the i-th row of A A(:,j) returns a column vector containing the j-th column of A A(i:j,k:m) returns a matrix containing a specific submatrix of the matrix A. Specifically, it returns all rows between the i-th and the j-th rows of A, and all columns between the k-th and the m-th columns of A. 7

18 I Linear Equations How to perform specific operations on a matrix det(a) returns the determinant of the (square) matrix A rref(a) returns the reduced row echelon form of the matrix A norm(v) returns the 2-norm (Euclidean norm) of the vector V norm(v,) returns the -norm of the vector V norm(v,inf) returns the infinity norm of the vector V 8

19 I.2 Interpolation I.2 Interpolation Prerequisites and Learning Goals From your work in previous courses, you should be able to compute the determinant of a square matrix; apply the basic linearity properties of the determinant, and explain what its value means about existence and uniqueness of solutions. After completing this section, you should be able to give a definition of interpolation function and explain the idea of getting a unique interpolation function by restricting the class of functions under consideration. Define the problem of Lagrange interpolation and express it in terms of a system of equations where the unknowns are the coefficients of a polynomial of given degree; set up the system in matrix form using the Vandermonde matrix, derive the formula for the determinant of the Vandermonde matrix; explain why a solution to the Lagrange interpolation problem always exists. Explain why Lagrange interpolation is not a practical method for large numbers of points. Define the mathematical problem of interpolation using splines, compare and contrast it with Lagrange interpolation. Explain how minimizing the bending energy leads to a description of the shape of the spline as a piecewise polynomial function. Express the interpolation problem of cubic splines in terms of a system of equations where the unknowns are related to the coefficients of the cubic polynomials. Given a set of points, use MATLAB/Octave to calculate and plot the interpolating polynomial in Lagrange interpolation and the piecewise function for cubic splines. Use the MATLAB/Octave functions linspace, vander, polyval, zeros and ones. Use m files in MATLAB/Octave. 9

I.2.1 Introduction

Suppose we are given some points (x_1, y_1), ..., (x_n, y_n) in the plane, where the points x_i are all distinct. Our task is to find a function f(x) that passes through all these points. In other words, we require that f(x_i) = y_i for i = 1, ..., n. Such a function is called an interpolating function.

Problems like this arise in practical applications in situations where a function is sampled at a finite number of points. For example, the function could be the shape of the model we have made for a car. We take a bunch of measurements (x_1, y_1), ..., (x_n, y_n) and send them to the factory. What's the best way to reproduce the original shape?

Of course, it is impossible to reproduce the original shape with certainty. There are infinitely many functions going through the sampled points. To make our problem of finding the interpolating function f(x) have a unique solution, we must require something more of f(x): either that f(x) lies in some restricted class of functions, or that f(x) is the function that minimizes some measure of badness. We will look at both approaches.

I.2.2 Lagrange interpolation

For Lagrange interpolation, we try to find a polynomial p(x) of lowest possible degree that passes through our points. Since we have n points, and therefore n equations p(x_i) = y_i to solve, it makes sense that p(x) should be a polynomial of degree n-1,

p(x) = a_1 x^(n-1) + a_2 x^(n-2) + ... + a_(n-1) x + a_n,

with n unknown coefficients a_1, a_2, ..., a_n. (Don't blame me for the screwy way of numbering the coefficients. This is the MATLAB/Octave convention.)
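This ordering, from the highest power down to the constant term, is exactly what MATLAB/Octave's polynomial routines expect. A one-line check with arbitrary numbers:

>polyval([2 0 1], 3)     % evaluates 2*3^2 + 0*3 + 1, so ans = 19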

21 I.2 Interpolation The n equations p(x i ) = y i are n linear equations for these unknown coefficients which we may write as x n x n 2 x 2 a x a 2 y x2 n x2 n 2 x 2 2 x y 2 =... a n 2.. xn n xn n 2 x 2 n x n a n y n a n Thus we see that the problem of Lagrange interpolation reduces to solving a system of linear equations. If this system has a unique solution, then there is exactly one polynomial p(x) of degree n running through our points. This matrix for this system of equations has a special form and is called a Vandermonde matrix. To decide whether the system of equations has a unique solution we need to determine whether the Vandermonde matrix is invertible or not. One way to do this is to compute the determinant. It turns out that the determinant of a Vandermonde matrix has a particularly simple form, but it s a little tricky to see this. The 2 2 case is simple enough: ([ ]) x det = x x 2 x 2. To go on to the 3 3 case we won t simply expand the determinant, but recall that the determinant is unchanged under row (and column) operations of the type add a multiple of one row (column) to another. Thus if we start with a 3 3 Vandermonde determinant, add x times the second column to the first, and then add x times the third column to the second, the determinant doesn t change and we find that x 2 x det x 2 2 x 2 = det x 2 3 x 3 x = det x 2 2 x x 2 x 2 x 2 3 x x 3 x 3 x 2 2 x x 2 x 2 x. x 2 3 x x 3 x 3 x Now we can take advantage of the zeros in the first row, and calculate the determinant by expanding along the top row. This gives x 2 x ([ ]) ([ ]) det x 2 2 x 2 x 2 = det 2 x x 2 x 2 x x2 (x x 2 x 3 x x = det 2 x ) x 2 x. x 3 x 3 x x 3 (x 3 x ) x 3 x Now, we recall that the determinant is linear in each row separately. This implies that ([ ]) ([ ]) x2 (x det 2 x ) x 2 x x = (x x 3 (x 3 x ) x 3 x 2 x )det 2 x 3 (x 3 x ) x 3 x ([ ]) x2 = (x 2 x )(x 3 x )det. x 3 But the determinant on the right is a 2 2 Vandermonde determinant that we have already 2

22 I Linear Equations computed. Thus we end up with the formula x 2 x det x 2 2 x 2 = (x 2 x )(x 3 x )(x 3 x 2 ). x 2 3 x 3 The general formula is det x n x n 2 x2 n x n 2. x n n x 2 x 2 x 2 2 x = ± (x i x j ), xn n 2 x 2 i>j n x n where ± = ( ) n(n )/2. It can be proved by induction using the same strategy as we used for the 3 3 case. The product on the right is the product of all differences x i x j. This product is non-zero, since we are assuming that all the points x i are distinct. Thus the Vandermonde matrix is invertible, and a solution to the Lagrange interpolation problem always exists. Now let s use MATLAB/Octave to see how this interpolation works in practice. We begin by putting some points x i into a vector X and the corresponding points y i into a vector Y. >X=[ ] >Y=[ ] We can use the plot command in MATLAB/Octave to view these points. The command plot(x,y) will pop open a window and plot the points (x i,y i ) joined by straight lines. In this case we are not interested in joining the points (at least not with straight lines) so we add a third argument: o plots the points as little circles. (For more information you can type help plot on the MATLAB/Octave command line.) Thus we type >plot(x,y, o ) >axis([-.,.,,.5]) >hold on The axis command adjusts the axis. Normally when you issue a new plot command, the existing plot is erased. The hold on prevents this, so that subsequent plots are all drawn on the same graph. The original behaviour is restored with hold off. When you do this you should see a graph appear that looks something like this. 22

23 I.2 Interpolation Now let s compute the interpolation polynomial. Luckily there are build in functions in MATLAB/Octave that make this very easy. To start with, the function vander(x) returns the Vandermonde matrix corresponding to the points in X. So we define >V=vander(X) V = We saw above that the coefficients of the interpolation polynomial are given by the solution a to the equation V a = y. We find those coefficients using >a=v\y Let s have a look at the interpolating polynomial. The MATLAB/Octave function polyval(a,x) takes a vector X of x values, say x,x 2,... x k and returns a vector containing the values p(x ),p(x 2 ),... p(x k ), where p is the polynomial whose coefficients are in the vector a, that is, p(x) = a x n + a 2 x n a n x + a n So plot(x,polyval(a,x)) would be the command we want, except that with the present definition of X this would only plot the polynomial at the interpolation points. What we want is to plot the polynomial for all points, or at least for a large number. The command linspace(,,) produces a vector of linearly spaced points between and, so the following commands do the job. >XL=linspace(,,); >YL=polyval(a,XL); >plot(xl,yl); >hold off 23

24 I Linear Equations The result looks pretty good The MATLAB/Octave commands for this example are in lagrange.m. Unfortunately, things get worse when we increase the number of interpolation points. One clue that there might be trouble ahead is that even for only six points the condition number of V is quite high (try it!). Let s see what happens with 8 points. We will take the x values to be equally spaced between and. For the y values we will start off by taking y i = sin(2πx i ). We repeat the steps above. >X=linspace(,,8); >Y=sin(2*pi*X); >plot(x,y, o ) >axis([ ]) >hold on >V=vander(X); >a=v\y ; >XL=linspace(,,5); >YL=polyval(a,XL); >plot(xl,yl); The resulting picture looks okay But look what happens if we change one of the y values just a little. We add.2 to the fifth y value, redo the Lagrange interpolation and plot the new values in red. 24

25 I.2 Interpolation >Y(5) = Y(5)+.2; >plot(x(5),y(5), or ) >a=v\y ; >YL=polyval(a,XL); >plot(xl,yl, r ); >hold off The resulting graph makes a wild excursion and even though it goes through the given points, it would not be a satisfactory interpolating function in a practical situation A calculation reveals that the condition number is >cond(v) ans =.8822e+4 If we try to go to 2 points equally spaced between and, the Vandermonde matrix is so ill conditioned that MATLAB/Octave considers it to be singular. 25
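The growth of the condition number with the number of interpolation points is easy to see numerically. A short sketch (the values of n in the loop are an arbitrary choice):

>for n=[5 10 15 20]
>  fprintf('n = %2d   cond(V) = %e\n', n, cond(vander(linspace(0,1,n))));
>end

By around n = 20 the condition number becomes comparable to 1/eps (roughly 4.5e15), which is consistent with the remark above that MATLAB/Octave then treats the Vandermonde matrix as numerically singular.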

26 I Linear Equations I.2.3 Cubic splines In the last section we saw that Lagrange interpolation becomes impossible to use in practice if the number of points becomes large. Of course, the constraint we imposed, namely that the interpolating function be a polynomial of low degree, does not have any practical basis. It is simply mathematically convenient. Let s start again and consider how ship and airplane designers actually drew complicated curves before the days of computers. Here is a picture of a draughtsman s spline (taken from where you can also find a nice photo of such a spline in use) It consists of a bendable but stiff strip held in position by a series of weights called ducks. We will try to make a mathematical model of such a device. We begin again with points (x,y ),(x 2,y 2 ),... (x n,y n ) in the plane. Again we are looking for a function f(x) that goes through all these points. This time, we want to find the function that has the same shape as a real draughtsman s spline. We will imagine that the given points are the locations of the ducks. Our first task is to identify a large class of functions that represent possible shapes for the spline. We will write down three conditions for a function f(x) to be acceptable. Since the spline has no breaks in it the function f(x) should be continuous. Moreover f(x) should pass through the given points. Condition : f(x) is continuous and f(x i ) = y i for i =,...,n. The next condition reflects the assumption that the strip is stiff but bendable. If the strip were not stiff, say it were actually a rubber band that just is stretched between the ducks, then our resulting function would be a straight line between each duck location (x i,y i ). At each duck location there would be a sharp bend in the function. In other words, even though the function itself would be continuous, the first derivative would be discontinuous at the duck locations. We will interpret the words bendable but stiff to mean that the first derivatives of f(x) exist. This leads to our second condition. 26

27 I.2 Interpolation Condition 2: The first derivative f (x) exists and is continuous everywhere, including each interior duck location x i. In between the duck locations we will assume that f(x) is perfectly smooth and that higher derivatives behave nicely when we approach the duck locations from the right or the left. This leads to Condition 3: For x in between the duck points x i the higher order derivatives f (x),f (x),... all exist and have left and right limits as x approaches each x i. In this condition we are allowing for the possibility that f (x) and higher order derivatives have a jump at the duck locations. This happens if the left and right limits are different. The set of functions satisfying conditions, 2 and 3 are all the possible shapes of the spline. How do we decide which one of these shapes is the actual shape of the spline? To do this we need to invoke a bit of the physics of bendable strips. The bending energy E[f] of a strip whose shape is described by the function f is given by the integral E[f] = xn x ( f (x) ) 2 dx The actual spline will relax into the shape that makes E[f] as small as possible. Thus, among all the functions satisfying conditons, 2 and 3, we want to choose the one that minimizes E[f]. This minimization problem is similiar to ones considered in calculus courses, except that instead of real numbers, the variables in this problem are functions f satisfying conditons, 2 and 3. In calculus, the minimum is calculated by setting the derivative to zero. A similar procedure is described in the next section. Here is the result of that calculation: Let F(x) be the function describing the shape that makes E[f] as small as possible. In other words, F(x) satisfies condtions, 2 and 3. If f(x) also satisfies conditions, 2 and 3, then E[F] E[f]. Then, in addition to conditions, 2 and 3, F(x) satisfies Condition a: In each interval (x i,x i+ ), the function F(x) is a cubic polynomial. In other words, for each interval there are coefficients A i, B i, C i and D i such that F(x) = A i x 3 + B i x 2 + C i x + D i for all x between x i and x i+. The coefficients can be different for different intervals. Condition b: The section derivative F (x) is continuous. Condition c: When x is an endpoint (either x or x n ) then F (x) = As we will see, there is exactly one function satisfying conditions, 2, 3, a, b and c. 27
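MATLAB/Octave also has a built-in cubic spline interpolant that can be used for comparison while working through the next sections. Note that its default end conditions (the so-called not-a-knot conditions) differ from condition c above, so it will not reproduce exactly the curve constructed here; the points below are arbitrary.

>X=[0 .5 1 1.5 2]; Y=[.5 .9 .1 -.8 .2];
>XL=linspace(0,2,200);
>plot(X,Y,'o',XL,interp1(X,Y,XL,'spline'))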

28 I Linear Equations I.2.4 The minimization procedure In this section we explain the minimization procedure leading to a mathematical description of the shape of a spline. In other words, we show that if among all functions f(x) satisfying conditions, 2 and 3, the function F(x) is the one with E[f] the smallest, then F(x) also satisfies conditions a, b and c. The idea is to assume that we have found F(x) and then try to deduce what properties it must satisfy. There is actually a is a hidden assumption here we are assuming that the minimizer F(x) exists. This is not true for every minimization problem (think of minimizing the function (x 2 +) for < x < ). However the spline problem does have a minimizer, and we will leave out the step of proving it exists. Given the minimizer F(x) we want to wiggle it a little and consider functions of the form F(x)+ǫh(x), where h(x) is another function and ǫ be a number. We want to do this in such a way that for every ǫ, the function F(x) + ǫh(x) still satisfies conditions, 2 and 3. Then we will be able to compare E[F] with E[F + ǫh]. A little thought shows that functions of form F(x) + ǫh(x) will satsify conditions, 2 and 3 for every value of ǫ if h satisfies Condition : h(x i ) = for i =,...,n. together with conditions 2 and 3 above. Now, the minimization property of F says that each fixed function h satisfying, 2 and 3 the function of ǫ given by E[F + ǫh] has a local minimum at ǫ =. From Calculus we know that this implies that de[f + ǫh] dǫ =. (I.) ǫ= Now we will actually compute this derivative with respect to ǫ and see what information we can get from the fact that it is zero for every choice of h(x) satisfying conditions, 2 and 3. To simplify the presentation we will assume that there are only three points (x,y ), (x 2,y 2 ) and (x 3,y 3 ). The goal of this computation is to establish that equation (??) can be rewritten as (??). To begin, we compute = de[f + ǫh] dǫ = ǫ= = = 2 = 2 d(f (x) + ǫh (x)) 2 x dǫ dx ǫ= x3 x3 x x3 x x2 2 (F (x) + ǫh (x))h (x) ǫ= dx F (x)h (x)dx x F (x)h (x)dx + 2 x3 x 2 F (x)h (x)dx 28

29 I.2 Interpolation We divide by 2 and integrate by parts in each integral. This gives = F (x)h (x) x=x 2 x=x x2 x F (x)h (x)dx + F (x)h (x) x=x 3 x=x 2 x3 x 2 F (x)h (x)dx In each boundary term we have to take into account the possibility that F (x) is not continuous across the points x i. Thus we have to use the appropriate limit from the left or the right. So, for the first boundary term F (x)h (x) x=x 2 x=x = F (x 2 )h (x 2 ) F (x +)h (x ) Notice that since h (x) is continuous across each x i we need not distinguish the limits from the left and the right. Expanding and combining the boundary terms we get = F (x +)h (x ) + ( F (x 2 ) F (x 2 +) ) h (x 2 ) + F (x 3 )h (x 3 ) x2 x F (x)h (x)dx x3 x 2 F (x)h (x)dx Now we integrate by parts again. This time the boundary terms all vanish because h(x i ) = for every i. Thus we end up with the equation as desired. = F (x +)h (x ) + ( F (x 2 ) F (x 2 +) ) h (x 2 ) + F (x 3 )h (x 3 ) + x2 x F (x)h(x)dx x3 x 2 F (x)h(x)dx (I.2) Recall that this equation has to be true for every choice of h satisfying conditions, 2 and 3. For different choices of h(x) we can extract different pieces of information about the minimizer F(x). To start, we can choose h that is zero everywhere except in the open interval (x,x 2 ). For all such h we then obtain = x 2 x F (x)h(x)dx. This can only happen if F (x) = for x < x < x 2 Thus we conclude that the fourth derivative F (x) is zero in the interval (x,x 2 ). Once we know that F (x) = in the interval (x,x 2 ), then by integrating both sides we can conclude that F (x) is constant. Integrating again, we find F (x) is a linear polynomial. By integrating four times, we see that F(x) is a cubic polynomial in that interval. When doing the integrals, we must not extend the domain of integration over the boundary point x 2 since F (x) may not exist (let alone by zero) there. Similarly F (x) must also vanish in the interval (x 2,x 3 ), so F(x) is a (possibly different) cubic polynomial in the interval (x 2,x 3 ). 29

30 I Linear Equations (An aside: to understand better why the polynomials might be different in the intervals (x,x 2 ) and (x 3,x 4 ) consider the function g(x) (unrelated to the spline problem) given by { for x < x < x 2 g(x) = for x 2 < x < x 3 Then g (x) = in each interval, and an integration tells us that g is constant in each interval. However, g (x 2 ) does not exist, and the constants are different.) We have established that F(x) satisfies condition a. Now that we know that F (x) vanishes in each interval, we can return to (??) and write it as = F (x +)h (x ) + ( F (x 2 ) F (x 2 +) ) h (x 2 ) + F (x 3 )h (x 3 ) Now choose h(x) with h (x ) = and h (x 2 ) = h (x 3 ) =. Then the equation reads F (x +) = Similarly, choosing h(x) with h (x 3 ) = and h (x ) = h (x 2 ) = we obtain This establishes condition c. F (x 3 ) = Finally choosing h(x) with h (x 2 ) = and h (x ) = h (x 3 ) = we obtain F (x 2 ) F (x 2 +) = In other words, F must be continuous across the interior duck position. Thus shows that condition b holds, and the derivation is complete. This calculation is easily generalized to the case where there are n duck positions x,...,x n. A reference for this material is Essentials of numerical analysis, with pocket calculator demonstrations, by Henrici. I.2.5 The linear equations for cubic splines Let us now turn this description into a system of linear equations. In each interval (x i,x i+ ), for i =,... n, f(x) is given by a cubic polynomial p i (x) which we can write in the form p i (x) = a i (x x i ) 3 + b i (x x i ) 2 + c i (x x i ) + d i for coefficients a i, b i, c i and d i to be determined. For each i =,... n we require that p i (x i ) = y i and p i (x i+ ) = y i+. Since p i (x i ) = d i, the first of these equations is satisfied if d i = y i. So let s simply make that substitution. This leaves the n equations p i (x i+ ) = a i (x i+ x i ) 3 + b i (x i+ x i ) 2 + c i (x i+ x i ) + y i = y i+. 3

31 I.2 Interpolation Secondly, we require continuity of the first derivative across interior x i s. This translates to p i (x i+) = p i+ (x i+) or 3a i (x i+ x i ) 2 + 2b i (x i+ x i ) + c i = c i+ for i =,..., n 2, giving an additional n 2 equations. Next, we require continuity of the second derivative across interior x i s. This translates to p i (x i+) = p i+ (x i+) or 6a i (x i+ x i ) + 2b i = 2b i+ for i =,...,n 2, once more giving an additional n 2 equations. Finally, we require that p (x ) = p n (x n) =. This yields two more equations 2b = 6a n (x n x n ) + 2b n = for a total of 3(n ) equations for the same number of variables. We now specialize to the case where the distances between the points x i are equal. Let L = x i+ x i be the common distance. Then the equations read a i L 3 + b i L 2 +c i L = y i+ y i 3a i L 2 + 2b i L +c i c i+ = 6a i L + 2b i 2b i+ = for i =... n 2 together with a n L 3 + b n L 2 +c n L = y n y n + 2b = 6a n L + 2b n = We make one more simplification. After multiplying some of the equations with suitable powers of L we can write these as equations for α i = a i L 3, β i = b i L 2 and γ i = c i L. They have a very simple block structure. For example, when n = 4 the matrix form of the equations is α β γ α 2 β 2 γ 2 α 3 β 3 γ 3 y 2 y y 3 y 2 = y 4 y 3 Notice that the matrix in this equation does not depend on the points (x i,y i ). It has a 3 3 3

32 I Linear Equations block structure. If we define the 3 3 blocks N = M = 2 = T = 2 V = 6 2 then the matrix in our equation has the form N M S = N M T V Once we have solved the equation for the coefficients α i, β i and γ i, the function F(x) in the interval (x i,x i+ ) is given by F(x) = p i (x) = α i ( x xi L ) 3 ( ) x 2 ( ) xi x xi + β i + γ i + y i L L Now let us use MATLAB/Octave to plot a cubic spline. To start, we will do an example with four interpolation points. The matrix S in the equation is defined by >N=[ ;3 2 ;6 2 ]; >M=[ ; -; -2 ]; >Z=zeros(3,3); >T=[ ; 2 ; ]; >V=[ ; ;6 2 ]; >S=[N M Z; Z N M; T Z V] S =

33 I.2 Interpolation Here we used the function zeros(n,m) which defines an n m matrix filled with zeros. To proceed we have to know what points we are trying to interpolate. We pick four (x,y) values and put them in vectors. Remember that we are assuming that the x values are equally spaced. >X=[,.5, 2, 2.5]; >Y=[.5,.8,.2,.4]; We plot these points on a graph. >plot(x,y, o ) >hold on Now let s define the right side of the equation >b=[y(2)-y(),,,y(3)-y(2),,,y(4)-y(3),,]; and solve the equation for the coefficients. >a=s\b ; Now let s plot the interpolating function in the first interval. We will use 5 closely spaced points to get a smooth looking curve. >XL = linspace(x(),x(2),5); Put the first set of coefficients (α, β, γ, y ) into a vector >p = [a() a(2) a(3) Y()]; 33

34 I Linear Equations Now we put the values p (x) into the vector YL. First we define the values (x x )/L and put them in the vector XLL. To get the values x x we want to subtract the vector with X() in every position from X. The vector with X() in every position can be obtained by taking a vector with in every position (in MATLAB/Octave this is obtained using the function ones(n,m)) and multiplying by the number X(). Then we divide by the (constant) spacing between the x i values. >L = X(2)-X(); >XLL = (XL - X()*ones(,5))/L; Now we evaluate the polynomial p (x) and plot the resulting points. >YL = polyval(p,xll); >plot(xl,yl); To complete the plot, we repeat this steps for the intervals (x 2,x 3 ) and (x 3,x 4 ). >XL = linspace(x(2),x(3),5); >p = [a(4) a(5) a(6) Y(2)]; >XLL = (XL - X(2)*ones(,5))/L; >YL = polyval(p,xll); >plot(xl,yl); >XL = linspace(x(3),x(4),5); >p = [a(7) a(8) a(9) Y(3)]; >XLL = (XL - X(3)*ones(,5))/L; >YL = polyval(p,xll); >plot(xl,yl); The result looks like this: 34
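It is worth checking numerically that the coefficients we just computed really do give matching first and second derivatives at an interior knot, since those were exactly the conditions built into the matrix S. A small sketch that continues from the vectors X, Y and a defined above; it compares the derivatives of the first two cubic pieces at the knot X(2):

>L = X(2)-X(1);
>% first derivative from the left and from the right at x = X(2)
>(3*a(1) + 2*a(2) + a(3))/L
>a(6)/L
>% second derivative from the left and from the right at x = X(2)
>(6*a(1) + 2*a(2))/L^2
>2*a(5)/L^2

The two numbers in each pair should agree to rounding error, because the second and third rows of each block of S impose precisely these equalities.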

35 I.2 Interpolation I have automated the procedure above and put the result in two files splinemat.m and plotspline.m. splinemat(n) returns the 3(n ) 3(n ) matrix used to compute a spline through n points while plotspline(x,y) plots the cubic spline going through the points in X and Y. If you put these files in you MATLAB/Octave directory you can use them like this: >splinemat(3) ans = and >X=[,.5, 2, 2.5]; >Y=[.5,.8,.2,.4]; >plotspline(x,y) 35

36 I Linear Equations to produce the plot above. Let s use these functions to compare the cubic spline interpolation with the Lagrange interpolation by using the same points as we did before. Remember that we started with the points >X=linspace(,,8); >Y=sin(2*pi*X); Let s plot the spline interpolation of these points >plotspline(x,y); Here is the result with the Lagrange interpolation added (in red). The red (Lagrange) curve covers the blue one and its impossible to tell the curves apart Now we move one of the points slightly, as before. >Y(5) = Y(5)+.2; Again, plotting the spline in blue and the Lagrange interpolation in red, here are the results. 36

This time the spline does a much better job!

Let's check the condition number of the matrix for the splines. Recall that there are 18 points.

>cond(splinemat(18))
ans =

Recall that the Vandermonde matrix had a condition number of 1.8822e+14. This shows that the system of equations for the splines is very much better conditioned, by many orders of magnitude!

Code for splinemat.m and plotspline.m

function S=splinemat(n)
  L=[1 1 1; 3 2 1; 6 2 0];
  M=[0 0 0; 0 0 -1; 0 -2 0];
  Z=zeros(3,3);
  T=[0 0 0; 0 2 0; 0 0 0];
  V=[1 1 1; 0 0 0; 6 2 0];
  S=zeros(3*(n-1),3*(n-1));
  for k=[1:n-2]
    for l=[1:k-1]
      S(3*k-2:3*k,3*l-2:3*l) = Z;
    end
    S(3*k-2:3*k,3*k-2:3*k) = L;
    S(3*k-2:3*k,3*k+1:3*k+3) = M;
    for l=[k+2:n-1]
      S(3*k-2:3*k,3*l-2:3*l) = Z;
    end
  end
  S(3*(n-1)-2:3*(n-1),1:3)=T;
  for l=[2:n-2]
    S(3*(n-1)-2:3*(n-1),3*l-2:3*l) = Z;
  end
  S(3*(n-1)-2:3*(n-1),3*(n-1)-2:3*(n-1))=V;
end

function plotspline(X,Y)
  n=length(X);
  L=X(2)-X(1);
  S=splinemat(n);
  b=zeros(1,3*(n-1));
  for k=[1:n-1]
    b(3*k-2)=Y(k+1)-Y(k);
    b(3*k-1)=0;
    b(3*k)=0;
  end
  a=S\b';
  npoints=50;
  XL=[];
  YL=[];
  for k=[1:n-1]
    XL = [XL linspace(X(k),X(k+1),npoints)];
    p = [a(3*k-2),a(3*k-1),a(3*k),Y(k)];
    XLL = (linspace(X(k),X(k+1),npoints) - X(k)*ones(1,npoints))/L;
    YL = [YL polyval(p,XLL)];
  end
  plot(X,Y,'o')
  hold on
  plot(XL,YL)
  hold off

I.2.6 Summary of MATLAB/Octave commands used in this section

How to access elements of a vector
a(i) returns the i-th element of the vector a

How to create a vector with linearly spaced elements
linspace(x1,x2,n) generates n points between the values x1 and x2.

How to create a matrix by concatenating other matrices
C=[A B] takes two matrices A and B and creates a new matrix C by concatenating A and B horizontally

Other specialized matrix functions
zeros(n,m) creates an n-by-m matrix filled with zeros
ones(n,m) creates an n-by-m matrix filled with ones
vander(X) creates the Vandermonde matrix corresponding to the points in the vector X. Note that the columns of the Vandermonde matrix are powers of the vector X.

Other useful functions and commands
polyval(a,X) takes a vector X of x values and returns a vector containing the values of a polynomial p evaluated at the x values. The coefficients of the polynomial p (in descending powers) are the values in the vector a.
sin(X) takes a vector X of values x and returns a vector containing the values of the function sin x
plot(X,Y) plots vector Y versus vector X. Points are joined by a solid line. To change line types (solid, dashed, dotted, etc.) or plot symbols (point, circle, star, etc.), include an additional argument. For example, plot(X,Y,'o') plots the points as little circles.

I.3 Finite difference approximations

Prerequisites and Learning Goals

From your work in previous courses, you should be able to

- Explain what is meant by a boundary value problem.

After completing this section, you should be able to

- Take a second order linear boundary value problem and write down the corresponding finite difference equation.
- Use the finite difference equation and MATLAB/Octave to compute an approximate solution.
- Use the MATLAB/Octave command diag.
- Describe the action of . (period) before a MATLAB/Octave operator.

I.3.1 Introduction and example

One of the most important applications of linear algebra is the approximate solution of differential equations. In a differential equation we are trying to solve for an unknown function. The basic idea is to turn a differential equation into a system of N linear equations in N unknowns. As N becomes large, the vector solving the system of linear equations becomes a better and better approximation to the function solving the differential equation.

In this section we will learn how to use linear algebra to find approximate solutions to a boundary value problem of the form

f''(x) + q(x)f(x) = r(x)   for 0 <= x <= 1,

subject to the boundary conditions

f(0) = A,   f(1) = B.

This is a differential equation where the unknown quantity to be found is a function f(x). The functions q(x) and r(x) are given (known) functions. As differential equations go, this is a very simple one. For one thing it is an ordinary differential equation (ODE), because it only involves one independent variable x. But the finite difference methods we will introduce can also be applied to partial differential equations (PDE).

It can be useful to have a picture in your head when thinking about an equation. Here is a situation where an equation like the one we are studying arises. Suppose we want to find the shape of a stretched hanging cable. The cable is suspended above the points x = 0 and x = 1 at heights of A and B respectively and hangs above the interval 0 <= x <= 1. Our goal is to find the height f(x) of the cable above the ground at every point x between 0 and 1.

[Figure: the hanging cable, with height u(x) above the interval from x = 0 to x = 1 and endpoint heights A and B.]

The loading of the cable is described by a function 2r(x) that takes into account both the weight of the cable and any additional load. Assume that this is a known function. The height function f(x) is the function that minimizes the sum of the stretching energy and the gravitational potential energy, given by

E[f] = ∫_0^1 [ (f'(x))^2 + 2 r(x) f(x) ] dx,

subject to the condition that f(0) = A and f(1) = B. An argument similar (but easier) to the one we did for splines shows that the minimizer satisfies the differential equation

f''(x) = r(x).

So we end up with the special case of our original equation where q(x) = 0. Actually, this special case can be solved by simply integrating twice and adjusting the constants of integration to ensure f(0) = A and f(1) = B. For example, when r(x) = r is constant and A = B = 1, the solution is

f(x) = 1 - rx/2 + rx^2/2.

We can use this exact solution to compare against the approximate solution that we will compute.

I.3.2 Discretization

In the finite difference approach to solving differential equations approximately, we want to approximate a function by a vector containing a finite number of sample values. Pick equally spaced points x_k = k/N, k = 0, ..., N, between 0 and 1. We will represent a function f(x) by its values f_k = f(x_k) at these points. Let

F = [f_0, f_1, ..., f_N]^T.

[Figure: the graph of f(x) with the sample values f_0, f_1, ..., f_8 marked at the points x_0, x_1, ..., x_8.]

At this point we throw away all the other information about the function, keeping only the values at the sampled points.

[Figure: only the sampled points (x_k, f_k) remain.]

If this is all we have to work with, what should we use as an approximation to f'(x)? It seems reasonable to use the slopes of the line segments joining our sampled points.

[Figure: the sampled points joined by line segments, whose slopes approximate f'(x).]

Notice, though, that there is one slope for every interval (x_i, x_{i+1}), so the vector containing the slopes has one fewer entry than the vector F. The formula for the slope in the interval

(x_i, x_{i+1}) is (f_{i+1} - f_i)/Δx, where the distance Δx = x_{i+1} - x_i (in this case Δx = 1/N). Thus the vector containing the slopes is

F' = (Δx)^(-1) [ f_1 - f_0, f_2 - f_1, f_3 - f_2, ..., f_N - f_{N-1} ]^T = (Δx)^(-1) D_N F,

where D_N is the N x (N+1) finite difference matrix

D_N = [ -1   1
             -1   1
                  ...
                       -1   1 ].

The vector F' is our approximation to the first derivative function f'(x).

To approximate the second derivative f''(x), we repeat this process to define the vector F''. There will be one entry in this vector for each adjacent pair of slopes, that is, each adjacent pair of entries of F'. These are naturally labelled by the interior points x_1, x_2, ..., x_{N-1}. Thus we obtain

F'' = (Δx)^(-2) D_{N-1} D_N F = (Δx)^(-2) [ f_0 - 2f_1 + f_2, f_1 - 2f_2 + f_3, ..., f_{N-2} - 2f_{N-1} + f_N ]^T.

Let r_k = r(x_k) be the sampled points of the load function r(x) and define the vector approximation for r at the interior points,

r = [ r_1, r_2, ..., r_{N-1} ]^T.

The reason we only define this vector at interior points is that that is where F'' is defined. Now we can write down the finite difference approximation to f''(x) = r(x) as

(Δx)^(-2) D_{N-1} D_N F = r,   or   D_{N-1} D_N F = (Δx)^2 r.

This is a system of N-1 equations in N+1 unknowns. To get a unique solution, we need two more equations. That is where the boundary conditions come in! We have two boundary conditions, which in this case can simply be written as f_0 = A and f_N = B. Combining these

with the N-1 equations for the interior points, we may rewrite the system of equations as

[ 1                      ] [ f_0     ]   [ A              ]
[ 1  -2   1              ] [ f_1     ]   [ (Δx)^2 r_1     ]
[     1  -2   1          ] [ f_2     ]   [ (Δx)^2 r_2     ]
[          ...           ] [ ...     ] = [ ...            ]
[         1  -2   1      ] [ f_{N-1} ]   [ (Δx)^2 r_{N-1} ]
[                      1 ] [ f_N     ]   [ B              ]

Note that it is possible to incorporate other types of boundary conditions by simply changing the first and last equations.

Let's define L to be the (N+1) x (N+1) matrix of coefficients for this equation, so that the equation has the form LF = b. The first thing to do is to verify that L is invertible, so that we know that there is a unique solution to the equation. It is not too difficult to compute the determinant if you recall that the elementary row operations that add a multiple of one row to another do not change the value of the determinant. Using only this type of elementary row operation, we can reduce L to an upper triangular matrix whose diagonal entries are 1, -2, -3/2, -4/3, -5/4, ..., -N/(N-1), 1. The determinant is the product of these entries, and this equals ±N. Since this value is not zero, the matrix L is invertible. It is worthwhile pointing out that a change in boundary conditions (for example, prescribing the values of the derivative f'(0) and f'(1) rather than f(0) and f(1)) results in a different matrix L that may fail to be invertible.

We should also ask about the condition number of L to determine how large the relative error of the solution can be. We will compute this using MATLAB/Octave below.

Now let's use MATLAB/Octave to solve this equation. We will start with the test case where r(x) = 1 and A = B = 1. In this case we know that the exact solution is f(x) = 1 - x/2 + x^2/2. We will work with N = 50.

Notice that, except for the first and last rows, L has a constant value of -2 on the diagonal, and a constant value of 1 on the off-diagonals immediately above and below. Before proceeding, we introduce the MATLAB/Octave command diag. For any vector D, diag(D) is a diagonal matrix with the entries of D on the diagonal. So for example

>D=[1 1 1];
>diag(D)

ans =

An optional second argument offsets the diagonal. So, for example

>D=[1 2 3 4];
>diag(D,1)

ans =

>diag(D,-1)

ans =

Now returning to our matrix L we can define it as

>N=5;
>L=diag(-2*ones(1,N+1)) + diag(ones(1,N),1) + diag(ones(1,N),-1);
>L(1,1) = 1;
>L(1,2) = 0;
>L(N+1,N+1) = 1;
>L(N+1,N) = 0;

The condition number of L for N = 5 is

>cond(L)

ans = 2.7

We will denote the right side of the equation by b. To start, we will define b to be (Δx)² r(x_i) and then adjust the first and last entries to account for the boundary values. Recall that r(x) is the constant function 1, so its sampled values are all 1 too.

>dx = 1/N;
>b=ones(N+1,1)*dx^2;
>A=1; B=1;
>b(1) = A;
>b(N+1) = B;

Now we solve the equation for F.

>F=L\b;

The x values are N + 1 equally spaced points between 0 and 1,

>X=linspace(0,1,N+1);

Now we plot the result.

>plot(X,F)

47 I.3 Finite difference approximations Let s superimpose the exact solution in red. >hold on >plot(x,ones(,n+)-x/2+x.^2/2, r ) (The. before an operator tells MATLAB/Octave to apply that operator element by element, so X.^2 returns an array with each element the corresponding element of X squared.) The two curves are indistinguishable. What happens if we increase the load at a single point? Recall that we have set the loading function r(x) to be everywhere. Let s increase it at just one point. Adding, say, 5 to one of the values of r is the same as adding 5( x) 2 to the right side b. So the following commands do the job. We are changing b which corresponds to changing r(x) at x =.2. >b() = b() + 5*dx^2; >F=L\b; >hold on >plot(x,f); Before looking at the plot, let s do this one more time, this time making the cable really heavy at the same point. 47

48 I Linear Equations >b() = b() + 5*dx^2; >F=L\b; >hold on >plot(x,f); Here is the resulting plot So far we have only considered the case of our equation f (x) + q(x)f(x) = r(x) where q(x) =. What happens when we add the term containing q? We must sample the function q(x) at the interior points and add the corresponding vector. Since we multiplied the equations for the interior points by ( x) 2 we must do the same to these terms. Thus we must add the term q f q ( x) 2 q 2 f 2 q 2 = ( x) 2 q 3 F.. q N f N q N. In other words, we replace the matrix L in our equation with L + ( x) 2 Q where Q is the (N + ) (N + ) diagonal matrix with the interior sampled points of q(x) on the diagonal.. 48

I'll leave it to a homework problem to incorporate this change in a MATLAB/Octave calculation. One word of caution: the matrix L by itself is always invertible (with reasonable condition number). However L + (Δx)²Q may fail to be invertible. This reflects the fact that the original differential equation may fail to have a solution for some choices of q(x) and r(x).

I.3.3 Another example: the heat equation

In the previous example involving the loaded cable there was only one independent variable, x, and as a result we ended up with an ordinary differential equation which determined the shape. In this example we will have two independent variables, time t, and one spatial dimension x. The quantities of interest can now vary in both space and time. Thus we will end up with a partial differential equation which will describe how the physical system behaves.

Imagine a long thin rod (a one-dimensional rod) where the only important spatial direction is the x direction. Given some initial temperature profile along the rod and boundary conditions at the ends of the rod, we would like to determine how the temperature, T = T(x,t), along the rod varies over time.

Consider a small section of the rod between x and x + Δx. The rate of change of internal energy, Q(x,t), in this section is proportional to the heat flux, q(x,t), into and out of the section. That is

$$
\frac{\partial Q}{\partial t}(x,t) = -q(x+\Delta x,t) + q(x,t).
$$

Now the internal energy is related to the temperature by Q(x,t) = ρ C_p Δx T(x,t), where ρ and C_p are the density and specific heat of the rod (assumed here to be constant). Also, from Fourier's law, the heat flux through a point in the rod is proportional to the (negative) temperature gradient at the point, i.e., q(x,t) = −K ∂T/∂x(x,t), where K is a constant (the thermal conductivity); this basically says that heat flows from hotter to colder regions. Substituting these two relations into the above energy equation we get

$$
\rho C_p\,\Delta x\,\frac{\partial T}{\partial t}(x,t) = K\left(\frac{\partial T}{\partial x}(x+\Delta x,t) - \frac{\partial T}{\partial x}(x,t)\right),
$$

that is,

$$
\frac{\partial T}{\partial t}(x,t) = \frac{K}{\rho C_p}\cdot\frac{\frac{\partial T}{\partial x}(x+\Delta x,t) - \frac{\partial T}{\partial x}(x,t)}{\Delta x}.
$$

Taking the limit as Δx goes to zero we obtain

$$
\frac{\partial T}{\partial t}(x,t) = k\,\frac{\partial^2 T}{\partial x^2}(x,t),
$$

where k = K/(ρ C_p) is a constant. This partial differential equation is known as the heat equation and describes how the temperature along a one-dimensional rod evolves.

We can also include other effects. If there is a temperature source or sink, S(x,t), then this will contribute to the local change in temperature:

$$
\frac{\partial T}{\partial t}(x,t) = k\,\frac{\partial^2 T}{\partial x^2}(x,t) + S(x,t).
$$

And if we also allow the rod to cool down along its length (because, say, the surrounding air is a different temperature than the rod), then the differential equation becomes

$$
\frac{\partial T}{\partial t}(x,t) = k\,\frac{\partial^2 T}{\partial x^2}(x,t) - H\,T(x,t) + S(x,t),
$$

where H is a constant (here we have assumed that the surrounding air temperature is zero).

In certain cases we can think about what the steady state of the rod will be. That is, after a sufficiently long time (so that the heat has had plenty of time to move around and for things to heat up/cool down), the temperature will cease to change in time. Once this steady state is reached, things become independent of time, and the differential equation becomes

$$
0 = k\,\frac{d^2 T}{d x^2}(x) - H\,T(x) + S(x),
$$

which is of the same form as the ordinary differential equation that we considered at the start of this section.

51 Chapter II Subspaces, Bases and Dimension 5

52 II Subspaces, Bases and Dimension II. Subspaces, basis and dimension Prerequisites and Learning Goals From your work in previous courses, you should be able to Write down a vector as a linear combination of a set of vectors. Define linear independence for a collection of vectors. Define a basis for a vector subspace. After completing this section, you should be able to Know the definitions of vector addition and scalar multiplication for vector spaces of functions Decide whether a given collection of vectors forms a subspace. Recast the dependence or independence of a collection of vectors in R n or C n as a statement about existence of solutions to a system of linear equations. Decide if a collection of vectors are dependent or independent. Define the span of a collection of vectors; show that given a set of vectors v,...,v k the span span(v,...,v k ) is a subspace. Describe the significance of the two parts (independence and span) of the definition of a basis. Check if a collection of vectors is a basis. Show that any basis for a subspace has the same number of elements. Show that any set of k linearly independent vectors v,...,v k in a k dimensional subspace S is a basis of S. Define the dimension of a subspace. 52

53 II. Subspaces, basis and dimension II.. Vector spaces and subspaces In your previous linear algebra course, and for most of this course, vectors are n-tuples of numbers, either real or complex. The sets of all n-tuples, denoted R n or C n, are examples of vector spaces. In more advanced applications vector spaces of functions often occur. For example, an electrical signal can be thought of as a real valued function x(t) of time t. If two signals x(t) and y(t) are superimposed, the resulting signal is the sum that has the value x(t) + y(t) at time t. This motivates the definition of vector addition for functions: the vector sum of the functions x and y is the new function x + y defined by (x + y)(t) = x(t) + y(t). Similarly, if s is a scalar, the scalar multiple sx is defined by (sx)(t) = sx(t). If you think of t as being a continuous index, these definitions mirror the componentwise definitions of vector addition and scalar multiplication for vectors in R n or C n. It is possible to give an abstract definition of a vector space as any collection of objects (the vectors) that can be added and multiplied by scalars, provided the addition and scalar multiplication rules obey a set of rules. We won t follow this abstract approach in this course. A collection of vectors V contained in a given vector space is called a subspace if vector addition and scalar multiplication of vectors in V stay in V. In other words, for any vectors v,v 2 V and any scalars c and c 2, the vector c v + c 2 v 2 lies in V too. In three dimensional space R 3, examples of subspaces are lines and planes through the origin. If we add or scalar multiply two vectors lying on the same line (or plane) the resulting vector remains on the same line (or plane). Additional examples of subspaces are the trivial subspace, containing the single vector, as well as the whole space itself. Here is another example of a subspace. The set of n n matrices can be thought of as an n 2 dimensional vector space. Within this vector space, the set of symmetric matrices (satisfying A T = A) is a subspace. To see this, suppose A and A 2 are symmetric. Then, using the linearity property of the transpose, we see that (c A + c 2 A 2 ) T = c A T + c 2A T 2 = c A + c 2 A 2 which shows that c A + c 2 A 2 is symmetric too. We have encountered subspaces of functions in the section on interpolation. In Lagrange interpolation we considered the set of all polynomials of degree at most m. This is a subspace of the space of functions, since adding two polynomials of degree at most m results in another polynomial, again of degree at most m, and scalar multiplication of a polynomial of degree at most m yields another polynomial of degree at most m. Another example of a subspace of functions is the set of all functions y(t) that satisfy the differential equation y (t) + y(t) =. To check that this is a subspace, we must verify that if y (t) and y 2 (t) both solve the differential equation, then so does c y (t) + c 2 y 2 (t) for any choice of scalars c and c 2. x. x n 53

54 II Subspaces, Bases and Dimension II..2 Linear dependence and independence To begin we review the definition of linear dependence and independence. A linear combination of vectors v,...,v k is a vector of the form k c i v i = c v + c 2 v c k v k i= for some choice of numbers c,c 2,...,c k. The vectors v,...,v k are called linearly dependent if there exist numbers c,c 2,...,c k that are not all zero, such that the linear combination k i= c iv i = On the other hand, the vectors are called linearly independent if the only linear combination of the vectors equaling zero has every c i =. In other words k c i v i = implies c = c 2 = = c k = i= 7 For example, the vectors, and are linearly dependent because = 7 If v,...,v k are linearly dependent, then at least one of the v i s can be written as a linear combination of the others. To see this suppose that c v + c 2 v c k v k = with not all of the c i s zero. Then we can solve for any of the v i s whose coefficient c i is not zero. For instance, if c is not zero we can write v = (c 2 /c )v 2 (c 3 /c )v 3 (c k /c )v k This means any linear combination we can make with the vectors v,...,v k can be achieved without using v, since we can simply replace the occurrence of v with the expression on the right. Sometimes it helps to have a geometrical picture. In three dimensional space R 3, three vectors are linearly dependent if they lie in the same plane. The columns of a matrix in echelon form are linearly independent if and only if every column is a pivot column. We illustrate this with two examples. 54

55 II. Subspaces, basis and dimension The matrix 2 is an example of a matrix in echelon form where each column is a 3 pivot column. Here denotes an arbitrary entry. To see that the columns are linearly independent suppose that c + c c 3 = 3 Then, equating the bottom entries we find 3c 3 = so c 3 =. But once we know c 3 = then the equation reads c + c 2 2 = which implies that c 2 = too, and similarly c = Similarly, for a matrix in echelon form (even if, as in the example below, it is not completely reduced), the pivot columns are linearly independent. For example the first, second and fifth columns in the matrix are independent. However, the non-pivot columns can be written as linear combination of the pivot columns. For example 2 = + 2 so if there are non-pivot columns, then the set of all columns is linearly dependent. This is particularly easy to see for a matrix in reduced row echelon form, like 2 5. In this case the pivot columns are standard basis vectors (see below), which are obviously independent. It is easy to express the other columns as linear combinations of these. Recall that for a matrix U in echelon form, the presence or absence of non-pivot columns determines whether the homogeneous equation U x = has any non-zero solutions. By the discussion above, we can say that the columns of a matrix U in echelon form are linearly dependent exactly when the homogeneous equation Ux = has a non-zero solution. In fact, this is true for any matrix. Suppose that the vectors v,...,v k are the columns of a matrix A so that A = [ v v 2 v k ]. 55

56 II Subspaces, Bases and Dimension If we put the coefficients c,c 2,...,c k into a vector then c c 2 c =. c k Ac = c v + c 2 v c k v k is the linear combination of the columns v,...,v k with coefficients c i. Now it follows directly from the definition of linear dependence that the columns of A are linearly dependent if there is a non-zero solution c to the homogeneous equation Ac = On the other hand, if the only solution to the homogeneous equation is c = then the columns v,...,v k are linearly independent. To compute whether a given collection of vectors is dependent or independent we can place them in the columns of a matrix A and reduce to echelon form. If the echelon form has only pivot columns, then the vectors are independent. On the other hand, if the echelon form has some non-pivot columns, then the equation Ac = has some non-zero solutions and so the vectors are dependent. Let s try this with the vectors in the example above in MATLAB/Octave. >V=[ ] ; >V2=[ ] ; >V3=[7 7] ; >A=[V V2 V3] A = 7 7 >rref(a) ans = 6 Since the third column is a non-pivot column, the vectors are linearly dependent. 56
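Here is one more small sketch of the same test, this time using made-up vectors rather than the ones in the example above; the dependence is built in on purpose so the rref output is easy to interpret.

% Sketch with made-up vectors: put them in the columns of A and row reduce.
V1 = [1 2 0]';
V2 = [0 1 1]';
V3 = [2 5 1]';          % V3 = 2*V1 + V2, so a dependence is built in
A  = [V1 V2 V3];
rref(A)                 % the third column is a non-pivot column,
                        % so V1, V2, V3 are linearly dependent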

57 II. Subspaces, basis and dimension II..3 Span Given a collection of vectors v,...,v k we may form a subspace of all possible linear combinations. This subspace is called span(v,...,v k ) or the space spanned by the v i s. It is a subspace because if we start with any two elements of span(v,...,v k ), say c v +c 2 v 2 + +c k v k and d v + d 2 v d k v k then a linear combination of these linear combinations is again a linear combination since s (c v + c 2 v c k v k ) + s 2 (d v + d 2 v d k v k ) = (s c + s 2 d )v + (s c 2 + s 2 d 2 )v (s c k + s 2 d k )v k For example the span of the three vectors, and is the whole three dimensional space, because every vector is a linear combination of these. The span of the four vectors,, and is the same. II..4 Basis A collection of vectors v,...,v k contained in a subspace V is called a basis for that subspace if. span(v,...,v k ) = V, and 2. v,...,v k are linearly independent. Condition () says that any vector in V can be written as a linear combination of v,...,v k. Condition (2) says that there is exactly one way of doing this. Here is the argument. Suppose there are two ways of writing the same vector v V as a linear combination: v = c v + c 2 v c k v k v = d v + d 2 v d k v k Then by subtracting these equations, we obtain = (c d )v + (c 2 d 2 )v (c k d k )v k Linear independence now says that every coefficient in this sum must be zero. This implies c = d, c 2 = d 2... c k = d k. 57

58 II Subspaces, Bases and Dimension Example: R n has the standard basis e,e 2,...,e n where e = e 2 =.. [ ] [ ] Another basis for R 2 is,. To see this, notice that saying that any vector y can [ ] [ ] be written in a unique way as c + c 2 is the same as saying that the equation [ always has a unique solution. This is true. ][ c c 2 ] = x A basis for the vector space P 2 of polynomials of degree at most two is given by {,x,x 2 }. These polynomials clearly span P 2 since every polynomial p P 2 can be written as a linear combination p(x) = c +c x+c 2 x 2. To show independence, suppose that c +c x+c 2 x 2 is the zero polynomial. This means that c + c x + c 2 x 2 = for every value of x. Taking the first and second derivatives of this equation yields that c + 2c 2 x = and 2c 2 = for every value of x. Substituting x = into each of these equations we find c = c = c 2 =. Notice that if we represent the polynomial p(x) = c + c x + c 2 x 2 P 2 by the vector of coefficients c c c 2 R 3, then the vector space operations in P 2 are mirrored perfectly in R 3. In other words, adding or scalar multiplying polynomials in P 2 is the same as adding or scalar multiplying the corresponding vectors of coefficients in R 3. This sort of correspondence can be set up whenever we have a basis v,v 2,...,v k for a vector space V. In this case every vector v has a unique representation c v +c 2 v 2 + +c k v k c 2 and we can represent the vector v V by the vector. Rk (or C k ). In some sense this says that we can always think of finite dimensional vector spaces as being copies of R n or C n. The only catch is that the the correspondence that gets set up between vectors in V and vectors in R n or C n depends on the choice of basis. It is intuitively clear that, say, a plane in three dimensions will always have a basis of two vectors. Here is an argument that shows that any two bases for a subspace S of R k or C k will always have the same number of elements. Let v,...,v n and w,...,w m be two bases c c k 58

59 II. Subspaces, basis and dimension for a subspace S. Let s try to show that n must be the same as m. Since the v i s span V we can write each w i as a linear combination of v i s. We write w j = n a i,j v i i= for each j =,...,m. Let s put all the coefficients into an n m matrix A = [a i,j ]. If we form the matrix k m matrix W = [w w 2 w m ] and the k n matrix V = [v v 2 v m ] then the equation above can be rewritten W = V A 4 To understand this construction consider the two bases, and 2, 2 6 for a subspace in R 3 (in fact this subspace is the plane through the origin with normal vector ). Then we may write 4 2 = = + 2 and the equation W = V A for this example reads 4 [ ] 2 2 = Returning now to the general case, suppose that m > n. Then A has more columns than rows. So its echelon form must have some non-pivot columns which implies that there must be some non-zero solution to Ac =. Let c be such a solution. Then Wc = V Ac = But this is impossible because the columns of W are linearly dependent. So it can t be true that m > n. Reversing the roles of V and W we find that n > m is impossible too. So it must be that m = n. We have shown that any basis for a subspace S has the same number of elements. Thus it makes sense to define the dimension of a subspace S to be the number of elements in any 59

60 II Subspaces, Bases and Dimension basis for S. Here is one last fact about bases: any set of k linearly independent vectors {v,...,v k } in a k dimensional subspace S automatically spans S and is therefore a basis. To see this (in the case that S is a subspace of R n or C n ) we let {w,...,w k } be a basis for S, which also will have k elements. Form V = [v v k ] and W = [w w k ]. Then the construction above gives V = WA for a k k matrix A. The matrix A must be invertible. Otherwise there would be a non-zero solution c to Ac =. This would imply V c = W Ac = contradicting the independence of the rows of V. Thus we can write W = V A which shows that every w k is a linear combination of v i s. This shows that the v i s must span S because every vector in S is a linear combination of the basis vectors w k s which in turn are linear combinations of the v i s. As an example of this, consider again the space P 2 of polynomials of degree at most 2. We claim that the polynomials {,(x a),(x a) 2 } (for any constant a) form a basis. We already know that the dimension of this space is 3, so we only need to show that these three polynomials are independent. The argument for that is almost the same as before. II.2 The four fundamental subspaces for a matrix From your work in previous courses, you should be able to Recognize and use the property of transposes for which (AB) T = B T A T for any matrices A and B. Define the inner (dot) product of two vectors, and its properties (symmetry, linearity), and explain its geometrical meaning. Use the inner product to decide if two vectors are orthogonal, and to compute the angle between two vectors. State the Cauchy-Schwarz inequality and know for which vectors the inequality is an equality. After completing this section, you should be able to Define the four fundamental subspaces N(A), R(A), N(A T ), and R(A T ), associated to a matrix A and its transpose A T. Express the Gaussian elimination process performed to reduce a matrix A to its row reduced echelon form matrix U as a matrix factorization, A = EU, using elementary matrices, and be able to perform the steps using MATLAB/Octave. Compute bases for each of the four fundamental subspaces N(A), R(A), N(A T ) and R(A T ) of a matrix A. 6

61 II.2 The four fundamental subspaces for a matrix Be able to compute the rank of a matrix. State the formulas for the dimension of each of the four subspaces and be able to explain why they are true. Explain what it means for two subspaces to be orthogonal (V W) and for one subspace to be the orthogonal complement of another (V = W ). State which of the fundamental subspaces are orthogonal to each other and explain why, verify the orthogonality relations in examples, and use the orthogonality relation for R(A) to test whether the equation Ax = b has a solution. Use MATLAB/Octave to compute the inner product of two vectors and the angle between them. Be familiar with the MATLAB/Octave function eye(). II.2. Nullspace N(A) and Range R(A) There are two important subspaces associated to any matrix. Let A be an n m matrix. If x is m dimensional, then Ax makes sense and is a vector in n dimensional space. The first subspace associated to A is the nullspace (or kernel) of A denoted N(A) (or Ker(A)). It is defined as all vectors x solving the homogeneous equation for A, that is N(A) = {x : Ax = } This is a subspace because if Ax = and Ax 2 = then A(c x + c 2 x 2 ) = c Ax + c 2 Ax 2 = + =. The nullspace is a subspace of m dimensional space R m. The second subspace is the range (or column space) of A denoted R(A) (or C(A)). It is defined as all vectors of the form Ax for some x. From our discussion above, we see that R(A) is the the span (or set of all possible linear combinations) of its columns. This explains the name column space. The range is a subspace of n dimensional space R n. The four fundamental subspaces for a matrix are the nullspace N(A) and range R(A) for A together with the nullspace N(A T ) and range R(A T ) for the transpose A T. 6

62 II Subspaces, Bases and Dimension II.2.2 Finding basis and dimension of N(A) Example: Let 3 3 A = To calculate a basis for the nullspace N(A) and determine its dimension we need to find the solutions to Ax =. To do this we first reduce A to reduced row echelon form U and solve Ux = instead, since this has the same solutions as the original equation. >A=[ 3 3 ; ; 3 4]; >rref(a) ans = 3 3 This means that x = x 2 x 3 is in N(A) if x 4 x x 3 3 x 2 x 3 = x 4 We now divide the variables into basic variables, corresponding to pivot columns, and free variables, corresponding to non-pivot columns. In this example the basic variables are x and x 3 while the free variables are x 2 and x 4. The free variables are the parameters in the solution. We can solve for the basic variables in terms of the free ones, giving x 3 = 3x 4 and x = 3x 2 x 4. This leads to x x 2 x 3 = x 4 3x 2 x 4 x 2 3x 4 x 4 = x x The vectors and 3 span the nullspace since every element of N(A) is a linear combination of them. They are also linearly independent because if the linear combination 62

63 II.2 The four fundamental subspaces for a matrix on the right of the equation above is zero, then by looking at the second entry of the vector (corresponding to the first free variable) we find x 2 = and looking at the last entry (corresponding to the second free variable) we find x 4 =. So both coefficients must be zero. To find a basis for N(A) in general we first compute U = rref(a) and determine which variables are basic and which are free. For each free variable we form a vector as follows. First put a in the position corresponding to that free variable and a zero in every other free variable position. Then fill in the rest of the vector in such a way that Ux =. (This is easy to do!) The set all such vectors - one for each free variable - is a basis for N(A). II.2.3 The matrix version of Gaussian elimination How are a matrix A and its reduced row echelon form U = rref(a) related? If A and U are n m matrices, then there exists an invertible n n matrix such that A = EU E A = U This immediately explains why the N(A) = N(U), because if Ax = then Ux = E Ax = and conversely if Ax = then Ux = EAx =. What is this matrix E? It can be thought of as a matrix record of the Gaussian elimination steps taken to reduce A to U. It turns out performing an elementary row operation is the same as multiplying on the left by an invertible square matrix. This invertible square matrix, called an elementary matrix, is obtained by doing the row operation in question to the identity matrix. Suppose we start with the matrix >A=[ 3 3 ; ; 3 4] A = The first elementary row operation that we want to do is to subtract twice the first row from the second row. Let s do this to the 3 3 identity matrix I (obtained with eye(3) in MATLAB/Octave) and call the result E >E = eye(3) E = 63

64 II Subspaces, Bases and Dimension >E(2,:) = E(2,:)-2*E(,:) E = -2 Now if we multiply E and A we obtain >E*A ans = which is the result of doing that elementary row operation to A. Let s do one more step. The second row operation we want to do is to subtract the first row from the third. Thus we define >E2 = eye(3) E2 = >E2(3,:) = E2(3,:)-E2(,:) E2 = - and we find 64

65 II.2 The four fundamental subspaces for a matrix >E2*E*A ans = which is one step further along in the Gaussian elimination process. Continuing in this way until we eventually arrive at U so that Thus A = EU with E = E E 2 E which we can check: E k E k E 2 E A = U k E k 3 6 E = For the example above it turns out that >A=[ 3 3 ; ; 3 4] A = >U=rref(A) U = 3 3 >E=[ 3-6; ; -9]; >E*U ans =

66 II Subspaces, Bases and Dimension If we do a partial elimination then at each step we can write A = E U where U is the resulting matrix at the point we stopped, and E is obtained from the Gaussian elimination step up to that point. A common place to stop is when U is in echelon form, but the entries above the pivots have not been set to zero. If we can achieve this without doing any row swaps along the way then E turns out to be lower triangular matrix. Since U is upper triangular, this is called the LU decomposition of A. II.2.4 A basis for R(A) The ranges or column spaces R(A) and R(U) are not the same in general, but they are related. In fact, the vectors in R(A) are exactly all the vectors in R(U) multiplied by E, where E is the invertible matrix in the equation A = EU. We can write this relationship as R(A) = ER(U) To see this notice that if x R(U), that is, x = Uy for some y then Ex = EUy = Ay is in R(A). Conversely if x R(A), that is, x = Ay for some y then x = EE Ay = EUy so x is E times a vector in R(U). Now if we can find a basis u,u 2,...,u k for R(U), the vectors Eu,Eu 2,...,Eu k form a basis for R(A). (Homework exercise) But a basis for the column space R(U) is easy to find. They are exactly the pivot columns of U. If we multiply these by E we get a basis for R(A). But if [ ] [ ] A = a a 2 a m, U = u u 2 u m then the equation A = EU can be written [ ] [ ] a a 2 a m = Eu Eu 2 Eu m From this we see that the columns of A that correspond to pivot columns of U form a basis for R(A). This implies that the dimension of R(A) is the number of pivot columns in U. II.2.5 The rank of a matrix We define the rank of the matrix A, denoted r(a) to be the number of pivot columns of U. Then we have shown that for an n m matrix A dim(r(a)) = r(a) dim(n(a)) = m r(a) 66
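MATLAB/Octave can produce a factorization of this type directly. In the sketch below (the matrix is made up for illustration) the built-in command lu returns P*A = L*U with L lower triangular and U upper triangular, so A = E*U with E = P'*L, and rank counts the pivots.

% Sketch: the LU decomposition and the rank in MATLAB/Octave.
A = [2 1 1; 4 3 3; 8 7 9];    % illustrative matrix
[L, U, P] = lu(A);            % P*A = L*U, L lower triangular, U upper triangular
norm(P*A - L*U)               % essentially zero
E = P'*L;                     % so A = E*U with E invertible
norm(A - E*U)                 % essentially zero
r = rank(A)                   % number of pivots; here r = 3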

67 II.2 The four fundamental subspaces for a matrix II.2.6 Bases for R(A T ) and N(A T ) Of course we could find R(A T ) and N(A T ) by computing the reduced row echelon form for A T and following the steps above. But then we would miss an important relation between the dimensions of these spaces. Let s start with the column space R(A T ). The columns of A T are the rows of A (written as column vectors instead of row vectors). So R(A T ) is the row space of A. It turns out that R(A T ) and R(U T ) are the same. This follows from A = EU. To see this take the transpose of this equation. Then A T = U T E T. Now suppose that x R(A T ). This means that x = A T y for some y. But then x = U T E T y = U T y where y = E T y so x R(U T ). Similarly, if x = U T y for some y then x = U T E T (E T ) y = A T (E T ) y = A T y for y = (E T ) y. So every vector in R(U T ) is also in R(A T ). Here we used that E and hence E T is invertible. Now we know that R(A T ) = R(U T ) is spanned by the columns of U T. But since U T is in reduced row echelon form, its non-zero columns are independent. Therefore, the non-zero columns of U T form a basis for R(A T ). There is one of these for every pivot. This leads to dim(r(a T )) = r(a) = dim(r(a)) The final subspace to consider is N(A T ). From our work above we know that dim(n(a T )) = n dim(r(a T )) = n r(a). Finding a basis is trickier. It might be easiest to find the reduced row echelon form of A T. But if we insist on using A = EU or A T = U T E T we could proceed by multiplying on the right be the inverse of E T. This gives A T (E T ) = U T Now notice that the last n r(a) columns of U T are zero, since U is in reduced row echelon form. So the last n r(a) columns of (E T ) are in the the nullspace of A T. They also have to be independent, since (E T ) is invertible. Thus the last n r(a) of (E T ) form a basis for N(A T ). From a practical point of view, this is not so useful since we have to compute the inverse of a matrix. It might be just as easy to reduce A T. (Actually, things are slightly better if we use the LU decomposition. The same argument shows that the last n r(a) columns of (L T ) also form a basis for N(A T ). But L T is an upper triangular matrix, so its inverse is faster to compute.) 67
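To summarize the last few subsections, here is a sketch (with a made-up 3 × 4 matrix) showing how the dimensions of the four fundamental subspaces can be read off in MATLAB/Octave.

% Sketch: dimensions of the four fundamental subspaces from rref and rank.
A = [1 2 0 3; 0 0 1 4; 1 2 1 7];      % made-up matrix; third row = first + second
U = rref(A)                           % two pivot columns (columns 1 and 3)
r = rank(A)                           % r = 2
% dim R(A)  = r     = 2: columns 1 and 3 of A form a basis for R(A)
% dim N(A)  = 4 - r = 2: one basis vector for each free variable (columns 2 and 4)
% dim R(A') = r     = 2: the two nonzero rows of U, transposed, form a basis
% dim N(A') = 3 - r = 1
null(A')                              % one basis vector, orthogonal to R(A)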

II.2.7 Orthogonal vectors and subspaces

In preparation for our discussion of the orthogonality relations for the fundamental subspaces of a matrix we review some facts about orthogonal vectors and subspaces.

Recall that the dot product, or inner product, of two vectors

$$
x = \begin{bmatrix} x_1\\ x_2\\ \vdots\\ x_n\end{bmatrix},\qquad
y = \begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n\end{bmatrix}
$$

is denoted by x·y or ⟨x, y⟩ and defined by

$$
x\cdot y = x^T y = \begin{bmatrix} x_1 & x_2 & \cdots & x_n\end{bmatrix}
\begin{bmatrix} y_1\\ y_2\\ \vdots\\ y_n\end{bmatrix} = \sum_{i=1}^n x_i y_i.
$$

Some important properties of the inner product are symmetry, x·y = y·x, and linearity, (c_1 x_1 + c_2 x_2)·y = c_1 x_1·y + c_2 x_2·y.

The (Euclidean) norm, or length, of a vector is given by

$$
\|x\| = \sqrt{x\cdot x} = \left(\sum_{i=1}^n x_i^2\right)^{1/2}.
$$

An important property of the norm is that ‖x‖ = 0 implies that x = 0.

The geometrical meaning of the inner product is given by

$$
x\cdot y = \|x\|\,\|y\|\cos(\theta),
$$

where θ is the angle between the vectors. The angle θ can take values from 0 to π.

The Cauchy–Schwarz inequality states

$$
|x\cdot y| \le \|x\|\,\|y\|.
$$

It follows from the previous formula because |cos(θ)| ≤ 1. The only time that equality occurs in the Cauchy–Schwarz inequality, that is |x·y| = ‖x‖‖y‖, is when cos(θ) = ±1 and θ is either 0 or π. This means that the vectors are pointed in the same or in the opposite directions.

69 II.2 The four fundamental subspaces for a matrix The vectors x and y are orthogonal if x y =. Geometrically this means either that one of the vectors is zero or that they are at right angles. This follows from the formula above, since cos(θ) = implies θ = π/2. Another way to see that x y = means that vectors are orthogonal is from Pythagoras formula. If x and y are at right angles then x 2 + y 2 = x + y 2. x + y y x But x + y 2 = (x + y) (x + y) = x 2 + y 2 + 2x y so Pythagoras formula holds exactly when x y =. To compute the inner product of (column) vectors X and Y in MATLAB/Octave we use the formula x y = x T y. Thus the inner product can be computed using X *Y. (If X and Y are row vectors, the formula is X*Y.) The norm of a vector X is computed by norm(x). In MATLAB/Octave inverse trig functions are computed with asin(), acos() etc. So the angle between column vectors X and Y could be computed as > acos(x *Y/(norm(X)*norm(Y))) Two subspaces V and W are said to be orthogonal if every vector in V is orthogonal to every vector in W. In this case we write V W. W V S T 69

In this figure V ⊥ W and also S ⊥ T.

A related concept is the orthogonal complement. The orthogonal complement of V, denoted V^⊥, is the subspace containing all vectors orthogonal to V. In the figure W = V^⊥ but T ≠ S^⊥, since T contains only some of the vectors orthogonal to S.

If we take the orthogonal complement of V^⊥ we get back the original space V:

$$
(V^\perp)^\perp = V.
$$

This is certainly plausible from the pictures. It is also obvious that V ⊆ (V^⊥)^⊥, since any vector in V is perpendicular to every vector in V^⊥. If there were a vector in (V^⊥)^⊥ not contained in V we could subtract its projection onto V (defined in the next chapter) and end up with a non-zero vector in (V^⊥)^⊥ that is also in V^⊥. Such a vector would be orthogonal to itself, which is impossible. This shows that (V^⊥)^⊥ = V.

One consequence of this formula is that V = W^⊥ implies V^⊥ = W. Just take the orthogonal complement of both sides and use (W^⊥)^⊥ = W.

II.2.8 Orthogonality relations for the fundamental subspaces of a matrix

Let A be an n × m matrix. Then N(A) and R(A^T) are subspaces of R^m while N(A^T) and R(A) are subspaces of R^n. These two pairs of subspaces are orthogonal:

$$
N(A) = R(A^T)^\perp \qquad N(A^T) = R(A)^\perp
$$

We will show that the first equality holds for any A. The second equality then follows by applying the first one to A^T. These relations are based on the formula

$$
(A^T x)\cdot y = x\cdot(A y)
$$

This formula follows from the product formula (AB)^T = B^T A^T for transposes, since

$$
(A^T x)\cdot y = (A^T x)^T y = x^T (A^T)^T y = x^T A y = x\cdot(A y)
$$

First, we show that N(A) ⊆ R(A^T)^⊥. To do this, start with any vector x ∈ N(A). This means that Ax = 0. If we compute the inner product of x with any vector in R(A^T), that is, any vector of the form A^T y, we get (A^T y)·x = y·(Ax) = y·0 = 0. Thus x ∈ R(A^T)^⊥. This shows N(A) ⊆ R(A^T)^⊥.

Now we show the opposite inclusion, R(A^T)^⊥ ⊆ N(A). This time we start with x ∈ R(A^T)^⊥. This means that x is orthogonal to every vector in R(A^T), that is, to every

71 II.2 The four fundamental subspaces for a matrix vector of the form A T y. So (A T y) x = y (Ax) = for every y. Pick y = Ax. Then (Ax) (Ax) = Ax 2 =. This implies Ax = so x N(A). We can conclude that R(A T ) N(A). These two inclusions establish that N(A) = R(A T ). Let s verify these orthogonality relations in an example. Let 2 A = Then Thus we get 3 rref(a) = rref(a T ) = 3 N(A) = span, 2 R(A) = span, N(A T ) = span R(A T ) = span 3, We can now verify directly that every vector in the basis for N(A) is orthogonal to every vector in the basis for R(A T ), and similarly for N(A T ) and R(A). Does the equation 2 Ax = 3 have a solution? We can use the ideas above to answer this question easily. We are really 2 asking whether is contained in R(A). But, according to the orthogonality relations, this 3 7

72 II Subspaces, Bases and Dimension 2 is the same as asking whether is contained in N(A T ). This is easy to check. Simply 3 compute the dot product 2 = =. 3 Since the result is zero, we conclude that a solution exists. 72
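This orthogonality test is easy to automate. The sketch below uses a made-up matrix (not the one from the example above): null(A') returns a basis for N(A^T), and b lies in R(A) exactly when these basis vectors are orthogonal to b.

% Sketch: decide whether Ax = b has a solution by testing b against N(A').
A  = [1 0 1; 0 1 1; 1 1 2];          % made-up matrix; third row = first + second
b1 = [1 2 3]';                        % equals A*[1;2;0], so it should be solvable
b2 = [1 2 4]';                        % breaks the row dependence, so it shouldn't
n  = null(A');                        % basis for N(A^T)
n'*b1                                 % zero (up to roundoff): a solution exists
n'*b2                                 % nonzero: no solution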

73 II.3 Graphs and Networks II.3 Graphs and Networks Prerequisites and Learning Goals From your work in previous courses you should be able to State Ohm s law for a resistor. State Kirchhoff s laws for a resistor network. After completing this section, you should be able to Be able to write down the incidence matrix of a directed graph, and to draw the graph given the incidence matrix. Define the Laplace operator or Laplacian for a graph and be able to write it down. When the edges of a graph represent resistors or batteries in a circuit, you should be able to interpret each of the four subspaces associated with the incidence matrix and their dimensions in terms of voltage and current vectors, and verify their orthogonality relations. write down Ohm s law for all the edges of the graph in matrix form using the Laplacian. express the connection between Kirchoff s law and the nullspace of the Laplacian. use the voltage-to-current map to calculate the voltages and currents in the network when a battery is attached. use the voltage-to-current map to calculate the effective resistance between two nodes in the network. Re-order rows and columns of a matrix and extract submatrices in MATLAB/Octave. II.3. Directed graphs and their incidence matrix A directed graph is a collection of vertices (or nodes) connected by edges with arrows. Here is a graph with 4 vertices and 5 edges. 73

74 II Subspaces, Bases and Dimension Graphs come up in many applications. For example, the nodes could represent computers and the arrows internet connections. Or the nodes could be factories and the arrows represent movement of goods. We will mostly focus on a single interpretation where the edges represent resistors or batteries hooked up in a circuit. In this interpretation we will be assigning a number to each edge to indicate the amount of current flowing through that edge. This number can be positive or negative. The arrows indicate the direction associated to a positive current. The incidence matrix of a graph is an n m matrix, where n is the number of edges and m is the number of vertices. We label the rows by the edges in the graph and the columns by the vertices. Each row of the matrix corresponds to an edge in the graph. It has a in the place corresponding to the vertex where the arrow starts and a in the place corresponding to the vertex where the arrow ends. Here is the incidence matrix for the illustrated graph The columns of the matrix have the following interpretation. The column representing a given vertex has a + for each arrow coming in to that vertex and a for each arrow leaving the vertex. Given an incidence matrix, the corresponding graph can easily be drawn. What is the graph for? (Answer: a triangular loop.) 74
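As a sketch, here is how the incidence matrix of the example graph above can be entered in MATLAB/Octave. The edge list used (1→2, 2→3, 3→4, 2→4, 4→1) is the one implied by the voltage-difference formula Dv written out in the next subsection; each row has a −1 at the tail of the arrow and a +1 at its head.

% Incidence matrix for the example graph (edges: 1->2, 2->3, 3->4, 2->4, 4->1).
D = [-1  1  0  0;
      0 -1  1  0;
      0  0 -1  1;
      0 -1  0  1;
      1  0  0 -1];
v = [1 2 3 4]';        % any voltage vector
D*v                    % the vector of voltage differences across the edges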

75 II.3 Graphs and Networks II.3.2 Nullspace and range of incidence matrix and its transpose We now wish to give an interpretation of the fundamental subspaces associated with the incidence matrix of a graph. Let s call the matrix D. In our example D acts on vectors v R 4 and produces a vector Dv in R 5. We can think of the vector v = v v 2 v 3 v 4 as an v 2 v v 3 v 2 assignment of a voltage to each of the nodes in the graph. Then the vector Dv = v 4 v 3 v 4 v 2 v v 4 assigns to each edge the voltage difference across that edge. The matrix D is similar to the derivative matrix when we studied finite difference approximations. It can be thought of as the derivative matrix for a graph. II.3.3 The null space N(D) This is the set of voltages v for which the voltage differences in Dv are all zero. This means that any two nodes connected by an edge will have the same voltage. In our example, this implies all the voltages are the same, so every vector in N(D) is of the form v = s for some s. In other words, the null space is one dimensional with basis. For a graph that has several disconnected pieces, Dv = will force v to be constant on each connected component of the graph. Each connected component will contribute one basis vector to N(D). This is the vector that is equal to on that component and zero everywhere else. Thus dim(n(d)) will be equal to the number of disconnected pieces in the graph. II.3.4 The range R(D) The range of D consists of all vectors b in R 5 that are voltage differences, i.e., b = Dv for some v. We know that the dimension of R(D) is 4 dim(n(d)) = 4 = 3. So the set of voltage difference vectors must be restricted in some way. In fact a voltage difference vector will have the property that the sum of the differences around a closed loop is zero. In the 75

76 II Subspaces, Bases and Dimension b 2 example the edges, 4, 5 form a loop, so if b = b 3 is a voltage difference vector then b 4 b 5 v 2 v v 3 v 2 b + b 4 + b 5 = We can check this directly in the example. Since b = Dv = v 4 v 3 v 4 v 2 v v 4 we check that (v 2 v ) + (v 4 v 2 ) + (v v 4 ) =. In the example graph there are three loops, namely, 4, 5 and 2, 3, 4 and, 2, 3, 5. The corresponding equations that the components of a vector b must satisfy to be in the range of D are b b + b 4 + b 5 = b 2 + b 3 b 4 = b + b 2 + b 3 + b 5 = Notice the minus sign in the second equation corresponding to a backwards arrow. However these equations are not all independent, since the third is obtained by adding the first two. There are two independent equations that the components of b must satisfy. Since R(D) is 3 dimensional, there can be no additional constraints. Now we wish to find interpretations for the null space and the range of D T. Let y = be a vector in R 5 which we interpret as being an assignment of a current to each edge in y 5 y the graph. Then D T y = y y 2 y 4 y 2 y 3. This vector assigns to each node the amount of y 3 + y 4 y 5 current collecting at that node. II.3.5 The null space N(D T ) This is the set of current vectors y R 5 which do not result in any current building up (or draining away) at any of the nodes. We know that the dimension of this space must be 5 dim(r(d T )) = 5 dim(r(d)) = 5 3 = 2. We can guess at a basis for this space by noting that current running around a loop will not build up at any of the nodes. The loop vector represents a current running around the loop, 4, 5. We can verify that this y y 2 y 3 y 4 y 5 76

77 II.3 Graphs and Networks vector lies in the null space of D T : = The current vectors corresponding to the other two loops are and. However these three vectors are not linearly independent. Any choice of two of these vectors are independent, and form a basis. II.3.6 The range R(D T ) This is the set of vectors in R 4 of the form x x 2 x 3 x 4 = DT y. With our interpretation these are vectors which measure how the currents in y are building up or draining away from each node. Since the current that is building up at one node must have come from some other nodes, it must be that x + x 2 + x 3 + x 4 = In our example, this can be checked directly. This one condition in R 4 results in a three dimensional subspace. II.3.7 Summary and Orthogonality relations The two subspaces R(D) and N(D T ) are subspaces of R 5. The subspace N(D T ) contains all linear combination of loop vectors, while R(D) contains all vectors whose dot product with loop vectors is zero. This verifies the orthogonality relation R(D) = N(D T ). The two subspaces N(D) and R(D T ) are subspaces of R 4. The subspace N(D) contains constant vectors, while R(D T ) contains all vectors orthogonal to constant vectors. This verifies the other orthogonality relation N(D) R(D T ). 77

II.3.8 Resistors and the Laplacian

Now we suppose that each edge of our graph represents a resistor. This means that we associate with the ith edge a resistance R_i. Sometimes it is convenient to use conductances γ_i which are defined to be the reciprocals of the resistances, that is, γ_i = 1/R_i.

[Figure: the example graph with resistors R_1, ..., R_5 placed on its five edges]

We begin with an assignment of a voltage to every node, and put these numbers in a vector v ∈ R^4. Then Dv ∈ R^5 represents the vector of voltage differences for each of the edges. Given the resistance R_i for each edge, we can now invoke Ohm's law to compute the current flowing through each edge. For each edge, Ohm's law states that

$$
(\Delta V)_i = j_i R_i,
$$

where (ΔV)_i is the voltage drop across the edge, j_i is the current flowing through that edge, and R_i is the resistance. Solving for the current we obtain j_i = R_i^{-1} (ΔV)_i. Notice that the voltage drop (ΔV)_i in this formula is exactly the ith component of the vector Dv. So if we collect all the currents flowing along each edge in a vector j indexed by the edges, then Ohm's law for all the edges can be written as

$$
j = R^{-1} D v
$$

where

$$
R = \begin{bmatrix} R_1 & & & & \\ & R_2 & & & \\ & & R_3 & & \\ & & & R_4 & \\ & & & & R_5 \end{bmatrix}
$$

is the diagonal matrix with the resistances on the diagonal.

79 II.3 Graphs and Networks Finally, if we multiply j by the matrix D T the resulting vector J = D T j = D T R Dv has one entry for each node, representing the total current flowing in or out of that node along the edges that connect to it. The matrix L = D T R D appearing in this formula is called the Laplacian. It is similar to the second derivative matrix that appeared when we studied finite difference approximations. One important property of the Laplacian is symmetry, that is the fact that L T = L. To see this recall that the transpose of a product of matrices is the product of the transposes in reverse order ((ABC) T = C T B T A T ). This implies that L T = (D T R D) T = D T R T D = L Here we used that D T T = D and that R, being a diagonal matrix, satisfies R T = R. Let s determine the entries of L. To start we consider the case where all the resistances have the same value so that R = R = I. In this case L = D T D. Let s start with the example graph above. Then L = 2 = Notice that the ith diagonal entry is the total number of edges connected to the ith node. The i,j entry is if the ith node is connected to the jth node, and otherwise. This pattern describes the Laplacian L for any graph. To see this, write D = [d d 2 d 3 d m ] Then the i,j entry of D T D is d T i d j. Recall that d i has an entry of for every edge leaving the ith node, and a for every edge coming in. So d T i d i, the diagonal entries of D T D, are the sum of (±) 2, with one term for each edge connected to the ith node. This sum gives the total number of edges connected to the ith node. To see this in the example graph, let s consider the first node. This node has two edges connected to it and d = 79

80 II Subspaces, Bases and Dimension Thus the, entry of the Laplacian is d T d = ( ) = 2 On the other hand, if i j then the vectors d i and d j have a non-zero entry in the same position only if one of the edges leaving the ith node is coming in to the jth node or vice versa. For a graph with at most one edge connecting any two nodes (we usually assume this) this means that d T i d j will equal if the ith and jth nodes are connected by an edge, and zero otherwise. For example, in the graph above the first edge leaves the first node, so that d has a in the first position. This first edge comes in to the second node so d 2 has a + in the first position. Otherwise, there is no overlap in these vectors, since no other edges touch both these nodes. Thus d T d 2 = [ ] = What happens if the resistances are not all equal to one? In this case we must replace D with R D in the calculation above. This multiplies the kth row of D with γ k = /R k. Making this change in the calculations above leads to the following prescription for calculating the entries of L. The diagonal entries are given by L i,i = k γ k Where the sum goes over all edges touching the ith node. When i j then { γ k if nodes i and j are connected with edge k L i,j = if nodes i and j are not connected II.3.9 Kirchhoff s law and the null space of L Kirchhoff s law states that currents cannot build up at any node. If v is the voltage vector for a circuit, then we saw that Lv is the vector whose ith entry is the total current building up at the ith node. Thus, for an isolated circuit that is not hooked up to any batteries, Kirchhoff s law can be written as Lv = By definition, the solutions are exactly the vectors in the nullspace N(L) of L. It turns out that N(L) is the same as N(D), which contains all constant voltage vectors. This is what we should expect. If there are no batteries connected to the circuit the voltage will be the same everywhere and no current will flow. 8

81 II.3 Graphs and Networks To see N(L) = N(D) we start with a vector v N(D). Then Dv = implies Lv = D T R Dv = D T R =. This show that v N(L) too, that is, N(D) N(L) To show the opposite inclusion we first note that the matrix R can be factored into a product of invertible matrices R = R /2 R /2 where R /2 is the diagonal matrix with diagonal entries / R i. This is possible because each R i is a positive number. Also, since R /2 is a diagonal matrix it is equal to its transpose, that is, R /2 = (R /2 ) T. Now suppose that Lv =. This can be written D T (R /2 ) T R /2 Dv =. Now we multiply on the left with v T. This gives v T D T (R /2 ) T R /2 Dv = (R /2 Dv) T R /2 Dv = But for any vector w, the number w T w is the dot product of w with itself which is equal to the length of w squared. Thus the equation above can be written R /2 Dv 2 = This implies that R /2 Dv =. Finally, since R /2 is invertible, this yields Dv =. We have shown that any vector in N(L) also is contained in N(D). Thus N(L) N(D) and together with the previous inclusion this yields N(L) = N(D). II.3. Connecting a battery To see more interesting behaviour in a circuit, we pick two nodes and connect them to a battery. For example, let s take our example circuit above and connect the nodes and 2. R 2 R 5 4 R 4 R 3 R 2 3 The terminals of a battery are kept at a fixed voltage. Thus the voltages v and v 2 are now known, say, v = b v 2 = b 2 8

82 II Subspaces, Bases and Dimension Of course, it is only voltage differences that have physical meaning, so we could set b =. Then b 2 would be the voltage of the battery. At the first and second nodes there now will be current flowing in and out from the battery. Let s call these currents J and J 2. At all the other nodes the total current flowing in and out is still zero, as before. How are the equations for the circuit modified? For simplicity let s set all the resistances R i =. The new equations are 2 b J b 2 v 3 v 4 = Two of the voltages v and v 2 have changed their role in these equations from being unknowns to being knowns. On the other hand, the first two currents, which were originally known quantities (namely zero) are now unknowns. Since the current flowing into the network should equal the current flowing out, we expect that J = J 2. This follows from the orthogonality relations for L. The vector J 2 is contained in R(L). But R(L) = N(L T ) = N(L) (since L = L T ). But we know that N(L) consists of all constant vectors. Hence J J 2 = J + J 2 = To solve this system of equations we write it in block matrix form [ ][ ] [ ] A B T b J = B C v where and A = [ ] 2 3 b = [ b b 2 ] v = B = [ v3 v 4 [ ] ] J = [ J J 2 C = ] J 2 [ ] 2 3 = Our system of equations can then be written as two 2 2 systems. Ab + B T v = J Bb + Cv = [ ] J 82

83 II.3 Graphs and Networks We can solve the second equation for v. Since C is invertible v = C Bb Using this value of v in the first equation yields J = (A B T C B)b The matrix A B T C B is the voltage-to-current map. In our example [ ] A B T C B = (8/5) In fact, for any circuit the voltage to current map is given by [ ] A B T C B = γ This ([ can be ]) deduced from two facts: (i) A B T C B is symmetric and (ii) R(A B T C B) = span. You are asked to carry this out in a homework problem. Notice that this form of the matrix implies that if b = [ b 2 ] then the currents are zero. b Another way of seeing this is to notice that if b = b 2 then is orthogonal to the range of A B T C B by (ii) and hence in the nullspace N(A B T C B). The number R = γ is the ratio of the applied voltage to the resulting current, is the effective resistance of the network between the two nodes. So in our example circuit, the effective resistance between nodes and 2 is 5/8. If the battery voltages are b = and b 2 = b then the voltages at the remaining nodes are [ ] [ ] [ ] v3 = C 4/5 B = b b 3/5 v 4 b 2 II.3. Two resistors in series Let s do a trivial example where we know the answer. If we connect two resistors in series, the resistances add, and the effective resistance is R +R 2. The graph for this example looks like 83

[Figure: a chain of three nodes 1, 2, 3 with resistor R_1 on the edge from node 1 to node 2 and resistor R_2 on the edge from node 2 to node 3]

The Laplacian for this circuit is

$$
L = \begin{bmatrix} \gamma_1 & -\gamma_1 & 0\\ -\gamma_1 & \gamma_1+\gamma_2 & -\gamma_2\\ 0 & -\gamma_2 & \gamma_2 \end{bmatrix}
$$

with γ_i = 1/R_i, as always. We want the effective resistance between nodes 1 and 3. Although it is not strictly necessary, it is easier to see what the submatrices A, B and C are if we reorder the vertices so that the ones we are connecting, namely 1 and 3, come first. This reshuffles the rows and columns of L yielding

$$
\begin{array}{c|ccc}
 & 1 & 3 & 2\\ \hline
1 & \gamma_1 & 0 & -\gamma_1\\
3 & 0 & \gamma_2 & -\gamma_2\\
2 & -\gamma_1 & -\gamma_2 & \gamma_1+\gamma_2
\end{array}
$$

Here we have labelled the re-ordered rows and columns with the nodes they represent. Now the desired submatrices are

$$
A = \begin{bmatrix} \gamma_1 & 0\\ 0 & \gamma_2\end{bmatrix}\qquad
B = \begin{bmatrix} -\gamma_1 & -\gamma_2\end{bmatrix}\qquad
C = \begin{bmatrix} \gamma_1+\gamma_2\end{bmatrix}
$$

and

$$
A - B^T C^{-1} B = \frac{1}{\gamma_1+\gamma_2}
\begin{bmatrix} \gamma_1\gamma_2 & -\gamma_1\gamma_2\\ -\gamma_1\gamma_2 & \gamma_1\gamma_2\end{bmatrix}
= \frac{\gamma_1\gamma_2}{\gamma_1+\gamma_2}\begin{bmatrix} 1 & -1\\ -1 & 1\end{bmatrix}.
$$

This gives an effective resistance of

$$
R = \frac{\gamma_1+\gamma_2}{\gamma_1\gamma_2} = \frac{1}{\gamma_1} + \frac{1}{\gamma_2} = R_1 + R_2,
$$

as expected.
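A short MATLAB/Octave check of this calculation, with sample values R_1 = 2 and R_2 = 3 chosen only for illustration; the effective resistance should come out as 5.

% Sketch: numerical check of the series resistance formula.
R1 = 2; R2 = 3; g1 = 1/R1; g2 = 1/R2;
L = [ g1   -g1     0;
     -g1  g1+g2  -g2;
      0    -g2    g2 ];
p  = [1 3 2];                    % reorder the nodes so 1 and 3 come first
Lp = L(p,p);
A = Lp(1:2,1:2);  B = Lp(3,1:2);  C = Lp(3,3);
M = A - B'*(C\B)                 % equals (g1*g2/(g1+g2)) * [1 -1; -1 1]
Reff = 1/M(1,1)                  % equals R1 + R2 = 5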

85 II.3 Graphs and Networks II.3.2 Example: a resistor cube Hook up resistors along the edges of a cube. If each resistor has resistance R i =, what is the effective resistance between opposite corners of the cube? We will use MATLAB/Octave to solve this problem. To begin we define the Laplace matrix L. Since each node has three edges connecting it, and all the resistances are, the diagonal entries are all 3. The off-diagonal entries are or, depending on whether the corresponding nodes are connected or not. >L=[ ; ; ; ; ; ; ; ]; We want to find the effective resistance between and 7. To compute the submatrices A, B and C it is convenient to re-order the nodes so that and 7 come first. In MATLAB/Octave, this can be achieved with the following statement. >L=L([,7,2:6,8],[,7,2:6,8]); In this statement the entries in the first bracket [,7,2:6,8] indicates the new ordering of the rows. Here 2:6 stands for 2,3,4,5,6. The second bracket indicates the re-ordering of the columns, which is the same as for the rows in our case. Now it is easy to extract the submatrices A, B and C and compute the voltage-to-current map DN >N = length(l); >A = L(:2,:2); >B = L(3:N,:2); >C = L(3:N,3:N); >DN = A - B *C^(-)*B; 85

86 II Subspaces, Bases and Dimension The effective resistance is the reciprocal of the first entry in DN. The command format rat gives the answer in rational form. (Note: this is just a rational approximation to the floating point answer, not an exact rational arithmetic as in Maple or Mathematica.) >format rat >R = /DN(,) R = 5/6 86

87 Chapter III Orthogonality 87

III.1 Projections

Prerequisites and Learning Goals

After completing this section, you should be able to

Write down the definition of an orthogonal projection matrix. Use properties of a projection matrix to deduce facts like the orthogonality of the null space and range. Compute the orthogonal projection matrix whose range is the span of a given collection of vectors. Use orthogonal projection matrices to decompose a vector into components parallel to and perpendicular to a given subspace. Use least squares to compute approximate solutions to systems of equations with no solutions. Perform least squares calculations in applications where overdetermined systems arise.

III.1.1 Projections onto lines and planes in R³

Recall the projection of a vector x onto a line containing the non-zero vector a is given by p = Px, where P is the projection matrix

$$
P = \|a\|^{-2}\, a a^T.
$$

Let's review why this formula is true, using properties of the dot product. Here is a diagram of the situation.

[Figure: the vector x, the vector a, and the projection p of x onto the line through a, with θ the angle between x and a]

89 III. Projections The length of the projected vector p is p = x cos(θ) = a x cos(θ) a = a x a = at x a To get the vector p start with the unit vector a/ a and stretch it by an amount p. This gives p = p a a = a 2aaT x This can be written p = Px where P is the projection matrix above. Notice that the matrix P satisfies P 2 = P since In addition, P T = P since P 2 = a 4aaT aa T = a 4a a 2 a T a 2aaT = P P T = a 2(aaT ) T = a 2(aT ) T A T = a 2aaT = P We will discuss the significance of the two properties P 2 = P and P T = P below. Example: What is the projection of x = in the direction of a = 2? Let s calculate the projection matrix P and compute Px and verify that P 2 = P and P T = P. >x = [ ] ; >a = [ 2 -] ; >P = (a *a)^(-)*a*a P = >P*x ans = >P*P 89

90 III Orthogonality ans = The projection of x on to the plane orthogonal to a is given by q = x p. a x q p Thus we can write q = x Px = (I P)x = Qx where Q = I P Notice that, like the matrix P, the matrix Q also satisfies Q 2 = Q and Q T = Q since Q 2 = (I P)(I P) = I 2P + P 2 = I P = Q and Q T = I T P T = I P = Q. Continuing with the example above, if we want to compute the projection matrix onto the plane perpendicular to a we compute Q = I P. Then Qx is the projection of x onto the plane. We can also check that Q 2 = Q. > Q = eye(3) - P Q =

91 III.1 Projections

>Q*x
>Q^2

III.1.2 Orthogonal Projections

Any matrix P satisfying P^2 = P is called a projection matrix. If, in addition, P^T = P, then P is called an orthogonal projection. (Warning: this is a different concept than that of an orthogonal matrix, which we will see later.)

The property P^2 = P says that any vector in the range of P is not changed by P, since P(Px) = P^2 x = Px.

The property P^T = P implies that N(P) = R(P)^⊥. This follows from the orthogonality relation N(P) = R(P^T)^⊥.

If P is an orthogonal projection, so is Q = I - P, as you can check. Clearly P + Q = I. Also

PQ = P(I - P) = P - P^2 = P - P = 0

and similarly QP = 0. These formulas show that R(P) = N(Q). To see this, first notice that if x ∈ R(P), so that x = Px, then Qx = QPx = 0, which means x ∈ N(Q). Conversely, if x ∈ N(Q) then x = (P + Q)x = Px ∈ R(P).

As a consequence (since N(Q) = R(Q^T)^⊥ = R(Q)^⊥) we see that the ranges of P and Q are orthogonal complements, that is, R(P) = R(Q)^⊥. In the example of the last section R(P) = N(Q) is the line through a, while N(P) = R(Q) is the plane orthogonal to a.

The orthogonality of Pa and Qb implies that

||Pa + Qb||^2 = (Pa + Qb) · (Pa + Qb) = ||Pa||^2 + ||Qb||^2

91

92 III Orthogonality

since the cross terms vanish.

Let P be an orthogonal projection. Let's show that given any vector x, the vector Px is the vector in R(P) that is closest to x. First we compute the square of the distance from Px to x. This is given by

||Px - x||^2 = ||Qx||^2

Now let Py be any other vector in the range R(P). Then the square of the distance from Py to x is

||Py - x||^2 = ||Py - (P + Q)x||^2 = ||P(y - x) - Qx||^2 = ||P(y - x)||^2 + ||Qx||^2

This implies that ||Py - x||^2 ≥ ||Qx||^2 = ||Px - x||^2, or equivalently ||Py - x|| ≥ ||Px - x||, with equality exactly when Px = Py.

III.1.3 Least squares solutions

We now consider linear equations

Ax = b

that do not have a solution. This is the same as saying that b ∉ R(A). What vector x is closest to being a solution?

(In the picture, b lies outside the subspace R(A) of possible values of Ax, and b - Ax is the vector joining Ax to b.)

We want to determine x so that Ax is as close as possible to b. In other words, we want to minimize ||Ax - b||. This will happen when Ax is the projection of b onto R(A), that is, Ax = Pb, where P is the projection matrix. In this case Qb = (I - P)b is orthogonal to R(A). But (I - P)b = b - Ax. Therefore (and this is also clear from the picture), we see that Ax - b is orthogonal to R(A). But the vectors orthogonal to R(A) are exactly the vectors in N(A^T). Thus the vector we are looking for will satisfy

A^T(Ax - b) = 0

or the equation

A^T A x = A^T b

This is the least squares equation, and a solution to this equation is called a least squares solution.

(Aside: We can also use Calculus to derive the least squares equation. We want to minimize ||Ax - b||^2. Computing the gradient and setting it to zero results in the same equations.)

92
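Here is a tiny numerical illustration (a sketch, not from the notes): a small made-up inconsistent system of three equations in two unknowns, solved through the least squares equation and, equivalently, with the backslash operator, which returns a least squares solution for overdetermined systems.

>A = [1 0; 1 1; 1 2];     % three equations, two unknowns
>b = [6; 0; 0];           % no exact solution exists
>x = (A'*A)\(A'*b)        % solve the least squares equation A'A x = A'b
>A\b                      % backslash gives the same least squares solution
>A'*(A*x - b)             % residual is orthogonal to R(A): should be ~0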

93 III.1 Projections

It turns out that the least squares equation always has a solution. Another way of saying this is R(A^T) = R(A^T A). Instead of checking this, we can verify that the orthogonal complements N(A) and N(A^T A) are the same. But this is something we showed before, when we considered the incidence matrix D for a graph.

If x solves the least squares equation, the vector Ax is the projection of b onto the range R(A), since Ax is the closest vector to b in the range of A.

In the case where A^T A is invertible (this happens when N(A) = N(A^T A) = {0}), we can obtain a formula for the projection. Starting with the least squares equation, we multiply by (A^T A)^(-1) to obtain

x = (A^T A)^(-1) A^T b

so that

Ax = A (A^T A)^(-1) A^T b.

Thus the projection matrix is given by

P = A (A^T A)^(-1) A^T

Notice that the formula for the projection onto a line through a is a special case of this, since then A^T A = ||a||^2.

It is worthwhile pointing out that if we say that the solution of the least squares equation gives the best approximation to a solution, what we really mean is that it minimizes the distance, or equivalently, its square

||Ax - b||^2 = Σ_i ((Ax)_i - b_i)^2.

There are other ways of measuring how far Ax is from b, for example the so-called L^1 norm

||Ax - b||_1 = Σ_i |(Ax)_i - b_i|

Minimizing the L^1 norm will result in a different best solution, which may be preferable under some circumstances. However, it is much more difficult to find!

III.1.4 Polynomial fit

Suppose we have some data points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n) and we want to fit a polynomial

p(x) = a_1 x^(m-1) + a_2 x^(m-2) + ... + a_(m-1) x + a_m

through them. This is like the Lagrange interpolation problem we considered before, except that now we assume that n > m. This means that in general there will be no such polynomial. However, we can look for the least squares solution. To begin, let's write down the equations that express the desired equalities p(x_i) = y_i for i = 1, ..., n. These can be written in matrix form

93

94 III Orthogonality

[ x_1^(m-1)  x_1^(m-2)  ...  x_1  1 ] [ a_1 ]   [ y_1 ]
[ x_2^(m-1)  x_2^(m-2)  ...  x_2  1 ] [ a_2 ]   [ y_2 ]
[     .          .                . ] [  .  ] = [  .  ]
[ x_n^(m-1)  x_n^(m-2)  ...  x_n  1 ] [ a_m ]   [ y_n ]

or Aa = y, where A is a submatrix of the Vandermonde matrix. To find the least squares approximation we solve A^T A a = A^T y. In a homework problem, you are asked to do this using MATLAB/Octave.

In the case where the polynomial has degree one this is a straight line fit, and the equations we want to solve are

[ x_1  1 ]           [ y_1 ]
[ x_2  1 ] [ a_1 ]   [ y_2 ]
[    .   ] [ a_2 ] = [  .  ]
[ x_n  1 ]           [ y_n ]

These equations will not have a solution (unless the points really do happen to lie on the same line). To find the least squares solution, we compute

A^T A = [ Σ x_i^2   Σ x_i ]        A^T y = [ Σ x_i y_i ]
        [ Σ x_i     n     ]                [ Σ y_i     ]

This results in the familiar equations

[ Σ x_i^2   Σ x_i ] [ a_1 ]   [ Σ x_i y_i ]
[ Σ x_i     n     ] [ a_2 ] = [ Σ y_i     ]

which are easily solved.

III.1.5 Football rankings

We can try to use least squares to rank football teams. To start with, suppose we have three teams. We pretend each team has a value v_1, v_2 and v_3 such that when two teams play, the

94

95 III. Projections difference in scores is the difference in values. So, if the season s games had the following results vs vs vs. 3 3 vs. 5 3 vs then the v i s would satisfy the equations v 2 v = v 2 v = 2 v 2 v 3 = v v 3 = 5 v 2 v 3 = Of course, there is no solution to these equations. Nevertheless we can find the least squares solution. The matrix form of the equations is Dv = b with The least squares equation is or D = b = 2 5 D T Dv = D T v v = Before going on, notice that D is an incidence matrix. What is the graph? (Answer: the nodes are the teams and they are joined by an edge with the arrow pointing from the losing team to the winning team. This graph may have more than one edge joining to nodes, if two teams play more than once. This is sometimes called a multi-graph.). We saw that in this situation N(D) is not empty, but contains vectors whose entries are all the same. The situation is the same as for resistances, it is only differences in v i s that have a meaning. We can solve this equation in MATLAB/Octave. The straightforward way is to compute >L = [3-2 -;-2 4-2;- -2 3]; 95

96 III Orthogonality >b = [-35; 4; -5]; >rref([l b]) ans = As expected, the solution is not unique. The general solution, depending on the parameter s is 7.5 v = s We can choose s so that the v i for one of the teams is zero. This is like grounding a node in a circuit. So, by choosing s = 7.5, s = 6.25 and s = we obtain the solutions 3.75, or Actually, it is easier to compute a solution with one of the v i s equal to zero directly. If [ ] v = v2 then v 2 = satisfies the equation L 2 v 2 = b 2 where the matrix L 2 is the bottom v 2 v 3 v 3 right 2 2 block of L and b 2 are the last two entries of b. >L2 = L(2:3,2:3); >b2 = b(2:3); >L2\b2 ans = We can try this on real data. The football scores for the 27 CFL season can be found at The differences in scores for the first 2 games are in cfl.m. The order of the teams is BC, Calgary, Edmonton, Hamilton, Montreal, Saskatchewan, Toronto, Winnipeg. Repeating the computation above for this data we find the ranking to be (running the file cfl.m) 96

97 III. Projections v = Not very impressive, if you consider that the second-lowest ranked team (Winnipeg) ended up in the Grey Cup game! 97
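Returning to the straight line fit of Section III.1.4, here is a small sketch (not from the notes) with made-up data points; it forms the normal equations directly and checks the answer against the built-in polyfit.

>x = [0 1 2 3]'; y = [1 2 2 4]';    % made-up data points
>A = [x ones(length(x),1)];         % each row is [x_i 1]
>a = (A'*A)\(A'*y)                  % least squares coefficients a_1, a_2
>polyfit(x, y, 1)                   % built-in straight line fit, same answer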

98 III Orthogonality

III.2 Orthonormal bases and Orthogonal Matrices

Prerequisites and Learning Goals

After completing this section, you should be able to

write down the definition of an orthonormal basis.

compute the coefficients in the expansion of a vector in an orthonormal basis.

compute the norm of a vector from its coefficients in an orthonormal basis.

write down the definition of an orthogonal matrix.

recognize an orthogonal matrix by its rows or columns.

know how to characterize an orthogonal matrix by its action on vectors.

III.2.1 Orthonormal bases

A basis q_1, q_2, ... is called orthonormal if

1. ||q_i|| = 1 for every i (normal)

2. q_i · q_j = 0 for i ≠ j (ortho).

The standard basis e_1 = [1, 0, ..., 0]^T, e_2 = [0, 1, 0, ..., 0]^T, e_3 = [0, 0, 1, 0, ..., 0]^T, ... is an orthonormal basis for R^n. Another orthonormal basis for R^2 is

q_1 = (1/√2) [1, 1]^T,   q_2 = (1/√2) [1, -1]^T

If you expand a vector in an orthonormal basis, it's very easy to find the coefficients in the expansion. Suppose

v = c_1 q_1 + c_2 q_2 + ... + c_n q_n

98

99 III.2 Orthonormal bases and Orthogonal Matrices

for some orthonormal basis q_1, q_2, .... Then, if we take the dot product of both sides with q_k, we get

q_k · v = c_1 q_k · q_1 + c_2 q_k · q_2 + ... + c_k q_k · q_k + ... + c_n q_k · q_n = 0 + ... + 0 + c_k + 0 + ... + 0 = c_k

This gives a convenient formula for each c_k. For example, in the expansion

[1, 2]^T = c_1 (1/√2)[1, 1]^T + c_2 (1/√2)[1, -1]^T

we have

c_1 = (1/√2)[1, 1]^T · [1, 2]^T = 3/√2
c_2 = (1/√2)[1, -1]^T · [1, 2]^T = -1/√2

Notice also that the norm of v is easily expressed in terms of the coefficients c_i. We have

||v||^2 = v · v = (c_1 q_1 + c_2 q_2 + ... + c_n q_n) · (c_1 q_1 + c_2 q_2 + ... + c_n q_n) = c_1^2 + c_2^2 + ... + c_n^2

Another way of saying this is that the vector c = [c_1, c_2, ..., c_n] of coefficients has the same norm as v.

III.2.2 Orthogonal matrices

An n × n matrix Q is called orthogonal if Q^T Q = I (equivalently if Q^T = Q^(-1)). If the columns of Q are q_1, q_2, ..., q_n, then Q is orthogonal if

          [ q_1 · q_1   q_1 · q_2  ...  q_1 · q_n ]   [ 1  0  ...  0 ]
Q^T Q  =  [ q_2 · q_1   q_2 · q_2  ...  q_2 · q_n ] = [ 0  1  ...  0 ]
          [     .            .              .     ]   [      ...     ]
          [ q_n · q_1   q_n · q_2  ...  q_n · q_n ]   [ 0  0  ...  1 ]

This is the same as saying that the columns of Q form an orthonormal basis.

Another way of recognizing orthogonal matrices is by their action on vectors. Suppose Q is orthogonal. Then

||Qv||^2 = (Qv) · (Qv) = v · (Q^T Q v) = v · v = ||v||^2

99

100 III Orthogonality This implies that Qv = v. In other words, orthogonal matrices don t change the lengths of vectors. The converse is also true. If a matrix Q doesn t change the lengths of vectors then it must be orthogonal. To see this, suppose that Qv = v for every v. Then the calculation above shows that v (Q T Qv) = v v for every v. Applying this to v + w we find Expanding, this gives (v + w) (Q T Q(v + w) ) = (v + w) (v + w) v (Q T Qv) + w (Q T Qw) + v (Q T Qw) + w (Q T Qv) = v v + w w + v w + w v Since v (Q T Qv) = v v and w (Q T Qw) = w w we can cancel these terms. Also w (Q T Qv) = ((Q T Q) T w) v = (Q T Qw) v = v (Q T Qw). So on each side of the equation, the two remaining terms are the same. Thus v (Q T Qw) = v w This equation holds for every choice of vectors v and w. If v = e i and w = e j then the left side is the i,jth matrix element Q i,j of Q while the right side is the e i e j, which is i,jth matrix element of the identity matrix. Thus Q T Q = I and Q is orthogonal. We can recast the problem of finding coefficients c,c 2,...,c n in the expansion v = c q + c 2 q c n q n in an orthonormal basis as the solution of the matrix equation Qc = v where Q is the orthogonal matrix whose columns contain the orthonormal basis vectors. The solution is obtained by multiplying by Q T. Since Q T Q = I multiplying both sides by Q T gives c = Q T v. The fact that c = v follows from the length preserving property of orthogonal matrices. Recall that for square matrices a left inverse is automatically also a right inverse. So if Q T Q = I then QQ T = I too. This means that Q T is an orthogonal matrix whenever Q is. This proves the (non-obvious) fact that if the columns of an square matrix form an orthonormal basis, then so do the rows! A set G of invertible matrices is called a (matrix) group if. I G (G contains the identity matrix) 2. If A,B G then AB G (G is closed under matrix multiplication) 3. If A G then A is invertible and A G (G is closed under taking the inverse) In a homework problem, you are asked to verify that the set of n n orthogonal matrices is a group.
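These facts are easy to check numerically. The following sketch (not from the notes) uses the orthonormal basis of R^2 from the text and a sample vector to compute expansion coefficients via c = Q^T v, and then checks length preservation and the group property on a product with a rotation matrix (the angle 0.7 is arbitrary).

>q1 = [1; 1]/sqrt(2); q2 = [1; -1]/sqrt(2);
>Q = [q1 q2];                     % orthogonal matrix with orthonormal columns
>v = [1; 2];
>c = Q'*v                         % coefficients of v in the basis q1, q2
>Q*c                              % reconstructs v
>norm(c) - norm(v)                % lengths agree: should be ~0
>theta = 0.7;
>R = [cos(theta) -sin(theta); sin(theta) cos(theta)];
>norm((Q*R)'*(Q*R) - eye(2))      % a product of orthogonal matrices is orthogonal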

101 III.3 Complex vector spaces and inner product III.3 Complex vector spaces and inner product Prerequisites and Learning Goals From your work in previous courses, you should be able to perform arithmetic with complex numbers. write down the definition of complex conjugate, modulus and argument of a complex number write down the definition of the complex exponential, addition formula, differentiation and integration of complex exponentials. After completing this section, you should be able to write down the definition of complex inner product and the norm of a complex vector write down the definition of the matrix adjoint, its relation to the complex inner product. write down the definition of a unitary matrix and its properties. use complex numbers in MATLAB/Octave computations, specifically real(z), imag(z), conj(z), abs(z), exp(z) and A for complex matrices. III.3. Review of complex numbers [ ] x Complex numbers can be thought of as points on the (x, y) plane. The point, thought y of as a complex number, is written x + iy (or x + jy if you are an electrical engineer). If z = x + iy then x is called the real part of z and y is called the imaginary part of z. Complex numbers are added just as if they were vectors in two dimensions. If z = x + iy and w = s + it, then z + w = (x + iy) + (s + it) = (x + s) + i(y + t) To multiply two complex numbers, just remember that i 2 =. So if z = x + iy and w = s + it, then zw = (x + iy)(s + it) = xs + i 2 yt + iys + ixt = (xs yt) + i(xt + ys)

102 III Orthogonality The modulus of a complex number, denoted z is simply the length of the corresponding vector in two dimensions. If z = x + iy z = x + iy = x 2 + y 2 An important property is zw = z w The complex conjugate of a complex number z, denoted z, is the reflection of z across the x axis. Thus x + iy = x iy. Thus complex conjugate is obtained by changing all the i s to i s. We have zw = z w and z z = z 2 This last equality is useful for simplifying fractions of complex numbers by turning the denominator into a real number, since z w = z w w 2 For example, to simplify ( + i)/( i) we can write + i i = ( + i) 2 ( i)( + i) = + 2i = i 2 A complex number z is real (i.e. the y part in x + iy is zero) whenever z = z. We also have the following formulas for the real and imaginary part. If z = x + iy then x = (z + z)/2 and y = (z z)/(2i) We define the exponential, e it, of a purely imaginary number it to be the number lying on the unit circle in the complex plane. e it = cos(t) + isin(t) The complex exponential satisfies the familiar rule e i(s+t) = e is e it since by the addition formulas for sine and cosine e i(s+t) = cos(s + t) + isin(s + t) = cos(s)cos(t) sin(s)sin(t) + i(sin(s)cos(t) + cos(s)sin(t)) = (cos(s) + isin(s))(cos(t) + isin(t)) = e is e it The exponential of a number that has both a real and imaginary part is defined in the natural way. e a+ib = e a e ib = e a (cos(b) + isin(b)) 2

103 III.3 Complex vector spaces and inner product The derivative of a complex exponential is given by the formula while the anti-derivative, for (a + ib) is e (a+ib)t dt = d dt e(a+ib)t = (a + ib)e (a+ib)t (a + ib) e(a+ib)t + C If (a + ib) = then e (a+ib)t = e = so in this case e (a+ib)t dt = dt = t + C III.3.2 Complex vector spaces and inner product So far in this course, our scalars have been real numbers. We now want to allow complex numbers. The basic example of a complex vector space is the space C n of n-tuples of complex numbers. Vector addition and scalar multiplication are defined as before: z w z + w z sz z 2 w 2 z 2 + w 2 z 2 sz 2 +. =., z n + w n. z n w n where now z i, w i and s are complex numbers. s. z n =., sz n For complex matrices (or vectors) we define the complex conjugate matrix (or vector) by conjugating each entry. Thus, if A = [a i,j ], then A = [a i,j ]. The product rule for complex conjugation extends to matrices and we have AB = Ā B w w 2 The inner product of two complex vectors w =. and z = w,z = w T z = w n n w i z i i= z z 2. z n is defined by 3

104 III Orthogonality With this definition the norm of z is always positive since n z,z = z 2 = z i 2 i= For complex matrices and vectors we have to modify the rule for bringing a matrix to the other side of an inner product. w,az = w T Az = (A T w) T z ( T = (A w)) T z = A T w,z This leads to the definition of the adjoint of a matrix A = A T. (In physics you will also see the notation A.) With this notation w,az = A w,z. The complex analogue of an orthogonal matrix is called a unitary matrix. A unitary matrix U is a square matrix satisfying U U = UU = I. Notice that a unitary matrix with real entries is an orthogonal matrix since in that case U = U T. The columns of a unitary matrix form an orthonormal basis (with respect to the complex inner product.) MATLAB/Octave deals seamlessly with complex matrices and vectors. Complex numbers can be entered like this >z= + 2i z = + 2i There is a slight danger here in that if i has be defined to be something else (e.g. i =6) then z=i would set z to be 6. You could use z=i to get the desired result, or use the alternative syntax >z= complex(,) z = + i 4

105 III.3 Complex vector spaces and inner product The functions real(z), imag(z), conj(z), abs(z) compute the real part, imaginary part, conjugate and modulus of z. The function exp(z) computes the complex exponential if z is complex. If a matrix A has complex entries then A is not the transpose, but the adjoint (conjugate transpose). >z = [; i] z = + i + i z ans = - i - i Thus the square of the norm of a complex vector is given by >z *z ans = 2 This gives the same answer as >norm(z)^2 ans = 2. (Warning: the function dot in Octave does not compute the correct inner product for complex vectors (it doesn t take the complex conjugate). This is probably a bug. In MATLAB dot works correctly for complex vectors.) 5
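One more point worth illustrating (a sketch, not from the notes): for complex matrices the prime operator gives the adjoint (conjugate transpose), while the dot-prime operator gives the plain transpose without conjugation.

>A = [1 1i; 0 2];
>A'                   % adjoint: conjugate transpose
>A.'                  % plain transpose, no conjugation
>z = [1; 1i];
>z'*z                 % squared norm of z, equals 2
>z.'*z                % not the norm: equals 0 for this z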

106 III Orthogonality III.3.3 Vector spaces of complex-valued functions Let [a,b] be an interval on the real line. Recall that we introduced the vector space of real valued functions defined for x [a,b]. The vector sum f + g of two functions f and g was defined to be the function you get by adding the values, that is, (f + g)(x) = f(x) + g(x) and the scalar multiple sf was defined similarly by (sf)(x) = sf(x). In exactly the same way, we can introduce a vector space of complex valued functions. The independent variable x is still real, taking values in [a,b]. But now the values f(x) of the functions may be complex. Examples of complex valued functions are f(x) = x + ix 2 or f(x) = e ix = cos(x) + isin(x). Now we introduce the inner product of two complex valued functions on [a,b]. In analogy with the inner product for complex vectors we define and the assoicated norm defined by f,g = b a f 2 = f,f = f(x)g(x)dx b a f(x) 2 dx For real valued functions we can ignore the complex conjugate. Example: the inner product of f(x) = + ix and g(x) = x 2 over the interval [,] is + ix,x 2 = ( + ix) x 2 dx = ( ix) x 2 dx = x 2 ix 3 dx = 3 i 4 It will often happen that a function, like f(x) = x is defined for all real values of x. In this case we can consider inner products and norms for any interval [a,b] including semi-infinite and infinite intervals, where a may be or b may be +. Of course the values of the inner product an norm depend on the choice of interval. There are technical complications when dealing with spaces of functions. In this course we will deal with aspects of the subject where these complications don t play an important role. However, it is good to aware that they exist, so we will mention a few. One complication is that the integral defining the inner product may not exist. For example for the interval (, ) = R the norm of f(x) = x is infinite since x 2 dx = Even if the interval is finite, like [,], the function might have a spike. For example, if f(x) = /x then x 2dx = 6

107 III.3 Complex vector spaces and inner product too. To overcome this complication we agree to restrict our attention to square integrable functions. For any interval [a,b], these are the functions f(x) for which f(x) 2 is integrable. They form a vector space that is usually denoted L 2 ([a,b]). It is an example of a Hilbert space and is important in Quantum Mechanics. The L in this notation indicates that the integrals should be defined as Lebesgue integrals rather than as Riemann integrals usually taught in elementary calculus courses. This plays a role when discussing convergence theorems. But for any functions that come up in this course, the Lebesgue integral and the Riemann integral will be the same. The question of convergence is another complication that arises in infinite dimensional vector spaces of functions. When discussing infinite orthonormal bases, infinite linear combinations of vectors (functions) will appear. There are several possible meanings for an equation like c i φ i (x) = φ(x). i= since we are talking about convergence of an infinite series of functions. The most obvious interpretation is that for every fixed value of x the infinite sum of numbers on the left hand side equals the number on the right. Here is another interpretation: the difference of φ and the partial sums N i= c iφ i tends to zero when measured in the L 2 norm, that is lim N c i φ i φ = N i= With this definition, it might happen that there are individual values of x where the first equation doesn t hold. This is the meaning that we will give to the equation. 7

108 III Orthogonality III.4 Fourier series Prerequisites and Learning Goals After completing this section, you should be able to compute the Fourier series (in real and complex form) of a function defined on the interval [,L] interpret each of these series as the expansion of a function (vector) in an infinite orthogonal basis. use Parseval s formula to sum certain infinite series use MATLAB/Octave to plot the partial sums of Fourier series. explain what an amplitude-frequency plot is and be able to compute it in examples. III.4. An infinite orthonormal basis for L 2 ([a, b]) Let [a,b] be an interval of length L = b a. For every integer n, define the function Then infinite collection of functions e n (x) = e 2πinx/L. {...,e 2,e,e,e e 2,...} forms an orthonormal basis for the space L 2 ([a,b]), except that each function e n has norm L instead of. (Since this is the usual normalization, we will stick with it. To get a true orthonormal basis, we must divide each function by L.) Let s verify that these function form an orthonormal set (scaled by L). To compute the norm we calculate e n 2 = e n,e n = = = b a b a b = L a e 2πinx/L e 2πinx/L dx e 2πinx/L e 2πinx/L dx dx. 8

109 III.4 Fourier series This shows that e n = L for every n. Next we check that if n m then e n and e m are orthogonal. e n,e m = = = = = b a b a e 2πinx/L e 2πimx/L dx e 2πi(m n)x/l dx L b 2πi(m n) e2πi(m n)x/l x=a L (e 2πi(m n)b/l e 2πi(m n)a/l) 2πi(m n) Here we used that e 2πi(m n)b/l = e 2πi(m n)(b a+a)/l = e 2πi(m n) e 2πi(m n)a/l = e 2πi(m n)a/l. This shows that the functions {...,e 2,e,e,e e 2,...} form an orthonormal set (scaled by L). To show these functions form a basis we have to verify that they span the space L 2 [a,b]. In other words, we must show that any function f L 2 [a,b] can be written as an infinite linear combination f(x) = c n e n (x) = c n e 2πinx/L. n= n= This is a bit tricky, since it involves infinite series of functions. For a finite dimensional space, to show that an orthogonal set forms a basis, it suffices to count that there are the same number of elements in an orthogonal set as there are dimensions in the space. For an infinite dimensional space this is no longer true. For example, the set of e n s with n even is also an infinite orthonormal set, but it doesn t span all of L 2 [a,b]. In this course, we will simply accept that it is true that {...,e 2,e,e,e e 2,...} span L 2 [a,b]. Once we accept this fact, it is very easy to compute the coefficients in a Fourier expansion. The procedure is the same as in finite dimensions. Starting with f(x) = n= c n e n (x) we simply take the inner product of both sides with e m. The only term in the infinite sum that survives is the one with n = m. Thus e m,f = n= c n e m,e n = c m L and we obtain the formula c m = L b a e 2πimx/L f(x)dx 9
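Here is a rough numerical check of these formulas (a sketch, not from the notes), approximating the integrals with trapz on the interval [0, 1], so that L = 1; the test function f(x) = x^2 is just an arbitrary choice.

>x = linspace(0,1,2001);
>e = @(n) exp(2i*pi*n*x);                  % the basis functions e_n(x)
>trapz(x, conj(e(3)).*e(3))                % should be close to L = 1
>trapz(x, conj(e(3)).*e(5))                % should be close to 0
>f = x.^2;
>c = @(m) trapz(x, exp(-2i*pi*m*x).*f);    % approximates the coefficient c_m
>c(0), c(1)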

110 III Orthogonality III.4.2 Real form of the Fourier series Fourier series are often written in terms of sines and cosines as f(x) = a 2 + (a n cos(2πnx/l) + b n sin(2πnx/l)) n= To obtain this form, recall that e ±2πinx/L = cos(2πnx/l) ± isin(2πnx/l) Using this formula we find c n e 2πnx/L = c + c n e 2πnx/L + c n e 2πnx/L n= n= n= n= = c + c n (cos(2πnx/l) + isin(2πnx/l)) + c n (cos(2πnx/l) isin(2πnx/l)) = c + n= ((c n + c n )cos(2πnx/l) + i(c n c n )sin(2πnx/l))) n= Thus the real form of the Fourier series holds with Equivalently a = 2c a n = c n + c n for n > b n = ic n ic n for n >. c = a 2 c n = a n 2 + b n 2i c n = a n 2 b n 2i for n > for n <. The coefficients a n and b n in the real form of the Fourier series can also be obtained directly. The set of functions {/2,cos(2πx/L),cos(4πx/L),cos(6πx/L),...,sin(2πx/L),sin(4πx/L),sin(6πx/L),...} also form an orthogonal basis where each vector has norm L/2. This leads to the formulas a n = 2 L b a cos(2πnx/l)f(x)

111 III.4 Fourier series for n =,,2,... and b n = 2 L b a sin(2πnx/l)f(x) for n =,2,... The desire to have the formula for a n work out for n = is the reason for dividing by 2 in the constant term a /2 in the real form of the Fourier series. One advantage of the real form of the Fourier series is that if f(x) is a real valued function, then the coefficients a n and b n are real too, and the Fourier series doesn t involve any complex numbers. However, it is often to calculate the coefficients c n because exponentials are easier to integrate than sines and cosines. III.4.3 An example Let s compute the Fourier coefficients for the square wave function. In this example L =. { if x /2 f(x) = if /2 < x If n = then e i2πnx = e = so c is simply the integral of f. Otherwise, we have c = c n = = f(x)dx = /2 = e i2πnx i2πn /2 e i2πnx f(x)dx e i2πnx dx x=/2 x= dx dx = /2 /2 e i2πnx i2πn = 2 2eiπn { 2πin if n is even = 2/iπn if n is odd e i2πnx dx x= x=/2 Thus we conclude that f(x) = n= n odd 2 iπn ei2πnx To see how well this series is approximating f(x) we go back to the real form of the series. Using a n = c n + c n and b n = ic n ic n we find that a n = for all n, b n = for n even and

112 III Orthogonality b n = 4/πn for n odd. Thus f(x) = n= n odd 4 πn sin(2πnx) = n= 4 sin(2π(2n + )x) π(2n + ) We can use MATLAB/Octave to see how well this series is converging. The file ftdemo.m contains a function that take an integer N as an argument and plots the sum of the first 2N + terms in the Fourier series above. Here is a listing: function ftdemo(n) end X=linspace(,,); F=zeros(,); for n=[:n] F = F + 4*sin(2*pi*(2*n+)*X)/(pi*(2*n+)); end plot(x,f) Here are the outputs for N =,, 2,, 5:

113 III.4 Fourier series

III.4.4 Parseval's formula

If v_1, v_2, ..., v_n is an orthonormal basis in a finite dimensional vector space and the vector v has the expansion

v = c_1 v_1 + ... + c_n v_n = Σ_{i=1}^{n} c_i v_i

then, taking the inner product of v with itself, and using the fact that the basis is orthonormal, we obtain

⟨v, v⟩ = Σ_{i=1}^{n} Σ_{j=1}^{n} c̄_i c_j ⟨v_i, v_j⟩ = Σ_{i=1}^{n} |c_i|^2

The same formula is true in Hilbert space. If

f(x) = Σ_{n=-∞}^{∞} c_n e_n(x)

then

∫ |f(x)|^2 dx = ⟨f, f⟩ = Σ_{n=-∞}^{∞} |c_n|^2

In the example above, we have ⟨f, f⟩ = ∫_0^1 1 dx = 1, so we obtain

1 = Σ_{n odd} 4/(π^2 n^2) = Σ_{n=0}^{∞} 8/(π^2 (2n+1)^2)

or

Σ_{n=0}^{∞} 1/(2n+1)^2 = π^2/8

III.4.5 Interpretation of Fourier series

What is the meaning of a Fourier series in a practical example? Consider the sound made by a musical instrument in a time interval [0, T]. This sound can be represented by a function

113

114 III Orthogonality y(t) for t [,T], where y(t) is the air pressure at a point in space, for example, at your eardrum. A complex exponential e 2πiωt = cos(2πωt) ± isin(2πωt) can be thought of as a pure oscillation with frequency ω. It is a periodic function whose values are repeated when t increases by ω. If t has units of time (seconds) then ω has units of Hertz (cycles per second). In other words, in one second the function e 2πiωt cycles though its values ω times. The Fourier basis functions can be written as e 2πiωnt with ω n = n/t. Thus Fourier s theorem states that for t [, T] y(t) = n= c n e 2πiωnt. In other words, the audio signal y(t) can be synthesized as a superposition of pure oscillations with frequencies ω n = n/t. The coefficients c n describe how much of the frequency ω n is present in the signal. More precisely, writing the complex number c n as c n = c n e 2πiτn we have c n e 2πiωnt = c n e 2πi(ωnt+τn). Thus c n represents the amplitude of the oscillation with frequency ω n while τ n represents a phase shift. A frequency-amplitude plot for y(t) is a plot of the points (ω n, c n ). It should be thought of as a graph of the amplitude as a function of frequency and gives a visual representation of how much of each frequency is present in the signal. If y(t) is defined for all values of t we can use any interval that we want and expand the restriction of y(t) to this interval. Notice that the frequencies ω n = n/t in the expansion will be different for different values of T. Example: Let s illustrate this with the function y(t) = e 2πit and intervals [,T]. This function is itself a pure oscillation with frequency ω =. So at first glance one would expect that there will be only one term in the Fourier expansion. This will turn out to be correct if number is one of the available frequencies, that is, if there is some value of n for which ω n = n/t =. (This happens if T is an integer.) Otherwise, it is still possible to reconstruct y(t), but more frequencies will be required. In this case we would expect that c n should be large for ω n close to. Let s do the calculation. Fix T. Let s first consider the case when T is an integer. Then c n = T T T e 2πint/T e 2πit dt = e 2πi( n/t)t dt T { n = T = ( 2T πi( n/t) e 2πi(T n) e ) = n T, 4

115 III.4 Fourier series as expected. Now let s look at what happens when T is not an integer. Then c n = T = T 2πi(T n) e 2πint/T e 2πit dt A calculation (that we leave as an exercise) results in ( ) e 2πi(T n) c n = 2 2cos(2πT( ωn )) 2πT ω n We can use MATLAB/Octave to do an amplitude-frequency plot. Here are the commands for T =.5 and T =.5 N=[-2:2]; T=.5; omega=n/t; absc=sqrt(2-2*cos(2*pi*t*(-omega)))./(2*pi*t*abs(-omega)); plot(omega,absc) T=.5; omega=n/t; absc=sqrt(2-2*cos(2*pi*t*(-omega)))./(2*pi*t*abs(-omega)); hold on; plot(omega,absc, r ) Here is the result As expected, the values of c n are largest when ω n is close to. 5

116 III Orthogonality III.5 The Discrete Fourier Transform Prerequisites and Learning Goals After completing this section, you should be able to write down the definition of the discrete Fourier transform, and compute the matrix that implements it. explain why the Fast Fourier transform algorithm is a faster method. compute the discrete Fourier transform of a vector using the fft algorithm. compute the fft and its inverse using MATLAB/Octave. compute a frequency-amplitude plot for a sampled signal using MATLAB/Octave, and interpret the result. III.5. Definition In the previous section we saw that the functions e k (x) = e 2πikx for k Z form an infinite orthonormal basis for the Hilbert space of functions L 2 ([,]). Now we will introduce a discrete, finite dimensional version of this basis. To motivate the definition of this basis, imagine taking a function defined on the interval [,] and sampling it at the point at the N points,/n,2/n,...,j/n,...,(n )/N. If we do this to the basis functions e k (x) we end up with vectors e k given by where e k = e 2πik/N e 2πik/N e 2πi2k/N. e 2πi(N )k/n = ω N = e 2πi/N ωn k ωn 2k. ω (N )k N The complex number ω N lies on the unit circle, that is, ω N =. Moreover ω N is a primitive Nth root of unity. This means that ωn N = and ωj N unless j is a multiple of N. Because ω k+n N = ωn k ωn N = ωk N we see that e k+n = e k. Thus, although the vectors e k are defined for every integer k, they start repeating themselves after N steps. Thus there are only N distinct vectors, e,e,...,e N. 6

117 III.5 The Discrete Fourier Transform These vectors, e k for k =,...,N form an orthogonal basis for C N. To see this we use the formula for the sum of a geometric series: N N r = r j = r N j= r r Using this formula, we compute e k,e l = N j= N ω kj N ω lj N = j= ω (l k)j N = N ω (l k)n N ω l k N l = k = l k Now we can expand any vector f C N in this basis. Actually, to make our discrete Fourier transform agree with MATLAB/Octave we divide each basis vector by N. Then we obtain f = N N j= c j e j where c k = e k,f = N j= e 2πikj/N f j The map that send the vector f to the vector of coefficients c = [c,...,c N ] T is the discrete Fourier transform. We can write this in matrix form as c = Ff, f = F c where the matrix F has the vectors e k as its columns. Since this vectors are an orthogonal basis, the inverse is the transpose, up to a factor of N. Explicitly and ω N ω 2 N ω N N F = ω 2 N ω 4 N ω 2(N ) N.... ω N N ω 2(N ) N ω (N )(N ) N F = ω N ω 2 N ω N N ω 2 N ω 4 N ω 2(N ) N N.... ω N N ω 2(N ) N ω (N )(N ) N 7
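To make the definition concrete, here is a sketch (not from the notes) that builds the N × N matrix with entries e^(-2πikj/N) explicitly and checks that multiplying by it agrees with the built-in fft (which uses the same convention, with no 1/N factor), and that its columns are orthogonal, each with norm √N.

>N = 8;
>[j, k] = meshgrid(0:N-1, 0:N-1);    % k indexes rows, j indexes columns
>F = exp(-2i*pi*k.*j/N);
>f = rand(N,1);
>norm(F*f - fft(f))                  % should be ~0
>norm(F'*F - N*eye(N))               % columns are orthogonal with norm sqrt(N)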

118 III Orthogonality The matrix F = N /2 F is a unitary matrix ( F = F ). Recall that unitary matrices preserve the length of complex vectors. This implies that the lengths of the vectors f = [f,f,...,f N ] and c = [c,c,...,c N ] are related by or N N k= N c 2 = f 2 c k 2 = This is the discrete version of Parseval s formula. N k= f k 2 III.5.2 The Fast Fourier transform Multiplying an N N matrix with a vector of length N normally requires N 2 multiplications, since each entry of the product requires N, and there are N entries. It turns out that the discrete Fourier transform, that is, multiplication by the matrix F, can be carried out using only N log 2 (N) multiplications (at least if N is a power of 2). The algorithm that achieves this is called the Fast Fourier Transform, or FFT. This represents a tremendous saving in time: calculations that would require weeks of computer time can be carried out in seconds. The basic idea of the FFT is to break the sum defining the Fourier coefficients c k into a sum of the even terms and a sum of the odd terms. Each of these turns out to be (up to a factor we can compute) a discrete Fourier transform of half the length. This idea is then applied recursively. Starting with N = 2 n and halving the size of the Fourier transform at each step, it takes n = log 2 (N) steps to arrive at Fourier transforms of length. This is where the log 2 (N) comes in. To simplify the notation, we will ignore the factor of /N in the definition of the discrete Fourier transform (so one should divide by N at the end of the calculation.) We now also assume that N = 2 n so that we can divide N by 2 repeatedly. The basic formula, splitting the sum for c k into a 8

119 III.5 The Discrete Fourier Transform sum over odd and even j s is c k = = = = N j= N j= j even N/2 j= N/2 j= e i2πkj/n f j e i2πkj/n f j + N j= j odd N/2 e i2πk2j/n f 2j + e i2πkj/n f j j= e i2πk(2j+)/n f 2j+ N/2 e i2πkj/(n/2) f 2j + e i2πk/n j= e i2πkj/(n/2) f 2j+ Notice that the two sums on the right are discrete Fourier transforms of length N/2. To continue, it is useful to write the integers j in base 2. Lets assume that N = 2 3 = 8. Once you understand this case, the general case N = 2 n will be easy. Recall that = (base 2) = (base 2) 2 = (base 2) 3 = (base 2) 4 = (base 2) 5 = (base 2) 6 = (base 2) 7 = (base 2) The even j s are the ones whose binary expansions have the form, while the odd j s have binary expansions of the form. For any pattern of bits like, I will use the notation F <pattern> to denote the discrete Fourier transform where the input data is given by all the f j s whose j s have binary expansion fitting the pattern. Here are some examples. To start, Fk = c k is the original discrete Fourier transform, since every j fits the pattern. In this example k ranges over,...,7, that is, the values start repeating after that. Only even j s fit the pattern, so F is the discrete Fourier transform of the even j s given by N/2 Fk = e i2πkj/(n/2) f 2j. j= 9

120 III Orthogonality Here k runs from to 3 before the values start repeating. Similarly, F is a transform of length N/4 = 2 given by N/4 Fk = e i2πkj/(n/4) f 4j. j= In this case k =, and then the values repeat. Finally, the only j matching the pattern is j = 2, so Fk is a transform of length one term given by N/8 Fk = e i2πkj/(n/8) f 2 = j= e f 2. = f 2 j= With this notation, the basic even odd formula can be written F k Recall that ω N = e i2π/n, so ω N = e i2π/n. = F k + ω k N F k. Lets look at this equation when k =. We will represent the formula by the following diagram. F ** ω N *** F ** F 2

121 III.5 The Discrete Fourier Transform This diagram means that F is obtained by adding F to ω N F. (Of course ω N = so we could omit it.) Now lets add the diagrams for k =,2,3. F ** ω N *** F F ** F *** ω N F ** 2 F *** 2 ω N 2 F ** 3 F *** ω 3 3 N ** F F ** F ** 2 F ** 3 2

122 III Orthogonality Now when we get to k = 4, we recall that F and F are discrete transforms of length N/2 = 4. Therefore, by periodicity F4 = F, F5 = F, and so on. So in the formula F4 = F4 + ω 4 N F 4 we may replace F4 and F4 with F and F respectively. Making such replacements, we complete the first part of the diagram as follows. F ** ω N *** F F ** F *** ω N F ** 2 F *** 2 ω N 2 F ** 3 F *** ω 3 3 N ** F F ** F ** 2 F ** 3 ω4 N ω5 N ω6 N ω7 N F *** 4 F *** 5 F *** 6 F *** 7 22

123 III.5 The Discrete Fourier Transform To move to the next level we analyze the discrete Fourier transforms on the left of this diagram in the same way. This time we use the basic formula for the transform of length N/2, namely and F k F k = F k = F k + ω k N/2 F k + ω k N/2 F k. The resulting diagram shows how to go from the length two transforms to the final transform on the right. * F ω N/2 F ** ω N *** F F* ω N/2 F ** ω N F *** F* ω 2 N/2 F ** 2 ω2 N F *** 2 F* ω 3 N/2 F ** 3 F *** ω 3 3 N F * ω N/2 ** F ω4 N F *** 4 F* ω N/2 F ** ω5 N F *** 5 F* ω2 N/2 F ** 2 ω6 N F *** 6 F* ω 3 N/2 F ** 3 ω7 N F *** 7 23

124 III Orthogonality Now we go down one more level. Each transform of length two can be constructed from transforms of length one, i.e., from the original data in some order. We complete the diagram as follows. Here we have inserted the value N = 8. f = F ω 2 * F F ** ω 4 ω 8 *** F = c f = 4 F ω 2 F* ω 4 F ** ω 8 F *** = c f 2 = F ω 2 F* ω 2 4 F ** 2 ω2 8 F *** 2 = c 2 f 6 = F ω 2 F* ω 3 4 F ** 3 F *** ω = c 3 f = F ω 2 F * ω 4 ** F ω4 8 F *** 4 = c 4 f 5 = F ω 2 F* ω 4 F ** ω5 8 F *** 5 = c 5 f 3 = F ω 2 F* ω2 4 F ** 2 ω6 8 F *** 6 = c 6 f 7 = F ω 2 F* ω 3 4 F ** 3 ω7 8 F *** 7 = c 7 Notice that the f j s on the left of the diagram are in bit reversed order. In other words, if we reverse the order of the bits in the binary expansion of the j s, the resulting numbers are ordered from () to 7 (). Now we can describe the algorithm for the fast Fourier transform. Starting with the original data [f,...,f 7 ] we arrange the values in bit reversed order. Then we combine them pairwise, as indicated by the left side of the diagram, to form the transforms of length 2. To do this we we need to compute ω 2 = e iπ =. Next we combine the transforms of length 2 according to the middle part of the diagram to form the transforms of length 4. Here we use that ω 4 = e iπ/2 = i. Finally we combine the transforms of length 4 to obtain the transform of length 8. Here we need ω 8 = e iπ/4 = 2 /2 i2 /2. The algorithm for values of N other than 8 is entirely analogous. For N = 2 or 4 we stop at the first or second stage. For larger values of N = 2 n we simply add more stages. How many multiplications do we need to do? Well there are N = 2 n multiplications per stage of the algorithm (one for each circle on the diagram), and there are n = log 2 (N) stages. So the number of multiplications is 2 n n = N log 2 (N) As an example let us compute the discrete Fourier transform with N = 4 of the data [f,f,f 2,f 3 ] = [,2,3,4]. First we compute the bit reversed order of = (), = (),2 = (),3 = () to be () =,() = 2,() =,() = 3. We then do the rest of the computation right on the diagram as follows. 24

125 III.5 The Discrete Fourier Transform f = +3=4 4+6= = c f = 2 3 3= 2 i 2+2i = c f = 2 2+4=6 4 6= 2 = c 2 f 3 = 4 2 4= 2 i 2 2i = c 3 The MATLAB/Octave command for computing the fast Fourier transform is fft. Let s verify the computation above. > fft([ 2 3 4]) ans = + i i -2 + i -2-2i The inverse fft is computed using ifft. III.5.3 A frequency-amplitude plot for a sampled audio signal Recall that a frequency-amplitude plot for the function y(t) defined on the interval [, T] is a plot of the points (ω n, c n ), where ω n = n/t and c n are the numbers appearing in the Fourier series y(t) = c n e 2πiωnt = c n e 2πint/T n= n= If y(t) represents the sound of a musical instrument, then the frequency-amplitude plot gives a visual representation of the strengths of the various frequencies present in the sound. Of course, for an actual instrument there is no formula for y(t) and the best we can do is to measure this function at a discrete set of points. To do this we pick a sampling frequency F s samples/second. Then we measure the function y(t) at times t = t j = j/f s, j =,... N, where N = F s T (so that t N = T) and put the results y j = y(t j ) in a vector y = [y,y 2,...,y N ] T. How can we make an approximate frequency-amplitude plot with this information? The key is to realize that the coefficients in the discrete Fourier transform of y can be used to approximate the Fourier series coefficients c n. To see this, do a Riemann sum approximation 25

126 III Orthogonality of the integral in the formula for c n. Using the equally spaced points t j with t j = /F s, and recalling that N = TF s we obtain c n = T T T N e 2πint/T y(t)dt e 2πintj/T y(t j ) t j j= = N e 2πinj/(TFs) y j TF s = N c n where c n is the nth coefficient in the discrete Fourier transform of y j= The frequency corresponding to c n is ω n = n/t = nf s /N. So, for an approximate frequency-amplitude plot, we can plot the points (nf s /N, c n /N). However, it is important to realize that the approximation c n c n /N is only good for small n. The reason is that the Riemann sum will do a worse job in approximating the integral when the integrand is oscillating rapidly, that is, when n is large. So we should only plot a restricted range of n. In fact, it never makes sense to plot more than N/2 points. The reason for this is c n+n = c n and, for y real valued, c n = c n. These facts imply that c n = c N n, so that the values of c n in the range [,N/2 ] are the same as the values in [N/2,N ], with the order reversed. To compare the meanings of the coefficients c n and c n it is instructive to consider the formulas (both exact) for the Fourier series and the discrete Fourier transform for y j = y(t j ): y j = N y(t j ) = N n= n= c n e 2πinj/N c n e 2πint j/t = n= c n e 2πinj/N The coefficients c n and c n /N are close for n close to, but then their values diverge so that the infinite sum and the finite sum above both give the same answer. Now let s try and make a frequency amplitude plot using MATLAB/Octave for a sampled flute contained in the audio file F6.baroque.au available at This file contains a sampled baroque flute playing the note F 6, which has a frequency of Hz. The sampling rate is F s = 225 samples/second. Audio processing is one area where MATLAB and Octave are different. The Octave code to load the file F6.baroque.au is 26

127 III.5 The Discrete Fourier Transform y=loadaudio( F6.baroque, au,8); while the MATLAB code is y=auread( F6.baroque.au ); After this step the sampled values are loaded in the vector y. Now we compute the FFT of y and store the resulting values c n in a vector tildec. Then we compute a vector omega containing the frequencies and make a plot of these frequencies against c n /N. We plot the first Nmax=N/4 values. tildec = fft(y); N=length(y); Fs=225; omega=[:n-]*fs/n; Nmax=floor(N/4); plot(omega(:nmax), abs(tildec(:nmax)/n)); Here is the result Notice the large spike at ω 396 corresponding to the note F 6. Smaller spikes appear at the overtone series, but evidently these are quite small for a flute. 27
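To see the same idea without an audio file, one can synthesize a signal with known frequencies and look at its amplitude plot. This is a sketch (not from the notes); the sampling rate, the frequencies 440 Hz and 880 Hz and their amplitudes are made up for illustration.

>Fs = 8000; T = 1;                             % sampling rate and signal length
>t = (0:Fs*T-1)/Fs;
>y = sin(2*pi*440*t) + 0.3*sin(2*pi*880*t);    % two pure oscillations
>N = length(y);
>tildec = fft(y);
>omega = (0:N-1)*Fs/N;
>Nmax = floor(N/2);
>plot(omega(1:Nmax), abs(tildec(1:Nmax))/N);   % spikes appear at 440 Hz and 880 Hz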

128

129 Chapter IV Eigenvalues and Eigenvectors 29

130 IV Eigenvalues and Eigenvectors IV. Eigenvalues and Eigenvectors Prerequisites and Learning Goals After completing this section, you should be able to write down the definition of eigenvalues and eigenvectors and be able to compute them using the standard procedure. use MATLAB/Octave commands poly and root to compute the characteristic polynomial and its roots, and eig to compute the eigenvalues and eigenvectors. write down the definitions of algebraic and geometric multiplicities of eigenvectors when there are repeated eigenvalues. use eigenvalues and eigenvectors to perform matrix diagonalization. recognize the form of the Jordan Canonical Form for non-diagonalizable matrices. explain the relationship between eigenvalues and the determinant and trace of a matrix. use eigenvalues to compute powers of a diagonalizable matrix. IV.. Definition Let A be an n n matrix. A number λ and non-zero vector v are an eigenvalue eigenvector pair for A if Av = λv Although v is required to be nonzero, λ = is possible. If v is an eigenvector, so is sv for any number s. Rewrite the eigenvalue equation as (λi A)v = Then we see that v is a non-zero vector in the nullspace N(λI A). Such a vector only exists if λi A is a singular matrix, or equivalently if det(λi A) = 3

131 IV.1 Eigenvalues and Eigenvectors

IV.1.2 Standard procedure

This leads to the standard textbook method of finding eigenvalues. The function of λ defined by p(λ) = det(λI - A) is a polynomial of degree n, called the characteristic polynomial, whose zeros are the eigenvalues. So the standard procedure is:

Compute the characteristic polynomial p(λ).

Find all the zeros (roots) of p(λ). This is equivalent to completely factoring p(λ) as

p(λ) = (λ - λ_1)(λ - λ_2) ⋯ (λ - λ_n)

Such a factorization always exists if we allow the possibility that the zeros λ_1, λ_2, ... are complex numbers. But it may be hard to find. In this factorization there may be repetitions in the λ_i's. The number of times a λ_i is repeated is called its algebraic multiplicity.

For each distinct λ_i find N(λ_i I - A), that is, all the solutions to

(λ_i I - A)v = 0

The non-zero solutions are the eigenvectors for λ_i.

IV.1.3 Example 1

This is the typical case where all the eigenvalues are distinct. Let

A = [ 3  -6  -7 ]
    [ 1   8   5 ]
    [-1  -2   1 ]

Then, expanding the determinant, we find

det(λI - A) = λ^3 - 12λ^2 + 44λ - 48

This can be factored as

λ^3 - 12λ^2 + 44λ - 48 = (λ - 2)(λ - 4)(λ - 6)

So the eigenvalues are 2, 4 and 6.

These steps can be done with MATLAB/Octave using poly and roots. If A is a square matrix, the command poly(A) computes the characteristic polynomial, or rather, its coefficients.

131

132 IV Eigenvalues and Eigenvectors

> A=[3 -6 -7; 1 8 5; -1 -2 1];
> p=poly(A)

p =

    1  -12   44  -48

Recall that the coefficient of the highest power comes first. The function roots takes as input a vector representing the coefficients of a polynomial and returns the roots.

>roots(p)

ans =

   6.0000
   4.0000
   2.0000

To find the eigenvector(s) for λ_1 = 2 we must solve the homogeneous equation (2I - A)v = 0. Recall that eye(n) is the n × n identity matrix I.

>rref(2*eye(3) - A)

ans =

   1   0  -1
   0   1   1
   0   0   0

From this we can read off the solution

v_1 = [ 1 ]
      [-1 ]
      [ 1 ]

Similarly we find for λ_2 = 4 and λ_3 = 6 that the corresponding eigenvectors are

v_2 = [-1 ]     v_3 = [-2 ]
      [-1 ]           [ 1 ]
      [ 1 ]           [ 0 ]

The three eigenvectors v_1, v_2 and v_3 are linearly independent and form a basis for R^3.

The MATLAB/Octave command for finding eigenvalues and eigenvectors is eig. The command eig(A) lists the eigenvalues

132

133 IV. Eigenvalues and Eigenvectors >eig(a) ans = while the variant [V,D] = eig(a) returns a matrix V whose columns are eigenvectors and a diagonal matrix D whose diagonal entries are the eigenvalues. >[V,D] = eig(a) V = D = e e e e e e e e e Notice that the eigenvectors have been normalized to have length one. Also, since they have been computed numerically, they are not exactly correct. The entry 2.243e-6 (i.e., ) should actually be zero. IV..4 Example 2 This example has a repeated eigenvalue. A = 2 The characteristic polynomial is det(λi A) = λ 3 4λ 2 + 5λ 2 = (λ ) 2 (λ 2) In this example the eigenvalues are and 2, but the eigenvalue has algebraic multiplicity 2. 33
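A quick way to check the output of eig numerically (a sketch, not from the notes) is to verify the defining relation AV = VD column by column, using the matrix from Example 1.

>A=[3 -6 -7; 1 8 5; -1 -2 1];
>[V,D] = eig(A);
>norm(A*V - V*D)               % should be ~0
>A*V(:,1) - D(1,1)*V(:,1)      % first column satisfies Av = lambda*v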

134 IV Eigenvalues and Eigenvectors Let s find the eigenvector(s) for λ = we have I A = From this it is easy to see that there are two linearly independent eigenvectors for this eigenvalue: v = and w = In this case we say that the geometric multiplicity is 2. In general, the geometric multiplicity is the number of independent eigenvectors, or equivalently the dimension of N(λI A) The eigenvalue λ 2 = 2 has eigenvector v 2 = So, although this example has repeated eigenvalues, there still is a basis of eigenvectors. IV..5 Example 3 Here is an example where the geometric multiplicity is less than the algebraic multiplicity. If 2 A = 2 2 then the characteristic polynomial is det(λi A) = (λ 2) 3 so there is one eigenvalue λ = 2 with algebraic multiplicity 3. To find the eigenvectors we compute 2I A = From this we see that there is only one independent solution v = 34

135 IV. Eigenvalues and Eigenvectors Thus the geometric multiplicity dim(n(2i A)) is. What does MATLAB/Octave do in this situation? >A=[2 ; 2 ; 2]; >[V D] = eig(a) V = D = It simply returned the same eigenvector three times. In this example, there does not exist a basis of eigenvectors. IV..6 Example 4 Finally, here is an example where the eigenvalues are complex, even though the matrix has real entries. Let [ ] A = Then which has no real roots. However det(λi A) = λ 2 + λ 2 + = (λ + i)(λ i) so the eigenvalues are λ = i and λ 2 = i. The eigenvectors are found with the same procedure as before, except that now we must use complex arithmetic. So for λ = i we compute [ ] i ii A = i There is trick for computing the null space of a singular 2 2 matrix. Since the two rows must be multiples of each other (in this case the second row is i times the first row) we simply 35

136 IV Eigenvalues and Eigenvectors [ ] a need to find a vector with ia + b =. This is easily achieved by flipping the entries in b the first row and changing the sign of one of them. Thus [ ] v = i If a matrix has real entries, then the eigenvalues and eigenvectors occur in conjugate pairs. This can be seen directly from the eigenvalue equation Av = λv. Taking complex conjugates (and using that the conjugate of a product is the product of the conjugates) we obtain Ā v = λ v But if A is real then Ā = A so A v = λ v, which shows that λ and v are also an eigenvalue eigenvector pair. From this discussion it follows that v 2 is the complex conjugate of v [ ] v 2 = i IV..7 A basis of eigenvectors In three of the four examples above the matrix A had a basis of eigenvectors. If all the eigenvalues are distinct, as in the first example, then the corresponding eigenvectors are always independent and therefore form a basis. To see why this is true, suppose A has eigenvalues λ,...,λ n that are all distinct, that is, λ i λ j for i j. Let v,...,v n be the corresponding eigenvectors. Now, starting with the first two eigenvectors, suppose a linear combination of them equals zero: c v + c 2 v 2 = Multiplying by A and using the fact that these are eigenvectors, we obtain c Av + c 2 Av 2 = c λ v + c 2 λ 2 v 2 = On the other hand, multiplying the original equation by λ 2 we obtain Subtracting the equations gives c λ 2 v + c 2 λ 2 v 2 =. c (λ 2 λ )v = Since (λ 2 λ ) and, being an eigenvector, v it must be that c =. Now returning to the original equation we find c 2 v 2 = which implies that c 2 = too. Thus v and v 2 are linearly independent. Now let s consider three eigenvectors v, v 2 and v 3. Suppose c v + c 2 v 2 + c 3 v 3 = 36

137 IV. Eigenvalues and Eigenvectors As before, we multiply by A to get one equation, then multiply by λ 3 to get another equation. Subtracting the resulting equations gives c (λ λ 3 )v + c 2 (λ 2 λ 3 )v 2 = But we already know that v and v 2 are independent. Therefore c (λ λ 3 ) = c 2 (λ 2 λ 3 ) =. Since λ λ 3 and λ 2 λ 3 this implies c = c 2 = too. Therefore v, v 2 and v 3 are independent. Repeating this argument, we eventually find that all the eigenvectors v,...,v n are independent. In example 2 above, we saw that it might be possible to have a basis of eigenvectors even when there are repeated eigenvalues. For some classes of matrices (for example symmetric matrices (A T = A) or orthogonal matrices) a basis of eigenvectors always exists, whether or not there are repeated eigenvalues. Will will consider this in more detail later in the course. IV..8 When there are not enough eigenvectors Let s try to understand a little better the exceptional situation where there are not enough eigenvectors to form a basis. Consider [ ] A ǫ = + ǫ [ ] when ǫ = this matrix has a single eigenvalues λ = and only one eigenvector v =. What happens when we change ǫ slightly? Then the eigenvalues change to and + ǫ, and being distinct, they must have independent eigenvectors. A short calculation reveals that they are [ ] [ ] v = v 2 = ǫ These two eigenvectors are almost, but not quite, dependent. When ǫ becomes zero they collapse and point in the same direction. In general, if you start with a matrix with repeated eigenvalues and too few eigenvectors, and change the entries of the matrix a little, some of the eigenvectors (the ones corresponding to the eigenvalues whose algebraic multiplicity is higher than the geometric multiplicity) will split into several eigenvectors that are almost parallel. IV..9 Diagonalization Suppose A is an n n matrix with eigenvalues λ,,λ n and a basis of eigenvectors v,...,v n. Form the matrix with eigenvectors as columns S = [ v v 2 v n ] 37

138 IV Eigenvalues and Eigenvectors Then AS = [ Av Av 2 Av n ] = [ ] λ v λ 2 v 2 λ n v n λ = [ ] λ 2 v v 2 v n λ 3.. λ n = SD where D is the diagonal matrix with the eigenvalues on the diagonal. Since the columns of S are independent, the inverse exists and we can write A = SDS S AS = D This is called diagonalization. Notice that the matrix S is exactly the one returns by the MATLAB/Octave call[s D] = eig(a). >A=[ 2 3; 4 5 6; 7 8 9]; >[S D] = eig(a); >S*D*S^(-) ans = IV.. Jordan canonical form If A is a matrix that cannot be diagonalized, there still exits a similar factorization called the Jordan Canonical Form. It turns out that any matrix A can be written as A = SBS where B is a block diagonal matrix. The matrix B has the form B B 2 B = B 3.. B k 38

139 IV. Eigenvalues and Eigenvectors Where each submatrix B i (called a Jordan block) has a single eigenvalue on the diagonal and s on the superdiagonal. λ i λ i B i = λ i.. λ i If all the blocks are of size then there are no s and the matrix is diagonalizable. IV.. Eigenvalues, determinant and trace Recall that the determinant satisfies det(ab) = det(a)det(b) and det(s ) = /det(s). Also, the determinant of a diagonal matrix (or more generally of an upper triangular matrix) is the product of the diagonal entries. Thus if A is diagonalizable then det(a) = det(sds ) = det(s)det(d)det(s ) = det(s)det(d)/det(s) = det(d) = λ λ 2 λ n Thus the determinant of a matrix is the product of the eigenvalues. This is true for nondiagonalizable matrices as well, as can be seen from the Jordan Canonical Form. Notice that the number of times a particular λ i appears in this product is the algebraic multiplicity of that eigenvalues. The trace of a matrix is the sum of the diagonal entries. If A = [a i,j ] then tr(a) = i a i,i. Even though it is not true that AB = BA in general, the trace is not sensitive to the change in order: tr(ab) = a i,j b j,i = b j,i a i,j = tr(ba) i,j i,j Thus (taking A = SD and B = S ) tr(a) = tr(sds ) = tr(s SD) = tr(d) = λ + λ λ n Thus the trace of a diagonalizable matrix is the sum of the eigenvalues. Again, this is true for non-diagonalizable matrices as well, and can be seen from the Jordan Canonical Form. IV..2 Powers of a diagonalizable matrix If A is diagonalizable then its powers A k are easy to compute. A k = SDS SDS SDS SDS = SD k S 39
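These relations are easy to confirm numerically (a sketch, not from the notes): for a random matrix, the product of the eigenvalues matches the determinant and their sum matches the trace, up to rounding error.

>A = rand(5,5);
>lambda = eig(A);
>prod(lambda) - det(A)      % product of eigenvalues equals the determinant
>sum(lambda) - trace(A)     % sum of eigenvalues equals the trace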

140 IV Eigenvalues and Eigenvectors

IV.1.12 Powers of a diagonalizable matrix

If A is diagonalizable then its powers A^k are easy to compute:

A^k = (S D S^(-1))(S D S^(-1)) ⋯ (S D S^(-1)) = S D^k S^(-1)

because all of the factors S^(-1) S in the middle cancel. Since powers of the diagonal matrix D are given by

D^k = diag(λ_1^k, λ_2^k, λ_3^k, ..., λ_n^k)

this formula provides an effective way to understand and compute A^k for large k.

140
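A brief MATLAB/Octave sketch (not from the notes) illustrating the formula: for a diagonalizable matrix, S*D^k*S^(-1) reproduces A^k.

>A = rand(4,4);               % a random matrix is (almost always) diagonalizable
>[S, D] = eig(A);
>k = 10;
>norm(S*D^k*S^(-1) - A^k)     % should be tiny compared to norm(A^k)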

141 IV.2 Power Method for Computing Eigenvalues IV.2 Power Method for Computing Eigenvalues Prerequisites and Learning Goals After completing this section, you should be able to write down the properties of the eigenvalues and eigenvectors of real symmetric matrices. write down the definition and properties of Hermitian matrices. use the power method to compute the eigenvalue/eigenvector of a Hermitian matrix, where the eigenvalue is closest to a given number. IV.2. Eigenvalues of real symmetric matrices If A is real (that is, the entries are all real numbers) and symmetric (that is A T = A) then the eigenvalues of A are all real, and the eigenvectors can be chosen to form an orthonormal basis. To see that the eigenvalues must be real, let s start with an eigenvalue eigenvector pair λ, v for A. For the moment, we allow the possibility that λ and v are complex. Since A is real and symmetric we have v,av = Av,v and since Av = λv this implies v,λv = λv,v Here we are using the inner product for complex vectors given by,w = T w. This means that the λ on the right side is conjugated, that is, λ v,v = λ v,v. Since v is an eigenvector, it cannot be zero. So v,v = v 2. Therefore we may divide by v,v to conclude λ = λ. This shows that λ is real. Now let s show that eigenvectors corresponding to two distinct eigenvalues must be orthogonal. If Av = λ v and Av 2 = λ 2 v 2 with λ λ 2, then starting with the equation that follows from the symmetry of A Av,v 2 = v,av 2 4

we find
$$\lambda_1\langle v_1, v_2\rangle = \lambda_2\langle v_1, v_2\rangle$$
Here $\lambda_1$ should appear as $\bar\lambda_1$, but we already know that eigenvalues are real, so $\bar\lambda_1 = \lambda_1$. This can be written
$$(\lambda_1 - \lambda_2)\langle v_1, v_2\rangle = 0$$
and since $\lambda_1 \neq \lambda_2$ this implies
$$\langle v_1, v_2\rangle = 0$$
This calculation shows that if $A$ has distinct eigenvalues then the eigenvectors are all orthogonal, and by rescaling them, we can obtain an orthonormal basis of eigenvectors. In fact, even if $A$ has repeated eigenvalues, it is still true that an orthonormal basis of eigenvectors exists.

If $A$ is real and symmetric, then the eigenvectors can be chosen to be real. One way to see this is to notice that once we know that $\lambda$ is real, all the calculations involved in computing the nullspace of $\lambda I - A$ only involve real numbers. This implies that the matrix that diagonalizes $A$ can be chosen to be an orthogonal matrix.

If $A$ has complex entries, but satisfies $A^* = A$, it is called Hermitian. (Recall that $A^* = \bar A^T$.) The argument above is still valid for Hermitian matrices and shows that all the eigenvalues are real. There also exists an orthonormal basis of eigenvectors. However, in contrast to the case where $A$ is real and symmetric, the eigenvectors may have complex entries. Thus a Hermitian matrix may be diagonalized by a unitary matrix.

If $A$ is any matrix with real entries, then $A + A^T$ is real symmetric. (The matrix $A^TA$ is also real symmetric, and has the additional property that all its eigenvalues are non-negative.) We can use this to produce random symmetric matrices in MATLAB/Octave like this:

>A = rand(4,4);
>A = A+A'

Let's check the eigenvalues and eigenvectors of A:

>[V, D] = eig(A)

The eigenvalues are real, as expected. Also, the eigenvectors contained in the columns of the matrix V have been normalized. Thus V is orthogonal:

>V'*V

The result is the 4 by 4 identity matrix, at least up to numerical errors of the order of $10^{-16}$.

IV.2.2 The power method

The power method is a very simple method for finding a single eigenvalue/eigenvector pair. Suppose $A$ is an $n\times n$ matrix. We assume that $A$ is real symmetric, so that all the eigenvalues are real. Now let $x_0$ be any vector of length $n$. Perform the following two steps repeatedly:

Multiply by A.
Normalize to unit length.

This generates a sequence of vectors $x_0, x_1, x_2, \ldots$. It turns out that these vectors converge to the eigenvector corresponding to the eigenvalue whose absolute value is the largest.

To verify this claim, let's first find a formula for $x_k$. At each stage of this process, we are multiplying by $A$ and then by some number. Thus $x_k$ must be a multiple of $A^k x_0$. Since the resulting vector has unit length, that number must be $1/\|A^k x_0\|$. Thus
$$x_k = \frac{A^k x_0}{\|A^k x_0\|}$$
We know that $A$ has a basis of eigenvectors $v_1, v_2, \ldots, v_n$. Order them so that $|\lambda_1| > |\lambda_2| \ge \cdots \ge |\lambda_n|$. (We are assuming here that $|\lambda_1| \neq |\lambda_2|$, otherwise the power method runs into difficulty.) We may expand our initial vector $x_0$ in this basis
$$x_0 = c_1 v_1 + c_2 v_2 + \cdots + c_n v_n$$
We need that $c_1 \neq 0$ for this method to work, but if $x_0$ is chosen at random, this is almost always true. Since $A^k v_i = \lambda_i^k v_i$ we have
$$A^k x_0 = c_1\lambda_1^k v_1 + c_2\lambda_2^k v_2 + \cdots + c_n\lambda_n^k v_n = \lambda_1^k\left(c_1 v_1 + c_2(\lambda_2/\lambda_1)^k v_2 + \cdots + c_n(\lambda_n/\lambda_1)^k v_n\right) = \lambda_1^k(c_1 v_1 + \epsilon_k)$$
where $\epsilon_k \to 0$ as $k \to \infty$. This is because $|\lambda_i/\lambda_1| < 1$ for every $i > 1$, so the powers tend to zero. Thus $\|A^k x_0\| = |\lambda_1|^k\|c_1 v_1 + \epsilon_k\|$, so that
$$x_k = \frac{A^k x_0}{\|A^k x_0\|} = \left(\frac{\lambda_1}{|\lambda_1|}\right)^k\frac{c_1 v_1 + \epsilon_k}{\|c_1 v_1 + \epsilon_k\|} \longrightarrow (\pm 1)^k\left(\pm\frac{v_1}{\|v_1\|}\right)$$
We have shown that $x_k$ converges, except for a possible sign flip at each stage, to a normalized eigenvector corresponding to $\lambda_1$. The sign flip is present exactly when $\lambda_1 < 0$. Knowing $v_1$ (or a multiple of it) we can find $\lambda_1$ with
$$\lambda_1 = \frac{\langle v_1, Av_1\rangle}{\|v_1\|^2}$$
This gives a method for finding the largest eigenvalue (in absolute value) and the corresponding eigenvector. Let's try it out.

>A = [4 1 3; 1 3 2; 3 2 5];
>x=rand(3,1);
>for k = [1:10]
>y=A*x;
>x=y/norm(y)
>end

The ten iterates x are printed; after the first few steps they stop changing (up to sign).

This gives the eigenvector. We compute the eigenvalue with

>x'*A*x/norm(x)^2

Let's check:

>[V D] = eig(A)

As expected, we have computed the largest eigenvalue and eigenvector. Of course, a serious program that uses this method would not just iterate a fixed number (above it was 10) of times, but check for convergence, perhaps by checking whether $\|x_k - x_{k-1}\|$ was less than some small number, and stopping when this was achieved.

So far, the power method only computes the eigenvalue with the largest absolute value, and the corresponding eigenvector. What good is that? Well, it turns out that with an additional twist we can compute the eigenvalue closest to any number $s$. The key observation is that the eigenvalues of $(A - sI)^{-1}$ are exactly $(\lambda_i - s)^{-1}$ (unless, of course, $A - sI$ is not invertible. But then $s$ is an eigenvalue itself and we can stop looking.) Moreover, the eigenvectors of $A$ and $(A - sI)^{-1}$ are the same. Let's see why this is true. If
$$Av = \lambda v$$
then
$$(A - sI)v = (\lambda - s)v.$$
Now if we multiply both sides by $(A - sI)^{-1}$ and divide by $\lambda - s$ we get
$$(\lambda - s)^{-1}v = (A - sI)^{-1}v.$$
These steps can be run backwards to show that if $(\lambda - s)^{-1}$ is an eigenvalue of $(A - sI)^{-1}$ with eigenvector $v$, then $\lambda$ is an eigenvalue of $A$ with the same eigenvector.

Now start with an arbitrary vector $x_0$ and define
$$x_{k+1} = \frac{(A - sI)^{-1}x_k}{\|(A - sI)^{-1}x_k\|}.$$
Then $x_k$ will converge to the eigenvector $v_i$ of $(A - sI)^{-1}$ for which $|\lambda_i - s|^{-1}$ is the largest. But, since the eigenvectors of $A$ and $(A - sI)^{-1}$ are the same, $v_i$ is also an eigenvector of $A$. And since $|\lambda_i - s|^{-1}$ is largest when $\lambda_i$ is closest to $s$, we have computed the eigenvector $v_i$ of $A$ for which $\lambda_i$ is closest to $s$. We can now compute $\lambda_i$ by comparing $Av_i$ with $v_i$.

Here is a crucial point: when computing $(A - sI)^{-1}x_k$ in this procedure, we should not actually compute the inverse. We don't need to know the whole matrix $(A - sI)^{-1}$, but just the vector $(A - sI)^{-1}x_k$. This vector is the solution $y$ of the linear equation $(A - sI)y = x_k$. In MATLAB/Octave we would therefore use something like (A - s*eye(n))\xk.

Let's try to compute the eigenvalue of the matrix A above closest to 3.

>A = [4 1 3; 1 3 2; 3 2 5];
>x=rand(3,1);
>for k = [1:10]
>y=(A-3*eye(3))\x;
>x=y/norm(y)
>end

The ten iterates x are printed; again they quickly settle down, up to a possible sign flip at each step.

This gives the eigenvector. Now we can find the eigenvalue:

> lambda = x'*A*x/norm(x)^2

Comparing with the results of eig above, we see that we have computed the middle eigenvalue and eigenvector.
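As noted above, a serious implementation would iterate until successive vectors stop changing, rather than for a fixed number of steps. Here is a minimal sketch of that idea (our own code, not from the notes; the tolerance is an arbitrary choice, and taking the minimum over x-xold and x+xold allows for the sign flip discussed earlier). It reuses the matrix A defined above.

tol = 1e-10;
maxit = 1000;
x = rand(3,1);
for k = 1:maxit
  xold = x;
  y = A*x;               % use y = (A - s*eye(3))\x for the shifted version
  x = y/norm(y);
  if min(norm(x-xold), norm(x+xold)) < tol
    break
  end
end
lambda = x'*A*x/norm(x)^2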

IV.3 Recursion Relations

Prerequisites and Learning Goals

After completing this section, you should be able to

use matrix equations to solve a recurrence relation, for example the relation defining the Fibonacci numbers.
determine initial values for which the solution of a recurrence relation will become large or small (depending on the eigenvalues of the associated matrix).

IV.3.1 Fibonacci numbers

Consider the sequence of numbers given by a multiple of powers of the golden ratio,
$$\frac{1}{\sqrt 5}\left(\frac{1+\sqrt 5}{2}\right)^n, \qquad n = 1, 2, 3, \ldots$$
When $n$ is large, these numbers are almost integers:

>format long;
>((1+sqrt(5))/2)^30/sqrt(5)
>((1+sqrt(5))/2)^31/sqrt(5)
>((1+sqrt(5))/2)^32/sqrt(5)

Each answer differs from a whole number by less than $10^{-6}$. Why? To answer this question we introduce the Fibonacci sequence
$$0,\ 1,\ 1,\ 2,\ 3,\ 5,\ 8,\ 13,\ 21,\ 34,\ \ldots$$
where each number in the sequence is obtained by adding the previous two. If you go far enough along in this sequence you will encounter
$$832040,\quad 1346269,\quad 2178309,\ \ldots$$

and you can check (without using MATLAB/Octave, I hope) that the third number is the sum of the previous two. But why should powers of the golden ratio be very nearly, but not quite, equal to Fibonacci numbers?

The reason is that the Fibonacci sequence is defined by a recursion relation. For the Fibonacci sequence $F_0, F_1, F_2, \ldots$ the recursion relation is
$$F_{n+1} = F_n + F_{n-1}$$
This equation, together with the identity $F_n = F_n$, can be written in matrix form as
$$\begin{bmatrix} F_{n+1} \\ F_n \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} F_n \\ F_{n-1} \end{bmatrix}$$
Thus, taking $n = 1$, we find
$$\begin{bmatrix} F_2 \\ F_1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} F_1 \\ F_0 \end{bmatrix}$$
Similarly
$$\begin{bmatrix} F_3 \\ F_2 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} F_2 \\ F_1 \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^2\begin{bmatrix} F_1 \\ F_0 \end{bmatrix}$$
and continuing like this we find
$$\begin{bmatrix} F_{n+1} \\ F_n \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n\begin{bmatrix} F_1 \\ F_0 \end{bmatrix}$$
Finally, since $F_1 = 1$ and $F_0 = 0$, we can write
$$\begin{bmatrix} F_{n+1} \\ F_n \end{bmatrix} = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
We can diagonalize the matrix $\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$ to get a formula for the Fibonacci numbers. Its eigenvalues and eigenvectors are
$$\lambda_1 = \frac{1+\sqrt 5}{2},\quad v_1 = \begin{bmatrix} \lambda_1 \\ 1 \end{bmatrix} \qquad\text{and}\qquad \lambda_2 = \frac{1-\sqrt 5}{2},\quad v_2 = \begin{bmatrix} \lambda_2 \\ 1 \end{bmatrix}$$
This implies (using $\lambda_1 - \lambda_2 = \sqrt 5$ when inverting the eigenvector matrix)
$$\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}^n = \begin{bmatrix} \lambda_1 & \lambda_2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{bmatrix}\begin{bmatrix} \lambda_1 & \lambda_2 \\ 1 & 1 \end{bmatrix}^{-1} = \frac{1}{\sqrt 5}\begin{bmatrix} \lambda_1 & \lambda_2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{bmatrix}\begin{bmatrix} 1 & -\lambda_2 \\ -1 & \lambda_1 \end{bmatrix}$$

so that
$$\begin{bmatrix} F_{n+1} \\ F_n \end{bmatrix} = \frac{1}{\sqrt 5}\begin{bmatrix} \lambda_1 & \lambda_2 \\ 1 & 1 \end{bmatrix}\begin{bmatrix} \lambda_1^n & 0 \\ 0 & \lambda_2^n \end{bmatrix}\begin{bmatrix} 1 & -\lambda_2 \\ -1 & \lambda_1 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1}{\sqrt 5}\begin{bmatrix} \lambda_1^{n+1} - \lambda_2^{n+1} \\ \lambda_1^n - \lambda_2^n \end{bmatrix}$$
In particular
$$F_n = \frac{1}{\sqrt 5}\left(\lambda_1^n - \lambda_2^n\right)$$
Since $\lambda_2$ is smaller than 1 in absolute value, the powers $\lambda_2^n$ become small very quickly as $n$ becomes large. This explains why
$$F_n \approx \frac{1}{\sqrt 5}\lambda_1^n$$
for large $n$. If we want to use MATLAB/Octave to compute Fibonacci numbers, we don't need to bother diagonalizing the matrix:

>[1 1; 1 0]^31*[1;0]

produces the same Fibonacci numbers as above.

IV.3.2 Other recursion relations

The idea that was used to solve for the Fibonacci numbers can be used to solve other recursion relations. For example, the three-step recursion
$$x_{n+1} = ax_n + bx_{n-1} + cx_{n-2}$$
can be written as a matrix equation
$$\begin{bmatrix} x_{n+1} \\ x_n \\ x_{n-1} \end{bmatrix} = \begin{bmatrix} a & b & c \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}\begin{bmatrix} x_n \\ x_{n-1} \\ x_{n-2} \end{bmatrix}$$
so given three initial values $x_0$, $x_1$ and $x_2$ we can find the rest by computing powers of a $3\times 3$ matrix; a small example is sketched below. In the next section we will solve a recurrence relation that arises in Quantum Mechanics.
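For instance (an illustration of ours, not from the notes), the recursion $x_{n+1} = x_n + x_{n-1} + x_{n-2}$ with the arbitrarily chosen initial values $x_0 = 0$, $x_1 = 0$, $x_2 = 1$ can be stepped forward by taking powers of the matrix above with $a = b = c = 1$:

>M = [1 1 1; 1 0 0; 0 1 0];
>M^10*[1; 0; 0]
ans =
   274
   149
    81

The result is the vector $[x_{12};\, x_{11};\, x_{10}]$ for this sequence.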

IV.4 The Anderson Tight Binding Model

Prerequisites and Learning Goals

After completing this section, you should be able to

describe a bound state with energy E for the discrete Schrödinger equation for a single electron moving in a one dimensional semi-infinite crystal.
describe a scattering state with energy E.
compute the energies for which a bound state exists and identify the conduction band, for a potential that has only one non-zero value.
compute the conduction bands for a one dimensional crystal.

IV.4.1 Description of the model

Previously we studied how to approximate differential equations by matrix equations. If we apply this discretization procedure to the Schrödinger equation for an electron moving in a solid we obtain the Anderson tight binding model.

We will consider a single electron moving in a one dimensional semi-infinite crystal. The electron is constrained to live at discrete lattice points, numbered $0, 1, 2, 3, \ldots$ These can be thought of as the positions of the atoms. For each lattice point $n$ there is a potential $V_n$ that describes how much the atom at that lattice point attracts or repels the electron. Positive $V_n$'s indicate repulsion, whereas negative $V_n$'s indicate attraction. Typical situations studied in physics are where the $V_n$'s repeat the same pattern periodically (a crystal), or where they are chosen at random (disordered media). In fact, the term Anderson model usually refers to the random case, where the potentials are chosen at random, independently for each site.

The wave function for the electron is a sequence of complex numbers $\Psi = \{\psi_0, \psi_1, \psi_2, \ldots\}$. The sequence $\Psi$ is called a bound state with energy $E$ if it satisfies the following three conditions:

(1) the discrete version of the time independent Schrödinger equation
$$-\psi_{n+1} - \psi_{n-1} + V_n\psi_n = E\psi_n,$$
(2) the boundary condition $\psi_0 = 0$,
(3) and the normalization condition
$$N^2 = \sum_{n=0}^{\infty}|\psi_n|^2 < \infty.$$

These conditions are trivially satisfied if $\Psi = \{0, 0, 0, \ldots\}$ so we specifically exclude this case. (In fact $\Psi$ is actually the eigenvector of an infinite matrix, so this is just the condition that eigenvectors must be non-zero.)

Given an energy $E$, it is always possible to find a wave function $\Psi$ to satisfy conditions (1) and (2). However for most energies $E$, none of these $\Psi$'s will be getting small for large $n$, so the normalization condition (3) will not be satisfied. There is only a discrete set of energies $E$ for which a bound state satisfying all three conditions exists. In other words, the energy $E$ is quantized.

If $E$ is one of the allowed energy values and $\Psi$ is the corresponding bound state, then the numbers $|\psi_n|^2/N^2$ are interpreted as the probabilities of finding an electron with energy $E$ at the $n$th site. These numbers add up to 1, consistent with the interpretation as probabilities. Notice that if $\Psi$ is a bound state with energy $E$, then so is any non-zero multiple $a\Psi = \{a\psi_0, a\psi_1, a\psi_2, \ldots\}$. Replacing $\Psi$ with $a\Psi$ has no effect on the probabilities because $N$ changes to $|a|N$, so the $a$'s cancel in $|\psi_n|^2/N^2$.

IV.4.2 Recursion relation

The discrete Schrödinger equation (1) together with the initial condition (2) is a recursion relation that can be solved using the method we saw in the previous section. We have
$$\begin{bmatrix} \psi_{n+1} \\ \psi_n \end{bmatrix} = \begin{bmatrix} V_n - E & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} \psi_n \\ \psi_{n-1} \end{bmatrix}$$
so if we set
$$x_n = \begin{bmatrix} \psi_n \\ \psi_{n-1} \end{bmatrix} \qquad\text{and}\qquad A(z) = \begin{bmatrix} z & -1 \\ 1 & 0 \end{bmatrix}$$
then this implies
$$x_n = A(V_{n-1} - E)A(V_{n-2} - E)\cdots A(V_1 - E)\,x_1.$$
Condition (2) says that
$$x_1 = \begin{bmatrix} \psi_1 \\ 0 \end{bmatrix},$$
since $\psi_0 = 0$. In fact, we may assume $\psi_1 = 1$, since replacing $\Psi$ with $a\Psi$ where $a = 1/\psi_1$ results in a bound state where this is true. Dividing by $\psi_1$ is possible unless $\psi_1 = 0$. But if $\psi_1 = 0$ then $x_1 = 0$ and the recursion implies that every $\psi_k = 0$. This is not an acceptable bound state. Thus we may assume
$$x_1 = \begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
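The iteration is easy to carry out numerically. Here is a small sketch of ours (not from the notes) that generates $\psi_0, \psi_1, \psi_2, \ldots$ for a chosen energy and potential. The particular values $V_1 = -5$ and $E = -5.2$ anticipate the bound state found in the next subsection; for this energy the computed $\psi_n$ decay geometrically, while for a generic $E$ they grow.

A = @(z) [z -1; 1 0];
V = [-5 zeros(1,19)];     % V_1 = -5, all other potential values zero
E = -5.2;
x = [1; 0];               % x_1 = [psi_1; psi_0] with psi_1 = 1, psi_0 = 0
psi = [0 1];              % the values psi_0, psi_1
for n = 1:length(V)
  x = A(V(n) - E)*x;      % x_{n+1} = A(V_n - E) x_n
  psi = [psi x(1)];       % record psi_{n+1}
end
psi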

So far we are able to compute $x_n$ (and thus $\psi_n$) satisfying conditions (1) and (2) for any values of $V_1, V_2, \ldots$. Condition (3) is a statement about the large $n$ behaviour of $\psi_n$. This can be very difficult to determine, unless we know more about the values $V_n$.

IV.4.3 A potential with most values zero

We will consider the very simplest situation, where $V_1 = -a$ and all the other $V_n$'s are equal to zero. Let us try to determine for what energies $E$ a bound state exists. In this situation
$$x_n = A(-E)^{n-2}A(-a - E)x_1 = A(-E)^{n-2}x_2$$
where
$$x_2 = A(-a - E)x_1 = \begin{bmatrix} -a - E & -1 \\ 1 & 0 \end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} -(a + E) \\ 1 \end{bmatrix}$$
The large $n$ behavior of $x_n$ can be computed using the eigenvalues and eigenvectors of $A(-E)$. Suppose they are $\lambda_1, v_1$ and $\lambda_2, v_2$. Then we expand
$$x_2 = a_1 v_1 + a_2 v_2$$
and conclude that
$$x_n = A(-E)^{n-2}(a_1 v_1 + a_2 v_2) = a_1\lambda_1^{n-2}v_1 + a_2\lambda_2^{n-2}v_2$$
Keep in mind that all the quantities in this equation depend on $E$. Our goal is to choose $E$ so that the $x_n$ become small for large $n$.

Before computing the eigenvalues of $A(-E)$, let's note that $\det(A(-E)) = 1$. This implies that
$$\lambda_1\lambda_2 = 1$$
Suppose the eigenvalues are complex. Then, since $A(-E)$ has real entries, they must be complex conjugates. Thus $\lambda_2 = \bar\lambda_1$ and $1 = \lambda_1\lambda_2 = \lambda_1\bar\lambda_1 = |\lambda_1|^2$. This means that $\lambda_1$ and $\lambda_2$ lie on the unit circle in the complex plane. In other words, $\lambda_1 = e^{i\theta}$ and $\lambda_2 = e^{-i\theta}$ for some $\theta$. This implies that $\lambda_1^{n-2} = e^{i(n-2)\theta}$ is also on the unit circle, and is not getting small. Similarly $\lambda_2^{n-2}$ is not getting small. So complex eigenvalues will never lead to bound states. In fact, complex eigenvalues correspond to scattering states, and the energy values for which the eigenvalues are complex are the energies at which the electron can move freely through the crystal.

Suppose the eigenvalues are real. If $|\lambda_1| > 1$ then $|\lambda_2| = 1/|\lambda_1| < 1$ and vice versa. So one of the products $\lambda_1^{n-2}$, $\lambda_2^{n-2}$ will be growing large, and one will be getting small. So the only way that $x_n$ can be getting small is if the coefficient $a_1$ or $a_2$ sitting in front of the growing product is zero.

Now let us actually compute the eigenvalues. They are
$$\lambda = \frac{-E \pm\sqrt{E^2 - 4}}{2}$$

If $-2 < E < 2$ then the eigenvalues are complex, so there are no bound states. The interval $[-2, 2]$ is the conduction band, where the electrons can move through the crystal.

If $E = \pm 2$ then there is only one eigenvalue, namely $\mp 1$. In this case there actually is only one eigenvector, so our analysis doesn't apply. However there are no bound states in this case either.

Now let us consider the case $E < -2$. Then the large eigenvalue is $\lambda_1 = (-E + \sqrt{E^2 - 4})/2$ and the corresponding eigenvector is $v_1 = \begin{bmatrix} 1 \\ -(E+\lambda_1) \end{bmatrix}$. The small eigenvalue is $\lambda_2 = (-E - \sqrt{E^2 - 4})/2$ and the corresponding eigenvector is $v_2 = \begin{bmatrix} 1 \\ -(E+\lambda_2) \end{bmatrix}$. We must now compute $a_1$ and determine when it is zero. We have
$$\begin{bmatrix} v_1 & v_2 \end{bmatrix}\begin{bmatrix} a_1 \\ a_2 \end{bmatrix} = x_2.$$
This is a $2\times 2$ matrix equation that we can easily solve for $\begin{bmatrix} a_1 \\ a_2 \end{bmatrix}$. A short calculation gives
$$a_1 = (\lambda_1 - \lambda_2)^{-1}\left((a + E)(E + \lambda_2) - 1\right).$$
Thus we see that $a_1 = 0$ whenever
$$(a + E)\left(E - \sqrt{E^2 - 4}\right) - 2 = 0$$
Let's consider the case $a = 5$ and plot this function on the interval $[-10, -2]$. To see if it crosses the axis, we also plot the function zero.

>N=50;
>E=linspace(-10,-2,N);
>ONE=ones(1,N);
>plot(E,(5*ONE+E).*(E-sqrt(E.^2-4*ONE)) - 2*ONE)
>hold on
>plot(E,zeros(1,N))

Here is the result:

[Figure: graph of $(5+E)(E-\sqrt{E^2-4}) - 2$ on the interval $[-10,-2]$, crossing zero once.]

We can see that there is a single bound state in this interval, just below $-5$. In fact, the solution is $E = -5.2$.

The case $E > 2$ is similar. This time we end up with
$$(a + E)\left(E + \sqrt{E^2 - 4}\right) - 2 = 0$$
When $a = 5$ this never has a solution for $E > 2$. In fact the left side of this equation is bigger than $(5+2)(2+0) - 2 = 12$ and so can never equal zero.

In conclusion, if $V_1 = -5$, and all the other $V_n$'s are zero, then there is exactly one bound state, with energy $E = -5.2$. Here is a diagram of the energy spectrum for this potential.

[Figure: the energy axis $E$, showing the bound state energy at $E = -5.2$ and the conduction band $[-2, 2]$.]

For the bound state energy of $E = -5.2$, the corresponding wave function $\Psi$, and thus the probability that the electron is located at the $n$th lattice point, can now also be computed. The evaluation of the infinite sum that gives the normalization constant $N^2$ can be done using a geometric series.
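Rather than reading the root off the plot, the bound state condition can also be solved numerically. This is our own illustration (fzero is the standard MATLAB/Octave root finder; the bracketing interval is taken from the plot above):

f = @(E) (5 + E).*(E - sqrt(E.^2 - 4)) - 2;
E0 = fzero(f, [-6 -5])    % gives E0 = -5.2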

IV.4.4 Conduction bands for a crystal

The atoms in a crystal are arranged in a periodic array. We can model a one dimensional crystal in the tight binding model by considering potential values that repeat a fixed pattern. Let's focus on the case where the pattern is $1, 2, 3, 4$, so that the potential values are
$$V_1, V_2, V_3, \ldots = 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, 1, 2, 3, 4, \ldots$$
In this case, if we start with the formula
$$x_n = A(V_{n-1} - E)A(V_{n-2} - E)\cdots A(V_1 - E)\,x_1$$
we can group the matrices into groups of four. The product
$$T(E) = A(V_4 - E)A(V_3 - E)A(V_2 - E)A(V_1 - E)$$
is repeated, so that
$$x_{4n+1} = T(E)^n\begin{bmatrix} 1 \\ 0 \end{bmatrix}$$
Notice that the matrix $T(E)$ has determinant 1 since it is a product of matrices with determinant 1. So, as above, the eigenvalues $\lambda_1$ and $\lambda_2$ are either real with $\lambda_2 = 1/\lambda_1$, or complex conjugates on the unit circle. As before, the conduction bands are the energies $E$ for which the eigenvalues of $T(E)$ are complex conjugates. It turns out that this happens exactly when
$$|\operatorname{tr}(T(E))| \le 2$$
To see this, start with the characteristic polynomial for $T(E)$,
$$\det(\lambda I - T(E)) = \lambda^2 - \operatorname{tr}(T(E))\lambda + \det(T(E))$$
(see homework problem). Since $\det(T(E)) = 1$ the eigenvalues are given by
$$\lambda = \frac{\operatorname{tr}(T(E)) \pm\sqrt{\operatorname{tr}(T(E))^2 - 4}}{2}$$
When $|\operatorname{tr}(T(E))| < 2$ the quantity under the square root sign is negative, and so the eigenvalues have a non-zero imaginary part.

Let's use MATLAB/Octave to plot the values of $\operatorname{tr}(T(E))$ as a function of $E$. For convenience we first define a function that computes the matrices $A(z)$. To do this we type the following lines into a file called A.m in our working directory.

function A=A(Z)
A=[Z -1; 1 0];
end

Next we start with a range of E values and define another vector T that contains the corresponding values of tr(T(E)).

N=100;
E=linspace(-1,6,N);
T=[];
for e = E
  T=[T trace(A(4-e)*A(3-e)*A(2-e)*A(1-e))];
end

Finally, we plot T against E. At the same time, we plot the constant functions 2 and -2.

plot(E,T)
hold on
plot(E,2*ones(1,N));
plot(E,-2*ones(1,N));
axis([-1,6,-10,10])

On the resulting picture the energies where tr(T(E)) lies between -2 and 2 have been highlighted.

[Figure: graph of tr(T(E)), with the four intervals where it lies between -2 and 2 marked on the E axis.]

We see that there are four conduction bands for this crystal.
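One simple way to produce the highlighting (this snippet is ours, reusing the vectors E and T computed above) is to overplot the points where the trace lies between -2 and 2:

band = abs(T) <= 2;
plot(E(band), T(band), 'r.')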

IV.5 Markov Chains

Prerequisites and Learning Goals

After completing this section, you should be able to

write down the definition of a stochastic matrix and its properties.
explain why the probabilities in a random walk approach limiting values.
write down the stochastic matrix for a random walk and calculate the limiting probabilities.
use stochastic matrices to solve practical Markov chain problems.
write down the stochastic matrix associated with the Google PageRank algorithm for a given damping factor α, and compute the ranking of the sites for a specified internet.
use the Metropolis algorithm to produce a stochastic matrix with a predetermined limiting probability distribution.

IV.5.1 Random walk

In the diagram below there are three sites labelled 1, 2 and 3. Think of a walker moving from site to site. At each step the walker either stays at the same site, or moves to one of the other sites according to a set of fixed probabilities. The probability of moving to the $i$th site from the $j$th site is denoted $p_{i,j}$. These numbers satisfy $0 \le p_{i,j} \le 1$ because they are probabilities (0 means "no chance" and 1 means "for sure"). On the diagram they label the arrows indicating the relevant transitions. Since the walker has to go somewhere at each step, the sum of all the probabilities leaving a given site must be one. Thus for every $j$,
$$\sum_i p_{i,j} = 1$$

[Figure: three sites 1, 2 and 3, with arrows between them labelled by the transition probabilities $p_{i,j}$.]

Let $x_{n,i}$ be the probability that the walker is at site $i$ after $n$ steps. We collect these probabilities into a sequence of vectors called state vectors. Each state vector contains the probabilities for the $n$th step in the walk:
$$x_n = \begin{bmatrix} x_{n,1} \\ x_{n,2} \\ \vdots \\ x_{n,k} \end{bmatrix}$$
The probability that the walker is at site $i$ after $n+1$ steps can be calculated from the probabilities for the previous step. It is the sum over all sites of the probability that the walker was at that site, times the probability of moving from that site to the $i$th site. Thus
$$x_{n+1,i} = \sum_j p_{i,j}\,x_{n,j}$$
This can be written in matrix form as
$$x_{n+1} = Px_n$$
where $P = [p_{i,j}]$. Using this relation repeatedly we find
$$x_n = P^n x_0$$
where $x_0$ contains the probabilities at the beginning of the walk.

The matrix $P$ has two properties:

1. All entries of $P$ are non-negative.
2. Each column of $P$ sums to 1.

A matrix with these properties is called a stochastic matrix. The goal is to determine where the walker is likely to be located after many steps. In other words, we want to find the large $n$ behaviour of $x_n = P^n x_0$.

Let's look at an example. Suppose there are three sites, the transition probabilities are given by
$$P = \begin{bmatrix} 0.5 & 0.2 & 0.1 \\ 0.4 & 0.2 & 0.8 \\ 0.1 & 0.6 & 0.1 \end{bmatrix}$$
and the walker starts at site 1, so that the initial state vector is
$$x_0 = \begin{bmatrix} 1 \\ 0 \\ 0 \end{bmatrix}$$
Now let's use MATLAB/Octave to calculate the subsequent state vectors for $n = 1, 10, 100, 1000$.

>P=[.5 .2 .1; .4 .2 .8; .1 .6 .1];
>X=[1; 0; 0];
>P*X
ans =
   0.5000
   0.4000
   0.1000
>P^10*X
>P^100*X
>P^1000*X

The last three results are nearly identical.

The state vectors converge. Let's see what happens if the initial vector is different, say with equal probabilities of being at the second and third sites.

>X = [0; .5; .5];
>P^1000*X

The limit is the same. Of course, we know how to compute high powers of a matrix using the eigenvalues and eigenvectors. A little thought would lead us to suspect that P has an eigenvalue of 1 that is largest in absolute value, and that the corresponding eigenvector is the limiting vector, up to a multiple. Let's check:

>eig(P)

The eigenvalues are (approximately) 1, 0.358 and -0.558, and

>P*[.24;.44;.32]
ans =
   0.2400
   0.4400
   0.3200
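The limiting state vector can also be extracted directly from the output of eig. Here is a short snippet of ours (not from the notes): pick the column of V whose eigenvalue is numerically closest to 1, and rescale it so that its entries sum to one.

>[V D] = eig(P);
>[m k] = min(abs(diag(D) - 1));   % index of the eigenvalue closest to 1
>v = V(:,k);
>v/sum(v)                         % the limiting state vector, here [0.24; 0.44; 0.32]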

IV.5.2 Properties of stochastic matrices

The fact that the matrix P in the example above has an eigenvalue of 1 and an eigenvector that is a state vector is no accident. Any stochastic matrix P has the following properties:

(1) If x is a state vector, so is Px.
(2) P has an eigenvalue $\lambda_1 = 1$.
(3) The corresponding eigenvector $v_1$ has all non-negative entries.
(4) The other eigenvalues $\lambda_i$ have $|\lambda_i| \le 1$.

If P or some power $P^k$ has all positive entries (that is, no zero entries) then

(3') The eigenvector $v_1$ has all positive entries.
(4') The other eigenvalues $\lambda_i$ have $|\lambda_i| < 1$.

(Since eigenvectors are only defined up to non-zero scalar multiples, strictly speaking, (3) and (3') should say that after possibly multiplying $v_1$ by $-1$ the entries are non-negative or positive.)

These properties explain the convergence properties of the state vectors of the random walk. Suppose (3') and (4') hold and we expand the initial vector $x_0$ in a basis of eigenvectors. (Here we are assuming that P is diagonalizable, which is almost always true.) Then
$$x_0 = c_1v_1 + c_2v_2 + \cdots + c_kv_k$$
so that
$$x_n = P^nx_0 = c_1\lambda_1^nv_1 + c_2\lambda_2^nv_2 + \cdots + c_k\lambda_k^nv_k$$
Since $\lambda_1 = 1$ and $|\lambda_i| < 1$ for $i \neq 1$ we find
$$\lim_{n\to\infty}x_n = c_1v_1$$
Since each $x_n$ is a state vector, so is the limit $c_1v_1$. This allows us to compute $c_1$ easily. It is the reciprocal of the sum of the entries of $v_1$. In particular, if we chose $v_1$ to be a state vector then $c_1 = 1$.

Now we will go through the properties above and explain why they are true.

(1) P preserves state vectors: Suppose x is a state vector, that is, x has non-negative entries which sum to 1. Then Px has non-negative entries too, since all the entries of P are non-negative. Also
$$\sum_i(Px)_i = \sum_i\sum_jP_{i,j}x_j = \sum_j\Big(x_j\sum_iP_{i,j}\Big) = \sum_jx_j = 1$$
Thus the entries of Px also sum to one, and Px is a state vector.

(2) P has an eigenvalue $\lambda = 1$: The key point here is that P and $P^T$ have the same eigenvalues. To see this, recall that $\det(A) = \det(A^T)$. This implies that
$$\det(\lambda I - P) = \det((\lambda I - P)^T) = \det(\lambda I^T - P^T) = \det(\lambda I - P^T)$$
So P and $P^T$ have the same characteristic polynomial. Since the eigenvalues are the zeros of the characteristic polynomial, they must be the same for P and $P^T$. (Notice that this does not say that the eigenvectors are the same.)

Since P has columns adding up to 1, $P^T$ has rows that add up to 1. This fact can be expressed as the matrix equation
$$P^T\begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \\ \vdots \\ 1 \end{bmatrix}$$
But this equation says that 1 is an eigenvalue for $P^T$. Therefore 1 is an eigenvalue for P as well.

(4) Other eigenvalues of P have modulus $\le 1$: To show that this is true, we use the 1-norm introduced at the beginning of the course. Recall that this norm is defined by
$$\|x\|_1 = |x_1| + |x_2| + \cdots + |x_n| \qquad\text{for } x = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
Multiplication by P decreases the length of vectors if we use this norm to measure length. In other words
$$\|Px\|_1 \le \|x\|_1$$

for any vector x. This follows from the calculation (almost the same as the one above, and again using the fact that the entries of P are non-negative and the columns sum to 1)
$$\|Px\|_1 = \sum_i|(Px)_i| = \sum_i\Big|\sum_jP_{i,j}x_j\Big| \le \sum_i\sum_jP_{i,j}|x_j| = \sum_j|x_j|\sum_iP_{i,j} = \sum_j|x_j| = \|x\|_1$$
Now suppose that $\lambda$ is an eigenvalue, so that $Pv = \lambda v$ for some non-zero v. Then
$$\|\lambda v\|_1 = \|Pv\|_1$$
Since $\|\lambda v\|_1 = |\lambda|\,\|v\|_1$ and $\|Pv\|_1 \le \|v\|_1$, this implies
$$|\lambda|\,\|v\|_1 \le \|v\|_1$$
Finally, since v is not zero, $\|v\|_1 > 0$. Therefore we can divide by $\|v\|_1$ to obtain $|\lambda| \le 1$.

(3) The eigenvector $v_1$ (or some multiple of it) has all non-negative entries: We can give a partial proof of this, in the situation where the eigenvalues other than $\lambda_1 = 1$ obey the strict inequality $|\lambda_i| < 1$. In this case the power method implies that starting with any initial vector $x_0$ the vectors $P^nx_0$ converge to a multiple $c_1v_1$ of $v_1$. If we choose the initial vector $x_0$ to have only positive entries, then every vector in the sequence $P^nx_0$ has only non-negative entries. This implies that the limiting vector must have non-negative entries.

(3) and (4) vs. (3') and (4') and $P^n$ with all positive entries: Saying that $P^n$ has all positive entries means that there is a non-zero probability of moving between any two sites in n steps. The fact that in this case all the eigenvalues other than $\lambda_1 = 1$ obey the strict inequality $|\lambda_i| < 1$ follows from a famous theorem in linear algebra called the Perron-Frobenius theorem. Although we won't be able to prove the Perron-Frobenius theorem, we can give some examples to show that if the condition that $P^n$ has all positive entries for some n is violated, then (3') and (4') need not hold.

The first example is the matrix
$$P = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$$
This matrix represents a random walk with two sites that isn't very random. Starting at site one, the walker moves to site two with probability 1, and vice versa. The powers $P^n$ of P are equal to I or P depending on whether n is even or odd. So $P^n$ doesn't converge: it is equal to $\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}$ for even n and $\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}$ for odd n. The eigenvalues of P are easily computed to be 1 and -1. They both have modulus one.

For the second example, consider a random walk where the sites can be divided into two sets A and B and the probability of going from any site in B to any site in A is zero. In this case the i,j entries of $P^n$ with the ith site in A and the jth site in B are always zero. Also, applying P to any state vector drains probability from A to B without sending any back. This means that in the limit $P^nx_0$ (that is, the eigenvector $v_1$) will have zero entries for all sites in A.

For example, consider a three site random walk (the very first example above) where there is no chance of ever going back to site 1. The matrix
$$P = \begin{bmatrix} 1/3 & 0 & 0 \\ 1/3 & 1/2 & 1/2 \\ 1/3 & 1/2 & 1/2 \end{bmatrix}$$
corresponds to such a walk. We can check:

>P=[1/3 0 0; 1/3 1/2 1/2; 1/3 1/2 1/2];
>[V D] = eig(P)

The eigenvector corresponding to the eigenvalue 1 is $\begin{bmatrix} 0 \\ 1/2 \\ 1/2 \end{bmatrix}$ (after normalization to make it a state vector).

IV.5.3 Google PageRank

I'm going to refer you to the excellent article by David Austin. Here are the main points:

The sites are web pages and the connections are links between pages. The random walk is a web surfer clicking links at random. The rank of a page is the probability that the surfer will land on that page in the limit of infinitely many clicks.

We assume that the surfer is equally likely to click on any link on a given page. In other words, we assign to each outgoing link a transition probability of 1/(total number of outgoing links for that page).

This rule doesn't tell us what happens when the surfer lands on a page with no outgoing links (so-called dangling nodes). In this case, we assume that any page on the internet is chosen at random with equal probability.

The two rules above define a stochastic matrix $P = P_1 + P_2$, where $P_1$ contains the probabilities for the outgoing links and $P_2$ contains the probabilities for the dangling nodes.

The matrix $P_1$ is very sparse, since each web page typically has only a handful of outgoing links. This translates to only a handful of non-zero entries (out of about 2 billion) for each column of $P_1$.

The matrix $P_2$ has a non-zero column for each dangling node. The entries in each non-zero column are all the same, and equal to 1/N where N is the total number of sites.

If x is a state vector, then $P_1x$ is easy to compute, because $P_1$ is so sparse, and $P_2x$ is a vector with all entries equal to the total probability that the state vector x assigns to dangling nodes, divided by N, the total number of sites.

We could try to use the matrix P to define the rank of a page, by taking the eigenvector $v_1$ corresponding to the eigenvalue 1 of P and defining the rank of a page to be the entry in $v_1$ corresponding to that page. There are problems with this, though. Because P has so many zero entries, it is virtually guaranteed that P will have many eigenvalues with modulus 1 (or very close to 1), so we can't use the power method to compute $v_1$. Moreover, there probably will also be many web pages assigned a rank of zero.

To avoid these problems we choose a number $\alpha$ between 0 and 1 called the damping factor. We modify the behaviour of the surfer so that with probability $\alpha$ the rules corresponding to the matrix P above are followed, and with probability $1 - \alpha$ the surfer picks a page at random. This behaviour is described by the stochastic matrix
$$S = (1 - \alpha)Q + \alpha P$$

where Q is the matrix where each entry is equal to 1/N (N is the total number of sites). If x is a state vector, then Qx is a vector with each entry equal to 1/N. The value of $\alpha$ used in practice is about 0.85. The final ranking of a page can then be defined to be the entry in $v_1$ for S corresponding to that page. The matrix S has all non-zero entries.
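To make the construction concrete, here is a small sketch of ours for a made-up five page internet. The link matrix L below is an arbitrary example (L(i,j) = 1 when page j links to page i); everything else just follows the rules described above, with page 4 acting as a dangling node.

% build the stochastic matrix for a tiny hypothetical internet
L = [0 0 1 0 0;
     1 0 1 0 0;
     1 0 0 0 0;
     0 1 0 0 1;
     0 1 1 0 0];
N = 5;
alpha = 0.85;
P = zeros(N);
for j = 1:N
  if sum(L(:,j)) == 0
    P(:,j) = ones(N,1)/N;          % dangling node: jump anywhere
  else
    P(:,j) = L(:,j)/sum(L(:,j));   % equal probability for each outgoing link
  end
end
S = (1-alpha)*ones(N)/N + alpha*P; % the damped matrix S = (1-alpha)Q + alpha*P
x = ones(N,1)/N;                   % start from the uniform state vector
for k = 1:100                      % power method: x converges to the ranking vector
  x = S*x;
end
x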

IV.5.4 The Metropolis algorithm

So far we have concentrated on the situation where a stochastic matrix P is given, and we are interested in finding its invariant distribution (that is, the eigenvector with eigenvalue 1). Now we want to turn the situation around. Suppose we have a state vector $\pi$ and we want to find a stochastic matrix that has this vector as its invariant distribution. The Metropolis algorithm, named after the mathematician and physicist Nicholas Metropolis, takes an arbitrary stochastic matrix P and modifies it to produce a stochastic matrix Q that has $\pi$ as its invariant distribution.

This is useful in a situation where we have an enormous number of sites and some function p that gives a non-negative value for each site. Suppose that there is one site where p is very much larger than for any of the other sites, and that our goal is to find that site. In other words, we have a discrete maximization problem. We assume that it is not difficult to compute p for any given site, but the number of sites is too huge to simply compute p for every site and see where it is the largest.

To solve this problem let's assume that the sites are labeled by the integers $1, \ldots, N$. The vector $p = [p_1, \ldots, p_N]^T$ has non-negative entries, and our goal is to find the largest one. We can form a state vector $\pi$ (in principle) by normalizing p, that is, $\pi = p/\sum_i p_i$. Then the state vector $\pi$ gives a very large probability to the site we want to find. Now we can use the Metropolis algorithm to define a random walk with $\pi$ as its invariant distribution. If we step from site to site according to this random walk, chances are high that after a while we end up at the site where $\pi$ is large. In practice we don't want to compute the sum $\sum_i p_i$ in the denominator of the expression for $\pi$, since the number of terms is huge. An important feature of the Metropolis algorithm is that this computation is not required.

You can learn more about the Metropolis algorithm in the article "The Markov chain Monte Carlo revolution" by Persi Diaconis. In this article Diaconis presents an example where the Metropolis algorithm is used to solve a substitution cipher, that is, a code where each letter in a message is replaced by another letter. The sites in this example are all permutations of a given string of letters and punctuation. The function p is defined by analyzing adjacent pairs of letters. Every adjacent pair of letters has a certain probability of occurring in an English text. Knowing these probabilities, it is possible to construct a function p that is large on strings that are actually English. Here is the output of a random walk that is attempting to maximize this function.

[Figure: sample output of the random walk at several steps of the deciphering.]

IV.5.5 Description of the algorithm

To begin, notice that a square matrix P with non-negative entries is stochastic if $P^T\mathbf{1} = \mathbf{1}$, where $\mathbf{1} = [1, 1, \ldots, 1]^T$ is the vector with entries all equal to 1. This is just another way of saying that the columns of P add up to 1.

Next, suppose we are given a state vector $\pi = [\pi_1, \pi_2, \ldots, \pi_n]^T$. Form the diagonal matrix $\Pi$ that has these entries on the diagonal. Then, if P is a stochastic matrix, the condition
$$P\Pi = \Pi P^T$$
implies that $\pi$ is the invariant distribution for P. To see this, notice that $\Pi\mathbf{1} = \pi$, so applying both sides of the equation to $\mathbf{1}$ yields $P\pi = \pi$. This condition can be written as a collection of conditions on the components of P:
$$p_{i,j}\pi_j = \pi_i p_{j,i}$$
Notice that for diagonal entries $p_{i,i}$ this condition is always true. So we may make any changes we want to the diagonal entries without affecting this condition. For the off-diagonal entries ($i \neq j$) there is one equation for each pair $p_{i,j}$, $p_{j,i}$.

Here is how the Metropolis algorithm works. We start with a stochastic matrix P where these equations are not necessarily true. For each off-diagonal pair $p_{i,j}$, $p_{j,i}$ of matrix entries, we make the equation above hold by decreasing the value of one of the entries, while leaving the other entry alone. It is easy to see that the adjusted value will still be non-negative.
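Here is a small sketch (ours, not from the notes) of one standard way to carry out this adjustment, assuming all entries of $\pi$ are positive. For each off-diagonal pair it keeps the smaller of $p_{i,j}\pi_j$ and $p_{j,i}\pi_i$, which decreases one entry of the pair and leaves the other alone, and then it adds the lost probability back onto the diagonal so that each column still sums to one. Save it in a file metropolis.m:

function Q = metropolis(P, piv)
  % P: stochastic matrix (columns sum to 1), piv: target state vector with positive entries
  n = length(piv);
  Q = P;
  for j = 1:n
    for i = 1:n
      if i ~= j
        % enforce Q(i,j)*piv(j) = Q(j,i)*piv(i) by decreasing one entry of the pair
        Q(i,j) = min(P(i,j), piv(i)*P(j,i)/piv(j));
      end
    end
  end
  for j = 1:n
    Q(j,j) = Q(j,j) + (1 - sum(Q(:,j)));   % restore the column sums
  end
end

For example, Q = metropolis(P, [0.1; 0.3; 0.6]) applied to the 3 by 3 stochastic matrix P used earlier in this section returns a stochastic matrix with invariant distribution [0.1; 0.3; 0.6].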


More information

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C =

BASIC NOTIONS. x + y = 1 3, 3x 5y + z = A + 3B,C + 2D, DC are not defined. A + C = CHAPTER I BASIC NOTIONS (a) 8666 and 8833 (b) a =6,a =4 will work in the first case, but there are no possible such weightings to produce the second case, since Student and Student 3 have to end up with

More information

Lectures 9-10: Polynomial and piecewise polynomial interpolation

Lectures 9-10: Polynomial and piecewise polynomial interpolation Lectures 9-1: Polynomial and piecewise polynomial interpolation Let f be a function, which is only known at the nodes x 1, x,, x n, ie, all we know about the function f are its values y j = f(x j ), j

More information

MTH5112 Linear Algebra I MTH5212 Applied Linear Algebra (2017/2018)

MTH5112 Linear Algebra I MTH5212 Applied Linear Algebra (2017/2018) MTH5112 Linear Algebra I MTH5212 Applied Linear Algebra (2017/2018) COURSEWORK 3 SOLUTIONS Exercise ( ) 1. (a) Write A = (a ij ) n n and B = (b ij ) n n. Since A and B are diagonal, we have a ij = 0 and

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0.

Linear Algebra. Matrices Operations. Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0. Matrices Operations Linear Algebra Consider, for example, a system of equations such as x + 2y z + 4w = 0, 3x 4y + 2z 6w = 0, x 3y 2z + w = 0 The rectangular array 1 2 1 4 3 4 2 6 1 3 2 1 in which the

More information

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture

Vector Spaces. 9.1 Opening Remarks. Week Solvable or not solvable, that s the question. View at edx. Consider the picture Week9 Vector Spaces 9. Opening Remarks 9.. Solvable or not solvable, that s the question Consider the picture (,) (,) p(χ) = γ + γ χ + γ χ (, ) depicting three points in R and a quadratic polynomial (polynomial

More information

Lecture for Week 2 (Secs. 1.3 and ) Functions and Limits

Lecture for Week 2 (Secs. 1.3 and ) Functions and Limits Lecture for Week 2 (Secs. 1.3 and 2.2 2.3) Functions and Limits 1 First let s review what a function is. (See Sec. 1 of Review and Preview.) The best way to think of a function is as an imaginary machine,

More information

3 Fields, Elementary Matrices and Calculating Inverses

3 Fields, Elementary Matrices and Calculating Inverses 3 Fields, Elementary Matrices and Calculating Inverses 3. Fields So far we have worked with matrices whose entries are real numbers (and systems of equations whose coefficients and solutions are real numbers).

More information

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way

Designing Information Devices and Systems I Spring 2018 Lecture Notes Note Introduction to Linear Algebra the EECS Way EECS 16A Designing Information Devices and Systems I Spring 018 Lecture Notes Note 1 1.1 Introduction to Linear Algebra the EECS Way In this note, we will teach the basics of linear algebra and relate

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra The two principal problems in linear algebra are: Linear system Given an n n matrix A and an n-vector b, determine x IR n such that A x = b Eigenvalue problem Given an n n matrix

More information

2 Systems of Linear Equations

2 Systems of Linear Equations 2 Systems of Linear Equations A system of equations of the form or is called a system of linear equations. x + 2y = 7 2x y = 4 5p 6q + r = 4 2p + 3q 5r = 7 6p q + 4r = 2 Definition. An equation involving

More information

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA

CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA CMU CS 462/662 (INTRO TO COMPUTER GRAPHICS) HOMEWORK 0.0 MATH REVIEW/PREVIEW LINEAR ALGEBRA Andrew ID: ljelenak August 25, 2018 This assignment reviews basic mathematical tools you will use throughout

More information

is a 3 4 matrix. It has 3 rows and 4 columns. The first row is the horizontal row [ ]

is a 3 4 matrix. It has 3 rows and 4 columns. The first row is the horizontal row [ ] Matrices: Definition: An m n matrix, A m n is a rectangular array of numbers with m rows and n columns: a, a, a,n a, a, a,n A m,n =...... a m, a m, a m,n Each a i,j is the entry at the i th row, j th column.

More information

AP Calculus AB Summer Assignment

AP Calculus AB Summer Assignment AP Calculus AB Summer Assignment Name: When you come back to school, it is my epectation that you will have this packet completed. You will be way behind at the beginning of the year if you haven t attempted

More information

MATH240: Linear Algebra Review for exam #1 6/10/2015 Page 1

MATH240: Linear Algebra Review for exam #1 6/10/2015 Page 1 MATH24: Linear Algebra Review for exam # 6//25 Page No review sheet can cover everything that is potentially fair game for an exam, but I tried to hit on all of the topics with these questions, as well

More information

MATH Max-min Theory Fall 2016

MATH Max-min Theory Fall 2016 MATH 20550 Max-min Theory Fall 2016 1. Definitions and main theorems Max-min theory starts with a function f of a vector variable x and a subset D of the domain of f. So far when we have worked with functions

More information

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination

Chapter 2. Solving Systems of Equations. 2.1 Gaussian elimination Chapter 2 Solving Systems of Equations A large number of real life applications which are resolved through mathematical modeling will end up taking the form of the following very simple looking matrix

More information

CHAPTER 1: Functions

CHAPTER 1: Functions CHAPTER 1: Functions 1.1: Functions 1.2: Graphs of Functions 1.3: Basic Graphs and Symmetry 1.4: Transformations 1.5: Piecewise-Defined Functions; Limits and Continuity in Calculus 1.6: Combining Functions

More information

8.5 Taylor Polynomials and Taylor Series

8.5 Taylor Polynomials and Taylor Series 8.5. TAYLOR POLYNOMIALS AND TAYLOR SERIES 50 8.5 Taylor Polynomials and Taylor Series Motivating Questions In this section, we strive to understand the ideas generated by the following important questions:

More information

Lab 2 Worksheet. Problems. Problem 1: Geometry and Linear Equations

Lab 2 Worksheet. Problems. Problem 1: Geometry and Linear Equations Lab 2 Worksheet Problems Problem : Geometry and Linear Equations Linear algebra is, first and foremost, the study of systems of linear equations. You are going to encounter linear systems frequently in

More information

a11 a A = : a 21 a 22

a11 a A = : a 21 a 22 Matrices The study of linear systems is facilitated by introducing matrices. Matrix theory provides a convenient language and notation to express many of the ideas concisely, and complicated formulas are

More information

Vectors Part 1: Two Dimensions

Vectors Part 1: Two Dimensions Vectors Part 1: Two Dimensions Last modified: 20/02/2018 Links Scalars Vectors Definition Notation Polar Form Compass Directions Basic Vector Maths Multiply a Vector by a Scalar Unit Vectors Example Vectors

More information