Linear Algebra. Carleton DeTar February 27, 2017
This document provides some background for various course topics in linear algebra: solving linear systems, determinants, and finding eigenvalues and eigenvectors.

1 Gaussian elimination

Gaussian elimination is a systematic strategy for solving a set of linear equations. It can also be used to construct the inverse of a matrix and to factor a matrix into the product of lower and upper triangular matrices. We start by solving the linear system

     x + 2y     =  1
    2x +  y - z =  1
    3x +  y + z = -4

Basically, the objective of Gaussian elimination is to do transformations on the equations that do not change the solution, but systematically zero out (eliminate) the off-diagonal coefficients, leaving a set of equations from which we can read off the answers. We express the problem in terms of a set of equations, and side-by-side, we express it in terms of an equivalent matrix product. We do this to show how the manipulations of the matrix track the manipulations of the equations, where it is easier to see that we are not changing the solution. The method has two parts: first triangulation and then back substitution.

1.1 Triangulation
Starting equations:

     x + 2y     =  1
    2x +  y - z =  1
    3x +  y + z = -4

Equivalent matrix-vector equation:

    [ 1  2  0 ] [x]   [  1 ]
    [ 2  1 -1 ] [y] = [  1 ]
    [ 3  1  1 ] [z]   [ -4 ]

First step: examine the coefficients of x. Swap equations (1) and (3) so the largest coefficient is in the first equation (first row). This is called the pivot element. In the matrix-vector representation, swap the first and third rows of the matrix and the first and third elements of the vector on the right side. Note that in the matrix equation we don't interchange x and z.

    3x +  y + z = -4        [ 3  1  1 ] [x]   [ -4 ]
    2x +  y - z =  1        [ 2  1 -1 ] [y] = [  1 ]
     x + 2y     =  1        [ 1  2  0 ] [z]   [  1 ]

Next step: divide the first equation by the coefficient 3 to make the pivot element equal to 1.

     x + y/3 + z/3 = -4/3      [ 1  1/3  1/3 ] [x]   [ -4/3 ]
    2x +  y  -  z  =  1        [ 2   1   -1  ] [y] = [   1  ]
     x + 2y        =  1        [ 1   2    0  ] [z]   [   1  ]

Next step: multiply the first equation by 2 and subtract it from the second equation, putting the result in the second equation. This eliminates the coefficient of x in the second equation.

     x +  y/3 +  z/3 = -4/3      [ 1  1/3   1/3 ] [x]   [ -4/3 ]
     0 +  y/3 - 5z/3 = 11/3      [ 0  1/3  -5/3 ] [y] = [ 11/3 ]
     x + 2y          =  1        [ 1   2     0  ] [z]   [   1  ]

Next step: eliminate the coefficient of x in the third equation by subtracting the first equation from the third, putting the result into the third equation.

     x +  y/3 +  z/3 = -4/3      [ 1  1/3   1/3 ] [x]   [ -4/3 ]
     0 +  y/3 - 5z/3 = 11/3      [ 0  1/3  -5/3 ] [y] = [ 11/3 ]
     0 + 5y/3 -  z/3 =  7/3      [ 0  5/3  -1/3 ] [z]   [  7/3 ]
Next step: now work on the second column (the coefficients of y). We want the largest coefficient in the second equation (the diagonal element in the matrix), so swap the second and third equations.

     x +  y/3 +  z/3 = -4/3      [ 1  1/3   1/3 ] [x]   [ -4/3 ]
     0 + 5y/3 -  z/3 =  7/3      [ 0  5/3  -1/3 ] [y] = [  7/3 ]
     0 +  y/3 - 5z/3 = 11/3      [ 0  1/3  -5/3 ] [z]   [ 11/3 ]

Now divide the second equation by 5/3, the pivot element in the second column.

     x + y/3 +  z/3 = -4/3      [ 1  1/3   1/3 ] [x]   [ -4/3 ]
     0 +  y  -  z/5 =  7/5      [ 0   1   -1/5 ] [y] = [  7/5 ]
     0 + y/3 - 5z/3 = 11/3      [ 0  1/3  -5/3 ] [z]   [ 11/3 ]

Now eliminate the coefficient of y in the third equation by multiplying the second equation by 1/3, subtracting the result from the third equation, and putting the result in the third equation.

     x + y/3 +   z/3   = -4/3      [ 1  1/3    1/3   ] [x]   [ -4/3  ]
     0 +  y  -   z/5   =  7/5      [ 0   1    -1/5   ] [y] = [  7/5  ]
     0 +  0  - 24z/15  = 48/15     [ 0   0   -24/15  ] [z]   [ 48/15 ]

To complete the triangulation step we divide the third equation by the coefficient of z, namely -24/15.

     x + y/3 + z/3 = -4/3      [ 1  1/3  1/3 ] [x]   [ -4/3 ]
     0 +  y  - z/5 =  7/5      [ 0   1  -1/5 ] [y] = [  7/5 ]
     0 +  0  +  z  = -2        [ 0   0    1  ] [z]   [  -2  ]

Notice that the matrix is now in upper triangular form: all elements below the diagonal are zero.

1.2 Back substitution
Next, we do back substitution. We start by noticing that the last equation gives us the solution for z. Then we work our way up the third column, eliminating the coefficients of z (this is just Gaussian elimination backwards!). But it amounts to the same thing as plugging the solution for z into the other two equations and moving the resulting constant to the rhs of the equation. So multiply the third equation by -1/5 and subtract it from equation two, leaving the result in equation two.

     x + y/3 + z/3 = -4/3      [ 1  1/3  1/3 ] [x]   [ -4/3 ]
     0 +  y  +  0  =  1        [ 0   1    0  ] [y] = [   1  ]
     0 +  0  +  z  = -2        [ 0   0    1  ] [z]   [  -2  ]

Continuing on the third column, eliminate the coefficient of z in equation 1 by multiplying the third equation by 1/3 and subtracting it from equation one.

     x + y/3 + 0 = -2/3      [ 1  1/3  0 ] [x]   [ -2/3 ]
     0 +  y  + 0 =  1        [ 0   1   0 ] [y] = [   1  ]
     0 +  0  + z = -2        [ 0   0   1 ] [z]   [  -2  ]

Next, work on the second column. We have only the coefficient of y in the first equation to eliminate: multiply the second equation by 1/3 and subtract it from equation one. We then get the answer

    x = -1
    y =  1
    z = -2

Notice that we now have a unit matrix, so the solution can be read off directly.

The last step, of course, is to check the solution by plugging it in to the original system of equations:

     (-1) + 2(1)        =  1
    2(-1) +  (1) - (-2) =  1
    3(-1) +  (1) + (-2) = -4
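The whole procedure above (pivoting, triangulation, back substitution) condenses into a short routine. Here is a minimal Python sketch (not part of the original notes; the function name is ours), applied to the example system:

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting,
    followed by back substitution, as in the steps above."""
    n = len(A)
    # Work on an augmented copy [A | b] so the inputs are not modified.
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        # Pivot: bring the row with the largest coefficient in column k up.
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        # Divide the pivot row so the pivot element becomes 1.
        piv = M[k][k]
        M[k] = [v / piv for v in M[k]]
        # Eliminate column k from the rows below (triangulation).
        for i in range(k + 1, n):
            f = M[i][k]
            M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    # Back substitution: eliminate above the diagonal, bottom to top.
    for k in range(n - 1, -1, -1):
        for i in range(k):
            f = M[i][k]
            M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]

# The example system:
print(gauss_solve([[1, 2, 0], [2, 1, -1], [3, 1, 1]], [1, 1, -4]))
# approximately [-1, 1, -2]
```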
2 Determinants

Many properties of a matrix are based on its determinant. To review how determinants are calculated, let's start with a simple 3 x 3 matrix

        [ a_11  a_12  a_13 ]
    A = [ a_21  a_22  a_23 ]
        [ a_31  a_32  a_33 ]

We learn in high school that the rule for calculating its determinant is to start by multiplying along the main diagonal: a_11 a_22 a_33, then along the parallel super diagonal (wrapping around): a_12 a_23 a_31, then along the parallel sub diagonal (again wrapping around): a_21 a_32 a_13. We add these three terms. Then we switch to the (let's call it) antidiagonal: a_13 a_22 a_31 and its parallel super and sub antidiagonals: a_12 a_21 a_33 and a_11 a_23 a_32. These last three products are subtracted from the sum of the first three. So the full result is

    det A = a_11 a_22 a_33 + a_12 a_23 a_31 + a_13 a_21 a_32
          - a_13 a_22 a_31 - a_12 a_21 a_33 - a_11 a_23 a_32.

This method works only for 3 x 3 matrices. But we can generalize it by recognizing that it has the compact form

    det A = Σ_P (-1)^P a_{1,P_1} a_{2,P_2} a_{3,P_3}

where the sum is over all permutations P of the columns 123. We note that there are six such permutations: 123, 231, 312, 321, 213, 132, which matches the number of terms in our standard form. We use the shorthand notation P_1, P_2, P_3 to specify one of these six permutations. Any permutation can be achieved by swapping enough pairs of members. The first three permutations in this list are called even, because they are produced by an even number (including 0) of pairwise exchanges, and the last three are called odd, because they require an odd number. The shorthand notation (-1)^P means plus for an even and minus for an odd permutation.

Expressing the determinant in terms of permutations allows us to generalize to any size matrix:

    det A = Σ_P (-1)^P a_{1,P_1} a_{2,P_2} ... a_{n,P_n}

The number of terms in the sum is n!.

There are other ways you may have learned to calculate the determinant of a matrix. We won't go into details here.
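The permutation sum translates almost literally into code. A sketch (not from the notes), with the sign of a permutation computed by counting inversions:

```python
from itertools import permutations
from math import prod

def parity(p):
    """+1 for an even permutation, -1 for an odd one (count inversions)."""
    inv = sum(1 for i in range(len(p))
                for j in range(i + 1, len(p)) if p[i] > p[j])
    return -1 if inv % 2 else 1

def det_perm(A):
    """det A as a signed sum over all n! column permutations."""
    n = len(A)
    return sum(parity(p) * prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

print(det_perm([[1, 2, 0], [2, 1, -1], [3, 1, 1]]))   # -8
```

This is fine for checking small cases, but as noted below the n! growth makes it useless for matrices of any real size.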
One way is to use the cofactor method. Pick a row of the matrix. Let's do it with the first row, a_{1,k}. Work your way across the row, visiting each column once. At each step, construct an (n-1) x (n-1) matrix by eliminating the row you are working with and the column that you are visiting at that step. Calculate the determinant of the smaller matrix. This is called the cofactor. Call it A_{1,k}. Proceed to the end of the row. At that point you have a cofactor for each of the elements in the row. Then the determinant is given by the rule

    det A = Σ_k (-1)^{k+1} a_{1,k} A_{1,k}
You can use any row you like. If you use row j, then you get

    det A = Σ_k (-1)^{k+j} a_{j,k} A_{j,k}

You can also do it by columns. The cofactor method is, in fact, algebraically the same as the method of summing over permutations. It is just another way of rearranging the terms in the sum.

We list some important properties of determinants without proof.

- The determinant of a product of matrices is the product of the determinants: det(AB) = det(A) det(B)
- If the inverse of a matrix exists, its determinant is the inverse of the determinant of the original matrix: det(A⁻¹) = 1/det(A)
- Taking the transpose does not change the determinant: det(Ã) = det(A)
- The determinant of a triangular matrix is the product of its diagonal elements. The rule is the same for upper and lower triangular matrices: det(U) = u_11 u_22 ... u_nn

The last one is easy to show using the permutation sum. If a permutation puts a zero matrix element in the product of terms, then that permutation doesn't contribute anything to the determinant. The only permutation P_1, P_2, ..., P_n that doesn't involve at least one zero matrix element is the identity permutation 1, 2, ..., n. And that identity permutation gives you the product of the diagonal elements.

What is an efficient way to calculate a determinant? Efficient means calculating it with the least number of floating point operations, since that is what usually costs computing effort. The sum over permutations (or the equivalent cofactor method) is terribly inefficient, because it requires computing the product of n factors in each of n! terms. Including the summation, the number of floating point operations is n · n!, which grows extremely rapidly with increasing n.

A more efficient way to calculate the determinant is to factor the matrix into a product of a lower triangular and an upper triangular matrix:

    det A = det L det U

which is just the product of the diagonal elements of each factor.
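The cofactor rule is naturally recursive: the determinant of an n x n matrix reduces to n determinants of (n-1) x (n-1) matrices. A sketch (not from the notes), expanding always along the first row:

```python
def det_cofactor(A):
    """det A by cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for k in range(n):
        # Minor: delete row 0 and column k, then take its determinant.
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        # The sign alternates along the row: +, -, +, ...
        total += (-1) ** k * A[0][k] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2, 0], [2, 1, -1], [3, 1, 1]]))   # -8
```

Like the permutation sum, this costs on the order of n! operations, which is what motivates the factorization approach.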
The cost of doing this using the Crout reduction method is the same as the cost of doing Gaussian elimination. It grows as n³ as the matrix size grows. This is still costly, but for large n it is vastly cheaper than the sum over permutations.
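A sketch of the factorization idea (not from the notes): a Doolittle-style elimination without pivoting, valid when the leading principal minors are nonzero. Since L has a unit diagonal, det A is just the product of the diagonal of U:

```python
def lu_det(A):
    """det A via LU factorization (Doolittle, no pivoting).

    Assumes the leading principal minors are nonzero."""
    n = len(A)
    U = [row[:] for row in A]            # becomes upper triangular
    det = 1.0
    for k in range(n):
        for i in range(k + 1, n):
            m = U[i][k] / U[k][k]        # multiplier = subdiagonal entry of L
            U[i] = [U[i][j] - m * U[k][j] for j in range(n)]
        det *= U[k][k]                   # det L = 1, so det A = product of U's diagonal
    return det

print(lu_det([[1, 2, 0], [2, 1, -1], [3, 1, 1]]))   # approximately -8
```

The elimination is the n³ loop; the final product costs only n more multiplications.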
3 Eigenvalues and eigenvectors

A great many matrices (more generally, linear operators) are characterized by their eigenvalues and eigenvectors. They play a crucial role in all branches of science and engineering. Most of the time, finding them requires resorting to numerical methods. So we discuss some simpler methods.

3.1 Characteristic Polynomial

Generally speaking, eigenvalues of a square matrix A are roots of the so-called characteristic polynomial:

    det(A - λI) = P(λ) = 0

That is, start with the matrix A and modify it by subtracting the same variable λ from each diagonal element. Then calculate the determinant of the resulting matrix and you get a polynomial. Here is how it works using a 3 x 3 matrix:

        [ 1/2  3/2  0 ]
    A = [ 3/2  1/2  0 ]
        [  0    0   1 ]

                      [ 1/2 - λ    3/2       0    ]
    det(A - λI) = det [   3/2    1/2 - λ     0    ] = -2 + λ + 2λ² - λ³ = (-1 - λ)(1 - λ)(2 - λ)
                      [    0        0      1 - λ  ]

The three zeros of this cubic polynomial are (-1, 1, 2), so this matrix has three distinct eigenvalues.

For an n x n matrix we get a polynomial of degree n. Why? It is easy to see if we remember from the previous section that the determinant is a sum over products of matrix elements. One of those products runs down the diagonal. Since each diagonal element has a λ in it, the diagonal alone gives you a polynomial of degree n. The other products have fewer diagonal elements, so they can't increase the degree of the polynomial beyond n. A polynomial of degree n has n roots, although some of them may appear more than once.

Call the zeros of the characteristic polynomial, i.e., the eigenvalues, λ_i. If we factor the polynomial in terms of its roots, we get

    P(λ) = (λ_1 - λ)(λ_2 - λ) ... (λ_n - λ).

Notice that the determinant of the matrix itself is the value of the characteristic polynomial at λ = 0. Plugging λ = 0 into the factored expression above leads to the result that the determinant of the matrix is the product of its eigenvalues:

    det A = P(0) = λ_1 λ_2 ... λ_n
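As a numerical check (not in the notes), take a symmetric 3 x 3 matrix whose eigenvalues are -1, 1, 2 (assumed here for illustration) and evaluate det(A - λI) with the high-school diagonal rule: the result vanishes at each eigenvalue and equals det A = -2 (the product of the eigenvalues) at λ = 0.

```python
def char_poly(A, lam):
    """Evaluate det(A - lam*I) for a 3x3 matrix A by the diagonal rule."""
    M = [[A[i][j] - (lam if i == j else 0.0) for j in range(3)] for i in range(3)]
    return (M[0][0] * M[1][1] * M[2][2] + M[0][1] * M[1][2] * M[2][0]
          + M[0][2] * M[1][0] * M[2][1] - M[0][2] * M[1][1] * M[2][0]
          - M[0][1] * M[1][0] * M[2][2] - M[0][0] * M[1][2] * M[2][1])

A = [[0.5, 1.5, 0.0],    # illustrative matrix with eigenvalues -1, 1, 2
     [1.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
for lam in (-1.0, 1.0, 2.0):
    print(lam, char_poly(A, lam))   # P(lam) = 0.0 at each eigenvalue
print(char_poly(A, 0.0))            # det A = -2.0, the product of the eigenvalues
```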
3.2 Eigenvalue equation

When the matrix A - λI has a zero determinant, we can find a nontrivial (column vector) solution v to the equation

    (A - λI)v = 0    or    Av = λv

This is the standard equation for eigenvalue λ and eigenvector v. There can be as many as n linearly independent solutions to this equation:

    A v_i = λ_i v_i.

Notice that the eigenvector is not unique. We can multiply both sides of the equation by a constant c to see that if v_i is a solution for eigenvalue λ_i, so is c v_i.

Often we deal with real symmetric matrices (the transpose of the matrix is equal to the matrix itself). In that case the eigenvectors form a complete set of orthogonal vectors. They can be used to define the directions of coordinate axes, so we can write any n-dimensional vector x as a linear combination

    x = α_1 v_1 + α_2 v_2 + ... + α_n v_n

where the coefficient α_i is the component of the vector in the direction v_i. More generally, if there are n linearly independent eigenvectors v_i, this is also possible. Then we have a simple method for finding the eigenvalue with the largest magnitude, namely the power method.
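For a real symmetric matrix, the eigenvalue equation and the orthogonality of the eigenvectors can be checked directly. A small sketch (not part of the notes), using an assumed 3 x 3 symmetric example with eigenvalues 2, -1, 1:

```python
A = [[0.5, 1.5, 0.0],
     [1.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]

def matvec(A, v):
    """Matrix-vector product A v."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

# Each pair satisfies A v = lambda v (and so does any multiple c v).
pairs = [(2.0, [1.0, 1.0, 0.0]),
         (-1.0, [1.0, -1.0, 0.0]),
         (1.0, [0.0, 0.0, 1.0])]
for lam, v in pairs:
    assert matvec(A, v) == [lam * x for x in v]

# Eigenvectors of a real symmetric matrix are mutually orthogonal.
dot = lambda u, w: sum(a * b for a, b in zip(u, w))
assert dot(pairs[0][1], pairs[1][1]) == 0.0
```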
So we get an eigenvector corresponding to the largest eigenvalue. Another way of saying this is that when we hit the vector x with the matrix A we get a new vector that tends to point more in the direction of the leading eigenvector v_n. The more factors of A we pile on, the more precisely it points in that direction.

So how do we get the eigenvalue if we have an eigenvector y pointing in the direction of v_n? If it points in that direction, we must have y = c v_n. Then remember that eigenvectors satisfy

    Ay = A c v_n = λ_n c v_n = λ_n y.

That is, the vector Ay has every component of y multiplied by the eigenvalue λ_n. We can use any of the components to read off the answer.

To turn this process into a practical algorithm, we normalize the vector after each multiplication by A. Normalization simply means multiplying by a constant to put the vector in our favorite standard form. A common normalization is to divide by the Cartesian norm, so we get a vector of unit length. The normalization we will use here is dividing the whole vector by the component with the largest magnitude. If we take the absolute values of the components, the largest one is called the infinity norm. So we divide by the infinity norm if that component is positive and by minus the infinity norm if it is negative.

Here is an example. Suppose we have y = (3, 2, -1). The infinity norm is 3. We divide by 3 to normalize, getting x = (1, 2/3, -1/3). Let's call the component with the largest magnitude the leading component. In the example, the leading component in y is the first one, and it is positive. If it were -3, instead, we'd divide by -3. The goal is to get a vector proportional to the original vector, but with one component equal to +1 and with the rest of the components no larger in magnitude than 1. The reason we pick this way of normalizing the vector is that we can then easily read off the eigenvalue by looking at what happens to the leading component when we multiply by A.
Also, the infinity norm is cheaper to compute, since it doesn't require any arithmetic, just comparisons.

Suppose, after normalizing y to get x, we multiply by A one more time and get Ax = (2, 4/3, -2/3). We can read off the eigenvalue from the leading component: it is 2. Of course, we could check every component to see that each one got multiplied by 2.

So here is the power algorithm. Start with any arbitrary vector y. Repeat steps 1-3 until convergence.

Step 1: Normalize y to get x. The leading component is now 1.
Step 2: Compute y = Ax.
Step 3: The approximate eigenvalue is the new leading component.
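The three steps can be sketched as follows (not part of the notes; the matrix is an assumed symmetric example whose largest-magnitude eigenvalue is 2):

```python
def power_method(A, y, iters=100):
    """Power iteration with infinity-norm normalization.

    Returns the approximate largest-|lambda| eigenvalue and an
    eigenvector whose leading component is +1."""
    n = len(A)
    lam = 0.0
    for _ in range(iters):
        # Step 1: normalize y so its leading component is +1.
        lead = max(y, key=abs)
        x = [v / lead for v in y]
        # Step 2: multiply by A.
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        # Step 3: the new leading component approximates the eigenvalue.
        lam = y[x.index(1.0)]
    return lam, x

A = [[0.5, 1.5, 0.0],    # assumed example: eigenvalues -1, 1, 2
     [1.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
lam, v = power_method(A, [1.0, 0.5, 0.25])
print(round(lam, 6))   # 2.0, the eigenvalue of largest magnitude
```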
Convergence means that when you normalize y, the new x is close enough to the old x to declare victory. You choose what is close enough to suit the requirements of your problem.

Notice that with the power algorithm, you can start with any arbitrary vector and the method will converge to the same result. Well, that is almost any starting vector. You might be unlucky and pick a starting vector that has a zero component α_n along the leading eigenvector v_n. But numerical calculations are usually subject to roundoff error, so even if you unwittingly started with α_n = 0, chances are very good that after a few hits with the matrix A, you develop a tiny nonzero α_n, and then it is just a matter of time before its coefficient grows to dominate the iteration.

3.4 Inverse power method

The inverse power method works with the inverse A⁻¹, assuming it exists. It is easy to check that the eigenvectors of the matrix A are also eigenvectors of its inverse, but the eigenvalues are the algebraic inverses:

    A⁻¹ v_i = µ_i v_i    where µ_i = 1/λ_i.

So now the eigenvalue µ_i with the largest magnitude corresponds to the eigenvalue λ_i with the smallest magnitude.

So we can get the largest and smallest eigenvalues. How do we get the ones between? For a matrix whose eigenvalues are all real, we can do this by generalizing the inverse power method. We take the inverse of the shifted matrix (A - qI), where q is any number we like. (We intend to vary q.) The eigenvectors of this matrix are still the same as the eigenvectors of A:

    (A - qI)⁻¹ v_i = µ_i v_i    where, now, µ_i = 1/(λ_i - q).

Which is the largest µ_i? It depends on q. If q is close to one of the λ_i's, then µ_i is maximum for that i. So if we hold that q fixed and run the power method, we eventually get the eigenvector v_i. Then we change q and rerun the power method. It's like tuning a radio dial. As q gets close to a new eigenvalue, we get the next broadcast station, i.e. the next eigenvector. If we keep going, eventually, we get them all.
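A sketch of the shifted inverse power method (not from the notes). Each iteration applies (A - qI)⁻¹ by solving a small linear system, here with Gauss-Jordan elimination; the shift q tunes which eigenvalue the iteration locks onto:

```python
def solve(A, b):
    """Solve A x = b by Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        M[k] = [v / M[k][k] for v in M[k]]
        for i in range(n):
            if i != k:
                f = M[i][k]
                M[i] = [vi - f * vk for vi, vk in zip(M[i], M[k])]
    return [row[n] for row in M]

def shifted_inverse_power(A, q, y, iters=50):
    """Converge to the eigenvalue of A closest to the shift q."""
    n = len(A)
    B = [[A[i][j] - (q if i == j else 0.0) for j in range(n)] for i in range(n)]
    for _ in range(iters):
        lead = max(y, key=abs)           # infinity-norm normalization
        x = [v / lead for v in y]
        y = solve(B, x)                  # y = (A - qI)^{-1} x
    mu = max(y, key=abs)                 # leading component ~ 1/(lambda - q)
    return q + 1.0 / mu

A = [[0.5, 1.5, 0.0],    # assumed example: eigenvalues -1, 1, 2
     [1.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
print(shifted_inverse_power(A, 0.9, [1.0, 0.2, 0.7]))   # close to 1.0
print(shifted_inverse_power(A, 1.8, [1.0, 0.2, 0.7]))   # close to 2.0
```

Sweeping q along the real line picks up each eigenvalue in turn, like the radio dial described above.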
Clearly, if any of the eigenvalues are complex, we would have a lot of searching to do, because we'd need to search the entire complex plane, and not just the real line interval between λ_1 and λ_n. There are better methods.

3.5 Other methods: QR algorithm

The power and inverse power methods are simple and very easy to implement, but if you want all the eigenvalues, those methods are very inefficient. There are other, much more sophisticated and efficient methods, though. Here we describe in broad terms the Householder/QR algorithm for real symmetric matrices. For details, please see standard texts in numerical methods.

A real symmetric matrix A can be put into diagonal form by a real orthogonal similarity transform. In other words, there exists a real orthogonal matrix Q such that the product
(similarity transform)

    Λ = Q̃AQ    (1)

is a diagonal matrix Λ. (An orthogonal matrix Q is one whose transpose Q̃ is its inverse: Q̃Q = QQ̃ = 1.) This solves the problem, because the eigenvalues of the matrix A are the diagonal values in Λ, and the eigenvectors are the column vectors of Q. We say that the transform Q diagonalizes the matrix.

Of course, finding the transform Q is a challenge. With the Householder/QR algorithm it is done through an iterative process that eventually converges to the answer. The first step in the process, the Householder step, is to find an orthogonal similarity transform that puts the matrix in tridiagonal form. This can be done exactly in a finite number of steps. The Householder method finds a matrix P that is not only orthogonal, it is symmetric (P̃ = P):

    Â = PAP.    (2)

The matrix Â is tridiagonal (and real and symmetric).

In the next phase, the QR phase, we apply a succession of orthogonal similarity transforms Q(i) on the tridiagonal matrix that make the off-diagonal values smaller. Eventually they become small enough that we can say it is diagonal for all intents and purposes. The first similarity transform is applied to the tridiagonal matrix Â:

    Â(1) = Q̃(1) Â Q(1).    (3)

The transform is constructed so the resulting matrix Â(1) is still tridiagonal, but the off-diagonal elements are smaller. Then we apply the second similarity transform to the result above:

    Â(2) = Q̃(2) Â(1) Q(2).    (4)

We keep going until eventually Â(n), for large n, is close enough to a diagonal matrix that we can call it our Λ. Putting all the transforms together, we get

    Λ = lim_{n→∞} Q̃(n) ... Q̃(2) Q̃(1) P A P Q(1) Q(2) ... Q(n)    (5)

It is easy to show that the product of real orthogonal matrices is also real orthogonal. So the product

    Q = P Q(1) Q(2) ... Q(n)    (6)

is the orthogonal matrix that diagonalizes A. That is what we wanted.

So where does the R in the QR algorithm come in?
At each step the tridiagonal matrix Â(i) is factored into a product of an orthogonal matrix Q(i) and an upper triangular matrix R(i):

    Â(i) = Q(i) R(i).    (7)

Hence the name QR. This factorization can be done exactly with a finite number of steps. Then one can show that the combination

    Â(i+1) = Q̃(i) Â(i) Q(i)    (8)
is tridiagonal. That gives us the next matrix in the sequence. The same factorization is done on it, and the process continues.

As we can see, finding the eigenvalues this way takes some work. It turns out that the rate of convergence depends on the spacing of the eigenvalues. If two eigenvalues are very close to each other (compared with their average spacing), more iterations are needed to get convergence. If they are well separated, fewer iterations are required.

The QR algorithm also works with (complex) Hermitian matrices (A† = A). This covers a vast number of cases in science and engineering. The eigenvalues are still real, but the similarity transform is unitary (Q†Q = QQ† = 1). Here the dagger means the complex conjugate transpose.
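In broad strokes, the QR phase looks like the following sketch (not from the notes). It skips the Householder tridiagonalization and uses a plain Gram-Schmidt QR factorization with unshifted iteration, which for a symmetric matrix drives the off-diagonal elements toward zero, leaving the eigenvalues on the diagonal:

```python
def qr_decompose(A):
    """QR factorization by classical Gram-Schmidt (columns assumed independent)."""
    n = len(A)
    Q = [[0.0] * n for _ in range(n)]
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = [A[i][j] for i in range(n)]              # j-th column of A
        for k in range(j):
            R[k][j] = sum(Q[i][k] * A[i][j] for i in range(n))
            v = [v[i] - R[k][j] * Q[i][k] for i in range(n)]
        R[j][j] = sum(vi * vi for vi in v) ** 0.5
        for i in range(n):
            Q[i][j] = v[i] / R[j][j]
    return Q, R

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def qr_eigenvalues(A, iters=200):
    """Unshifted QR iteration: factor A = QR, form RQ (a similarity
    transform of A), and repeat; the diagonal converges to the eigenvalues."""
    for _ in range(iters):
        Q, R = qr_decompose(A)
        A = matmul(R, Q)
    return sorted(A[i][i] for i in range(len(A)))

A = [[0.5, 1.5, 0.0],    # assumed symmetric example: eigenvalues -1, 1, 2
     [1.5, 0.5, 0.0],
     [0.0, 0.0, 1.0]]
print([round(v, 6) for v in qr_eigenvalues(A)])   # [-1.0, 1.0, 2.0]
```

Production implementations add the Householder reduction, shifts, and deflation precisely to get the faster and more robust convergence the notes describe.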
More informationSingular Value Decomposition
Chapter 5 Singular Value Decomposition We now reach an important Chapter in this course concerned with the Singular Value Decomposition of a matrix A. SVD, as it is commonly referred to, is one of the
More informationChapter 2:Determinants. Section 2.1: Determinants by cofactor expansion
Chapter 2:Determinants Section 2.1: Determinants by cofactor expansion [ ] a b Recall: The 2 2 matrix is invertible if ad bc 0. The c d ([ ]) a b function f = ad bc is called the determinant and it associates
More informationMTH 464: Computational Linear Algebra
MTH 464: Computational Linear Algebra Lecture Outlines Exam 2 Material Prof. M. Beauregard Department of Mathematics & Statistics Stephen F. Austin State University March 2, 2018 Linear Algebra (MTH 464)
More informationInverses and Determinants
Engineering Mathematics 1 Fall 017 Inverses and Determinants I begin finding the inverse of a matrix; namely 1 4 The inverse, if it exists, will be of the form where AA 1 I; which works out to ( 1 4 A
More informationLinear Least-Squares Data Fitting
CHAPTER 6 Linear Least-Squares Data Fitting 61 Introduction Recall that in chapter 3 we were discussing linear systems of equations, written in shorthand in the form Ax = b In chapter 3, we just considered
More informationIntroduction to Matrices
POLS 704 Introduction to Matrices Introduction to Matrices. The Cast of Characters A matrix is a rectangular array (i.e., a table) of numbers. For example, 2 3 X 4 5 6 (4 3) 7 8 9 0 0 0 Thismatrix,with4rowsand3columns,isoforder
More informationand let s calculate the image of some vectors under the transformation T.
Chapter 5 Eigenvalues and Eigenvectors 5. Eigenvalues and Eigenvectors Let T : R n R n be a linear transformation. Then T can be represented by a matrix (the standard matrix), and we can write T ( v) =
More informationEigenvalues, Eigenvectors, and Diagonalization
Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again Let us revisit the example from Week 4, in which we had a simple model for predicting the
More informationComputational Methods CMSC/AMSC/MAPL 460. Eigenvalues and Eigenvectors. Ramani Duraiswami, Dept. of Computer Science
Computational Methods CMSC/AMSC/MAPL 460 Eigenvalues and Eigenvectors Ramani Duraiswami, Dept. of Computer Science Eigen Values of a Matrix Recap: A N N matrix A has an eigenvector x (non-zero) with corresponding
More informationEigenvalues, Eigenvectors, and Diagonalization
Math 240 TA: Shuyi Weng Winter 207 February 23, 207 Eigenvalues, Eigenvectors, and Diagonalization The concepts of eigenvalues, eigenvectors, and diagonalization are best studied with examples. We will
More informationA Brief Outline of Math 355
A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting
More informationNotes on Linear Algebra
1 Notes on Linear Algebra Jean Walrand August 2005 I INTRODUCTION Linear Algebra is the theory of linear transformations Applications abound in estimation control and Markov chains You should be familiar
More informationMATH 310, REVIEW SHEET 2
MATH 310, REVIEW SHEET 2 These notes are a very short summary of the key topics in the book (and follow the book pretty closely). You should be familiar with everything on here, but it s not comprehensive,
More information1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )
Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical
More information(b) If a multiple of one row of A is added to another row to produce B then det(b) =det(a).
.(5pts) Let B = 5 5. Compute det(b). (a) (b) (c) 6 (d) (e) 6.(5pts) Determine which statement is not always true for n n matrices A and B. (a) If two rows of A are interchanged to produce B, then det(b)
More informationMath Matrix Algebra
Math 44 - Matrix Algebra Review notes - 4 (Alberto Bressan, Spring 27) Review of complex numbers In this chapter we shall need to work with complex numbers z C These can be written in the form z = a+ib,
More informationMath 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008
Math 520 Exam 2 Topic Outline Sections 1 3 (Xiao/Dumas/Liaw) Spring 2008 Exam 2 will be held on Tuesday, April 8, 7-8pm in 117 MacMillan What will be covered The exam will cover material from the lectures
More informationLinear Algebra Practice Final
. Let (a) First, Linear Algebra Practice Final Summer 3 3 A = 5 3 3 rref([a ) = 5 so if we let x 5 = t, then x 4 = t, x 3 =, x = t, and x = t, so that t t x = t = t t whence ker A = span(,,,, ) and a basis
More informationGaussian Elimination and Back Substitution
Jim Lambers MAT 610 Summer Session 2009-10 Lecture 4 Notes These notes correspond to Sections 31 and 32 in the text Gaussian Elimination and Back Substitution The basic idea behind methods for solving
More informationQueens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane.
Queens College, CUNY, Department of Computer Science Numerical Methods CSCI 361 / 761 Spring 2018 Instructor: Dr. Sateesh Mane c Sateesh R. Mane 2018 8 Lecture 8 8.1 Matrices July 22, 2018 We shall study
More informationLecture # 11 The Power Method for Eigenvalues Part II. The power method find the largest (in magnitude) eigenvalue of. A R n n.
Lecture # 11 The Power Method for Eigenvalues Part II The power method find the largest (in magnitude) eigenvalue of It makes two assumptions. 1. A is diagonalizable. That is, A R n n. A = XΛX 1 for some
More informationLinear Algebra Section 2.6 : LU Decomposition Section 2.7 : Permutations and transposes Wednesday, February 13th Math 301 Week #4
Linear Algebra Section. : LU Decomposition Section. : Permutations and transposes Wednesday, February 1th Math 01 Week # 1 The LU Decomposition We learned last time that we can factor a invertible matrix
More informationLinear Algebra Primer
Linear Algebra Primer D.S. Stutts November 8, 995 Introduction This primer was written to provide a brief overview of the main concepts and methods in elementary linear algebra. It was not intended to
More informationWe will discuss matrix diagonalization algorithms in Numerical Recipes in the context of the eigenvalue problem in quantum mechanics, m A n = λ m
Eigensystems We will discuss matrix diagonalization algorithms in umerical Recipes in the context of the eigenvalue problem in quantum mechanics, A n = λ n n, (1) where A is a real, symmetric Hamiltonian
More informationSolving Linear Systems of Equations
November 6, 2013 Introduction The type of problems that we have to solve are: Solve the system: A x = B, where a 11 a 1N a 12 a 2N A =.. a 1N a NN x = x 1 x 2. x N B = b 1 b 2. b N To find A 1 (inverse
More information22.4. Numerical Determination of Eigenvalues and Eigenvectors. Introduction. Prerequisites. Learning Outcomes
Numerical Determination of Eigenvalues and Eigenvectors 22.4 Introduction In Section 22. it was shown how to obtain eigenvalues and eigenvectors for low order matrices, 2 2 and. This involved firstly solving
More informationLinear Algebra Primer
Introduction Linear Algebra Primer Daniel S. Stutts, Ph.D. Original Edition: 2/99 Current Edition: 4//4 This primer was written to provide a brief overview of the main concepts and methods in elementary
More informationMATH 1120 (LINEAR ALGEBRA 1), FINAL EXAM FALL 2011 SOLUTIONS TO PRACTICE VERSION
MATH (LINEAR ALGEBRA ) FINAL EXAM FALL SOLUTIONS TO PRACTICE VERSION Problem (a) For each matrix below (i) find a basis for its column space (ii) find a basis for its row space (iii) determine whether
More informationA matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and
Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.
More information5.6. PSEUDOINVERSES 101. A H w.
5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and
More informationRemark By definition, an eigenvector must be a nonzero vector, but eigenvalue could be zero.
Sec 6 Eigenvalues and Eigenvectors Definition An eigenvector of an n n matrix A is a nonzero vector x such that A x λ x for some scalar λ A scalar λ is called an eigenvalue of A if there is a nontrivial
More informationG1110 & 852G1 Numerical Linear Algebra
The University of Sussex Department of Mathematics G & 85G Numerical Linear Algebra Lecture Notes Autumn Term Kerstin Hesse (w aw S w a w w (w aw H(wa = (w aw + w Figure : Geometric explanation of the
More informationMath 240 Calculus III
Generalized Calculus III Summer 2015, Session II Thursday, July 23, 2015 Agenda 1. 2. 3. 4. Motivation Defective matrices cannot be diagonalized because they do not possess enough eigenvectors to make
More informationA matrix is a rectangular array of. objects arranged in rows and columns. The objects are called the entries. is called the size of the matrix, and
Section 5.5. Matrices and Vectors A matrix is a rectangular array of objects arranged in rows and columns. The objects are called the entries. A matrix with m rows and n columns is called an m n matrix.
More informationSolution of Linear Equations
Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass
More informationThe determinant. Motivation: area of parallelograms, volume of parallepipeds. Two vectors in R 2 : oriented area of a parallelogram
The determinant Motivation: area of parallelograms, volume of parallepipeds Two vectors in R 2 : oriented area of a parallelogram Consider two vectors a (1),a (2) R 2 which are linearly independent We
More informationSolving a system by back-substitution, checking consistency of a system (no rows of the form
MATH 520 LEARNING OBJECTIVES SPRING 2017 BROWN UNIVERSITY SAMUEL S. WATSON Week 1 (23 Jan through 27 Jan) Definition of a system of linear equations, definition of a solution of a linear system, elementary
More informationCS 143 Linear Algebra Review
CS 143 Linear Algebra Review Stefan Roth September 29, 2003 Introductory Remarks This review does not aim at mathematical rigor very much, but instead at ease of understanding and conciseness. Please see
More informationQuantum Computing Lecture 2. Review of Linear Algebra
Quantum Computing Lecture 2 Review of Linear Algebra Maris Ozols Linear algebra States of a quantum system form a vector space and their transformations are described by linear operators Vector spaces
More informationExtra Problems for Math 2050 Linear Algebra I
Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as
More information9.1 Eigenanalysis I Eigenanalysis II Advanced Topics in Linear Algebra Kepler s laws
Chapter 9 Eigenanalysis Contents 9. Eigenanalysis I.................. 49 9.2 Eigenanalysis II................. 5 9.3 Advanced Topics in Linear Algebra..... 522 9.4 Kepler s laws................... 537
More informationMATH 240 Spring, Chapter 1: Linear Equations and Matrices
MATH 240 Spring, 2006 Chapter Summaries for Kolman / Hill, Elementary Linear Algebra, 8th Ed. Sections 1.1 1.6, 2.1 2.2, 3.2 3.8, 4.3 4.5, 5.1 5.3, 5.5, 6.1 6.5, 7.1 7.2, 7.4 DEFINITIONS Chapter 1: Linear
More informationMatrix Factorization and Analysis
Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their
More informationSOLVING LINEAR SYSTEMS
SOLVING LINEAR SYSTEMS We want to solve the linear system a, x + + a,n x n = b a n, x + + a n,n x n = b n This will be done by the method used in beginning algebra, by successively eliminating unknowns
More informationEigenvalues, Eigenvectors, and Diagonalization
Week12 Eigenvalues, Eigenvectors, and Diagonalization 12.1 Opening Remarks 12.1.1 Predicting the Weather, Again View at edx Let us revisit the example from Week 4, in which we had a simple model for predicting
More informationEigenvalue and Eigenvector Problems
Eigenvalue and Eigenvector Problems An attempt to introduce eigenproblems Radu Trîmbiţaş Babeş-Bolyai University April 8, 2009 Radu Trîmbiţaş ( Babeş-Bolyai University) Eigenvalue and Eigenvector Problems
More informationLinear Algebra Primer
Linear Algebra Primer David Doria daviddoria@gmail.com Wednesday 3 rd December, 2008 Contents Why is it called Linear Algebra? 4 2 What is a Matrix? 4 2. Input and Output.....................................
More informationA = 3 B = A 1 1 matrix is the same as a number or scalar, 3 = [3].
Appendix : A Very Brief Linear ALgebra Review Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics Very often in this course we study the shapes
More informationA VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 2010
A VERY BRIEF LINEAR ALGEBRA REVIEW for MAP 5485 Introduction to Mathematical Biophysics Fall 00 Introduction Linear Algebra, also known as matrix theory, is an important element of all branches of mathematics
More informationMath 304 Fall 2018 Exam 3 Solutions 1. (18 Points, 3 Pts each part) Let A, B, C, D be square matrices of the same size such that
Math 304 Fall 2018 Exam 3 Solutions 1. (18 Points, 3 Pts each part) Let A, B, C, D be square matrices of the same size such that det(a) = 2, det(b) = 2, det(c) = 1, det(d) = 4. 2 (a) Compute det(ad)+det((b
More informationThe Solution of Linear Systems AX = B
Chapter 2 The Solution of Linear Systems AX = B 21 Upper-triangular Linear Systems We will now develop the back-substitution algorithm, which is useful for solving a linear system of equations that has
More informationNotes on Determinants and Matrix Inverse
Notes on Determinants and Matrix Inverse University of British Columbia, Vancouver Yue-Xian Li March 17, 2015 1 1 Definition of determinant Determinant is a scalar that measures the magnitude or size of
More informationLecture 4: Linear Algebra 1
Lecture 4: Linear Algebra 1 Sourendu Gupta TIFR Graduate School Computational Physics 1 February 12, 2010 c : Sourendu Gupta (TIFR) Lecture 4: Linear Algebra 1 CP 1 1 / 26 Outline 1 Linear problems Motivation
More information