Numerical Linear Algebra


Numerical Linear Algebra
Lecture Notes 2014
Bärbel Janssen
October 15, 2014
Department of High Performance Computing
School of Computer Science and Communication
Royal Institute of Technology, KTH


Contents

1 Introduction
2 Condition and Stability
  2.1 Errors and machine precision
    2.1.1 Errors
    2.1.2 Floating-point representation
    2.1.3 Arithmetic operations and cancellation
  2.2 Conditioning
  2.3 Stability
3 Linear Algebra and Analysis
  3.1 Key definitions and notation
    3.1.1 Examples
  3.2 Linear systems
  3.3 Vector norms
  3.4 Matrix norms
  3.5 Orthogonal systems
  3.6 Non-quadratic linear systems
  3.7 Eigenvalues and eigenvectors
    3.7.1 Geometric and algebraic multiplicity
    3.7.2 Singular Value Decomposition
  3.8 Conditioning of linear algebraic systems
  3.9 Conditioning of eigenvalue problems
4 Direct Solution Methods
  4.1 Triangular linear systems
  4.2 Gaussian elimination
    4.2.1 An example for why pivoting is necessary
    4.2.2 Conditioning of Gaussian elimination
  4.3 Symmetric matrices
    4.3.1 Cholesky decomposition
  4.4 Orthogonal decomposition
    4.4.1 Householder Transformation
  4.5 Direct determination of eigenvalues
5 Iterative Solution Methods
  5.1 Fixed point iteration and defect correction
  5.2 Construction of iterative methods
  5.3 Descent methods
    5.3.1 Gradient method
    5.3.2 Conjugate gradient method (cg method)
6 Iterative Methods for Eigenvalue Problems
  6.1 Methods for the partial eigenvalue problem
    6.1.1 The Power method
    6.1.2 The inverse iteration
  6.2 Krylov space methods
    6.2.1 Reduction by Galerkin approximation
    6.2.2 Lanczos and Arnoldi method

1 Introduction

These lecture notes are based on a course given at the Royal Institute of Technology, KTH, in autumn 2014. During a prestudy week, problems are solved theoretically and small programs are implemented in Matlab. Then a compact course follows in which each day a new topic of numerical linear algebra is considered. The course ends with a week for solving bigger projects in Matlab.


2 Condition and Stability

Numerical methods require the use of computers, and therefore precision is limited. The methods we use have to be analyzed in view of this finite precision.

2.1 Errors and machine precision

In approximations, we would like to decide how close we are to a (possibly) exact solution. This usually means we are interested in the difference between the approximated and the real solution.

2.1.1 Errors

Definition 2.1 (Absolute error) Let $\tilde{x}$ be an approximation to $x$. The absolute error $e$ is defined as
$$e = \tilde{x} - x, \quad x \in \mathbb{R}, \qquad e = \|\tilde{x} - x\|, \quad x \in \mathbb{R}^n.$$
The absolute error $e$ might be big just because $x$ or $\tilde{x}$ is big. This gives rise to the following definition.

Definition 2.2 (Relative error) For $x \neq 0$, we define the relative error between $x$ and its approximation $\tilde{x}$ as
$$e_R = \frac{\tilde{x} - x}{x}, \quad x \in \mathbb{R}, \qquad e_R = \frac{\|\tilde{x} - x\|}{\|x\|}, \quad x \in \mathbb{R}^n.$$

Errors may have different causes, but one of them is the already mentioned finite precision. This is connected with the computer's representation of numbers.

2.1.2 Floating-point representation

In scientific computations, floating-point numbers are typically used. The three-digit floating-point representation of $\pi = 3.14159\ldots$ is $.314 \cdot 10^1$.

Definition 2.3 (Decimal floating-point number) In this representation, the fraction $.314$ is called the mantissa, $10$ is called the base, and $1$ is called the exponent. Generally, $n$-digit decimal floating-point numbers have the form
$$\pm . d_1 d_2 \ldots d_n \cdot 10^p,$$

where the digits $d_1$ through $d_n$ are integers between 0 and 9 and $d_1$ is never zero except in the case $d_1 = d_2 = \cdots = d_n = 0$.

Generally, a floating-point number can be represented using a base $b$ as
$$\pm . d_1 d_2 \ldots d_n \cdot b^p,$$
where each digit is between 0 and $b - 1$. For the decimal system, $b = 10$, while $b = 2$ in the binary system, where the digits are 0 or 1. If a number can be written in that sense, we call it representable. Before a computation with any number $x$, we have to convert it into a representable number $\tilde{x}$: $\tilde{x} = \mathrm{fl}(x)$. The relation between the exact value $x$ and its representable version $\mathrm{fl}(x)$ has the form
$$\mathrm{fl}(x) = x(1 + \delta),$$
where $\delta$ may be positive, negative or zero.

2.1.3 Arithmetic operations and cancellation

Errors in finite arithmetic do not only occur in representing numbers but also in performing arithmetic operations. The result of an operation may not be representable even if both input numbers were representable. The result then needs to be rounded or truncated to be representable. The result of a floating-point operation satisfies
$$\mathrm{fl}(a \;\mathrm{op}\; b) = (a \;\mathrm{op}\; b)(1 + \delta),$$
where op is one of the basic operations $+, -, \cdot, :$. Usually, properties of exact mathematical operations are not valid in floating-point arithmetic, e.g.
$$\mathrm{fl}(\mathrm{fl}(x + y) + z) \neq \mathrm{fl}(x + \mathrm{fl}(y + z)).$$

A well known example for loss of precision occurs in the computation of the exponential of an arbitrary number $x$. Since the convergence radius of the exponential series is infinity, which means it converges for all numbers $x \in \mathbb{R}$, the power-series expansion
$$\exp(x) = \sum_{k=0}^{\infty} \frac{x^k}{k!} = 1 + x + \frac{1}{2!} x^2 + \frac{1}{3!} x^3 + \cdots \qquad (2.1)$$
is a good candidate for computations. However, for negative values of $x$, the convergence behaviour is very bad, with a high relative error. When the expansion eq. (2.1) is applied to a negative argument, the terms alternate in sign, so subtraction is required to compute the approximation. The large relative error is caused by rounding in combination with subtraction. When close floating-point numbers are subtracted, the leading digits of their mantissas cancel (hence the term cancellation). With exact arithmetic, the low-order digits would in general be nonzero. However, this information has been discarded before the subtraction in order to represent the numbers.

Remark 2.1 Mathematically equivalent methods may differ dramatically in their sensitivity to rounding errors.
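A few lines of Matlab illustrate both effects (a minimal sketch added for illustration; the argument $x = -20$ and the truncation after 150 terms are arbitrary choices): the first line shows that adding a sufficiently small number to 1 changes nothing, and the loop shows how summing the series (2.1) for a negative argument loses many digits, while computing $1/\exp(20)$ does not.

    % Machine precision: adding a sufficiently small number changes nothing
    fprintf('eps = %.2e, (1 + eps/4) == 1 is %d\n', eps, (1 + eps/4) == 1);

    % Cancellation: evaluate exp(-20) from the power series (2.1)
    x = -20;
    s = 1; term = 1;
    for k = 1:150
        term = term * x / k;              % term = x^k / k!
        s = s + term;
    end
    fprintf('series for exp(-20): %.16e\n', s);
    fprintf('1/exp(20):           %.16e\n', 1 / exp(20));
    fprintf('relative error of the series: %.2e\n', abs(s - exp(x)) / exp(x));

The partial sums of the alternating series grow to about $10^7$ before shrinking back to $2 \cdot 10^{-9}$, so the rounding errors of the large intermediate terms dominate the tiny final result.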

2.2 Conditioning

When analyzing algorithms for mathematical problems we also have to know about the conditioning of the problem itself.

Definition 2.4 (Condition) A problem is well conditioned if small changes in problem parameters produce small changes in the outcome.

Thus, to decide whether a problem is well conditioned, we make a list of the problem's parameters, we change each parameter slightly, and we observe the outcome. If the new outcome deviates from the old one just a little, the problem is well conditioned.

Remark 2.2 The property of being well or ill conditioned is a property of the problem itself. It has nothing to do with an algorithm applied to solve a certain problem. In particular, it is independent of computation and rounding errors.

Example 2.1 Let us consider the problem of determining the roots of the polynomial
$$(x - 1)^4 = 0, \qquad (2.2)$$
whose four roots are all exactly equal to 1. Now we suppose that the right-hand side is perturbed by $10^{-8}$ such that the perturbed problem of eq. (2.2) reads
$$(x - 1)^4 = 10^{-8}, \qquad (2.3)$$
and it can be equivalently written as
$$(x - 1)^4 - 10^{-8} = \left( x - 1 - \frac{1}{100} \right) \left( x - 1 + \frac{1}{100} \right) \left( x - 1 + \frac{i}{100} \right) \left( x - 1 - \frac{i}{100} \right),$$
where $i \in \mathbb{C}$ is the imaginary unit with $i^2 = -1$. That means one root has changed by $10^{-2}$ compared to the root of eq. (2.2). The change in the solution is six orders of magnitude larger than the size of the perturbation. So we say that this problem is ill conditioned.
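The ill-conditioning of example 2.1 can be observed numerically with a short Matlab experiment (an added sketch; it only reproduces the perturbation of the constant coefficient described above):

    % Roots of (x-1)^4 = 0 versus the perturbed problem (x-1)^4 = 1e-8.
    % Coefficients of (x-1)^4 = x^4 - 4x^3 + 6x^2 - 4x + 1.
    p  = [1 -4 6 -4 1];
    pp = p;
    pp(end) = pp(end) - 1e-8;       % move 1e-8 to the left-hand side
    r  = roots(p);                  % four roots (numerically) equal to 1
    rp = roots(pp);                 % roots of the perturbed polynomial
    disp([r rp]);
    fprintf('size of the root changes: about %.1e\n', max(abs(sort(r) - sort(rp))));

The printed change is of the order $10^{-2}$, six orders of magnitude larger than the perturbation, exactly as predicted.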

2.3 Stability

Let us turn to the analysis of algorithms. We expect them to give a correct answer, even if small errors are made.

Definition 2.5 (Stability) An algorithm is stable if small changes in algorithm parameters have a small effect on the algorithm's output.

Example 2.2 Consider the example of solving the equation $x = e^{-x}$, or equivalently $x = -\ln(x)$. The iteration procedure
$$x_{k+1} = -\ln(x_k) \qquad (2.4)$$
produces approximations which diverge from the root, except if you choose the solution $\bar{x}$ as the starting guess $x_1 = \bar{x}$. This algorithm is unstable since each iteration amplifies the error. The iteration cobweb in fig. 2.1 shows a few iteration steps beginning with $x_1 = 1/2$.

[Figure 2.1: Cobweb for the iteration eq. (2.4).]

In contrast to the unstable iteration in eq. (2.4), let us look at a stable iteration, which is given as
$$x_{k+1} = \exp(-x_k). \qquad (2.5)$$
Beginning with the starting value $x_1 = 1$, this algorithm produces iterates that converge to the root. The first few iterates are shown in the cobweb in fig. 2.2. This is an example of a stable algorithm.

[Figure 2.2: Cobweb for the iteration eq. (2.5).]
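The contrast between the two iterations is easy to reproduce in Matlab (an added sketch; the starting values 1/2 and 1 are the ones used in the figures):

    % Unstable iteration x_{k+1} = -log(x_k) versus stable x_{k+1} = exp(-x_k).
    % The fixed point of both is the root of x = exp(-x), roughly 0.5671.
    x = 0.5; y = 1;
    for k = 1:4
        x = -log(x);      % eq. (2.4): moves away from the root
        y = exp(-y);      % eq. (2.5): approaches the root
        fprintf('k=%d   unstable: % .4f   stable: %.4f\n', k, x, y);
    end
    % the next step of eq. (2.4) would already require the logarithm of a negative number

After four steps the unstable iterates have left the interval of interest entirely, while the stable iterates oscillate ever closer around 0.5671.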


3 Linear Algebra and Analysis

In this chapter we introduce the notation which is used throughout these lecture notes. For convenience, we recall the most important definitions and results. However, we refer to standard literature for proofs.

3.1 Key definitions and notation

The elements of finite dimensional real vector spaces are denoted by
$$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} \in \mathbb{R}^n$$
with $x_i \in \mathbb{R}$. If we want to refer to this vector as a row vector, we simply write $x^T = (x_1, \ldots, x_n)$. For two vectors $x, y \in \mathbb{R}^n$, addition and scalar multiplication are defined as
$$x + y := (x_1 + y_1, \ldots, x_n + y_n), \qquad \alpha x := (\alpha x_1, \ldots, \alpha x_n).$$

Definition 3.1 (Linear independence) A set of vectors $v_1, \ldots, v_k \in \mathbb{R}^n$ is called linearly independent if
$$\sum_{i=1}^{k} c_i v_i = 0 \;\Rightarrow\; c_i = 0 \quad \text{for } i = 1, \ldots, k.$$

The rank of $A$, $\operatorname{rank}(A)$, is the number of linearly independent columns of $A$. This is the ingredient for the next definition of a basis.

Definition 3.2 (Basis) A set of $n$ vectors $v_1, \ldots, v_n \in \mathbb{R}^n$ is called a basis of $\mathbb{R}^n$ if they are linearly independent. We say that the basis spans $\mathbb{R}^n$. Any element of $\mathbb{R}^n$ can be uniquely written as
$$x = \sum_{i=1}^{n} c_i v_i, \quad c_i \in \mathbb{R}, \quad \text{for } i = 1, \ldots, n.$$

Using Zorn's Lemma, one can show that every vector space possesses a basis. A special basis of $\mathbb{R}^n$ is formed by the unit vectors $\{e_1, \ldots, e_n\}$, where
$$e_i = \begin{pmatrix} \delta_{1i} \\ \vdots \\ \delta_{ni} \end{pmatrix}, \qquad \delta_{ij} = \begin{cases} 1 & \text{for } i = j, \\ 0 & \text{for } i \neq j. \end{cases}$$
The symbol $\delta_{ij}$ is called the Kronecker symbol.

Definition 3.3 (Euclidean scalar product) The function $(\cdot, \cdot) : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}$ defined by
$$(x, y) = y^T x = x^T y = \sum_{i=1}^{n} c_i d_i,$$
where $x = \sum_{i=1}^{n} c_i e_i$ and $y = \sum_{i=1}^{n} d_i e_i$, is called the Euclidean scalar product. Two vectors $x$ and $y$ of $\mathbb{R}^n$ are orthogonal if $(x, y) = 0$.

This whole lecture deals with linear transformations. The definition is the following:

Definition 3.4 (Linearity) Let $A$ be a transformation between two vector spaces, $A : \mathbb{R}^n \to \mathbb{R}^m$. We call $A$ linear if
$$A(\alpha x + \beta y) = A(\alpha x) + A(\beta y) = \alpha A(x) + \beta A(y)$$
holds true for every $\alpha, \beta \in \mathbb{R}$ and for every $x, y \in \mathbb{R}^n$.

In the next definition we turn to linear transformations and matrices.

Definition 3.5 (Linear transformation and matrices) Let $\{v_1, \ldots, v_n\}$ and $\{w_1, \ldots, w_m\}$ be bases of $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively. Relative to these bases, a linear transformation $A : \mathbb{R}^n \to \mathbb{R}^m$ is represented by the matrix $A$ having $m$ rows and $n$ columns,
$$A = \begin{pmatrix} a_{11} & a_{12} & \ldots & a_{1n} \\ a_{21} & a_{22} & \ldots & a_{2n} \\ \vdots & \vdots & & \vdots \\ a_{m1} & a_{m2} & \ldots & a_{mn} \end{pmatrix},$$
where the coefficients $a_{ij}$ of the matrix $A$ are uniquely defined by the relations
$$A v_j = \sum_{i=1}^{m} a_{ij} w_i, \quad j = 1, \ldots, n.$$

A matrix with elements $a_{ij}$ is written as $A = (a_{ij})$. Given a matrix $A$, $(A)_{ij}$ denotes the element in the $i$th row and the $j$th column. The transpose of $A$ is uniquely defined by the relations
$$(Ax, y) = \left( x, A^T y \right) \quad \text{for every } x \in \mathbb{R}^n, \; y \in \mathbb{R}^m,$$

15 3.1 Key definitions and notation which imply that ( A T ) ij = a ji. It is called symmetric if A = A T. For (square) matrices, addition and scalar multiplication is defined elementwise. Let then we have A = (a ij ), B = (b ij ), α R αa + B = (αa ij + b ij ) as well as multiplications between a matrix and a vector or between two matrices as Examples Ax = n a ik x k R n, AB = k=1 n a ik b kj R n n. In order to better understand the definitions, let us look at some examples. Example 3.1 (Transpose) Given a matrix A = k=1 ( ) 1 2 3, then its transpose is A T = , 1 6 Only matrices of the same dimensions can be added as the next example shows. Example 3.2 (Addition of matrices) ( ) ( ) = ( 1 8 ) In the case of matrix vector multiplication the dimensions have to be suitable. Example 3.3 (Matrix vector multiplication) a 11 a 12 ( ) a 11 x 1 + a 12 x 2 a 21 a 22 x1 = a x 21 x 1 + a 22 x 2, a 31 a 2 32 a 31 x 1 + a 32 x 2 or a more concrete example if A = ( 3 1 ) and x = 2, 3 then Ax = ( ) = ( ) In the same manner two matrices of suitable dimensions are multiplied. 15

Example 3.4 (Matrix matrix multiplication) Let A = ( ), B = ( ), then AB = ( ).

3.2 Linear systems

One focus of this course is to solve linear systems like
$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{21} x_1 + a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
&\;\;\vdots \\
a_{m1} x_1 + a_{m2} x_2 + \cdots + a_{mn} x_n &= b_m
\end{aligned} \qquad (3.1)$$
numerically. Direct and iterative methods will be presented and explained to numerically compute a solution. Before applying sophisticated methods, we have to make sure that a solution exists. Therefore, we recall the known results from basic Linear Algebra. We present criteria of solvability for rectangular as well as quadratic systems. The system eq. (3.1) is solvable if and only if $\operatorname{rank}(A) = \operatorname{rank}(A, b)$ with the composed matrix
$$\left( A, b \right) = \begin{pmatrix} a_{11} & \ldots & a_{1n} & b_1 \\ \vdots & & \vdots & \vdots \\ a_{m1} & \ldots & a_{mn} & b_m \end{pmatrix}.$$
In case the matrix $A$ is quadratic, the following criteria for solvability of eq. (3.1) are equivalent:

- $Ax = 0$ implies $x = 0$.
- $\operatorname{rank}(A) = n$.
- $\det(A) \neq 0$.
- All eigenvalues of $A$ are nonzero.
- The inverse $A^{-1}$ of $A$ exists and it holds $A^{-1} A = A A^{-1} = I$.

For the definition of an eigenvalue, see section 3.7.

3.3 Vector norms

Using numerical methods with limited precision and roundoff errors, we have to decide how far away the computed solution is from the exact solution. This means we have to measure distances, for example between vectors. In this section we provide the necessary tools.

Definition 3.6 (Norm) A mapping $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ is a (vector) norm if it fulfills the following properties:

(i) $\|x\| \geq 0$ for any vector $x \in \mathbb{R}^n$, and $\|x\| = 0 \Leftrightarrow x = 0$.
(ii) For any scalar $\alpha \in \mathbb{R}$ and any vector $x \in \mathbb{R}^n$, it holds $\|\alpha x\| = |\alpha| \, \|x\|$.
(iii) For any two vectors $x \in \mathbb{R}^n$ and $y \in \mathbb{R}^n$, the triangle inequality holds: $\|x + y\| \leq \|x\| + \|y\|$.

A vector space equipped with a norm is called a normed vector space.

Example 3.5 (Vector norms) A standard example for a vector norm is the Euclidean norm
$$\|x\|_2 := \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2}.$$
Other examples are the maximum norm and the $l_1$ norm, or the general $l_p$ norms for $1 \leq p < \infty$,
$$\|x\|_\infty := \max_{1 \leq i \leq n} |x_i|, \qquad \|x\|_p := \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}.$$

Let us compute norms of the same vector with respect to different vector norms.

Example 3.6 Consider the vector
$$x = \begin{pmatrix} 1 \\ -2 \end{pmatrix}, \quad \text{then} \quad \|x\|_1 = 3, \quad \|x\|_2 = \sqrt{5} \quad \text{and} \quad \|x\|_\infty = 2.$$

Different norms of the same vector can give different values. However, they cannot differ arbitrarily, as the following theorem shows.

Theorem 3.1 (Equivalence of norms) All norms on the finite dimensional vector space $\mathbb{R}^n$ are equivalent to the maximum norm, i.e., for each norm $\|\cdot\|$ there exist positive constants $m$ and $M$ such that
$$m \, \|x\|_\infty \leq \|x\| \leq M \, \|x\|_\infty, \quad x \in \mathbb{R}^n.$$

In the following we state some of the most important inequalities used throughout this course.

Lemma 3.1 (Hölder's inequality) For $1 < p, q < \infty$ with $1/p + 1/q = 1$ and $x, y \in \mathbb{R}^n$, the inequality
$$\sum_{i=1}^{n} |x_i y_i| \leq \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p} \left( \sum_{i=1}^{n} |y_i|^q \right)^{1/q}$$
holds. In the case $p = q = 2$ this inequality is called the Cauchy-Schwarz inequality.

Lemma 3.2 (Cauchy-Schwarz inequality)
$$\sum_{i=1}^{n} |x_i y_i| \leq \left( \sum_{i=1}^{n} |x_i|^2 \right)^{1/2} \left( \sum_{i=1}^{n} |y_i|^2 \right)^{1/2}.$$
In other references, you can find this inequality also written as
$$(x, y)^2 \leq (x, x)(y, y) = \|x\|_2^2 \, \|y\|_2^2,$$
or as
$$|x^T y| \leq \|x\|_2 \, \|y\|_2.$$

Lemma 3.3 (Minkowski inequality) For $1 \leq p \leq \infty$ the inequality
$$\|x + y\|_p \leq \|x\|_p + \|y\|_p, \quad x, y \in \mathbb{R}^n,$$
holds.

For scalars the following inequality is often used.

Lemma 3.4 (Young inequality) For $1 < p, q < \infty$ with $1/p + 1/q = 1$ the inequality
$$|x y| \leq \frac{|x|^p}{p} + \frac{|y|^q}{q}, \quad x, y \in \mathbb{R},$$
holds.

3.4 Matrix norms

Definition 3.7 (Matrix norm) A mapping $\|\cdot\| : \mathbb{R}^{n \times n} \to \mathbb{R}$ is a matrix norm if it fulfills the following properties:

(i) $\|A\| \geq 0$ for any matrix $A \in \mathbb{R}^{n \times n}$, and $\|A\| = 0 \Leftrightarrow A = 0$.
(ii) For any scalar $\alpha \in \mathbb{R}$ and any matrix $A \in \mathbb{R}^{n \times n}$, it holds $\|\alpha A\| = |\alpha| \, \|A\|$.
(iii) For any two matrices $A \in \mathbb{R}^{n \times n}$ and $B \in \mathbb{R}^{n \times n}$, the triangle inequality holds: $\|A + B\| \leq \|A\| + \|B\|$.

If a matrix norm in addition also satisfies
$$\|AB\| \leq \|A\| \, \|B\|,$$
it is called submultiplicative or consistent. A matrix norm for any $A \in \mathbb{R}^{n \times n}$ can be created from any vector norm $\|\cdot\|$ on $\mathbb{R}^n$ by
$$\|A\| := \sup_{x \in \mathbb{R}^n, \, x \neq 0} \frac{\|Ax\|}{\|x\|}. \qquad (3.2)$$
From this definition, it follows directly that $\|Ax\| \leq \|A\| \, \|x\|$. All matrix norms generated like this are called natural matrix norms. These norms all comply with

(i) $\|Ax\| \leq \|A\| \, \|x\|$ for every $x \in \mathbb{R}^n$, which follows directly from eq. (3.2),
(ii) $\|AB\| \leq \|A\| \, \|B\|$ for every $A, B \in \mathbb{R}^{n \times n}$,
(iii) the norm of the identity matrix, $\|I\| = 1$.

Theorem 3.2 (Natural matrix norms) Let $A = (a_{ij})$ be a square matrix. Then
$$\|A\|_1 = \sup_{x \in \mathbb{R}^n, \, x \neq 0} \frac{\|Ax\|_1}{\|x\|_1} = \max_{1 \leq j \leq n} \sum_{i=1}^{n} |a_{ij}|, \qquad
\|A\|_\infty = \sup_{x \in \mathbb{R}^n, \, x \neq 0} \frac{\|Ax\|_\infty}{\|x\|_\infty} = \max_{1 \leq i \leq n} \sum_{j=1}^{n} |a_{ij}|,$$
and
$$\|A\|_2 = \sup_{x \in \mathbb{R}^n, \, x \neq 0} \frac{\|Ax\|_2}{\|x\|_2}.$$

3.5 Orthogonal systems

We introduced the definition of orthogonality: two vectors $x$ and $y \in \mathbb{R}^n$ are orthogonal if $(x, y) = 0$. For two subspaces $N, M \subset \mathbb{R}^n$ we define orthogonality as
$$(x, y) = 0, \quad x \in N, \; y \in M.$$

Definition 3.8 (Orthogonal projection) Let $M \subset \mathbb{R}^n$ be a nontrivial subspace. For any vector $x \in \mathbb{R}^n$ the orthogonal projection $P_M x \in M$ is defined by
$$\|x - P_M x\|_2 = \min_{y \in M} \|x - y\|_2.$$
This best approximation property is equivalent to the relation
$$(x - P_M x, y) = 0 \quad \forall y \in M,$$
which can be used to compute $P_M x$.
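As a small illustration (an added Matlab sketch; the subspace and the vector are arbitrary), let M be spanned by the columns of a matrix V with linearly independent columns. The characterizing relation above leads to the formula P_M x = V (V^T V)^{-1} V^T x, which the following lines evaluate and check:

    % Orthogonal projection of x onto M = span of the columns of V
    V = [1 0; 1 1; 0 2];                 % basis of a two-dimensional subspace of R^3
    x = [3; -1; 2];
    PMx = V * ((V' * V) \ (V' * x));     % solve the small system (V'V) c = V'x, then PMx = V c
    disp(PMx);
    disp(V' * (x - PMx));                % the relation (x - PMx, y) = 0 for all y in M: numerically zero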

A set of vectors $\{v_1, \ldots, v_m\}$, $v_i \neq 0$, of $\mathbb{R}^n$ which are mutually orthogonal, $(v_k, v_l) = 0$ for $k \neq l$, is necessarily linearly independent.

Definition 3.9 (Orthogonal system) A set of vectors $\{v_1, \ldots, v_m\}$, $v_i \neq 0$, of $\mathbb{R}^n$ which are mutually orthogonal, $(v_k, v_l) = 0$ for $k \neq l$, is called an orthogonal system and, in the case $m = n$, an orthogonal basis. If in addition also $(v_k, v_k) = 1$ for $k = 1, \ldots, m$, the system is called an orthonormal system or orthonormal basis, respectively.

Often it is necessary to transform an existing system of vectors into an orthogonal or orthonormal one. The following theorem states how.

Theorem 3.3 (Gram-Schmidt algorithm) Let $\{v_1, \ldots, v_n\}$ be any basis of $\mathbb{R}^n$. Then
$$b_1 := \frac{v_1}{\|v_1\|}, \qquad \tilde{b}_k := v_k - \sum_{j=1}^{k-1} (v_k, b_j) b_j, \qquad b_k := \frac{\tilde{b}_k}{\|\tilde{b}_k\|}, \quad k = 2, \ldots, n,$$
yields an orthonormal basis $\{b_1, \ldots, b_n\}$ of $\mathbb{R}^n$. This algorithm is called the Gram-Schmidt algorithm.

Definition 3.10 (Orthonormal matrix) A matrix $Q \in \mathbb{R}^{m \times n}$ is called orthogonal or orthonormal if its column vectors form an orthogonal or orthonormal system, respectively. In the case $m = n$ such a matrix is called unitary.

Lemma 3.5 A unitary matrix $Q \in \mathbb{R}^{n \times n}$ is regular and its inverse is $Q^{-1} = Q^T$. Further it holds
$$(Qx, Qy) = (x, y), \quad x, y \in \mathbb{R}^n, \qquad \|Qx\|_2 = \|x\|_2, \quad x \in \mathbb{R}^n. \qquad (3.3)$$

Definition 3.11 (Range and null space) For a general matrix $A \in \mathbb{R}^{m \times n}$, we define its range and null space (or kernel) as
$$\operatorname{range}(A) := \{ y \in \mathbb{R}^m : y = Ax, \; x \in \mathbb{R}^n \}, \qquad \operatorname{kern}(A) := \{ x \in \mathbb{R}^n : Ax = 0 \},$$
respectively.

Definition 3.12 A quadratic matrix is called normal if $A^T A = A A^T$. It is called positive semi-definite if
$$(Ax, x) \in \mathbb{R}, \quad (Ax, x) \geq 0, \quad x \in \mathbb{R}^n,$$
and positive definite if
$$(Ax, x) \in \mathbb{R}, \quad (Ax, x) > 0, \quad x \in \mathbb{R}^n \setminus \{0\}.$$
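A direct Matlab transcription of theorem 3.3 could look as follows (an added sketch for illustration; it assumes the columns of V are linearly independent, and in practice the more stable Householder approach of section 4.4 is preferred):

    function B = gram_schmidt(V)
    % GRAM_SCHMIDT  Orthonormalize the columns of V column by column.
        [n, m] = size(V);
        B = zeros(n, m);
        B(:,1) = V(:,1) / norm(V(:,1));
        for k = 2:m
            bk = V(:,k);
            for j = 1:k-1
                bk = bk - (V(:,k)' * B(:,j)) * B(:,j);   % subtract the projection onto b_j
            end
            B(:,k) = bk / norm(bk);                      % normalize
        end
    end

For example, B = gram_schmidt([1 1; 1 0; 0 1]) returns two orthonormal columns spanning the same subspace as the input, and B'*B equals the identity matrix up to rounding.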

Remark 3.1 In view of (3.3), we remark that the Euclidean scalar product and the Euclidean vector norm are invariant under unitary transformations.

Example 3.7 (Unitary matrix) The real unitary matrix $Q_\theta^{(ij)}$ which agrees with the identity matrix except for the four entries
$$\big(Q_\theta^{(ij)}\big)_{ii} = \cos\theta, \quad \big(Q_\theta^{(ij)}\big)_{ij} = -\sin\theta, \quad \big(Q_\theta^{(ij)}\big)_{ji} = \sin\theta, \quad \big(Q_\theta^{(ij)}\big)_{jj} = \cos\theta,$$
describes a rotation in the $(x_i, x_j)$-plane about the origin with angle $\theta \in [0, 2\pi)$.

3.6 Non-quadratic linear systems

Let $A \in \mathbb{R}^{m \times n}$ be a not necessarily quadratic matrix and $b \in \mathbb{R}^m$ a right-hand side vector. In this section we consider solving the non-quadratic system
$$Ax = b \qquad (3.4)$$
for $x \in \mathbb{R}^n$, where $m \geq n$. We need a new notion of a solution to the linear system eq. (3.4), as the following example shows:
$$A = \begin{pmatrix} 1 \\ 1 \\ 1 \end{pmatrix} \quad \text{and} \quad b = \begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}.$$
There exists no vector $x$ such that eq. (3.4) is exactly fulfilled. Instead of looking for an exact solution, we seek a vector $\bar{x}$ such that $A\bar{x}$ is as close to $b$ as possible. In mathematical terms, this means we look for a vector $\bar{x}$ such that $\|b - A\bar{x}\|_2$ is minimized. In the case that $\bar{x}$ is an exact solution to eq. (3.4), it also minimizes $\|b - A\bar{x}\|_2$.

Theorem 3.4 (Least squares) There exists a solution $\bar{x}$ of eq. (3.4) in the sense that
$$\|A\bar{x} - b\|_2 = \min_{x \in \mathbb{R}^n} \|Ax - b\|_2.$$
This is equivalent to $\bar{x}$ being a solution of the normal equation
$$A^T A \bar{x} = A^T b.$$
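For a small overdetermined system the normal equation of theorem 3.4 can be solved in a couple of Matlab lines (an added sketch; the data are the ones of the example above, and the backslash operator returns the same least-squares solution for a rectangular system):

    A = [1; 1; 1];
    b = [1; 2; 3];
    xbar = (A' * A) \ (A' * b);     % normal equation  A'A x = A'b
    fprintf('least squares solution: %g\n', xbar);       % the mean of b, i.e. 2
    fprintf('residual norm:          %g\n', norm(b - A * xbar));
    fprintf('backslash gives:        %g\n', A \ b);      % same solution

Minimizing the residual of a constant model over the three data points simply produces their mean, which is what the printed value 2 confirms.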

3.7 Eigenvalues and eigenvectors

In the following we consider quadratic matrices $A \in \mathbb{R}^{n \times n}$.

Definition 3.13 (Eigenvalue) A number $\lambda \in \mathbb{C}$ is called an eigenvalue of the matrix $A \in \mathbb{R}^{n \times n}$ if there exists a vector $w \neq 0$ such that
$$A w = \lambda w.$$
The vector $w$ is called an eigenvector.

Eigenvalues can also be obtained as zeros of the characteristic polynomial
$$\chi_A(z) := \det(A - zI). \qquad (3.5)$$
Here $I$ denotes the identity matrix of the same size as the matrix $A$. The set of all eigenvalues of a matrix $A$ is called its spectrum and denoted by $\sigma(A) \subset \mathbb{C}$. Any matrix $A \in \mathbb{R}^{n \times n}$ has $n$ eigenvalues $\{\lambda_1, \ldots, \lambda_n\}$, which are not necessarily distinct.

Definition 3.14 (Similar matrices) Two matrices $A, B \in \mathbb{R}^{n \times n}$ are called similar (to each other) if there is a regular matrix $T \in \mathbb{R}^{n \times n}$ such that $B = T^{-1} A T$.

Lemma 3.6 For two similar matrices $A$ and $B \in \mathbb{R}^{n \times n}$ there holds:

(i) $\det(A) = \det(B)$.
(ii) $\sigma(A) = \sigma(B)$.
(iii) $\operatorname{trace}(A) = \operatorname{trace}(B)$, where the trace of $A$ is defined as $\operatorname{trace}(A) := \sum_{i=1}^{n} a_{ii}$.

Let $\lambda_i$, $i = 1, \ldots, n$, denote the eigenvalues of the matrix $A \in \mathbb{R}^{n \times n}$ and let $u_i$, $i = 1, \ldots, n$, denote the corresponding normalized eigenvectors, so that
$$A u_i = \lambda_i u_i, \quad i = 1, \ldots, n. \qquad (3.6)$$
Write the $u_i$ as the column vectors of a matrix $U$, and let $\Lambda = \operatorname{diag}(\lambda_1, \ldots, \lambda_n)$ be the diagonal matrix with the eigenvalues on the diagonal. With these matrices, eq. (3.6) can be written in compact form as
$$AU = U\Lambda \quad \Leftrightarrow \quad A = U \Lambda U^T, \qquad (3.7)$$
provided the eigenvectors form an orthonormal system, so that $U$ is orthonormal and therefore $U^T = U^{-1}$. The form on the right of eq. (3.7) is called the spectral decomposition of $A$. On the other hand, if a spectral decomposition with an orthonormal matrix $U$ is available, the entries of the diagonal matrix $\Lambda$ are the eigenvalues of $A$. In general not all matrices are diagonalizable in that way.

If, for example, $A$ does not have $n$ linearly independent eigenvectors, it is not diagonalizable. However, there exists an invertible matrix $P$ such that
$$P^{-1} A P = \begin{pmatrix} J_1 & & \\ & \ddots & \\ & & J_r \end{pmatrix},$$
where each submatrix $J_i$ of order $n_i$ is of the form
$$J_i = \lambda_i I \;\text{ if } n_i = 1, \qquad
J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \ddots & \ddots & \\ & & \lambda_i & 1 \\ & & & \lambda_i \end{pmatrix} \;\text{ if } n_i \geq 2. \qquad (3.8)$$
This is the Jordan normal form of $A$. Another normal form is given by the Schur decomposition.

Theorem 3.5 (Real Schur decomposition) If $A \in \mathbb{R}^{n \times n}$, then there exists an orthogonal $Q \in \mathbb{R}^{n \times n}$ such that
$$Q^T A Q = \begin{pmatrix} R_{11} & R_{12} & \cdots & R_{1m} \\ 0 & R_{22} & \cdots & R_{2m} \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & R_{mm} \end{pmatrix},$$
where each $R_{ii}$ is either a 1-by-1 matrix or a 2-by-2 matrix having complex conjugate eigenvalues. For a proof, we refer to [2].

Lemma 3.7 (Spectral norm) For an arbitrary square matrix $A \in \mathbb{R}^{n \times n}$ the product matrix $A^T A \in \mathbb{R}^{n \times n}$ is always symmetric and positive semi-definite. For the spectral norm of $A$ there holds
$$\|A\|_2 = \max \left\{ \sqrt{\lambda}, \; \lambda \in \sigma(A^T A) \right\}.$$
If $A$ is symmetric, then there holds
$$\|A\|_2 = \max \left\{ |\lambda|, \; \lambda \in \sigma(A) \right\}.$$

Let $\|\cdot\|$ be an arbitrary vector norm with a corresponding matrix norm. Then, with a normalized eigenvector $\|u\| = 1$ of a matrix $A$ corresponding to the eigenvalue $\lambda$, there holds
$$|\lambda| = |\lambda| \, \|u\| = \|\lambda u\| = \|A u\| \leq \|A\| \, \|u\| = \|A\|,$$

24 3 Linear Algebra and Analysis e.g., all eigenvalues of A are contained in a circle in C with center at the origin and radius A. Especially with A, we obtain the eigenvalue bound n max λ A = max a ij. (3.9) λ σ(a) i=1,...,n Since the eigenvalues of A T and A are the same σ(a) = σ(a T ) and using the bound eq. (3.9) simultaneously for A T and for A, we get the refinded bound max λ min ( ( ) ) n n A, A T = min max a ij, max a ij. λ σ(a) i=1,...,n j=1,...,n Theorem 3.6 (Gerschgorin-Hadamard theorem) All eigenvalues of a matrix A R n n are contained in the union of the corresponding Gerschgorin circles { } n K j := z C : z a jj a jk, j = 1,..., n. k=1,k j Example 3.8 (Eigenvalues and eigenvectors) Let us consider the following matrices and vectors ( ) ( ) ( ) ( ) A =, B =, u =, v =, then we have Au = ( ) 3 = 3u, Bv = 3 j=1 j=1 ( ) 3 = 3v. 0 This shows that λ = 3 is an eigenvalue of both A and B, u is an eigenvector of A and v is an eigenvector of B Geometric and algebraic multiplicity The set of eigenvectors corresponding to a single eigenvalue, together with the zero vector, forms a subspace of C n. This subspace is called eigenspace. If λ is an eigenvector of A we denote the corresponding eigenspace by E λ. An eigenspace is an example of an invariant subspace of A, i.e., AE λ E λ. The dimension of E λ can be interpreted as the maximum number of linearly independent eigenvectors to the same eigenvalue λ. We call the dimension of E λ the geometric multiplicity of λ. The geometric multiplicity can also be described as the dimension of the nullspace A λi, since that nullspace is again E λ. By the fundamental theorem of algebra, we can write χ A in the form χ A (z) = (z λ 1 )(z λ 2 ) (z λ n ) for some numbers λ i C. By definition 3.13, each λ i is an eigenvalues of A, and all eigenvalues of A appear somewhere in the list. In general, an eigenvalue might appear more than once. The algebraic multiplicity of an eigenvalue λ of A is its multiplicity as a root of χ A. An eigenvalue is simple if its algebraic multiplicity is 1. i=1 24

25 3.7 Eigenvalues and eigenvectors Example 3.9 Consider the matrices A = 2, B = Both matrices A and B have the same characteristic polynomial χ A (z) = χ B (z) = (z 2) 3, so there is a single eigenvalue λ = 2 of algebraic multiplicity 3. In case of A we can choose three independent eigenvectors, for example e 1, e 2 and e 3, so the geometric multiplicity is also 3. For B, on the other hand, we can find only a single independent eigenvector (a scalar multiple of e 1 ), so the geometric multiplicity of the eigenvalue is only Singular Value Decomposition Theorem 3.7 (Singular value decomposition) Let A R m n be an arbitrary matrix. There exist orthogonal matrices U R m m and V R n n such that A = USV T, S = diag(σ 1,..., σ p ) R m n, p = min(m, n). (3.10) where σ 1 σ 2 σ p 0. Depending on whether m n or m n the matrix S has the form σ 1 σ , or σ n. 0 σ m 0 The nonnegative values {σ i } are called the singular values of A and eq. (3.10) is called the singular value decomposition (SVD) of A. From eq. (3.10) we observe that there holds Av i = σ i u i, A T u i = σ i v i, i = 1,..., min(m, n) with u i and v i the columns vectors of U and V, respectively. Moreover, we obtain A T Av i = σ 2 i v i, AA T u i = σ 2 i u i, which shows that the values σ i, i = 1, min(m, n) are the square roots of eigenvalues of the symmetric positive definite matrices A T A R n n and AA T R m m corresponding to the eigenvectors v i and u i, respectively. (i) In the case m n the matrix A T A R n n has the p = n eigenvalues {σ 2 i, i = 1,..., n}, while the matrix AA T R m m has the m eigenvalues {σ 2 1,..., σ 2 n, 0 n+1,..., 0 m }. 25

(ii) In the case $m \leq n$ the matrix $A^T A \in \mathbb{R}^{n \times n}$ has the $n$ eigenvalues $\{\sigma_1^2, \ldots, \sigma_m^2, 0_{m+1}, \ldots, 0_n\}$, while the matrix $A A^T \in \mathbb{R}^{m \times m}$ has the $p = m$ eigenvalues $\{\sigma_i^2, \; i = 1, \ldots, m\}$.

The existence of a decomposition eq. (3.10) is a consequence of the fact that $A^T A$ is orthonormally diagonalizable as $Q^T (A^T A) Q = \operatorname{diag}(\sigma_i^2)$.

Important consequences of the decomposition in eq. (3.10): suppose that the singular values are ordered like
$$\sigma_1 \geq \cdots \geq \sigma_r > \sigma_{r+1} = \cdots = \sigma_p = 0, \quad p = \min(m, n),$$
then there holds

(i) $\operatorname{rank}(A) = r$,
(ii) $\operatorname{kern}(A) = \operatorname{span}\{v_{r+1}, \ldots, v_n\}$,
(iii) $\operatorname{range}(A) = \operatorname{span}\{u_1, \ldots, u_r\}$,
(iv) $A = \sum_{i=1}^{r} \sigma_i u_i v_i^T$,
(v) $\|A\|_2 = \sigma_1 = \sigma_{\max}$,
(vi) $\|A\|_F = \left( \sigma_1^2 + \cdots + \sigma_r^2 \right)^{1/2}$ (Frobenius norm).

3.8 Conditioning of linear algebraic systems

Let us consider a linear system to be solved which is given as $Ax = b$, where $A \in \mathbb{R}^{n \times n}$ and $b \in \mathbb{R}^n$. Let us further assume that this system is perturbed by small errors $\delta A$ and $\delta b$, such that the system actually to be solved is $\tilde{A} \tilde{x} = \tilde{b}$ with $\tilde{A} = A + \delta A$, $\tilde{b} = b + \delta b$ and $\tilde{x} = x + \delta x$. We are interested in the impact of these perturbations on the solution.

Theorem 3.8 (Perturbation theorem) Let the matrix $A \in \mathbb{R}^{n \times n}$ be regular and the perturbation satisfy $\|\delta A\| < \|A^{-1}\|^{-1}$. Then the perturbed matrix $\tilde{A} = A + \delta A$ is also regular and for the resulting relative error in the solution there holds
$$\frac{\|\delta x\|}{\|x\|} \leq \frac{\operatorname{cond}(A)}{1 - \operatorname{cond}(A) \frac{\|\delta A\|}{\|A\|}} \left( \frac{\|\delta b\|}{\|b\|} + \frac{\|\delta A\|}{\|A\|} \right), \qquad (3.11)$$
with the so-called condition number $\operatorname{cond}(A) := \|A\| \, \|A^{-1}\|$ of the matrix $A$.

27 3.9 Conditioning of eigenvalue problems The condition number cond(a) depends on the chosen vector norm in the estimate eq. (3.11). Most often, the max-norm or the euclidian norm 2 are used. In the first case, there holds cond (A) := A A 1 Especially for hermitian matrices we have cond 2 (A) := A 2 A 1 2 = λ max λ min, with the eigenvalues λ max and λ min of A with largest and smallest modulus, respectively. Example 3.10 Consider the following matrix A and its inverse A 1 ( ) ( ) A =, A 1 = We derive and therefore, we obtain Hence, this matrix is very ill-conditioned. A = , A 1 = cond (A) Conditioning of eigenvalue problems The most natural way of computing eigenvalues of a matrix A R n n appears to go via its definition of zeros of the characteristic polynomial, see eq. (3.5). The computation of corresponding eigenvectors could be achieved by solving the singular system (A λi)w = 0. This approach is not advisable in general since the determination of zeros of a polynomial may be highly ill-conditioned. The computation of zeros of the Wilkinson polynomial is a famous example, see [1, chapter 3.3]. Theorem 3.9 (Stability theorem) Let A R n n be a diagonalizable matrix, and let {w 1,..., w n } be n linearly independent eigenvectors of A and let B R n n be an arbitrary second matrix. Then, for each eigenvalue λ(b) of B there is a corresponding eigenvalue λ(a) of A such that with the matrix W = (w 1,..., w n ) there holds λ(a) λ(b) cond 2 (W ) A B 2. (3.12) For symmetric matrices A R n n there exists an ONB in R n of eigenvectors so that W in eq. (3.12) can be assumed unitary, W W T = I. That leads to cond 2 (W ) = W 2 W T 2 = 1. In conclusion, the eigenvalue problem of symmetric (or more general normal) matrices is well conditioned. For general matrices, the conditioning of the eigenvalue problem may be arbitrarily bad. 27
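Returning to the conditioning of linear systems, the estimate (3.11) of theorem 3.8 can be seen at work in a short Matlab experiment (an added sketch; the Hilbert matrix is simply a convenient, well-known ill-conditioned example, not the matrix of example 3.10):

    % Effect of an ill-conditioned matrix on the solution of Ax = b
    n = 10;
    A = hilb(n);                       % Hilbert matrix, notoriously ill-conditioned
    x = ones(n, 1);
    b = A * x;
    db = 1e-10 * randn(n, 1);          % small perturbation of the right-hand side
    xt = A \ (b + db);                 % solution of the perturbed system
    fprintf('cond(A)             = %.2e\n', cond(A));
    fprintf('relative error in b = %.2e\n', norm(db) / norm(b));
    fprintf('relative error in x = %.2e\n', norm(xt - x) / norm(x));

The relative error in the solution is many orders of magnitude larger than the relative error in the right-hand side, in agreement with the condition number reported in the first line.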


4 Direct Solution Methods

In this chapter, we study direct methods for the solution of a system of linear equations. In contrast to iterative methods, to which we will turn later in chapter 5, direct methods deliver a solution in a finite number of steps if we assume exact arithmetic. This could be seen as an advantage over iterative methods. However, in order to get a useful result from a direct method it has to be carried out to the last step. Iterative methods, which theoretically converge only after an infinite number of steps, might produce a satisfying solution within a small number of iterations. Direct methods are known to be very robust. They are well applicable to moderately sized problems but become infeasible for large systems due to their usually high storage requirements and high computational complexity. Today, moderate size systems are of dimensions up to n . We will consider solving systems of the form
$$Ax = b \qquad (4.1)$$
with a quadratic matrix $A$ and vectors $x$ and $b$ of suitable sizes.

4.1 Triangular linear systems

Systems which have a triangular form are particularly easy to solve. In the case of an upper triangular system, the matrix $A$ has the coefficient matrix
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ & a_{22} & \cdots & a_{2n} \\ & & \ddots & \vdots \\ & & & a_{nn} \end{pmatrix}$$
and the corresponding linear system looks like
$$\begin{aligned}
a_{11} x_1 + a_{12} x_2 + \cdots + a_{1n} x_n &= b_1 \\
a_{22} x_2 + \cdots + a_{2n} x_n &= b_2 \\
&\;\;\vdots \\
a_{nn} x_n &= b_n.
\end{aligned}$$
For $a_{nn} \neq 0$, we obtain $x_n = b_n / a_{nn}$ and plug this into the second to last equation to compute $x_{n-1}$. If we progress in that manner, and in case $a_{jj} \neq 0$, $j = 1, \ldots, n$, we derive the solution to (4.1) as
$$x_n = \frac{b_n}{a_{nn}}, \qquad x_j = \frac{1}{a_{jj}} \left( b_j - \sum_{k=j+1}^{n} a_{jk} x_k \right), \quad j = n-1, \ldots, 1.$$
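This back substitution formula translates directly into a few lines of Matlab (an added sketch; it assumes an upper triangular A with nonzero diagonal and performs no further checks). Applied to example 4.1 below, backsubst([1 1 2; 0 1 -3; 0 0 19], [3; -4; 19]) returns the solution (2, -1, 1).

    function x = backsubst(A, b)
    % BACKSUBST  Solve the upper triangular system A x = b by back substitution.
        n = length(b);
        x = zeros(n, 1);
        x(n) = b(n) / A(n, n);
        for j = n-1:-1:1
            x(j) = (b(j) - A(j, j+1:n) * x(j+1:n)) / A(j, j);
        end
    end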

The same operations to obtain a solution can be applied, in the opposite order, to a lower triangular system. Let us take a look at the following example:

Example 4.1
$$\begin{aligned}
x_1 + x_2 + 2 x_3 &= 3 \\
x_2 - 3 x_3 &= -4 \\
19 x_3 &= 19.
\end{aligned}$$
In the first step we obtain $x_3 = 1$. After inserting it into the remaining equations we derive
$$\begin{aligned}
x_1 + x_2 + 2 &= 3 \\
x_2 - 3 &= -4 \\
x_3 &= 1
\end{aligned}$$
and find that $x_2 = -1$. In the last step we solve the system and get
$$x_1 = 2, \qquad x_2 = -1, \qquad x_3 = 1.$$

4.2 Gaussian elimination

The elimination method of Gauss transforms the system $Ax = b$ in several elimination steps into an upper triangular form and then applies the procedure described in section 4.1. Only elimination steps that do not change the solution are allowed. These are the following:

- permutation of two rows of the matrix,
- permutation of two columns of the matrix, including the according renumbering of the unknowns $x_i$,
- addition of a scalar multiple of a row to another row of the matrix.

The procedure is applied to the composed system $(A, b)$. For the realization, we assume that the matrix $A$ is regular. We start by setting $A^{(0)} = A$, $b^{(0)} = b$. Since we assumed the matrix to be regular, there exists an element $a^{(0)}_{r1} \neq 0$, $1 \leq r \leq n$. In the first step, we permute the first and the $r$th row. We call the result $(\tilde{A}^{(0)}, \tilde{b}^{(0)})$. From each remaining row we subtract $q_{j1} = \tilde{a}^{(0)}_{j1} / \tilde{a}^{(0)}_{11}$ times the first row. After this first step, we arrive at
$$\left( A^{(1)}, b^{(1)} \right) = \begin{pmatrix}
\tilde{a}^{(0)}_{11} & \tilde{a}^{(0)}_{12} & \cdots & \tilde{a}^{(0)}_{1n} & \tilde{b}^{(0)}_1 \\
0 & a^{(1)}_{22} & \cdots & a^{(1)}_{2n} & b^{(1)}_2 \\
\vdots & \vdots & & \vdots & \vdots \\
0 & a^{(1)}_{n2} & \cdots & a^{(1)}_{nn} & b^{(1)}_n
\end{pmatrix},$$

31 4.2 Gaussian elimination where the new entries are given as a (1) j1 = ã(0) ji q j1 ã (0) 1i, b(0) j q j1 b(0) 1, 2 i, j n. Again, the submatrix (a (1) jk ) is regular and we can repeat the same steps which we applied to the matrix A (0). After n 1 of these elimination steps we arrive at the desired upper triangular form ã (0) 11 ã (0) 12 ã (0) 13 ã (0) 1n b(0) 1 0 ã (1) 22 ã (1) 23 ã (1) 2n b(1) 2 ( A (n 1), b (n 1)) = 0 0 ã (2) 33 ã (2) 3n a (n 1) nn b(2) 3. b (n 1) n. (4.2) For the solution of this upper triangular system we refer to section 4.1. understanding, we present the following eample: For better Example A = 2 1 3, b = The nonzero element from the first column to be determined is a (0) 11 = 3 which means that we do not have to permute any rows at the beginning. For the second and third row, we compute In the next step, we get q 21 = ã21 ã (0) 11 = 2 3, q 31 = ã31 ã (0) 11 = 1 3. ( A (1), b (1)) = 0 1 / /3, 0 2 / /3 and again no swapping of rows is required since the entry a (1) With one more elimination step, we get the upper triangualar form ( A (1), b (1)) = 0 1 / / For an example of how to finally derive the solution, have a look at example 4.1. Since all the permitted actions are linear manipulations they can be described in terms of matrices: (Ã(0), b (0)) ( = P 1 A (0), b (0)) (, A (1), b (1)) = G (Ã(0) 1, b (0)), 31

32 4 Direct Solution Methods where P 1 is a permutation matrix and G 1 is a Frobenius matrix given by 1 r P 1 = r q 21 1 G 1 =.... q n1 1 Both matrices P 1 and G 1 are regular with determinants det(p 1 ) = det(g 1 ) = 1 and there holds 1 P1 1 = P 1, G 1 q =..... q n1 1 Since there holds Ax = b A (1) x = G 1 P 1 Ax = G 1 P 1 b = b (1), the systems Ax = b and A (1) x = b (1) have the same solution. After n 1 elimination steps, we arrive at (A, b) ( A (1), b (1)) ( A (n 1), b (n 1)) =: (R, c), where ( A (i), b (i)) = G i P i ( A (i 1), b (i 1)), ( A (0), b (0)) := (A, b), with permutation matrics P i and Frobenius matrics G i of the following form 1 P i =... 1 i r 0 1 i r G i = i... 1 i q i+1,i q ni 1 32

The end result
$$(R, c) = G_{n-1} P_{n-1} \cdots G_1 P_1 (A, b)$$
is an upper triangular system which has the same solution as the original system.

4.2.1 An example for why pivoting is necessary

To illustrate a potential pitfall of the Gaussian elimination, we solve the system
$$\begin{aligned}
.001 x_1 + x_2 &= 1, \\
x_1 + x_2 &= 2.
\end{aligned} \qquad (4.3)$$
If we assume exact arithmetic, we obtain the upper triangular system
$$\begin{aligned}
.001 x_1 + x_2 &= 1, \\
-999 x_2 &= -998,
\end{aligned}$$
which has the solution
$$x_2 = \frac{998}{999} \approx 0.999, \qquad x_1 = \frac{1 - x_2}{.001} = \frac{1000}{999} \approx 1.001.$$
However, if we assume two-digit decimal floating-point arithmetic with rounding, we derive the following upper triangular system:
$$\begin{aligned}
.001 x_1 + x_2 &= 1, \\
(1 - 1000) x_2 &= 2 - 1000.
\end{aligned} \qquad (4.4)$$
Evaluating the first difference, the computer produces
$$\mathrm{fl}(1 - 1000) = \mathrm{fl}(-999) = \mathrm{fl}(-.999 \cdot 10^3) = -.10 \cdot 10^4 = -1000.$$
For the second difference, the computer derives
$$\mathrm{fl}(2 - 1000) = \mathrm{fl}(-998) = \mathrm{fl}(-.998 \cdot 10^3) = -.10 \cdot 10^4 = -1000.$$
The upper triangular system that results from two-digit decimal floating-point arithmetic is therefore
$$\begin{aligned}
.001 x_1 + x_2 &= 1, \\
-1000 x_2 &= -1000.
\end{aligned} \qquad (4.5)$$
By backward substitution, the solution is given as
$$x_2 = \frac{-1000}{-1000} = 1, \qquad x_1 = \frac{1 - x_2}{.001} = 0.$$

The error in $x_1$ is 100% since the computed $x_1$ is zero while the correct solution for $x_1$ is close to 1. As you observe, the upper triangular systems in eq. (4.3) and eq. (4.5) do not differ a lot from each other. The crucial step is the backward substitution where we compute $1 - x_2$. Here, cancellation happens due to the fact that $x_2$ is close to one. A way to avoid this dilemma is changing the order of the two equations in eq. (4.3) to
$$\begin{aligned}
x_1 + x_2 &= 2, \\
.001 x_1 + x_2 &= 1.
\end{aligned}$$
Applying two-digit decimal floating-point arithmetic to this system yields the upper triangular system
$$\begin{aligned}
x_1 + x_2 &= 2, \\
x_2 &= 1.
\end{aligned}$$
With backward substitution the solution $x_2 = x_1 = 1$ is derived. This is in much better agreement with the exact solution. This process is called pivoting.

Definition 4.1 (Pivot element) The element $a_{r1} = \tilde{a}^{(0)}_{11}$ in eq. (4.2) is called the pivot element and the whole substep of its determination is called pivot search. For reasons of numerical stability, usually the choice
$$|a_{r1}| = \max_{j=1,\ldots,n} |a_{j1}|$$
is made. The whole process including the permutation of rows is called column pivoting. If the elements of the matrix $A$ are of very different size, total pivoting is advisable. This consists in the choice
$$|a_{rs}| = \max_{k,j=1,\ldots,n} |a_{jk}|$$
and subsequent permutation of the 1st row with the $r$th row and the 1st column with the $s$th column. According to the column permutation also the unknowns $x_k$ have to be renumbered. Total pivoting is costly, so that column pivoting is usually preferred.

Let us recall the end result after a Gaussian elimination (with pivoting). We can use the zero entries in the lower triangular part to store the elements $q_{k+1,k}, \ldots, q_{nk}$, which will be renamed as $\lambda_{k+1,k}, \ldots, \lambda_{nk}$ after possible permutation of rows due to pivoting:
$$\begin{pmatrix}
r_{11} & r_{12} & r_{13} & \cdots & r_{1n} & c_1 \\
\lambda_{21} & r_{22} & r_{23} & \cdots & r_{2n} & c_2 \\
\lambda_{31} & \lambda_{32} & r_{33} & \cdots & r_{3n} & c_3 \\
\vdots & & & \ddots & \vdots & \vdots \\
\lambda_{n1} & \lambda_{n2} & \cdots & \lambda_{n,n-1} & r_{nn} & c_n
\end{pmatrix} \qquad (4.6)$$

35 4.2 Gaussian elimination Theorem 4.1 (LR factorization) The matrices 1 0 r 11 r 12 r 1n l 21 1 L = , R = r 22 r 2n.... l n1 l n,n r nn are factors in the (multiplicative) decomposition of the matrix P A, P A = LR, P := P n 1 P 1. If there exists such a decomposition with P = I, then it is uniquely determined. Once an LR decomposition is computed, the solution of the linear system Ax = b can be achieved by successively solving two triangular systems Ly = P b, Rx = y by forward and backward substitution, respectively. We revisit the example 4.2 to illustrate the LR decomposition. Example pivoting elimination 2/3 1/ /3 pivoting /3 2/ / /3 1/ /3 premutation 1/3 2/ /3 elimination 1/3 2/3 1 10/3 1/3 2/ /3 2/3 1/ /3 2/3 1/2 1 /2 4 We obtain the solution x 3 = 8, x 2 = 3 2 and the LR decomposition ( ) x 3 = 7, x 1 = 1 3 (2 x 2 6x 3 ) = 19, P 1 = I, P 2 = 0 0 1, P A = = LR = 1/ / /3 1/ /2 35
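In Matlab the LR decomposition with column pivoting of theorem 4.1 is available through the built-in lu function (usually called LU factorization there). The following sketch (added for illustration; the matrix and right-hand side are arbitrary) solves Ax = b by the two triangular solves Ly = Pb and Rx = y:

    % Solve A x = b via the pivoted decomposition  P A = L R
    A = magic(3);
    b = [1; 2; 3];
    [L, R, P] = lu(A);       % P*A = L*R with unit lower triangular L
    y = L \ (P * b);         % forward substitution
    x = R \ y;               % backward substitution
    disp(x);
    disp(norm(A * x - b));   % residual check, numerically zero

Once the factors are computed, additional right-hand sides can be handled with two triangular solves only, which is the main practical advantage of storing the decomposition.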

4.2.2 Conditioning of Gaussian elimination

For any (regular) matrix $A$ there exists an LR decomposition $PA = LR$. We derive
$$R = L^{-1} P A, \qquad R^{-1} = (PA)^{-1} L$$
for $R$ and its inverse $R^{-1}$. Due to column pivoting, all the elements of $L$ and $L^{-1}$ are less than or equal to one in modulus, and there holds
$$\operatorname{cond}_\infty(L) = \|L\|_\infty \, \|L^{-1}\|_\infty \leq n^2.$$
Therefore,
$$\operatorname{cond}_\infty(R) = \|R\|_\infty \, \|R^{-1}\|_\infty = \|L^{-1} PA\|_\infty \, \|(PA)^{-1} L\|_\infty \leq \|L^{-1}\|_\infty \, \|L\|_\infty \, \|PA\|_\infty \, \|(PA)^{-1}\|_\infty \leq n^2 \operatorname{cond}_\infty(PA).$$
Applying the perturbation theorem, theorem 3.8, we obtain the following estimate for the solution of the equation $LRx = Pb$ (considering only a perturbation of the right-hand side $b$, i.e. $\delta A = 0$):
$$\frac{\|\delta x\|}{\|x\|} \leq \operatorname{cond}_\infty(L) \operatorname{cond}_\infty(R) \frac{\|\delta Pb\|}{\|Pb\|} \leq n^4 \operatorname{cond}_\infty(PA) \frac{\|\delta Pb\|}{\|Pb\|}.$$
Hence, the conditioning of the original system $Ax = b$ is amplified by the LR decomposition by $n^4$ in the worst case. However, this is an extremely pessimistic estimate which can be significantly improved, see for example [4, theorem 2.2].

4.3 Symmetric matrices

For symmetric matrices $A = A^T$ we would like the symmetry to be reflected in the LR factorization as well. It should then only be half the work to compute it. Let us assume that there exists a decomposition $A = LR$, where $L$ is a unit lower triangular matrix and $R$ is an upper triangular matrix. From the symmetry of $A$ it follows that
$$LR = A = A^T = (LR)^T = (L D \tilde{R})^T = \tilde{R}^T D L^T,$$
where
$$\tilde{R} = \begin{pmatrix} 1 & r_{12}/r_{11} & \cdots & r_{1n}/r_{11} \\ & \ddots & & \vdots \\ & & 1 & r_{n-1,n}/r_{n-1,n-1} \\ & & & 1 \end{pmatrix}, \qquad
D = \begin{pmatrix} r_{11} & & 0 \\ & \ddots & \\ 0 & & r_{nn} \end{pmatrix}.$$
The uniqueness of the LR decomposition implies that $L = \tilde{R}^T$ and $R = D L^T$, such that $A$ may be written as $A = L D L^T$.

4.3.1 Cholesky decomposition

Symmetric positive definite matrices allow for a Cholesky decomposition
$$A = L D L^T = \tilde{L} \tilde{L}^T$$
with the matrix $\tilde{L} := L D^{1/2}$. For computing the Cholesky decomposition it suffices to compute the matrices $D$ and $L$. This reduces the required work to half the work of the LR decomposition. The algorithm starts from the relation $A = \tilde{L} \tilde{L}^T$, which is
$$\begin{pmatrix} \tilde{l}_{11} & & 0 \\ \vdots & \ddots & \\ \tilde{l}_{n1} & \cdots & \tilde{l}_{nn} \end{pmatrix}
\begin{pmatrix} \tilde{l}_{11} & \cdots & \tilde{l}_{n1} \\ & \ddots & \vdots \\ 0 & & \tilde{l}_{nn} \end{pmatrix}
= \begin{pmatrix} a_{11} & \cdots & a_{n1} \\ \vdots & & \vdots \\ a_{n1} & \cdots & a_{nn} \end{pmatrix}$$
and yields equations to determine the first column of $\tilde{L}$:
$$\tilde{l}_{11}^2 = a_{11}, \quad \tilde{l}_{21} \tilde{l}_{11} = a_{21}, \quad \ldots, \quad \tilde{l}_{n1} \tilde{l}_{11} = a_{n1},$$
from which
$$\tilde{l}_{11} = \sqrt{a_{11}}, \qquad \tilde{l}_{j1} = \frac{a_{j1}}{\tilde{l}_{11}} = \frac{a_{j1}}{\sqrt{a_{11}}}, \quad j = 2, \ldots, n,$$
is obtained. Let now, for some $i \in \{2, \ldots, n\}$, the elements $\tilde{l}_{jk}$, $k = 1, \ldots, i-1$, $j = k, \ldots, n$, be already computed. Then, via
$$\tilde{l}_{i1}^2 + \tilde{l}_{i2}^2 + \cdots + \tilde{l}_{ii}^2 = a_{ii}, \quad \tilde{l}_{ii} > 0, \qquad
\tilde{l}_{j1} \tilde{l}_{i1} + \tilde{l}_{j2} \tilde{l}_{i2} + \cdots + \tilde{l}_{ji} \tilde{l}_{ii} = a_{ji},$$
the next elements $\tilde{l}_{ii}$ and $\tilde{l}_{ji}$, $j = i+1, \ldots, n$, can be obtained as
$$\tilde{l}_{ii} = \sqrt{a_{ii} - \tilde{l}_{i1}^2 - \tilde{l}_{i2}^2 - \cdots - \tilde{l}_{i,i-1}^2}, \qquad
\tilde{l}_{ji} = \frac{1}{\tilde{l}_{ii}} \left\{ a_{ji} - \tilde{l}_{j1} \tilde{l}_{i1} - \tilde{l}_{j2} \tilde{l}_{i2} - \cdots - \tilde{l}_{j,i-1} \tilde{l}_{i,i-1} \right\}.$$

Example 4.4 The matrix A = ( ) has the uniquely determined Cholesky decomposition $A = L D L^T = \tilde{L} \tilde{L}^T$ given as A = ( ) ( ).
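A compact Matlab version of this algorithm could look as follows (an added sketch; it assumes that A is symmetric positive definite and performs no breakdown checks). Matlab's built-in chol(A) returns the upper triangular factor, so chol(A)' equals the matrix computed here up to rounding.

    function L = cholesky(A)
    % CHOLESKY  Compute the lower triangular factor L with A = L*L'.
        n = size(A, 1);
        L = zeros(n);
        for i = 1:n
            L(i,i) = sqrt(A(i,i) - L(i,1:i-1) * L(i,1:i-1)');          % diagonal entry
            for j = i+1:n
                L(j,i) = (A(j,i) - L(j,1:i-1) * L(i,1:i-1)') / L(i,i); % entries below the diagonal
            end
        end
    end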

4.4 Orthogonal decomposition

Let $A \in \mathbb{R}^{m \times n}$ be a not necessarily quadratic matrix and $b \in \mathbb{R}^m$ a right-hand side vector. We consider the system $Ax = b$ for $x \in \mathbb{R}^n$. As discussed in section 3.6, we seek a vector $x \in \mathbb{R}^n$ with minimal defect norm $\|b - Ax\|$. A solution is obtained by solving the normal equation $A^T A x = A^T b$.

Theorem 4.2 (QR decomposition) Let $A \in \mathbb{R}^{m \times n}$ be a rectangular matrix with $m \geq n$ and $\operatorname{rank}(A) = n$. Then there exists a uniquely determined orthonormal matrix $Q \in \mathbb{R}^{m \times n}$ with the property $Q^T Q = I$ and a uniquely determined upper triangular matrix $R \in \mathbb{R}^{n \times n}$ with diagonal $r_{ii} > 0$, $i = 1, \ldots, n$, such that
$$A = QR.$$

In theorem 3.3 we introduced an algorithm which transforms a basis into an orthonormal one. However, in practical applications this algorithm is infeasible because of its instability. Due to round-off effects the orthonormality of the columns of $Q$ is quickly lost already after a few orthonormalization steps. A more stable algorithm for this purpose is the Householder transformation described below.

4.4.1 Householder Transformation

Definition 4.2 (Householder transformation) For any normalized vector $v \in \mathbb{R}^m$, $\|v\|_2 = 1$, the matrix
$$S = I - 2 v v^T \in \mathbb{R}^{m \times m}$$
is called a Householder transformation and the vector $v$ is called a Householder vector.

Householder transformations are symmetric, $S = S^T$, and orthonormal, $S^T S = S S^T = I$. For the geometric interpretation of the Householder transformation $S$, let us consider an arbitrary normalized vector $v \in \mathbb{R}^2$, $\|v\|_2 = 1$. The two vectors $\{v, v^{\perp}\}$ form a basis of $\mathbb{R}^2$, where $v^T v^{\perp} = 0$. For an arbitrary vector $u = \alpha v + \beta v^{\perp} \in \mathbb{R}^2$, there holds
$$S u = \left( I - 2 v v^T \right) \left( \alpha v + \beta v^{\perp} \right)
= \alpha v + \beta v^{\perp} - 2\alpha \underbrace{(v^T v)}_{=1} v - 2\beta \underbrace{(v^T v^{\perp})}_{=0} v
= -\alpha v + \beta v^{\perp},$$
i.e., $S$ describes the reflection of $u$ about the axis $\operatorname{span}\{v^{\perp}\}$.

Starting from a matrix $A \in \mathbb{R}^{m \times n}$ the Householder algorithm generates a sequence of matrices
$$A =: A^{(0)} \to \cdots \to A^{(i)} \to \cdots \to A^{(n)} = R,$$

39 4.4 Orthogonal decomposition where A (i) has the form i.... A (i) = i In the ith step the Householder transformation S i K m m is determined such that After n steps the result is S i A (i 1) = A (i). R = A (n) = S n S n 1 s q A = Q T A, where Q R m m as product of unitary matrices is also unitary and R R m m has the form r 11 r 1n.... R = n m. 0 r nn 0 0 Therefore we have the representation A = S T 1 S T n R = Q R. We obtain the desired QR decomposition of A by removing the last m n columns in Q and the last m n rows in R: R A = Q n = QR. 0 }{{} n }{{} m n m n In the following, we describe the elimination process in more detail. Step 1: As a first step of this process, we construct a Householder matrix S 1 which transforms a 1 (the first column of A) into a multiple of e 1, the first coordinate vector. This results in a vector which has just a first component. Depending on the sign of a 11 39

40 4 Direct Solution Methods we choose one of the axes span{a 1 + a 1 e 1 } or span{a 1 a 1 e 1 } in order to reduce round-off errors. In case a 11 0 we choose v 1 = a 1 + a 1 2 e 1 a 1 + a 1 2 e 1 2, v 1 = a 1 a 1 2 e 1 a 1 a 1 2 e 1 2. Then the matrix A (1) = (I 2v 1 v T 1 )A has the column vectors a (1) 1 = (I 2v 1 v T 1 )a 1 = a 1 2 e 1, a (1) k = (I 2v 1 v T 1 )a k = a k 2(a k, v 1 )v 1, k = 2,..., n. In contrast to the first step of the Gaussian elimination, the first row of the resulting matrix is changed. Step i: Let the transformed matrix A (i 1) be already computed. The Householder transformation is given as I 0 S i = I 2v i vi T =, v i = 0. i 1 m. 0 0 I 2ṽ i ṽ T }{{} i 1 The application of the (orthonormal) matrix S i to A (i 1) leaves the first i 1 rows and columns of A (i 1) unchanged. For the construction of v i, we use the considerations of step 1 for the submatrix Thus, it follows that ṽ i = Ã (i 1) = ã(i 1) ii ã (i 1) ii ã (i 1) ii ã (i 1) in..... = ã (i 1) mi ã (i 1) mn ã (i 1) ii 2 ẽ i ã (i 1) ii 2 ẽ i 2, ṽ i = ã and the matrix A (i) has the column vectors a (i) k a (i) i = a (i) k ṽ i ( ) ã (i 1) 1,..., ã (i 1) n. (i 1) ii ã (i 1) ii + ã (i 1) ii 2 ẽ i + ã (i 1) ii 2 ẽ i 2, = a (i 1) k, k = 1,..., i 1, ( T a (i 1) 1i,..., a (i 1) i 1,i, ã(i 1) i 2, 0,..., 0), ( ) = a (i 1) k 2 ã (i 1) k, ṽ i v i, k = i + 1,..., n. For a quadratic matrix A R n n the costs for a QR decomposition are about double the costs for a LR decomposition. 40
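The elimination process just described can be written in a few lines of Matlab (an added sketch without the special cases of the text; Matlab's built-in qr performs the same task). The sign choice in the Householder vector follows the rule above to avoid cancellation.

    function [Q, R] = householder_qr(A)
    % HOUSEHOLDER_QR  Full QR decomposition A = Q*R by Householder reflections.
        [m, n] = size(A);
        Q = eye(m);
        R = A;
        for i = 1:min(n, m-1)
            a = R(i:m, i);                        % current column below the diagonal
            e = zeros(m-i+1, 1); e(1) = 1;
            s = sign(a(1)); if s == 0, s = 1; end
            v = a + s * norm(a) * e;              % Householder vector, sign chosen to avoid cancellation
            if norm(v) > 0
                v = v / norm(v);
                R(i:m, :) = R(i:m, :) - 2 * v * (v' * R(i:m, :));   % apply S_i = I - 2 v v^T from the left
                Q(:, i:m) = Q(:, i:m) - 2 * (Q(:, i:m) * v) * v';   % accumulate Q = S_1 S_2 ... 
            end
        end
    end

For a tall matrix A, the call [Q, R] = householder_qr(A) gives norm(A - Q*R) and norm(Q'*Q - eye(size(Q,1))) at the level of rounding errors, which is exactly the robustness that the plain Gram-Schmidt procedure lacks.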

4.5 Direct determination of eigenvalues

Let us consider a general square matrix $A \in \mathbb{R}^{n \times n}$. The direct way of computing eigenvalues of $A$ would be to follow the definition of what an eigenvalue is and compute the zeros of the characteristic polynomial $\chi_A(z) = \det(zI - A)$ by a suitable method, e.g. the Newton method. We already discussed that the determination of zeros of polynomials, especially in monomial expansion, can be highly ill-conditioned. In general the eigenvalues therefore cannot be computed via the characteristic polynomial; this is advisable only in special cases where the polynomial does not need to be built up explicitly, as for tri-diagonal or Hessenberg matrices:
$$\text{tri-diagonal matrix} \quad \begin{pmatrix} a_1 & b_1 & & \\ c_2 & \ddots & \ddots & \\ & \ddots & \ddots & b_{n-1} \\ & & c_n & a_n \end{pmatrix}, \qquad
\text{Hessenberg matrix} \quad \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & \ddots & & \vdots \\ & \ddots & \ddots & a_{n-1,n} \\ 0 & & a_{n,n-1} & a_{nn} \end{pmatrix}.$$

Reduction methods

Let us recall some properties of similar matrices. Two matrices $A, B \in \mathbb{C}^{n \times n}$ are called similar ($A \sim B$) if there holds $A = T^{-1} B T$ with a regular matrix $T \in \mathbb{C}^{n \times n}$. In view of
$$\det(A - zI) = \det\!\left(T^{-1}(B - zI)T\right) = \det(T^{-1}) \det(B - zI) \det(T) = \det(B - zI),$$
similar matrices have the same characteristic polynomial and therefore the same eigenvalues. For any eigenvalue $\lambda$ of $A$ with a corresponding eigenvector $w$ there holds $Aw = T^{-1} B T w = \lambda w$, i.e., $Tw$ is an eigenvector of $B$ corresponding to the same eigenvalue $\lambda$. Further, the algebraic and geometric multiplicities of eigenvalues of similar matrices are the same.

A reduction method reduces a given matrix $A \in \mathbb{C}^{n \times n}$ by a sequence of similarity transformations to a simply structured matrix for which the eigenvalue problem is easier to solve. In general, the direct transformation of a given matrix into Jordan normal form (see eq. (3.8)) or the real Schur decomposition, see theorem 3.5, in finitely many steps is possible only if all its eigenvectors are a priori known. Therefore, one transforms the matrix in finitely many steps into a matrix of a simpler structure, e.g. Hessenberg form. The classical method for computing the eigenvalues of a tri-diagonal or Hessenberg matrix is Hyman's method. This method computes the characteristic polynomial of a Hessenberg matrix without explicitly determining the coefficients in its monomial expansion. With Sturm's method, one can then determine the zeros of the characteristic polynomial. For further reading on Hyman's and Sturm's method, we refer to [4].
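In Matlab the reduction to Hessenberg form and the subsequent eigenvalue computation are available as built-in functions. The following lines (an added sketch with an arbitrary test matrix) verify that the similarity transformation preserves the spectrum:

    % Reduction to Hessenberg form preserves the eigenvalues
    A = randn(6);
    [Q, H] = hess(A);                % A = Q*H*Q' with H in upper Hessenberg form
    fprintf('reduction error: %.2e\n', norm(A - Q * H * Q'));
    disp(sort(eig(A)));              % eigenvalues of A ...
    disp(sort(eig(H)));              % ... coincide with those of H up to rounding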


More information

Linear Algebra March 16, 2019

Linear Algebra March 16, 2019 Linear Algebra March 16, 2019 2 Contents 0.1 Notation................................ 4 1 Systems of linear equations, and matrices 5 1.1 Systems of linear equations..................... 5 1.2 Augmented

More information

1 Last time: least-squares problems

1 Last time: least-squares problems MATH Linear algebra (Fall 07) Lecture Last time: least-squares problems Definition. If A is an m n matrix and b R m, then a least-squares solution to the linear system Ax = b is a vector x R n such that

More information

Scientific Computing: Dense Linear Systems

Scientific Computing: Dense Linear Systems Scientific Computing: Dense Linear Systems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 Course MATH-GA.2043 or CSCI-GA.2112, Spring 2012 February 9th, 2012 A. Donev (Courant Institute)

More information

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky

Lecture 2 INF-MAT : , LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Lecture 2 INF-MAT 4350 2009: 7.1-7.6, LU, symmetric LU, Positve (semi)definite, Cholesky, Semi-Cholesky Tom Lyche and Michael Floater Centre of Mathematics for Applications, Department of Informatics,

More information

Lecture Summaries for Linear Algebra M51A

Lecture Summaries for Linear Algebra M51A These lecture summaries may also be viewed online by clicking the L icon at the top right of any lecture screen. Lecture Summaries for Linear Algebra M51A refers to the section in the textbook. Lecture

More information

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2

EE/ACM Applications of Convex Optimization in Signal Processing and Communications Lecture 2 EE/ACM 150 - Applications of Convex Optimization in Signal Processing and Communications Lecture 2 Andre Tkacenko Signal Processing Research Group Jet Propulsion Laboratory April 5, 2012 Andre Tkacenko

More information

AMS526: Numerical Analysis I (Numerical Linear Algebra)

AMS526: Numerical Analysis I (Numerical Linear Algebra) AMS526: Numerical Analysis I (Numerical Linear Algebra) Lecture 1: Course Overview & Matrix-Vector Multiplication Xiangmin Jiao SUNY Stony Brook Xiangmin Jiao Numerical Analysis I 1 / 20 Outline 1 Course

More information

Class notes: Approximation

Class notes: Approximation Class notes: Approximation Introduction Vector spaces, linear independence, subspace The goal of Numerical Analysis is to compute approximations We want to approximate eg numbers in R or C vectors in R

More information

Applied Linear Algebra in Geoscience Using MATLAB

Applied Linear Algebra in Geoscience Using MATLAB Applied Linear Algebra in Geoscience Using MATLAB Contents Getting Started Creating Arrays Mathematical Operations with Arrays Using Script Files and Managing Data Two-Dimensional Plots Programming in

More information

6 Inner Product Spaces

6 Inner Product Spaces Lectures 16,17,18 6 Inner Product Spaces 6.1 Basic Definition Parallelogram law, the ability to measure angle between two vectors and in particular, the concept of perpendicularity make the euclidean space

More information

Orthonormal Transformations and Least Squares

Orthonormal Transformations and Least Squares Orthonormal Transformations and Least Squares Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 30, 2009 Applications of Qx with Q T Q = I 1. solving

More information

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition

AM 205: lecture 8. Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition AM 205: lecture 8 Last time: Cholesky factorization, QR factorization Today: how to compute the QR factorization, the Singular Value Decomposition QR Factorization A matrix A R m n, m n, can be factorized

More information

Lecture Notes to be used in conjunction with. 233 Computational Techniques

Lecture Notes to be used in conjunction with. 233 Computational Techniques Lecture Notes to be used in conjunction with 233 Computational Techniques István Maros Department of Computing Imperial College London V29e January 2008 CONTENTS i Contents 1 Introduction 1 2 Computation

More information

Numerical Methods I Eigenvalue Problems

Numerical Methods I Eigenvalue Problems Numerical Methods I Eigenvalue Problems Aleksandar Donev Courant Institute, NYU 1 donev@courant.nyu.edu 1 MATH-GA 2011.003 / CSCI-GA 2945.003, Fall 2014 October 2nd, 2014 A. Donev (Courant Institute) Lecture

More information

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88

Math Camp Lecture 4: Linear Algebra. Xiao Yu Wang. Aug 2010 MIT. Xiao Yu Wang (MIT) Math Camp /10 1 / 88 Math Camp 2010 Lecture 4: Linear Algebra Xiao Yu Wang MIT Aug 2010 Xiao Yu Wang (MIT) Math Camp 2010 08/10 1 / 88 Linear Algebra Game Plan Vector Spaces Linear Transformations and Matrices Determinant

More information

Linear Algebra. Carleton DeTar February 27, 2017

Linear Algebra. Carleton DeTar February 27, 2017 Linear Algebra Carleton DeTar detar@physics.utah.edu February 27, 2017 This document provides some background for various course topics in linear algebra: solving linear systems, determinants, and finding

More information

MATH 532: Linear Algebra

MATH 532: Linear Algebra MATH 532: Linear Algebra Chapter 5: Norms, Inner Products and Orthogonality Greg Fasshauer Department of Applied Mathematics Illinois Institute of Technology Spring 2015 fasshauer@iit.edu MATH 532 1 Outline

More information

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors

Chapter 7. Canonical Forms. 7.1 Eigenvalues and Eigenvectors Chapter 7 Canonical Forms 7.1 Eigenvalues and Eigenvectors Definition 7.1.1. Let V be a vector space over the field F and let T be a linear operator on V. An eigenvalue of T is a scalar λ F such that there

More information

6.4 Krylov Subspaces and Conjugate Gradients

6.4 Krylov Subspaces and Conjugate Gradients 6.4 Krylov Subspaces and Conjugate Gradients Our original equation is Ax = b. The preconditioned equation is P Ax = P b. When we write P, we never intend that an inverse will be explicitly computed. P

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University September 22, 2005 0 Preface This collection of ten

More information

Eigenvalues and eigenvectors

Eigenvalues and eigenvectors Chapter 6 Eigenvalues and eigenvectors An eigenvalue of a square matrix represents the linear operator as a scaling of the associated eigenvector, and the action of certain matrices on general vectors

More information

Numerical Linear Algebra

Numerical Linear Algebra University of Alabama at Birmingham Department of Mathematics Numerical Linear Algebra Lecture Notes for MA 660 (1997 2014) Dr Nikolai Chernov April 2014 Chapter 0 Review of Linear Algebra 0.1 Matrices

More information

What is it we are looking for in these algorithms? We want algorithms that are

What is it we are looking for in these algorithms? We want algorithms that are Fundamentals. Preliminaries The first question we want to answer is: What is computational mathematics? One possible definition is: The study of algorithms for the solution of computational problems in

More information

LinGloss. A glossary of linear algebra

LinGloss. A glossary of linear algebra LinGloss A glossary of linear algebra Contents: Decompositions Types of Matrices Theorems Other objects? Quasi-triangular A matrix A is quasi-triangular iff it is a triangular matrix except its diagonal

More information

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v )

Ir O D = D = ( ) Section 2.6 Example 1. (Bottom of page 119) dim(v ) = dim(l(v, W )) = dim(v ) dim(f ) = dim(v ) Section 3.2 Theorem 3.6. Let A be an m n matrix of rank r. Then r m, r n, and, by means of a finite number of elementary row and column operations, A can be transformed into the matrix ( ) Ir O D = 1 O

More information

Matrix Algebra for Engineers Jeffrey R. Chasnov

Matrix Algebra for Engineers Jeffrey R. Chasnov Matrix Algebra for Engineers Jeffrey R. Chasnov The Hong Kong University of Science and Technology The Hong Kong University of Science and Technology Department of Mathematics Clear Water Bay, Kowloon

More information

Notes on Eigenvalues, Singular Values and QR

Notes on Eigenvalues, Singular Values and QR Notes on Eigenvalues, Singular Values and QR Michael Overton, Numerical Computing, Spring 2017 March 30, 2017 1 Eigenvalues Everyone who has studied linear algebra knows the definition: given a square

More information

A Brief Outline of Math 355

A Brief Outline of Math 355 A Brief Outline of Math 355 Lecture 1 The geometry of linear equations; elimination with matrices A system of m linear equations with n unknowns can be thought of geometrically as m hyperplanes intersecting

More information

Chapter 5 Eigenvalues and Eigenvectors

Chapter 5 Eigenvalues and Eigenvectors Chapter 5 Eigenvalues and Eigenvectors Outline 5.1 Eigenvalues and Eigenvectors 5.2 Diagonalization 5.3 Complex Vector Spaces 2 5.1 Eigenvalues and Eigenvectors Eigenvalue and Eigenvector If A is a n n

More information

Linear Algebraic Equations

Linear Algebraic Equations Linear Algebraic Equations 1 Fundamentals Consider the set of linear algebraic equations n a ij x i b i represented by Ax b j with [A b ] [A b] and (1a) r(a) rank of A (1b) Then Axb has a solution iff

More information

Lecture Notes to be used in conjunction with. 233 Computational Techniques

Lecture Notes to be used in conjunction with. 233 Computational Techniques Lecture Notes to be used in conjunction with 233 Computational Techniques István Maros Department of Computing Imperial College London V2.9e January 2008 CONTENTS i Contents 1 Introduction 1 2 Computation

More information

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.)

Index. book 2009/5/27 page 121. (Page numbers set in bold type indicate the definition of an entry.) page 121 Index (Page numbers set in bold type indicate the definition of an entry.) A absolute error...26 componentwise...31 in subtraction...27 normwise...31 angle in least squares problem...98,99 approximation

More information

Orthonormal Transformations

Orthonormal Transformations Orthonormal Transformations Tom Lyche Centre of Mathematics for Applications, Department of Informatics, University of Oslo October 25, 2010 Applications of transformation Q : R m R m, with Q T Q = I 1.

More information

B553 Lecture 5: Matrix Algebra Review

B553 Lecture 5: Matrix Algebra Review B553 Lecture 5: Matrix Algebra Review Kris Hauser January 19, 2012 We have seen in prior lectures how vectors represent points in R n and gradients of functions. Matrices represent linear transformations

More information

SYLLABUS. 1 Linear maps and matrices

SYLLABUS. 1 Linear maps and matrices Dr. K. Bellová Mathematics 2 (10-PHY-BIPMA2) SYLLABUS 1 Linear maps and matrices Operations with linear maps. Prop 1.1.1: 1) sum, scalar multiple, composition of linear maps are linear maps; 2) L(U, V

More information

Math 3108: Linear Algebra

Math 3108: Linear Algebra Math 3108: Linear Algebra Instructor: Jason Murphy Department of Mathematics and Statistics Missouri University of Science and Technology 1 / 323 Contents. Chapter 1. Slides 3 70 Chapter 2. Slides 71 118

More information

Solution of Linear Equations

Solution of Linear Equations Solution of Linear Equations (Com S 477/577 Notes) Yan-Bin Jia Sep 7, 07 We have discussed general methods for solving arbitrary equations, and looked at the special class of polynomial equations A subclass

More information

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4

EIGENVALUE PROBLEMS. EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS EIGENVALUE PROBLEMS p. 1/4 EIGENVALUE PROBLEMS p. 2/4 Eigenvalues and eigenvectors Let A C n n. Suppose Ax = λx, x 0, then x is a (right) eigenvector of A, corresponding to the eigenvalue

More information

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A =

Matrices and Vectors. Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = 30 MATHEMATICS REVIEW G A.1.1 Matrices and Vectors Definition of Matrix. An MxN matrix A is a two-dimensional array of numbers A = a 11 a 12... a 1N a 21 a 22... a 2N...... a M1 a M2... a MN A matrix can

More information

The following definition is fundamental.

The following definition is fundamental. 1. Some Basics from Linear Algebra With these notes, I will try and clarify certain topics that I only quickly mention in class. First and foremost, I will assume that you are familiar with many basic

More information

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET

IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET IMPORTANT DEFINITIONS AND THEOREMS REFERENCE SHEET This is a (not quite comprehensive) list of definitions and theorems given in Math 1553. Pay particular attention to the ones in red. Study Tip For each

More information

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel

LECTURE NOTES ELEMENTARY NUMERICAL METHODS. Eusebius Doedel LECTURE NOTES on ELEMENTARY NUMERICAL METHODS Eusebius Doedel TABLE OF CONTENTS Vector and Matrix Norms 1 Banach Lemma 20 The Numerical Solution of Linear Systems 25 Gauss Elimination 25 Operation Count

More information

1 9/5 Matrices, vectors, and their applications

1 9/5 Matrices, vectors, and their applications 1 9/5 Matrices, vectors, and their applications Algebra: study of objects and operations on them. Linear algebra: object: matrices and vectors. operations: addition, multiplication etc. Algorithms/Geometric

More information

Numerical Methods in Matrix Computations

Numerical Methods in Matrix Computations Ake Bjorck Numerical Methods in Matrix Computations Springer Contents 1 Direct Methods for Linear Systems 1 1.1 Elements of Matrix Theory 1 1.1.1 Matrix Algebra 2 1.1.2 Vector Spaces 6 1.1.3 Submatrices

More information

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show

Problem Set (T) If A is an m n matrix, B is an n p matrix and D is a p s matrix, then show MTH 0: Linear Algebra Department of Mathematics and Statistics Indian Institute of Technology - Kanpur Problem Set Problems marked (T) are for discussions in Tutorial sessions (T) If A is an m n matrix,

More information

ESSENTIALS OF COMPUTATIONAL LINEAR ALGEBRA Supplement for MSc Optimization 1

ESSENTIALS OF COMPUTATIONAL LINEAR ALGEBRA Supplement for MSc Optimization 1 ESSENTIALS OF COMPUTATIONAL LINEAR ALGEBRA Supplement for MSc Optimization István Maros Department of Computer Science Faculty of Information Technology University of Pannonia, Veszprém maros@dcsuni-pannonhu

More information

MATH 583A REVIEW SESSION #1

MATH 583A REVIEW SESSION #1 MATH 583A REVIEW SESSION #1 BOJAN DURICKOVIC 1. Vector Spaces Very quick review of the basic linear algebra concepts (see any linear algebra textbook): (finite dimensional) vector space (or linear space),

More information

5.6. PSEUDOINVERSES 101. A H w.

5.6. PSEUDOINVERSES 101. A H w. 5.6. PSEUDOINVERSES 0 Corollary 5.6.4. If A is a matrix such that A H A is invertible, then the least-squares solution to Av = w is v = A H A ) A H w. The matrix A H A ) A H is the left inverse of A and

More information

CHAPTER 11. A Revision. 1. The Computers and Numbers therein

CHAPTER 11. A Revision. 1. The Computers and Numbers therein CHAPTER A Revision. The Computers and Numbers therein Traditional computer science begins with a finite alphabet. By stringing elements of the alphabet one after another, one obtains strings. A set of

More information

Matrix Factorization and Analysis

Matrix Factorization and Analysis Chapter 7 Matrix Factorization and Analysis Matrix factorizations are an important part of the practice and analysis of signal processing. They are at the heart of many signal-processing algorithms. Their

More information

Scientific Computing: An Introductory Survey

Scientific Computing: An Introductory Survey Scientific Computing: An Introductory Survey Chapter 4 Eigenvalue Problems Prof. Michael T. Heath Department of Computer Science University of Illinois at Urbana-Champaign Copyright c 2002. Reproduction

More information

CS 246 Review of Linear Algebra 01/17/19

CS 246 Review of Linear Algebra 01/17/19 1 Linear algebra In this section we will discuss vectors and matrices. We denote the (i, j)th entry of a matrix A as A ij, and the ith entry of a vector as v i. 1.1 Vectors and vector operations A vector

More information

Review Questions REVIEW QUESTIONS 71

Review Questions REVIEW QUESTIONS 71 REVIEW QUESTIONS 71 MATLAB, is [42]. For a comprehensive treatment of error analysis and perturbation theory for linear systems and many other problems in linear algebra, see [126, 241]. An overview of

More information

Index. for generalized eigenvalue problem, butterfly form, 211

Index. for generalized eigenvalue problem, butterfly form, 211 Index ad hoc shifts, 165 aggressive early deflation, 205 207 algebraic multiplicity, 35 algebraic Riccati equation, 100 Arnoldi process, 372 block, 418 Hamiltonian skew symmetric, 420 implicitly restarted,

More information

Linear Algebra Lecture Notes-II

Linear Algebra Lecture Notes-II Linear Algebra Lecture Notes-II Vikas Bist Department of Mathematics Panjab University, Chandigarh-64 email: bistvikas@gmail.com Last revised on March 5, 8 This text is based on the lectures delivered

More information

Throughout these notes we assume V, W are finite dimensional inner product spaces over C.

Throughout these notes we assume V, W are finite dimensional inner product spaces over C. Math 342 - Linear Algebra II Notes Throughout these notes we assume V, W are finite dimensional inner product spaces over C 1 Upper Triangular Representation Proposition: Let T L(V ) There exists an orthonormal

More information

Linear Algebra Highlights

Linear Algebra Highlights Linear Algebra Highlights Chapter 1 A linear equation in n variables is of the form a 1 x 1 + a 2 x 2 + + a n x n. We can have m equations in n variables, a system of linear equations, which we want to

More information

Review problems for MA 54, Fall 2004.

Review problems for MA 54, Fall 2004. Review problems for MA 54, Fall 2004. Below are the review problems for the final. They are mostly homework problems, or very similar. If you are comfortable doing these problems, you should be fine on

More information

This can be accomplished by left matrix multiplication as follows: I

This can be accomplished by left matrix multiplication as follows: I 1 Numerical Linear Algebra 11 The LU Factorization Recall from linear algebra that Gaussian elimination is a method for solving linear systems of the form Ax = b, where A R m n and bran(a) In this method

More information