Numerical Linear Algebra


1 Numerical Linear Algebra
R. J. Renka
Department of Computer Science & Engineering
University of North Texas
02/03/2015

2 Notation and Terminology
$\mathbb{R}^n$ is the Euclidean $n$-dimensional linear space over the set of real numbers $\mathbb{R}$, for $n \ge 1$. Matrices are denoted by uppercase letters: $A \in \mathbb{R}^{m \times n}$. If $m = n$, $A$ is square and has order $n$. Vectors are denoted by lowercase letters: $x \in \mathbb{R}^n$. A vector is an $n$ by 1 matrix for $n \ge 1$. Scalars are denoted by Greek letters, $\alpha \in \mathbb{R}$, or by subscripted letters: $x_i$, $a_{ij} = A_{ij}$.

3 Notation and Terminology continued
The transpose of matrix $A$ is denoted $A^T$ and defined by $(A^T)_{ij} = a_{ji}$ for all $i$ and $j$. Note that $(\alpha A)^T = \alpha A^T$ and $(A + B)^T = A^T + B^T$. The identity matrix $I = I_n$ of order $n$ is defined by $I_{ij} = \delta_{ij}$, where $\delta_{ij}$ is the Kronecker delta function. The inverse of an order-$n$ matrix $A$ is the matrix $A^{-1}$ defined by $A A^{-1} = A^{-1} A = I$. The scalar product = inner product = dot product of $n$-vectors $x$ and $y$ is $x^T y$.

4 Multiplication
Scalar multiplication is defined by $(\alpha A)_{ij} = \alpha a_{ij}$. Matrix multiplication is defined by
$$ c_{ij} = \sum_{k=1}^{m} a_{ik} b_{kj} $$
for $i = 1, \ldots, l$ and $j = 1, \ldots, n$, where $A \in \mathbb{R}^{l \times m}$, $B \in \mathbb{R}^{m \times n}$, and $C = AB \in \mathbb{R}^{l \times n}$.
Theorem: Matrix multiplication is associative but not commutative. In fact, $AB$ may be well-defined while $BA$ is not. Suppose $A \in \mathbb{R}^{l \times m}$ and $B \in \mathbb{R}^{m \times n}$. Then $A^T \in \mathbb{R}^{m \times l}$, $B^T \in \mathbb{R}^{n \times m}$, $B^T A^T \in \mathbb{R}^{n \times l}$, and $(AB)^T \in \mathbb{R}^{n \times l}$, but $A^T B^T$ is well-defined if and only if $l = n$.
Theorem: $(AB)^T = B^T A^T$.
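As a quick numerical sanity check of these identities (not part of the slides), the following MATLAB/Octave sketch builds random rectangular matrices with arbitrarily chosen sizes and verifies associativity and the transpose-of-a-product rule.

    % Check associativity and (AB)^T = B^T A^T on random matrices.
    l = 4; m = 3; n = 5;
    A = rand(l, m);  B = rand(m, n);  C = rand(n, 2);
    assoc_err  = norm((A*B)*C - A*(B*C));   % about machine precision
    transp_err = norm((A*B)' - B'*A');      % exactly zero here
    fprintf('associativity error:  %g\n', assoc_err);
    fprintf('transpose rule error: %g\n', transp_err);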

5 Special cases of Matrix multiplication
A linear system of equations involves a matrix-vector product: $Ax = b$ for $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n = \mathbb{R}^{n \times 1}$, $b \in \mathbb{R}^m = \mathbb{R}^{m \times 1}$. Note that $Ax = b$ iff $x^T A^T = b^T$. The expression $xA$ is nonsense. For $x, y \in \mathbb{R}^n$, the inner product is $x^T y$, where $x^T \in \mathbb{R}^{1 \times n}$, $y \in \mathbb{R}^{n \times 1}$, and $x^T y \in \mathbb{R}^{1 \times 1} = \mathbb{R}$; the outer product (a rank-1 matrix) is $x y^T \in \mathbb{R}^{n \times n}$.
Theorem: $(AB)^{-1} = B^{-1} A^{-1}$ for invertible matrices $A$ and $B$.

6 Operation Counts
In numerical linear algebra, algorithms are usually compared for complexity by counting multiplies. The number of adds is about the same as the number of multiplies. Asymptotic operation counts are too crude.
Exercise: Assign operation counts to the following expressions. Assume $N$-vectors and $N \times N$ matrices.
$\alpha x$, $\alpha A$, $x^T y$, $x y^T$, $Ax$, $x y^T z$, $AB$, $A^2$, $\alpha I$, $ABx$, $A^T A$

7 Linear Systems
A system of $m$ equations in $n$ unknowns is written as $Ax = b$, where $A \in \mathbb{R}^{m \times n}$, $x \in \mathbb{R}^n$, $b \in \mathbb{R}^m$.
Theorem: There is a 1-1 correspondence (isomorphism) between a matrix $A \in \mathbb{R}^{m \times n}$ and a linear transformation $L : \mathbb{R}^n \to \mathbb{R}^m$ s.t. $L(\alpha x) = \alpha Lx$ and $L(x + y) = Lx + Ly$ for all $x, y \in \mathbb{R}^n$, $\alpha \in \mathbb{R}$. $A$ and $L$ are often used interchangeably.
Defn: A set of vectors $\{v_1, v_2, \ldots, v_k\} \subset \mathbb{R}^n$ is linearly independent iff
$$ \sum_{i=1}^{k} \alpha_i v_i = 0 \;\Rightarrow\; \alpha_1 = \alpha_2 = \cdots = \alpha_k = 0. $$

8 Existence and Uniqueness
Let $R(A)$ denote the range of $A$ (as a linear transformation). This coincides with the span (set of all linear combinations) of the columns of $A$ and is a linear subspace of $\mathbb{R}^m$ (closed under linear combinations). If $m < n$ the system is underdetermined and has infinitely many solutions for $b \in R(A)$. If $m > n$ the system is overdetermined and has no solution (unless there are at most $n$ linearly independent rows in the augmented system). This is a linear least squares problem: minimize $\|Ax - b\|_2^2$ over $x \in \mathbb{R}^n$.

9 Existence and Uniqueness continued
Theorem: For $m = n$, the following are equivalent.
1. $Ax = b$ has a unique solution $x$ for all $b \in \mathbb{R}^n$.
2. $A$ is invertible (nonsingular): $\exists B$ s.t. $AB = BA = I$.
3. $\det(A) \ne 0$.
4. $Ax = 0 \Rightarrow x = 0$. Equivalently:
   1. $A$ has linearly independent columns.
   2. $N(A) = \{0\}$: $Ax = 0 \Rightarrow x = 0$.
   3. $\mathrm{null}(A) = \dim(N(A)) = 0$.
   4. $A$ has no zero eigenvalue: if $\exists x \ne 0$ s.t. $Ax = \lambda x$, then $\lambda \ne 0$.
   5. As a linear operator, $A$ is one-to-one.
   6. A solution to $Ax = b$ is unique.
5. $R(A) \equiv \{Ax : x \in \mathbb{R}^n\} = \mathbb{R}^n$. Equivalently:
   1. The columns of $A$ span $\mathbb{R}^n$.
   2. $\mathrm{rank}(A) = \dim(R(A)) = n$.
   3. As a linear operator, $A$ is onto: $\exists x \in \mathbb{R}^n$ s.t. $Ax = b$ for every $b \in \mathbb{R}^n$.
   4. There exists a solution to $Ax = b$ for all $b \in \mathbb{R}^n$.
6. $A^T$ is invertible: $(A^T)^{-1} = (A^{-1})^T \equiv A^{-T}$.

10 Existence and Uniqueness continued
The above list can be expanded by replacing $A$ by $A^T$, giving additional characterizations such as
7. $A$ has linearly independent rows.
8. The rows of $A$ span $\mathbb{R}^n$.
Recall that a function is invertible if and only if it is both one-to-one and onto. In the case of a linear operator, the three properties invertible, one-to-one, and onto are equivalent. Also, uniqueness of a solution is equivalent to existence of a solution for all $b$. In the case $n = 1$ all of the above characterizations of invertibility reduce to $A \ne 0$.

11 Existence and Uniqueness continued
In the case $n = 2$ there is a simple geometric interpretation. The equations are $a_1^T x = b_1$ and $a_2^T x = b_2$, where $a_1^T$ and $a_2^T$ are the rows of $A$. The equations correspond to lines in the $x_1 x_2$ plane, and a solution is a point of intersection of the lines. Suppose $A$ is singular. Then $a_1 = \alpha a_2$ for some $\alpha$. There are two possibilities:
1. $b_1 = \alpha b_2$, the lines coincide, and there are infinitely many solutions, or
2. $b_1 \ne \alpha b_2$, the lines are distinct, and there is no solution.
Note the sensitivity of the solution to perturbations of $A$ and $b$ when $A$ is nearly singular.

12 Cramer's Rule
Cramer's Rule is a good method for not solving a linear system $Ax = b$ of order $n$:
$$ x_i = |A_i| / |A|, \quad (i = 1, \ldots, n), $$
where $A_i$ is the matrix obtained by replacing column $i$ of $A$ by $b$.
Operation count: The determinant can be expressed as
$$ |A| = \sum_{\sigma \in S_n} \mathrm{sign}(\sigma)\, a_{\sigma(1),1}\, a_{\sigma(2),2} \cdots a_{\sigma(n),n}, $$
where $S_n$ is the set of permutations of $\{1, 2, \ldots, n\}$, and $\mathrm{sign}(\sigma) = (-1)^m$, $m$ being the number of transpositions in a decomposition of $\sigma$. The number of permutations is $n!$, and the number of multiplies is $(n - 1)\, n!$ for each of the $n + 1$ determinants:
$$ M = (n + 1)(n!)(n - 1). $$

13 Timing a Linear Solver
Suppose a multiply takes a nanosecond (a 1 Gigaflops processor). Then, dividing $M$ by the number of multiplies per year, we find that, using Cramer's Rule, we can solve a system of order $n = 30$ in a mere $7.6 \times 10^{18}$ years or so. Gaussian elimination, on the other hand, has an operation count of
$$ M = n^3/3 + n^2 - n/3 $$
multiplies, resulting in a computation time under 10 microseconds for $n = 30$.
[The slide's table of values of $n$, $M$, and the corresponding solve times, ranging from microseconds through seconds to hours, did not survive transcription.]

14 Matrix Inverse
Using the matrix inverse is a second method for not solving a linear system. The most efficient method for computing $A^{-1}$ consists of the following three steps.
1. Compute an LU factorization $A = LU$ by Gaussian elimination: $n^3/3$ multiplies.
2. Compute $L^{-1}$ and $U^{-1}$: $n^3/6 + n^3/6 = n^3/3$ multiplies.
3. Compute $A^{-1} = U^{-1} L^{-1}$: $n^3/3$ multiplies.
The total operation count is $n^3$ multiplies to compute $A^{-1}$, and then $n^2$ multiplies for the matrix-vector product $x = A^{-1} b$. This is three times as much work as is necessary to compute $x = A^{-1} b = (U^{-1} L^{-1}) b = U^{-1}(L^{-1} b)$, and consequently involves more accumulated roundoff error. For example, with four decimal digits of precision, $7x = 21$ gives $\mathrm{fl}(x) = \mathrm{fl}((7^{-1})(21)) = \mathrm{fl}((.1429)(21)) = 3.001$, whereas $\mathrm{fl}(21/7) = 3.000$.
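The same effect is easy to observe in floating point. The following hedged MATLAB/Octave sketch (using an ill-conditioned Hilbert matrix purely as an illustration, not an example from the slides) compares the residual of x = A\b against that of x = inv(A)*b.

    % Solving Ax = b: backslash (LU-based) vs. an explicit inverse.
    n = 12;
    A = hilb(n);                 % ill-conditioned test matrix
    x_true = ones(n, 1);
    b = A * x_true;
    x_slash = A \ b;             % LU factorization + two triangular solves
    x_inv   = inv(A) * b;        % explicit inverse, then matrix-vector product
    fprintf('residual, backslash: %g\n', norm(A*x_slash - b));
    fprintf('residual, inverse:   %g\n', norm(A*x_inv   - b));

Typically the backslash residual is noticeably smaller, in addition to the factor-of-three savings in work.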

15 Matrix Inverse continued
It is often mistakenly thought that multiple right hand sides justify the additional cost of computing a matrix inverse. If, for example, we need to evaluate $C = A^{-1} B$, we can partition $B$ and $C$ by columns to obtain $n$ linear systems
$$ A c_j = b_j, \quad (j = 1, \ldots, n), $$
for $B = [b_1\, b_2\, \cdots\, b_n]$, $C = [c_1\, c_2\, \cdots\, c_n]$. Given the LU factorization of $A$, the cost of solving each linear system is the same as it would be with a matrix inverse: $n^2$ multiplies. This is the reason for computing the factorization, and then solving two triangular systems, rather than carrying along the right hand side during Gaussian elimination applied to an augmented system. The right hand side is treated separately from the matrix so that the factorization does not have to be repeated for a second right hand side with the same matrix.
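A hedged sketch of this reuse pattern in MATLAB/Octave: factor once with the built-in lu, then solve each new right-hand side with two triangular solves. The matrix and right-hand sides below are arbitrary illustrations, not the slides' data.

    % Factor once, then reuse L, U, P for every right-hand side.
    n = 500;
    A = rand(n) + n*eye(n);        % arbitrary well-conditioned test matrix
    [L, U, P] = lu(A);             % O(n^3) work, done once
    B = rand(n, 3);                % three right-hand sides
    X = zeros(n, 3);
    for j = 1:size(B, 2)
        y = L \ (P * B(:, j));     % forward substitution, O(n^2)
        X(:, j) = U \ y;           % back substitution,    O(n^2)
    end
    fprintf('largest residual among the systems: %g\n', norm(A*X - B, 1));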

16 Elementary Row Operations
The following operations, applied to a system of linear equations or augmented matrix (matrix with right hand side appended as an additional column), leave the solution unaltered.
1. Scale a row (by a nonzero scalar).
2. Add one row to another.
3. Interchange a pair of rows.
The basic idea for automating Gaussian elimination is to apply the elementary row operations in a systematic fashion that zeros out all the elements below the diagonal. The solution components are then easily computed in reverse order.

17 Gaussian Elimination
Forward Elimination: elementary row operations that reduce $A$ to an upper triangular matrix $U$.
$$ \left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ a_{31} & a_{32} & a_{33} & b_3 \end{array}\right] \rightarrow \left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22} & a_{23} & b_2 \\ 0 & a_{32} & a_{33} & b_3 \end{array}\right] \quad \begin{array}{l} \mu_{21} = -a_{21}/a_{11},\ a_{22} = a_{22} + \mu_{21} a_{12}, \ldots \\ \mu_{31} = -a_{31}/a_{11},\ a_{32} = a_{32} + \mu_{31} a_{12}, \ldots \end{array} $$
$$ \rightarrow \left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ 0 & a_{22} & a_{23} & b_2 \\ 0 & 0 & a_{33} & b_3 \end{array}\right] \quad \mu_{32} = -a_{32}/a_{22},\ a_{33} = a_{33} + \mu_{32} a_{23}, \ldots $$
(Entries are overwritten in place, so $a_{22}$, $a_{32}$, etc. denote the updated values.)

18 Gaussian Elimination continued
Back Substitution: solution of the upper triangular system with the ordering of the unknowns reversed.
$$ x_3 = b_3/a_{33}, \qquad x_2 = (b_2 - a_{23} x_3)/a_{22}, \qquad x_1 = (b_1 - a_{12} x_2 - a_{13} x_3)/a_{11}. $$
The diagonal elements of $U$ ($a_{11}$, $a_{22}$, and $a_{33}$) are pivot elements and must be nonzero. Gaussian elimination applied to the augmented matrix is equivalent to computing an LU factorization of $A$, $A = LU$, where $L$ is unit lower triangular and $U$ is upper triangular, and solving $Ax = LUx = b$.

19 LU Factorization without Pivoting
In order to prove the above assertion, we introduce some notation. An elementary row operation can be applied to a matrix $A$ by applying the operation to the identity matrix $I$ and then left-multiplying $A$ by the resulting matrix. (A column operation is applied by right-multiplying.) For $k = 1, 2, \ldots, n-1$, define $M_k$ as the elementary lower triangular matrix that introduces zeros below $a_{kk}$. For the order-3 example we have
$$ M_1 = \begin{bmatrix} 1 & 0 & 0 \\ \mu_{21} & 1 & 0 \\ \mu_{31} & 0 & 1 \end{bmatrix}, \qquad M_1 A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & a_{32} & a_{33} \end{bmatrix}, \qquad M_2 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & \mu_{32} & 1 \end{bmatrix}, $$
and $M_2 M_1 A = U$.

20 LU Factorization without Pivoting continued
We now have $U = M_{n-1} M_{n-2} \cdots M_1 A$, so that $A = LU$ if and only if
$$ L = (M_{n-1} M_{n-2} \cdots M_1)^{-1} = M_1^{-1} M_2^{-1} \cdots M_{n-1}^{-1}. $$
Note that a triangular matrix is nonsingular if and only if its diagonal elements are all nonzero, and hence $L$ is invertible since it is the product of invertible matrices. Thus, $A$ is invertible iff the pivots (the elements of $\mathrm{diag}(U)$) are nonzero. We need some additional notation:
$$ m_k^T = [0, 0, \ldots, 0, \mu_{k+1,k}, \mu_{k+2,k}, \ldots, \mu_{n,k}] \quad (k = 1, \ldots, n-1). $$
Then $M_k = I + m_k e_k^T$, where $e_k$ denotes column $k$ of $I$.

21 LU Factorization without Pivoting continued
For example,
$$ m_1 = \begin{bmatrix} 0 \\ \mu_{21} \\ \mu_{31} \end{bmatrix}, \qquad m_1 e_1^T = \begin{bmatrix} 0 \\ \mu_{21} \\ \mu_{31} \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \end{bmatrix} = \begin{bmatrix} 0 & 0 & 0 \\ \mu_{21} & 0 & 0 \\ \mu_{31} & 0 & 0 \end{bmatrix}, $$
so that
$$ M_1 = I + m_1 e_1^T = \begin{bmatrix} 1 & 0 & 0 \\ \mu_{21} & 1 & 0 \\ \mu_{31} & 0 & 1 \end{bmatrix}. $$
Since $M_k$ adds multiples of row $k$ to the following rows, its inverse must subtract the same multiples.
Lemma 1: $M_k^{-1} = (I + m_k e_k^T)^{-1} = I - m_k e_k^T$.
Proof: $(I - m_k e_k^T)(I + m_k e_k^T) = I + m_k e_k^T - m_k e_k^T - (m_k e_k^T)(m_k e_k^T) = I - m_k (e_k^T m_k) e_k^T = I$, since $e_k^T m_k = 0$.

22 LU Factorization without Pivoting continued
Lemma 2: $M_j^{-1} M_{j+1}^{-1} = (I - m_j e_j^T)(I - m_{j+1} e_{j+1}^T) = I - m_j e_j^T - m_{j+1} e_{j+1}^T$. More generally,
$$ L = M_1^{-1} M_2^{-1} \cdots M_{n-1}^{-1} = I - m_1 e_1^T - m_2 e_2^T - \cdots - m_{n-1} e_{n-1}^T. $$
Proof: $e_j^T m_k = 0$ for $k > j$.
In our example we have
$$ LU = \begin{bmatrix} 1 & 0 & 0 \\ -\mu_{21} & 1 & 0 \\ -\mu_{31} & -\mu_{32} & 1 \end{bmatrix} \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ 0 & a_{22} & a_{23} \\ 0 & 0 & a_{33} \end{bmatrix} = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix}. $$

23 LU Factorization without Pivoting continued
Algorithm 1: overwrite A with L − I + U.

    for k = 1:n-1
      for i = k+1:n
        A(i,k) = A(i,k)/A(k,k);   % negative multiplier
        for j = k+1:n
          A(i,j) = A(i,j) - A(i,k)*A(k,j);
        end
      end
    end

Note that Matlab matrices are stored in column-major order, and the outer loop of the update should therefore be on the column index j in order to maintain spatial locality (and thereby minimize cache misses and page faults). The loops should also be vectorized, as in the sketch below.
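Following that note, here is one possible vectorized form of Algorithm 1 in MATLAB/Octave. It is a hedged sketch, not the author's code: the function name lu_nopivot is invented, no pivoting is performed, and all pivots are assumed nonzero.

    function A = lu_nopivot(A)
    % Overwrite A with L - I + U (LU factorization, no pivoting), vectorized.
      n = size(A, 1);
      for k = 1:n-1
        A(k+1:n, k) = A(k+1:n, k) / A(k, k);                             % column k of L
        A(k+1:n, k+1:n) = A(k+1:n, k+1:n) - A(k+1:n, k) * A(k, k+1:n);   % rank-1 update
      end
    end

The factors can then be recovered from the overwritten array as L = tril(A,-1) + eye(n) and U = triu(A).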

24 Operation Count
Counting the divides as multiplies (or as multiplies by prestored reciprocals of the pivot elements), the number of multiplies is
$$ M = \sum_{k=1}^{n-1} \sum_{i=k+1}^{n} [1 + (n - k)] = \sum_{k=1}^{n-1} (n + 1 - k)(n - k) = \sum_{j=1}^{n-1} (j + 1) j, $$
where we have substituted $j = n - k$. Using
$$ \sum_{i=1}^{n} i^2 = \frac{2n^3 + 3n^2 + n}{6} = \frac{n(2n + 1)(n + 1)}{6}, $$
we get
$$ M = \sum_{j=1}^{n-1} j + \sum_{j=1}^{n-1} j^2 = \frac{(n-1)n}{2} + \frac{(n-1)(2n-1)n}{6} = \frac{n^3}{3} - \frac{n}{3}. $$

25 Back Substitution
Given the LU factorization, a solution is obtained from
$$ Ax = (LU)x = L(Ux) = b \;\Leftrightarrow\; Ly = b \text{ for } y = Ux. $$
1. Solve $Ly = b$ (unit lower triangular system).
2. Solve $Ux = y$ (upper triangular system).
Note that $Ly = b \Leftrightarrow y = L^{-1} b = M_{n-1} M_{n-2} \cdots M_1 b$; i.e., solution of the lower triangular system is precisely the set of operations applied to $b$ by forward elimination on the augmented matrix. Also, solution of $Ux = y$ is the same back substitution operation used in the case of an augmented matrix. Hence the equivalence.

26 Back Substitution Algorithms
Algorithm 2: overwrite b with $L^{-1} b$.

    for k = 1:n-1
      for i = k+1:n
        b(i) = b(i) - A(i,k)*b(k);
      end
    end

The operation count is
$$ \sum_{k=1}^{n-1} (n - k) = \sum_{j=1}^{n-1} j = \frac{(n-1)n}{2} = \frac{n^2}{2} - \frac{n}{2}. $$

27 Back Substitution Algorithms continued
Algorithm 3: overwrite b with $U^{-1} b$.

    b(n) = b(n)/A(n,n);
    for k = n-1:-1:1
      t = 0;
      for j = k+1:n
        t = t + A(k,j)*b(j);
      end
      b(k) = (b(k) - t)/A(k,k);
    end

The operation count is
$$ \sum_{k=1}^{n} (n - k + 1) = n + \frac{(n-1)n}{2} = \frac{n^2}{2} + \frac{n}{2}, $$
so that the total number of multiplies for the two triangular solves is
$$ \left(\frac{n^2}{2} - \frac{n}{2}\right) + \left(\frac{n^2}{2} + \frac{n}{2}\right) = n^2. $$

28 Back Substitution Algorithms continued
The above algorithm with the outer loop on the column index (not so easily derived) is as follows.
Algorithm 3': overwrite b with $U^{-1} b$.

    for j = n:-1:2
      b(j) = b(j)/A(j,j);
      for i = 1:j-1
        b(i) = b(i) - A(i,j)*b(j);
      end
    end
    b(1) = b(1)/A(1,1);
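To see how Algorithms 1-3 fit together, the following hedged MATLAB/Octave sketch packages them into a small solver. The function name solve_nopivot is invented, the loops are vectorized as suggested earlier, and no pivoting is performed, so the matrix is assumed to need none.

    function x = solve_nopivot(A, b)
    % Solve Ax = b by LU without pivoting (Algorithms 1-3); a sketch only.
      n = size(A, 1);
      for k = 1:n-1                                    % Algorithm 1: A <- L - I + U
        A(k+1:n, k) = A(k+1:n, k) / A(k, k);
        A(k+1:n, k+1:n) = A(k+1:n, k+1:n) - A(k+1:n, k) * A(k, k+1:n);
      end
      for k = 1:n-1                                    % Algorithm 2: b <- inv(L)*b
        b(k+1:n) = b(k+1:n) - A(k+1:n, k) * b(k);
      end
      for j = n:-1:2                                   % Algorithm 3': b <- inv(U)*b
        b(j) = b(j) / A(j, j);
        b(1:j-1) = b(1:j-1) - A(1:j-1, j) * b(j);
      end
      b(1) = b(1) / A(1, 1);
      x = b;
    end

A quick check on a diagonally dominant test matrix, A = rand(5) + 5*eye(5); b = rand(5,1); norm(solve_nopivot(A,b) - A\b), should give a result near machine precision.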

29 Stability
Consider applying Gaussian elimination to the following matrix:
$$ A = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}. $$
The zero pivot element leads to failure, with multiplier $\mu_{21} = -1/0 = -\infty$, even though the matrix is perfectly conditioned, and $Ax = b$ has the solution $x_1 = b_2$, $x_2 = b_1$. Since a zero pivot leads to failure, a small pivot must lead to numerical instability. In fact, small pivots lead to cancellation error caused by large multipliers (relative to $\|A\|$ and $\|b\|$). Consider the order-2 example:
$$ \left[\begin{array}{cc|c} a_{11} & a_{12} & b_1 \\ a_{21} & a_{22} & b_2 \end{array}\right] \rightarrow \left[\begin{array}{cc|c} a_{11} & a_{12} & b_1 \\ 0 & a_{22} + \mu a_{12} & b_2 + \mu b_1 \end{array}\right] \quad \text{for } \mu = -a_{21}/a_{11}. $$
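A standard small-pivot illustration of this effect (a hedged MATLAB/Octave sketch, not taken from the slides): with a pivot of 1e-20 and no row interchange, the huge multiplier wipes out $x_1$, while partial pivoting gives full accuracy.

    % Effect of a tiny pivot without row interchange (illustrative only).
    eps_piv = 1e-20;
    A = [eps_piv 1; 1 1];
    b = [1; 2];                       % exact solution is very close to [1; 1]

    m   = A(2,1) / A(1,1);            % huge multiplier 1e20
    u22 = A(2,2) - m * A(1,2);        % fl(1 - 1e20) = -1e20: the 1 is lost
    y2  = b(2)   - m * b(1);
    x2  = y2 / u22;
    x1  = (b(1) - A(1,2)*x2) / A(1,1);
    disp([x1 x2])                     % x1 comes out 0 instead of about 1

    disp((A \ b)')                    % backslash uses partial pivoting: about [1 1]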

30 Stability continued
Back substitution gives
$$ x_2 = \frac{b_2 + \mu b_1}{a_{22} + \mu a_{12}}, \qquad x_1 = \frac{b_1 - a_{12} x_2}{a_{11}}. $$
A large value of $|\mu|$ implies
$$ \mathrm{fl}(x_2) \approx b_1/a_{12} \;\Rightarrow\; \mathrm{fl}(a_{12} x_2) \approx b_1, $$
implying cancellation error in $\mathrm{fl}(x_1)$. In general, $x_n$ is computed accurately, but computation of the remaining solution components involves cancellation error.
Partial Pivoting: row interchanges chosen to maximize the pivot element magnitudes. Only full pivoting, involving both row and column interchanges (and requiring reordering of solution components), can guarantee stability, but partial pivoting is almost always sufficient in practice (unless the matrix is poorly scaled).

31 Orthogonal Matrix
Defn: An orthogonal matrix is a square matrix $Q$ with orthonormal columns: $q_i^T q_j = \delta_{ij}$ $(i, j = 1, \ldots, n)$ for $Q = [q_1\, q_2\, \cdots\, q_n]$; i.e., $Q^T Q = I$. Equivalently, an orthogonal matrix is one whose transpose is its inverse. Note that $AB = I$ does not imply that $B = A^{-1}$ unless $A$ and $B$ are square. The complex analogue of an orthogonal matrix is a unitary matrix. An orthogonal matrix $Q$ has the following properties.
- $Q^T$ is orthogonal, and thus $Q$ has orthonormal rows.
- Inner products are preserved: $(Qx)^T (Qy) = x^T y$ for all $x, y \in \mathbb{R}^n$.
- Euclidean norms are preserved: $\|Qx\|_2 = \|x\|_2$ for all $x \in \mathbb{R}^n$.
- The eigenvalues of $Q$ have magnitude 1.
- $Q$ is perfectly conditioned: $\kappa_2(Q) = \sqrt{\rho(Q^T Q)}\, \sqrt{\rho(Q Q^T)} = 1$.
- $\det(Q) = 1$ (rotation) or $\det(Q) = -1$ (reflection).
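These properties are easy to confirm numerically. The hedged sketch below generates a random orthogonal Q from a QR factorization (one common trick; the matrix itself is arbitrary) and checks several of the listed properties.

    % Generate a random orthogonal matrix and verify some of its properties.
    n = 6;
    [Q, ~] = qr(rand(n));             % Q has orthonormal columns
    x = rand(n, 1);  y = rand(n, 1);
    fprintf('|| Q''Q - I ||        = %g\n', norm(Q'*Q - eye(n)));
    fprintf('inner product change = %g\n', abs((Q*x)'*(Q*y) - x'*y));
    fprintf('norm change          = %g\n', abs(norm(Q*x) - norm(x)));
    fprintf('eigenvalue moduli    = %s\n', mat2str(abs(eig(Q))', 3));
    fprintf('cond_2(Q)            = %g\n', cond(Q));
    fprintf('det(Q)               = %g\n', det(Q));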

32 Pivoting
Defn: An elementary permutation matrix $P_i$ is obtained by interchanging a pair of rows of the identity matrix. $P_i$ is symmetric and orthogonal, so that $P_i^T P_i = P_i^2 = I$. A permutation matrix $P$ is a product of elementary permutations. This is analogous to a permutation of the integers 1:n being a product of transpositions. $P$ is not symmetric in general, but it is orthogonal: permuting the rows of an orthogonal matrix leaves them orthonormal. An LU factorization with partial pivoting produces
$$ LU = PA \quad \text{for } P = P_{n-1} P_{n-2} \cdots P_2 P_1, $$
where $P_k$ is the elementary permutation that interchanges row $k$ with row $m$, for some $m \ge k$ chosen to maximize the magnitude of the pivot element.

33 LU with Partial Pivoting: Example 1
[A worked order-3 example: a matrix $A$ and right hand side $b$ with solution $x$, followed by the elementary permutations and elimination matrices $P_1$, $M_1$, $P_2$, $M_2$. The numerical entries did not survive transcription.]

34 LU with Partial Pivoting: Example 1 continued
By storing the negative multipliers below the diagonal, we have overwritten the array with L − I + U, from which we can read off $L$ and $U$, and compute $P = P_2 P_1$. We can then compute the products $LU$ and $PA$ to verify that $LU = PA$. [The numerical entries of $L$, $U$, and $P$ for this example did not survive transcription.]
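Since the example's numbers were not preserved, here is a hedged MATLAB/Octave sketch of the same verification on an arbitrary test matrix (not the one from the slides), using the built-in lu:

    % Verify LU = PA for an LU factorization with partial pivoting.
    A = [2 1 1; 4 3 3; 8 7 9];        % arbitrary test matrix
    [L, U, P] = lu(A);                % MATLAB/Octave return P*A = L*U
    disp(L); disp(U); disp(P);
    fprintf('|| LU - PA || = %g\n', norm(L*U - P*A));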

35 LU with Partial Pivoting: Example 2
Omitting the steps, the factors $L$, $U$, and $P$ are displayed, and we then obtain the solution by back substitution; i.e., we solve $Ux = L^{-1} P b$, where $L^{-1} P b$ was computed by treating the augmented matrix. [The numerical entries of $A$, $b$, $L$, $U$, $P$, and $x$ for this example did not survive transcription.]

36 LU with Partial Pivoting
Theorem: The algorithm produces $L$, $U$, and $P$ such that $LU = PA$.
Proof: We need an expression for the unit lower triangular matrix $L$ obtained by storing the negative multipliers below the diagonal and applying each permutation to the entire row (including the multipliers). We have negative multipliers in $M_k^{-1}$, and they are interchanged (along with the rest of their rows) in $P_{k+1} M_k^{-1}$. The trick is to then right-multiply by $P_{k+1}$ in order to restore the correct structure in $P_{k+1} M_k^{-1} P_{k+1}$. In the order-3 case, where $P_2$ interchanges rows 2 and 3, we have
$$ L = (P_2 M_1^{-1} P_2) M_2^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ -\mu_{31} & 1 & 0 \\ -\mu_{21} & -\mu_{32} & 1 \end{bmatrix} \quad \text{and} \quad U = M_2 P_2 M_1 P_1 A, $$
so that
$$ LU = P_2 M_1^{-1} P_2\, M_2^{-1} M_2\, P_2 M_1 P_1 A = P_2 P_1 A = PA. $$

37 LU with Partial Pivoting continued
The order-5 example is as follows:
$$ L = P_4 \{ P_3 [ (P_2 M_1^{-1} P_2) M_2^{-1} ] P_3\, M_3^{-1} \} P_4\, M_4^{-1}, \qquad U = M_4 P_4 M_3 P_3 M_2 P_2 M_1 P_1 A, $$
$$ LU = P_4 P_3 P_2 [ M_1^{-1} P_2 M_2^{-1} P_3 M_3^{-1} P_4 M_4^{-1}\, M_4 P_4 M_3 P_3 M_2 P_2 M_1 ] P_1 A = P_4 P_3 P_2 P_1 A = PA. $$
The conclusion follows by finite induction. Note that an LU factorization is always possible, and $L$ and $P$ are invertible. $U$ is invertible if and only if $A$ is invertible. A zero pivot is encountered at step $k$ iff $U$ already has all zeros on and below the diagonal in column $k$. The multipliers in column $k$ are taken to be zeros in this case.

38 LU with Partial Pivoting continued
The above remarks suggest a test for singularity: perform Gaussian elimination with partial pivoting, and look for a zero pivot. This is a bad test. A zero pivot can be encountered in a nonsingular (but very ill-conditioned) matrix, and all nonzero pivots can be obtained with a singular matrix. In fact, a singular matrix might not even have a small pivot element. The only meaningful computational test is for singularity to machine precision, based on a condition number estimate.

39 LU with Partial Pivoting continued
Back substitution now requires three steps: $Ax = b$ iff $PAx = Pb$ iff $LUx = Pb$.
1. Overwrite array b with $\bar{b} = Pb$.
2. Solve $Ly = \bar{b}$ for $y$, overwriting array b with $y$.
3. Solve $Ux = y$ for $x$, overwriting array b with $x$.
In Example 1 we have $b = (7, 4, \cdot)^T$ together with the corresponding $\bar{b} = Pb$, $y$, and $x$. [The remaining numerical entries did not survive transcription.]

40 LU with Partial Pivoting continued
The algorithms are easily modified for pivoting. At each step of forward elimination, in addition to interchanging rows, a record of the interchange must be saved for use in back substitution. We initialize an index vector p = 1:n and apply each interchange to p. Then Pb = b(p). The operation counts are not altered. An alternative method of storing the permutations is to let p(k) be the index of the row to be interchanged with row k for k = 1, ..., n-1. The last entry p(n) can be used to keep track of the sign of $\det(P) = (-1)^\mu$, where $\mu$ is the number of row interchanges, each of which changes the sign of $\det(A)$. Then
$$ \det(A) = \det(P^T L U) = \det(P) \det(L) \det(U) = (-1)^\mu \det(U), $$
where, using an expression that avoids overflow and underflow,
$$ |\det(U)| = \left| \prod_{k=1}^{n} u_{kk} \right| = \exp\left( \sum_{k=1}^{n} \log |u_{kk}| \right), $$
with the sign of $\det(U)$ obtained from the signs of the $u_{kk}$.
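A hedged MATLAB/Octave sketch of this determinant computation, built on the library lu routine rather than the slides' own factorization (the function name det_via_lu is invented); the log-sum form dodges overflow and underflow in the product of the pivots.

    function [d, logabsdet] = det_via_lu(A)
    % det(A) from an LU factorization with partial pivoting: P*A = L*U.
      [~, U, P] = lu(A);
      sgnP = det(P);                       % +1 or -1: parity of the row interchanges
      u = diag(U);
      logabsdet = sum(log(abs(u)));        % log|det(U)|, safe from overflow/underflow
      d = sgnP * prod(sign(u)) * exp(logabsdet);   % may still over/underflow; then use logabsdet
    end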

41 Vector Norms
Defn: A vector norm on $\mathbb{R}^n$ is a function $\|\cdot\| : \mathbb{R}^n \to \mathbb{R}$ such that
1. $\|x\| \ge 0$, and $\|x\| = 0$ iff $x = 0$, for all $x \in \mathbb{R}^n$;
2. $\|\alpha x\| = |\alpha| \|x\|$ for all $\alpha \in \mathbb{R}$, $x \in \mathbb{R}^n$;
3. $\|x + y\| \le \|x\| + \|y\|$ for all $x, y \in \mathbb{R}^n$.
Note that $\|0\| = 0$ by property (2). The most important and commonly employed norms are the p-norms with $p = 1$, $p = 2$, and $p = \infty$:
$$ \|x\|_p \equiv \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p}, \quad p = 1, 2, \ldots $$
- $\|x\|_1 = \sum_{i=1}^{n} |x_i|$: cheap to compute.
- $\|x\|_2 = \sqrt{\sum_{i=1}^{n} |x_i|^2}$: the Euclidean norm, associated with an inner product.
- $\|x\|_\infty = \max_{1 \le i \le n} |x_i|$: cheap; uniform approximation.
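For concreteness, a quick MATLAB/Octave illustration on an arbitrary vector, with the explicit formulas checked against the built-in norm:

    % 1-, 2-, and infinity-norms of a vector, by formula and by norm().
    x = [3; -4; 1; 0; -2];
    fprintf('1-norm:   %g  %g\n', sum(abs(x)),          norm(x, 1));
    fprintf('2-norm:   %g  %g\n', sqrt(sum(abs(x).^2)), norm(x, 2));
    fprintf('inf-norm: %g  %g\n', max(abs(x)),          norm(x, inf));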

42 Vector Norms continued
Theorem: $\|x\|_\infty = \lim_{p \to \infty} \|x\|_p$.
Proof: Let $x_m = \|x\|_\infty = \max_{1 \le i \le n} |x_i|$. Then
$$ \left( \sum_{i=1}^{n} |x_i|^p \right)^{1/p} = \left( \sum_{i=1}^{n} x_m^p \left| \frac{x_i}{x_m} \right|^p \right)^{1/p} = x_m \left( \sum_{i=1}^{n} \left| \frac{x_i}{x_m} \right|^p \right)^{1/p} \to x_m \text{ as } p \to \infty, $$
since $1 \le \sum_{i=1}^{n} |x_i/x_m|^p \le n$ and $n^{1/p} \to 1$.
The proof that $\|\cdot\|_p$ satisfies the triangle inequality is difficult in the general case, but not for the specific values $p = 1, 2, \infty$.

43 Vector Norms continued
The unit spheres are $S_p = \{x \in \mathbb{R}^n : \|x\|_p = 1\}$. These are easily sketched for $n = 2$ and $p = 1, 2, \infty$. The shapes are a diamond, a circle, and a square, respectively.
Theorem: All norms on $\mathbb{R}^n$ are equivalent in the following sense. For norms $M$ and $N$ on $\mathbb{R}^n$, there exist constants $c_1, c_2 > 0$ such that
$$ c_1 M(x) \le N(x) \le c_2 M(x) \quad \forall x \in \mathbb{R}^n. $$
Defn: A sequence of vectors $x_1, x_2, \ldots$ converges to $x \in \mathbb{R}^n$ iff $\|x - x_k\| \to 0$ as $k \to \infty$. By the above theorem, the choice of norm is irrelevant.

44 Matrix Norms
Defn: A matrix norm on $\mathbb{R}^{n \times n}$ is defined by the same three properties characterizing a vector norm, along with the additional property
4. $\|AB\| \le \|A\| \|B\|$ for all $A, B \in \mathbb{R}^{n \times n}$.
A matrix norm $\|\cdot\|$ is compatible with a vector norm $\|\cdot\|_v$ if
$$ \|Ax\|_v \le \|A\| \|x\|_v \quad \forall x \in \mathbb{R}^n,\; A \in \mathbb{R}^{n \times n}. $$
The Frobenius norm
$$ \|A\|_F = \left( \sum_{i,j=1}^{n} a_{ij}^2 \right)^{1/2} $$
is compatible with $\|\cdot\|_2$.

45 Operator Norms
Given a vector norm $\|\cdot\|_v$, there is a corresponding operator norm induced by $\|\cdot\|_v$:
$$ \|A\| = \sup_{x \ne 0} \frac{\|Ax\|_v}{\|x\|_v} = \sup_{0 < \|x\| \le 1} \|Ax\|_v = \max_{x \ne 0} \frac{\|Ax\|_v}{\|x\|_v}. $$
The operator norm is clearly compatible with $\|\cdot\|_v$, and is easily shown to satisfy the four defining properties. It is a measure of the extent to which a matrix stretches the unit sphere. The important examples are those induced by the p-norms.
- $\|A\|_1 = \max_{1 \le j \le n} \sum_{i=1}^{n} |a_{ij}| = \max_{1 \le j \le n} \|a_j\|_1$ for $A = (a_1\ a_2\ \ldots\ a_n)$.
- $\|A\|_2 = \sqrt{\rho(A^T A)}$ for spectral radius $\rho$.
- $\|A\|_\infty = \max_{1 \le i \le n} \sum_{j=1}^{n} |a_{ij}|$ (maximum absolute row sum).
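These formulas are easy to check numerically; the hedged sketch below uses an arbitrary matrix and compares each formula with MATLAB/Octave's built-in norm.

    % Induced p-norms by formula vs. norm().
    A = [1 -2 3; 4 0 -1; -2 5 2];
    fprintf('1-norm:   %g  %g\n', max(sum(abs(A), 1)),  norm(A, 1));    % max column sum
    fprintf('inf-norm: %g  %g\n', max(sum(abs(A), 2)),  norm(A, inf));  % max row sum
    fprintf('2-norm:   %g  %g\n', sqrt(max(eig(A'*A))), norm(A, 2));    % sqrt of rho(A'A)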

46 Operator Norms continued
Theorem: Let $\|\cdot\|$ denote any operator norm on $\mathbb{R}^{n \times n}$. Then $\rho(A) \le \|A\|$. Also, given $\epsilon > 0$, there exists an operator norm $\|\cdot\|_\epsilon$ such that $\|A\|_\epsilon \le \rho(A) + \epsilon$.
Corollary: $\rho(A) < 1$ iff $\|A\| < 1$ for some operator norm.
Theorem: $A^m \to 0$ as $m \to \infty$ iff $\rho(A) < 1$.
Theorem: $\rho(A) < 1 \Rightarrow (I - A)^{-1} = I + A + A^2 + \cdots$ converges.
Theorem: If $\|A\| < 1$ for some operator norm, then $(I - A)^{-1}$ exists and $\|(I - A)^{-1}\| \le 1/(1 - \|A\|)$.
Proof: $I = (I - A)(I - A)^{-1} = (I - A)^{-1} - A(I - A)^{-1}$, so $(I - A)^{-1} = I + A(I - A)^{-1}$, and hence
$$ \|(I - A)^{-1}\| \le \|I\| + \|A\| \|(I - A)^{-1}\| \;\Rightarrow\; \|(I - A)^{-1}\| (1 - \|A\|) \le 1 \;\Rightarrow\; \|(I - A)^{-1}\| \le (1 - \|A\|)^{-1}. $$

47 Condition Number
Defn: The condition number of a nonsingular matrix $A$ is $\kappa(A) = \|A\| \|A^{-1}\|$ for some operator norm. The condition number of a singular (square) matrix is $\kappa(A) = \infty$. Note that
$$ \|A^{-1}\| = \max_{y \ne 0} \frac{\|A^{-1} y\|}{\|y\|} = \max_{x \ne 0} \frac{\|x\|}{\|Ax\|} = 1 \Big/ \min_{x \ne 0} \frac{\|Ax\|}{\|x\|} $$
(using $y = Ax$). Hence
$$ \kappa(A) = \left[ \max_{x \ne 0} \frac{\|Ax\|}{\|x\|} \right] \Big/ \left[ \min_{x \ne 0} \frac{\|Ax\|}{\|x\|} \right] = \left[ \max_{\|x\| = 1} \|Ax\| \right] \Big/ \left[ \min_{\|x\| = 1} \|Ax\| \right] \ge 1. $$
The condition number of a matrix is thus the extent to which it skews the unit sphere. In the case of the Euclidean norm, the unit sphere is mapped to an ellipsoid, and the condition number is the ratio of the largest to smallest half-axis length.

48 Condition Number continued
Theorem: A bounded linear operator $A$ (such as an order-n matrix) is continuous (and thus maps the unit sphere to a continuous surface).
Proof: Given $\epsilon > 0$, let $\delta = \epsilon / \|A\|$. Then
$$ \|x - y\| < \delta \;\Rightarrow\; \|Ax - Ay\| = \|A(x - y)\| \le \|A\| \|x - y\| < \|A\| \delta = \epsilon. $$
Theorem: Suppose $Ax = b$ and a perturbation $\delta b$ in the data leads to solution $x + \delta x$; i.e., $A(x + \delta x) = b + \delta b$. Then the relative change in the solution is bounded by $\kappa(A)$ times the relative change in the data:
$$ \frac{\|\delta x\|}{\|x\|} \le \kappa(A) \frac{\|\delta b\|}{\|b\|}. $$
Proof: $Ax = b$ and $A(x + \delta x) = b + \delta b$ imply $A\, \delta x = \delta b$, so $\delta x = A^{-1} \delta b$. Hence $\|\delta x\| \le \|A^{-1}\| \|\delta b\|$, and $\|b\| = \|Ax\| \le \|A\| \|x\|$ implies $1/\|x\| \le \|A\| / \|b\|$. Thus
$$ \frac{\|\delta x\|}{\|x\|} \le \|A^{-1}\| \|\delta b\| \frac{\|A\|}{\|b\|} = \kappa(A) \frac{\|\delta b\|}{\|b\|}. $$
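A hedged numerical illustration of this bound (an arbitrary ill-conditioned Hilbert matrix, not an example from the slides): perturb b and compare the observed relative change in x with κ(A) times the relative change in b.

    % Perturbation bound ||dx||/||x|| <= cond(A) * ||db||/||b||.
    n = 8;
    A = hilb(n);                      % ill-conditioned example matrix
    b = A * ones(n, 1);
    db = 1e-10 * randn(n, 1);         % small perturbation of the data
    dx = A \ (b + db) - A \ b;
    lhs = norm(dx) / norm(A \ b);
    rhs = cond(A) * norm(db) / norm(b);
    fprintf('relative change in x: %g\n', lhs);
    fprintf('bound kappa*rel(db):  %g\n', rhs);   % lhs <= rhs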

49 Condition Number continued
Following are some examples.
- The identity is perfectly conditioned: $\kappa(I) = 1$.
- Orthogonal matrices are perfectly conditioned: $Q^T Q = I \Rightarrow \kappa_2(Q) = \|Q\|_2 \|Q^T\|_2 = \sqrt{\rho(Q^T Q)}\, \sqrt{\rho(Q Q^T)} = 1$.
- $D = \mathrm{diag}(d_1, d_2, \ldots, d_n) \Rightarrow \kappa_p(D) = \max_i |d_i| / \min_i |d_i|$ for $p = 1, 2, \infty$.
- If $A$ is symmetric and positive definite, its condition number is the ratio of largest to smallest eigenvalue: $\kappa_2(A) = \lambda_{\max} / \lambda_{\min}$.
