Linear Algebra in Actuarial Science: Slides to the lecture

1 Linear Algebra in Actuarial Science: Slides to the lecture, Fall Semester 2010/2011

2 Linear Algebra is a Tool-Box: Linear Equation Systems. Discretization of differential equations leads to linear equation systems, e.g.

    [ 1              ] [ x_0 ]   [ V(0) ]
    [ α  1  α        ] [ x_1 ]   [  0   ]
    [    α  1  α     ] [  .  ] = [  .   ]
    [       ...      ] [  .  ]   [  .   ]
    [              1 ] [ x_n ]   [ V(n) ]

Existence of solution? Uniqueness of solution? Stability of solution?
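
A banded system of this shape can be solved numerically. A minimal sketch, assuming illustrative values for α, V(0) and V(n) (they are not from the lecture); uniqueness follows here from the non-singularity of the matrix:

```python
import numpy as np

# Banded coefficient matrix: first/last rows fix the boundary values,
# interior rows couple each unknown to its neighbours with weight alpha.
n = 5
alpha = 0.25          # illustrative value (assumption)
A = np.eye(n)
for i in range(1, n - 1):
    A[i, i - 1] = alpha
    A[i, i + 1] = alpha

b = np.zeros(n)
b[0], b[-1] = 1.0, 2.0   # V(0) and V(n), chosen for illustration

x = np.linalg.solve(A, b)            # unique solution since A is non-singular
residual = np.linalg.norm(A @ x - b)
```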

3 Linear Algebra is a Tool-Box: Minimization Problem. Given a data vector b and a parameter matrix A, find x such that ‖Ax − b‖ is minimal. Idea: a suitable transform of parameters and data. Solution: decomposition of matrices.

4 Linear Algebra is a Tool-Box: Bonus-Malus Systems. Car insurance with two different premium levels a < c. Claim in the last year or in the year before: premium c. No claims in the last two years: premium a. With P(claim in one year) = p = 1 − q, the transition matrix is

    P = [ p  q  0 ]
        [ p  0  q ]
        [ p  0  q ]

Question: What is the right choice for a and c? Markov chains with stationary distributions: what is lim_{n→∞} P^n?
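
The limit lim P^n can be checked numerically. A minimal sketch, assuming an illustrative claim probability p = 0.1 (not a value from the lecture):

```python
import numpy as np

p = 0.1               # illustrative claim probability (assumption)
q = 1 - p
P = np.array([[p, q, 0],
              [p, 0, q],
              [p, 0, q]])

# lim_{n -> inf} P^n: iterate the matrix power until it stabilises.
Pn = np.linalg.matrix_power(P, 50)
stationary = Pn[0]    # every row converges to the stationary distribution
```

For this chain the stationary distribution is (p, pq, q²): in the long run a fraction q² of drivers pays the low premium a, which is exactly the quantity needed to balance the choice of a and c.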

5 Linear Algebra is a Tool-Box: Construction of index-linked products with n underlyings. Classical models use normal distribution assumptions for the assets; a basket leads to a multivariate normal distribution. What is the square root of a matrix? How can simulations be done with covariance matrices? Idea: transform the matrix. Solution: decomposition of matrices.

6 Linear Algebra Syllabus. Linear Equation Systems: Gauss Algorithm, Existence and Uniqueness of Solutions, Inverse, Determinants. Vector Spaces: Linear (In-)Dependence, Basis, Inner Products, Gram-Schmidt Algorithm. Eigenvalues and Eigenvectors: Determination, Diagonalisation, Singular Values. Decompositions of Matrices: LU, SVD, QR, Cholesky.

7 Field Axioms. Definition. A field is a set F with two operations, called addition and multiplication, which satisfy the following so-called "field axioms" (A), (M) and (D): (A) Axioms for addition. (A1) If x ∈ F and y ∈ F, then x + y ∈ F. (A2) Addition is commutative: x + y = y + x for all x, y ∈ F. (A3) Addition is associative: (x + y) + z = x + (y + z) for all x, y, z ∈ F. (A4) F contains an element 0 such that 0 + x = x for all x ∈ F. (A5) To every x ∈ F corresponds an element −x ∈ F such that x + (−x) = 0.

8 Field Axioms. Definition. Field definition continued: (M) Axioms for multiplication. (M1) If x ∈ F and y ∈ F, then xy ∈ F. (M2) Multiplication is commutative: xy = yx for all x, y ∈ F. (M3) Multiplication is associative: (xy)z = x(yz) for all x, y, z ∈ F. (M4) F contains an element 1 ≠ 0 such that 1x = x for all x ∈ F. (M5) If x ∈ F and x ≠ 0, then there exists an element 1/x ∈ F such that x · (1/x) = 1. (D) The distributive law x(y + z) = xy + xz holds for all x, y, z ∈ F.

9 Addition of Matrices. (A1) A = B ⇔ a_ij = b_ij for 1 ≤ i ≤ m, 1 ≤ j ≤ n. (A2) A ∈ M(m×n), B ∈ M(m×n): A + B := (a_ij + b_ij). (A3) Addition is commutative: A + B = B + A. (A4) Addition is associative: (A + B) + C = A + (B + C). (A5) Identity element: the zero matrix 0 = (0). (A6) Negative element: −A = (−a_ij).

10 Multiplication of Matrices. (M1) A ∈ M(m×n), B ∈ M(n×r): C = A·B = (c_ik) ∈ M(m×r), with c_ik = a_i1 b_1k + … + a_in b_nk = Σ_{j=1}^n a_ij b_jk. (M2) Multiplication of matrices is in general not commutative! (M3) Multiplication of matrices is associative. (M4) Identity matrix: I = I_n = diag(1, …, 1) ∈ M(n×n). (M5) A·I = I·A = A, but A·B = A·C does not in general imply B = C!

11 Transpose, Inverse, Multiplication with Scalars. (T) Transpose of A: A^T, with (A^T)^T = A, (A + B)^T = A^T + B^T and (A·B)^T = B^T·A^T. (I) Given A ∈ M(n×n) and X ∈ M(n×n). If A·X = X·A = I, then X = A^{-1} is the inverse of A. (Ms) Multiplication with scalars. (Ms1) λ ∈ K, A ∈ M(m×n): λA := (λ a_ij). (Ms2) Multiplication with scalars is commutative and associative. (Ms3) The distributive law holds: (λ + µ)A = λA + µA.

12 The Inverse of a Matrix. Theorem. If A^{-1} exists, it is unique. Definition. A ∈ M(n×n) is called singular if A^{-1} does not exist; otherwise it is called non-singular. Theorem. Given A, B ∈ M(n×n) non-singular. Then: A·B is non-singular and (A·B)^{-1} = B^{-1}·A^{-1}; A^{-1} is non-singular and (A^{-1})^{-1} = A; A^T is non-singular and (A^T)^{-1} = (A^{-1})^T.
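
These inverse rules are easy to spot-check numerically; a small sketch with arbitrary non-singular example matrices:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])
B = np.array([[1., 2.],
              [0., 1.]])

# (A B)^-1 = B^-1 A^-1  (note the reversed order)
lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
```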

13 Gauss-Algorithm. Example:

     2u +  v +  w =  1            2u + v +  w =  1
     4u +  v      = -2    -->         -v - 2w = -4
    -2u + 2v +  w =  7                   -4w = -4
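
The elimination example can be cross-checked numerically. The signs in the system below are a reconstruction (the extracted slide dropped the minus signs), so treat the exact numbers as an assumption:

```python
import numpy as np

# Reconstructed example system (assumption: minus signs restored).
A = np.array([[ 2., 1., 1.],
              [ 4., 1., 0.],
              [-2., 2., 1.]])
b = np.array([1., -2., 7.])

x = np.linalg.solve(A, b)   # the unknowns (u, v, w)
```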

14 Theorems: Solutions under the Gauss-Algorithm. Theorem. Given a matrix A ∈ M(m×n). 1. The Gauss algorithm transforms A in a finite number of steps into an upper triangular matrix A′. 2. rank(A) = rank(A′). 3. Given the augmented matrix (A, b) and the resulting upper triangular matrix (A′, b′). Then x is a solution of Ax = b ⇔ x is a solution of A′x = b′.

15 Theorems: Solutions under the Gauss-Algorithm. Theorem. Given A ∈ M(m×n), b = (b_1, …, b_m)^T ∈ R^m and x = (x_1, …, x_n)^T ∈ R^n. The linear equation system Ax = b has a solution if and only if rank(A) = rank(A, b). It has a unique solution if and only if rank(A) = rank(A, b) = n.

16 Theorems: Solutions under the Gauss-Algorithm. Theorem. Given x_0 a particular solution of Ax = b. Then the solutions x of Ax = b have the form x = x_0 + h, where h is the general solution of the homogeneous system Ax = 0. Corollary. 1. Ax = b has a unique solution ⇔ Ax = b has a solution and Ax = 0 has only the solution x = 0. 2. For A ∈ M(n×n): Ax = b has a unique solution ⇔ rank(A) = n ⇔ A is non-singular ⇔ Ax = 0 has only the solution x = 0. The solution of Ax = b is then given as x = A^{-1}b. 3. rank(A) = n ⇔ A^{-1} exists.

17 Determination of Determinants. A ∈ M(2×2, R):

    A = [ a11  a12 ]      det(A) = a11·a22 − a12·a21
        [ a21  a22 ]

A ∈ M(3×3, R): Rule of Sarrus (append the first two columns and sum the products along the diagonals):

    det(A) = a11·a22·a33 + a12·a23·a31 + a13·a21·a32
           − a13·a22·a31 − a11·a23·a32 − a12·a21·a33
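
The Rule of Sarrus translates directly into code; a sketch with an illustrative 3×3 matrix, checked against NumPy's determinant:

```python
import numpy as np

def sarrus(a):
    """Rule of Sarrus for a 3x3 matrix: three plus-diagonals minus
    three minus-diagonals."""
    return (a[0, 0] * a[1, 1] * a[2, 2]
            + a[0, 1] * a[1, 2] * a[2, 0]
            + a[0, 2] * a[1, 0] * a[2, 1]
            - a[0, 2] * a[1, 1] * a[2, 0]
            - a[0, 0] * a[1, 2] * a[2, 1]
            - a[0, 1] * a[1, 0] * a[2, 2])

A = np.array([[1., 2., 3.],
              [4., 5., 6.],
              [7., 8., 10.]])   # example matrix (illustrative)
```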

18 Determination of Determinants. A ∈ M(n×n, R). Definition. Given a matrix A = (a_ij) ∈ M(n×n). The minor A_ij is the determinant of the matrix in M((n−1)×(n−1)) obtained by removing the i-th row and the j-th column of A. The associated cofactor is C_ij = (−1)^{i+j} A_ij. Definition. The determinant of a matrix A = (a_ij) ∈ M(n×n) is defined by

    det(A) = Σ_{j=1}^n (−1)^{1+j} a_1j A_1j = Σ_{j=1}^n a_1j C_1j,

i.e. the Laplace expansion along the first row.

19 Determinants: Properties. 1. det(A^T) = det(A). 2. If some row a_i = (0, …, 0), then det(A) = 0. 3. det(λA) = λ^n det(A) for A ∈ M(n×n). 4. If a_i = a_j for some i ≠ j, then det(A) = 0. 5. (1) det(A·B) = det(A)·det(B); (2) in general det(A + B) ≠ det(A) + det(B). 6. A → A′ by n row interchanges ⇒ det(A′) = (−1)^n det(A). 7. A → A′ by multiplication of the i-th row with λ ⇒ det(A′) = λ·det(A). 8. A → A′ by adding the λ-fold of the i-th row to the j-th row ⇒ det(A′) = det(A). 9. A is non-singular ⇔ rank(A) = n ⇔ det(A) ≠ 0; then det(A^{-1}) = 1/det(A). 10. If A = diag(a_11, …, a_nn), then det(A) = Π_{k=1}^n a_kk.
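
Properties 3, 5(1) and 9 can be spot-checked numerically with random matrices (seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))   # random non-singular example
B = rng.standard_normal((4, 4))

prod_rule = np.linalg.det(A @ B)            # property 5(1)
prod_parts = np.linalg.det(A) * np.linalg.det(B)
scale_rule = np.linalg.det(3 * A)           # property 3: lambda^n det(A)
inv_rule = np.linalg.det(np.linalg.inv(A))  # property 9: 1 / det(A)
```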

20 General Vector Spaces. Definition. Given K some field. A non-empty set V is called a K-vector space if for all a, b ∈ V: a + b ∈ V, and for all λ ∈ K and a ∈ V: λ·a ∈ V. The set V is closed with respect to addition and multiplication with scalars. Definition. Given V a K-vector space. A non-empty subset U ⊆ V is called a linear subspace of V if for all λ ∈ K and a, b ∈ U: a + b ∈ U and λ·a ∈ U.

21 Linear Combination and span(U). Definition. Given scalars λ_1, …, λ_m ∈ K and vectors a_1, …, a_m ∈ V. A vector a = λ_1 a_1 + … + λ_m a_m is called a linear combination of a_1, …, a_m. The linear combination is called trivial if λ_1 = … = λ_m = 0, otherwise it is called non-trivial. Definition. Given U ⊆ V a non-empty subset. The set of all linear combinations of vectors in U,

    span(U) = { a = Σ_i λ_i a_i : λ_i ∈ K, a_i ∈ U },

is called the span of U.

22 Linear Independence. Definition. The vectors a_1, …, a_m ∈ V are called linearly dependent (l.d.) if there exists a non-trivial linear combination such that λ_1 a_1 + … + λ_m a_m = 0 with λ_i ≠ 0 for some i. The vectors a_1, …, a_m ∈ V are called linearly independent (l.i.) if they are not linearly dependent. Theorem. Given vectors a_1, …, a_m ∈ V. The following are equivalent: 1. The vectors a_1, …, a_m are linearly dependent. 2. At least one vector a_i is a linear combination of the others, i.e. there exists i ∈ {1, …, m} with a_i = Σ_{j≠i} λ_j a_j.

23 Linear Independence and Basis. Definition. The non-empty subset U ⊆ V is called linearly independent if every finite set of vectors in U is linearly independent. Definition. Given V a K-vector space. A subset U ⊆ V of linearly independent vectors is called a basis of V if span(U) = V. A vector space has finite dimension if there exists a finite basis. Theorem. In a vector space of finite dimension every basis has the same number of vectors. This number is called the dimension of V, dim V.

24 Algorithms. Determination of a Basis: Given vectors {v_1, …, v_m} in K^n. Step 1: Write the m vectors as rows in a matrix. Step 2: Apply the Gauss algorithm to get a triangular matrix. Step 3: The non-zero rows form a basis of the subspace span{v_1, …, v_m} ⊆ K^n. Extension of a Basis: Given m l.i. vectors {v_1, …, v_m} in K^n, with m < n. Step 1: Take a basis of K^n, e.g. the unit vectors {e_1, …, e_n}. Step 2: Write the vectors v_j and e_i in a matrix. Step 3: Apply the Gauss algorithm to get a triangular matrix. Step 4: The n − m surviving unit vectors complete the m l.i. vectors to a basis of K^n.
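
The basis-determination steps can be sketched directly. The implementation below is a minimal Gauss elimination with partial pivoting, and the example vectors are chosen for illustration (the first two are linearly dependent):

```python
import numpy as np

def row_basis(vectors, tol=1e-12):
    """Steps 1-3: write the vectors as rows, run Gauss elimination,
    return the non-zero rows (a basis of the spanned subspace)."""
    M = np.array(vectors, dtype=float)
    r = 0                                   # next pivot row
    for c in range(M.shape[1]):
        pivot = r + np.argmax(np.abs(M[r:, c]))
        if abs(M[pivot, c]) < tol:
            continue                        # no pivot in this column
        M[[r, pivot]] = M[[pivot, r]]       # row interchange
        M[r + 1:] -= np.outer(M[r + 1:, c] / M[r, c], M[r])
        r += 1
        if r == M.shape[0]:
            break
    return M[:r]

basis = row_basis([[1, 2, 3], [2, 4, 6], [0, 1, 1]])
```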

25 Unitary Vector Spaces: Scalar Product. Definition. Given V a K-vector space. A mapping ⟨·,·⟩: V × V → K, (v, w) ↦ ⟨v, w⟩, is called a scalar product in V if for all v, w, u ∈ V and for all λ ∈ K: (S1) ⟨v, v⟩ > 0 if v ≠ 0. (S2) ⟨v, w⟩ = ⟨w, v⟩ if K = R. (S3) ⟨λv, w⟩ = λ⟨v, w⟩. (S4) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩. Definition. A vector space with a scalar product is called a unitary vector space.

26 Unitary Vector Spaces: Norm. Definition. Given V a K-vector space. A mapping ‖·‖: V → R is called a norm if (N1) ‖v‖ ≥ 0 for all v ∈ V, and ‖v‖ = 0 ⇔ v = 0. (N2) ‖λv‖ = |λ|·‖v‖ for all v ∈ V and λ ∈ K. (N3) ‖v + w‖ ≤ ‖v‖ + ‖w‖ for all v, w ∈ V (triangle inequality). Definition. Given V a unitary vector space. The mapping ‖·‖: V → R, v ↦ ‖v‖ := √⟨v, v⟩, is the norm of V. A vector v ∈ V is called a unit vector if ‖v‖ = 1.

27 Orthogonal and Orthonormal Vectors. Definition. Given V a unitary vector space. Two vectors u, v ∈ V are called orthogonal if ⟨u, v⟩ = 0. A non-empty subset U ⊆ V is called orthonormal if all vectors in U are pairwise orthogonal and have norm equal to 1. Definition. Given {v_1, …, v_n} ⊆ V linearly independent vectors. The vectors {v_1, …, v_n} form an orthonormal basis of V if dim V = n and ⟨v_i, v_j⟩ = 1 for i = j and 0 for i ≠ j.

28 Gram-Schmidt Orthogonalisation. Given V a unitary vector space with norm ‖·‖ and {v_1, …, v_n} a linearly independent subset of V. The set {w_1, …, w_n} is orthonormal (i.e. orthogonal and of unit norm), with

    w_1 = v_1 / ‖v_1‖,
    u_i = v_i − Σ_{k=1}^{i−1} ⟨v_i, w_k⟩ w_k,   i = 2, …, n,
    w_i = u_i / ‖u_i‖,   i = 2, …, n.
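
The Gram-Schmidt formulas translate directly into code; a sketch for column vectors in R^n with the standard inner product (the example matrix is illustrative):

```python
import numpy as np

def gram_schmidt(V):
    """Orthonormalise the columns of V: subtract the projections onto
    the already-built w_k, then normalise."""
    W = np.zeros_like(V, dtype=float)
    for i in range(V.shape[1]):
        u = V[:, i].copy()
        for k in range(i):
            u -= (V[:, i] @ W[:, k]) * W[:, k]   # u_i = v_i - sum <v_i, w_k> w_k
        W[:, i] = u / np.linalg.norm(u)          # w_i = u_i / ||u_i||
    return W

V = np.array([[1., 1., 0.],
              [1., 0., 1.],
              [0., 1., 1.]])   # linearly independent columns (illustrative)
W = gram_schmidt(V)
```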

29 Eigenvalues and Eigenvectors. Tacoma Bridge: four months after opening it collapsed. The oscillations of the bridge were caused by the frequency of the wind being too close to the natural frequency of the bridge. The natural frequency is the eigenvalue of smallest magnitude of a system that models the bridge.

30 Eigenvalues and Eigenvectors. Consider a transformation under which the central vertical axis does not change direction. The blue vector changes direction and hence is not an eigenvector. The red vector is an eigenvector of the transformation, with eigenvalue 1, since it is neither stretched nor compressed.

31 Eigenvalues and Eigenvectors. Definition. Given A ∈ M(n×n) with a_ij ∈ R. λ is called an eigenvalue if there exists v ∈ C^n, v ≠ 0, with Av = λv. The vector v ≠ 0 is called an eigenvector corresponding to λ. Determination: Av = λv ⇔ Av − λIv = 0 ⇔ (A − λI)v = 0. The homogeneous linear equation system has a non-trivial solution ⇔ det(A − λI) = 0.
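
In practice eigenvalues are not found by expanding det(A − λI) by hand; a sketch with NumPy on a small symmetric example:

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 2.]])

# det(A - lambda I) = (2 - lambda)^2 - 1 = 0  =>  eigenvalues 1 and 3
eigvals, eigvecs = np.linalg.eig(A)

# each column of eigvecs satisfies A v = lambda v
check = A @ eigvecs[:, 0] - eigvals[0] * eigvecs[:, 0]
```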

32 Eigenvalues and Eigenvectors. Multiplicities: If λ_j is a zero of order r of the characteristic polynomial P(λ), then r = µ(P, λ_j) is the algebraic multiplicity. Given an eigenvalue λ, the geometric multiplicity of λ is defined as the dimension of the eigenspace E_λ. Theorem. A matrix A ∈ M(n×n) has n linearly independent eigenvectors ⇔ dim E_λ = µ(P, λ) for every eigenvalue λ. Theorem. A matrix A ∈ M(n×n) can be diagonalised ⇔ A has n linearly independent eigenvectors.

33 Eigenvalues and Eigenvectors. Theorem. Given A ∈ M(n×n) symmetric (i.e. A^T = A) and with real entries ⇒ all eigenvalues are real. Theorem. Given λ_1 and λ_2 distinct eigenvalues of a real and symmetric matrix, with corresponding eigenvectors v_1 and v_2 ⇒ v_1 and v_2 are orthogonal. Theorem. A real and symmetric matrix A ∈ M(n×n) always has n orthonormal eigenvectors. Theorem. Given a matrix A ∈ M(n×n). There always exists a non-singular matrix C such that C^{-1}AC = J, where J is a Jordan matrix.

34 Eigenvalues and Eigenvectors. Diagonalisation of a matrix, C^{-1}AC = D (case dim E_λ = µ(P, λ) for all λ): D = diag(λ_1, …, λ_n), where the λ_i are the eigenvalues of A, and C = (v_1, …, v_n), where the v_i are the linearly independent eigenvectors of A. Orthogonal diagonalisation of a matrix, Q^T AQ = D (A real and symmetric, dim E_λ = µ(P, λ)): D = diag(λ_1, …, λ_n), where the λ_i are the real eigenvalues of A, and Q = (v_1, …, v_n), where the v_i are the orthonormal eigenvectors of A. Jordan form, C^{-1}AC = J (case dim E_λ < µ(P, λ) for some λ): J = diag(B_1(λ_1), …, B_r(λ_r)), where B_i(λ_i) is the Jordan block matrix corresponding to λ_i, and C = (v_1, …, v_n), where the v_i are the linearly independent (generalised) eigenvectors of A.
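
The orthogonal diagonalisation Q^T AQ = D of a real symmetric matrix can be sketched with NumPy's `eigh`, which returns orthonormal eigenvectors (the example matrix is illustrative):

```python
import numpy as np

A = np.array([[4., 1., 0.],
              [1., 4., 1.],
              [0., 1., 4.]])   # real and symmetric

lam, Q = np.linalg.eigh(A)     # eigh: real eigenvalues, orthonormal eigenvectors
D = np.diag(lam)

reconstructed = Q @ D @ Q.T    # Q^T A Q = D  <=>  A = Q D Q^T
```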

35 Singular Value Decomposition. For a matrix A ∈ M(m×n) there always exists a decomposition A = UΣV^T, where U ∈ M(m×m) and V ∈ M(n×n) are orthogonal matrices and Σ ∈ M(m×n), with

    Σ = [ D  0 ]
        [ 0  0 ]

and D = diag(σ_1, …, σ_r) a diagonal matrix. σ_1 ≥ … ≥ σ_r > 0 and σ_{r+1} = … = σ_n = 0 are the singular values of A, i.e. the square roots of the eigenvalues of A^T A.

36 Singular Value Decomposition. Determination: Step 1: Determine the eigenvalues and eigenvectors of A^T A. Step 2: Determine the orthonormal eigenvectors {v_1, …, v_n} (Gram-Schmidt). The orthonormal eigenvectors are the columns of V. Step 3.1: Calculate u_i = (1/σ_i)·A v_i, i = 1, …, r. Step 3.2: If r < m: complete the vectors {u_1, …, u_r} to an orthonormal basis {u_1, …, u_m}. The orthonormal vectors u_i are the columns of U.
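
NumPy performs these steps internally; a sketch verifying A = UΣV^T and the relation between the singular values and the eigenvalues of A^T A (example matrix chosen for illustration):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.]])

U, s, Vt = np.linalg.svd(A)      # s holds sigma_1 >= sigma_2 >= ...
Sigma = np.zeros(A.shape)
Sigma[:len(s), :len(s)] = np.diag(s)

# singular values are the square roots of the eigenvalues of A^T A
eigs_desc = np.linalg.eigvalsh(A.T @ A)[::-1]
```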

37 Singular Value Decomposition. Applications: Determination of the rank: given σ_1, …, σ_r the non-zero singular values of A ⇒ rank(A) = r. Determination of the Moore-Penrose inverse: given A ∈ M(m×n) with linearly dependent columns, the Moore-Penrose inverse can be determined by A^# = V Σ^# U^T, where Σ^# ∈ M(n×m),

    Σ^# = [ D^{-1}  0 ]
          [ 0       0 ]

with D^{-1} = diag(1/σ_1, …, 1/σ_r).
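
The Moore-Penrose construction can be sketched from the SVD and compared with `numpy.linalg.pinv` (the example matrix, with linearly dependent columns, is illustrative):

```python
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])        # rank 1: second column = 2 * first column

A_pinv = np.linalg.pinv(A)      # Moore-Penrose inverse (SVD-based)

# construct it by hand: A# = V Sigma# U^T with D^{-1} on the rank block
U, s, Vt = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))      # numerical rank = number of non-zero sigmas
Sigma_pinv = np.zeros((A.shape[1], A.shape[0]))
Sigma_pinv[:r, :r] = np.diag(1.0 / s[:r])
A_pinv_manual = Vt.T @ Sigma_pinv @ U.T
```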

38 Decomposition of Matrices. Cholesky Decomposition: Given A ∈ M(n×n) a positive definite matrix (symmetric and all eigenvalues positive). Then there exists a lower triangular matrix L such that A = LL^T. Advantages: Fast column-wise algorithm, starting with the first column. The formulas for the calculation of the l_ij are easy to derive. The Cholesky factor L is uniquely determined by A. Applications in finance and actuarial science, e.g. with the estimated covariance matrix of several asset classes.
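
A sketch of the actuarial application: simulate correlated normal returns from an assumed covariance matrix of two asset classes (the numbers are illustrative, not from the lecture) via x = Lz with z standard normal:

```python
import numpy as np

# Illustrative covariance matrix of two asset classes (assumption).
cov = np.array([[0.04, 0.006],
                [0.006, 0.09]])

L = np.linalg.cholesky(cov)     # cov = L L^T, L lower triangular

# correlated normals: if z ~ N(0, I), then x = L z has covariance L L^T = cov
rng = np.random.default_rng(42)
z = rng.standard_normal((2, 100_000))
x = L @ z
sample_cov = np.cov(x)
```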

39 Decomposition of Matrices. LU Decomposition: Given A ∈ M(n×n) a non-singular matrix. Then there exist a lower triangular matrix L, an upper triangular matrix U and a permutation matrix P such that PA = LU, or equivalently A = P^T LU. Definitions: U is the resulting matrix when applying the Gauss algorithm to A. L contains 1 on the diagonal, and the l_ij with i > j are the multipliers used in the Gauss algorithm to reduce the corresponding a_ij to 0. If rows in A are interchanged, the corresponding factors l_ij in L have to be interchanged as well.

40 Decomposition of Matrices. LU decomposition cont'd. Definitions: P is the identity matrix with rows i and j permuted if the Gauss algorithm permutes the rows i and j of A. Advantages: If the equation system Ax = b has to be solved for different vectors b, the decomposition has to be done only once; in this case the LU decomposition is faster than Gauss. The equation system can be solved via solving Ly = b first and then Ux = y.
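
A minimal Doolittle-style sketch of the LU idea, without pivoting (so P = I; the example matrix needs no row interchanges), followed by the two triangular solves Ly = b and Ux = y:

```python
import numpy as np

def lu_decompose(A):
    """LU without pivoting: U is the Gauss result, L stores the
    elimination multipliers l_ij below a unit diagonal."""
    n = A.shape[0]
    L, U = np.eye(n), A.astype(float).copy()
    for j in range(n - 1):
        for i in range(j + 1, n):
            L[i, j] = U[i, j] / U[j, j]   # multiplier used in Gauss
            U[i] -= L[i, j] * U[j]        # eliminate a_ij
    return L, U

A = np.array([[2., 1., 1.],
              [4., 3., 3.],
              [8., 7., 9.]])   # example matrix (no pivoting needed)
b = np.array([1., 2., 3.])

L, U = lu_decompose(A)
y = np.linalg.solve(L, b)      # L y = b (triangular system)
x = np.linalg.solve(U, y)      # U x = y (triangular system)
```

For a new right-hand side b only the two triangular solves have to be repeated, which is the advantage mentioned above.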

41 Decomposition of Matrices. QR Decomposition: Given A ∈ M(m×n). Then there exist an orthogonal matrix Q ∈ M(m×n) (containing the orthonormalised columns of A) and an upper triangular matrix R ∈ M(n×n) such that A = QR. Definitions: Q = (q_1, …, q_n) is obtained by applying Gram-Schmidt to the columns of A = (a_1, …, a_n), and

    R = Q^T A = [ ⟨q_1, a_1⟩  ⟨q_1, a_2⟩  …  ⟨q_1, a_n⟩ ]
                [ 0           ⟨q_2, a_2⟩  …  ⟨q_2, a_n⟩ ]
                [ …                       …             ]
                [ 0           0           …  ⟨q_n, a_n⟩ ]

42 Decomposition of Matrices. QR decomposition cont'd. Advantages: Solving Ax = b is equivalent to solving Q^T Ax = Q^T b =: b′; therefore it is sufficient to solve Rx = b′. This decomposition can be applied to every matrix A ∈ M(m×n). Problems with the numerical stability of Gram-Schmidt ⇒ more efficient and stable algorithms are available.
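
With NumPy the QR route to solving Ax = b looks as follows; for a tall matrix this yields the least-squares solution of the minimization problem from slide 3 (the example data are illustrative):

```python
import numpy as np

A = np.array([[1., 1.],
              [1., 2.],
              [1., 3.]])
b = np.array([1., 2., 2.])

Q, R = np.linalg.qr(A)          # Q: orthonormal columns, R: upper triangular

# solve R x = Q^T b instead of A x = b
x = np.linalg.solve(R, Q.T @ b)
```

NumPy's `qr` uses Householder reflections rather than Gram-Schmidt, which is exactly the "more stable algorithm" the slide alludes to.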

43 Norms of Matrices. The norm ‖A‖ of a matrix A = (a_ij) ∈ M(n×n) can be defined by

    ‖A‖ := max_{i=1,…,n} Σ_{j=1}^n |a_ij|.

This is the maximum of the row sums (of absolute values). Alternatively the norm can be defined by the maximum of the column sums; other definitions are also available in the literature.

44 Norms of Matrices cont'd. Properties: The following list is valid for all the different definitions of matrix norms: ‖A‖ ≥ 0; ‖αA‖ = |α|·‖A‖; ‖A + B‖ ≤ ‖A‖ + ‖B‖; ‖A·B‖ ≤ ‖A‖·‖B‖; for non-singular A: ‖A·A^{-1}‖ = ‖I‖ = 1.

45 Norms of Matrices cont'd. Condition Numbers: The condition number cond A of a non-singular matrix A is defined by cond A := ‖A‖·‖A^{-1}‖. Due to the properties of the norm it follows that cond A ≥ 1. Definition. A matrix A is not sensitive to changes in its entries if the condition number is close to 1. If the condition number of a matrix A is much larger than 1, then a small change in A or b can have a huge impact on the solution x of Ax = b.
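
The effect of a large condition number can be demonstrated with a nearly singular 2×2 matrix (numbers chosen for illustration):

```python
import numpy as np

A_good = np.eye(2)                  # cond = 1: perfectly conditioned
A_bad = np.array([[1., 1.],
                  [1., 1.0001]])    # nearly singular, cond >> 1

cond_good = np.linalg.cond(A_good)
cond_bad = np.linalg.cond(A_bad)

# a tiny perturbation of b changes the solution x drastically
b = np.array([2., 2.0001])
db = np.array([0., 0.0001])
x1 = np.linalg.solve(A_bad, b)
x2 = np.linalg.solve(A_bad, b + db)
rel_change = np.linalg.norm(x2 - x1) / np.linalg.norm(x1)
```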


More information

Eigenvalues and Eigenvectors

Eigenvalues and Eigenvectors Chapter 1 Eigenvalues and Eigenvectors Among problems in numerical linear algebra, the determination of the eigenvalues and eigenvectors of matrices is second in importance only to the solution of linear

More information

Quiz ) Locate your 1 st order neighbors. 1) Simplify. Name Hometown. Name Hometown. Name Hometown.

Quiz ) Locate your 1 st order neighbors. 1) Simplify. Name Hometown. Name Hometown. Name Hometown. Quiz 1) Simplify 9999 999 9999 998 9999 998 2) Locate your 1 st order neighbors Name Hometown Me Name Hometown Name Hometown Name Hometown Solving Linear Algebraic Equa3ons Basic Concepts Here only real

More information

Pseudoinverse & Moore-Penrose Conditions

Pseudoinverse & Moore-Penrose Conditions ECE 275AB Lecture 7 Fall 2008 V1.0 c K. Kreutz-Delgado, UC San Diego p. 1/1 Lecture 7 ECE 275A Pseudoinverse & Moore-Penrose Conditions ECE 275AB Lecture 7 Fall 2008 V1.0 c K. Kreutz-Delgado, UC San Diego

More information

MATH 369 Linear Algebra

MATH 369 Linear Algebra Assignment # Problem # A father and his two sons are together 00 years old. The father is twice as old as his older son and 30 years older than his younger son. How old is each person? Problem # 2 Determine

More information

LEC 2: Principal Component Analysis (PCA) A First Dimensionality Reduction Approach

LEC 2: Principal Component Analysis (PCA) A First Dimensionality Reduction Approach LEC 2: Principal Component Analysis (PCA) A First Dimensionality Reduction Approach Dr. Guangliang Chen February 9, 2016 Outline Introduction Review of linear algebra Matrix SVD PCA Motivation The digits

More information

I. Multiple Choice Questions (Answer any eight)

I. Multiple Choice Questions (Answer any eight) Name of the student : Roll No : CS65: Linear Algebra and Random Processes Exam - Course Instructor : Prashanth L.A. Date : Sep-24, 27 Duration : 5 minutes INSTRUCTIONS: The test will be evaluated ONLY

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters, transforms,

More information

Computational Methods. Eigenvalues and Singular Values

Computational Methods. Eigenvalues and Singular Values Computational Methods Eigenvalues and Singular Values Manfred Huber 2010 1 Eigenvalues and Singular Values Eigenvalues and singular values describe important aspects of transformations and of data relations

More information

Lecture 3: Review of Linear Algebra

Lecture 3: Review of Linear Algebra ECE 83 Fall 2 Statistical Signal Processing instructor: R Nowak, scribe: R Nowak Lecture 3: Review of Linear Algebra Very often in this course we will represent signals as vectors and operators (eg, filters,

More information

The Singular Value Decomposition

The Singular Value Decomposition The Singular Value Decomposition An Important topic in NLA Radu Tiberiu Trîmbiţaş Babeş-Bolyai University February 23, 2009 Radu Tiberiu Trîmbiţaş ( Babeş-Bolyai University)The Singular Value Decomposition

More information

Chapter 0 Miscellaneous Preliminaries

Chapter 0 Miscellaneous Preliminaries EE 520: Topics Compressed Sensing Linear Algebra Review Notes scribed by Kevin Palmowski, Spring 2013, for Namrata Vaswani s course Notes on matrix spark courtesy of Brian Lois More notes added by Namrata

More information

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP)

MATH 20F: LINEAR ALGEBRA LECTURE B00 (T. KEMP) MATH 20F: LINEAR ALGEBRA LECTURE B00 (T KEMP) Definition 01 If T (x) = Ax is a linear transformation from R n to R m then Nul (T ) = {x R n : T (x) = 0} = Nul (A) Ran (T ) = {Ax R m : x R n } = {b R m

More information

Linear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form

Linear algebra II Homework #1 solutions A = This means that every eigenvector with eigenvalue λ = 1 must have the form Linear algebra II Homework # solutions. Find the eigenvalues and the eigenvectors of the matrix 4 6 A =. 5 Since tra = 9 and deta = = 8, the characteristic polynomial is f(λ) = λ (tra)λ+deta = λ 9λ+8 =

More information

Review of some mathematical tools

Review of some mathematical tools MATHEMATICAL FOUNDATIONS OF SIGNAL PROCESSING Fall 2016 Benjamín Béjar Haro, Mihailo Kolundžija, Reza Parhizkar, Adam Scholefield Teaching assistants: Golnoosh Elhami, Hanjie Pan Review of some mathematical

More information

2. Review of Linear Algebra

2. Review of Linear Algebra 2. Review of Linear Algebra ECE 83, Spring 217 In this course we will represent signals as vectors and operators (e.g., filters, transforms, etc) as matrices. This lecture reviews basic concepts from linear

More information

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015

Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 2015 Chapters 5 & 6: Theory Review: Solutions Math 308 F Spring 205. If A is a 3 3 triangular matrix, explain why det(a) is equal to the product of entries on the diagonal. If A is a lower triangular or diagonal

More information

Basic Concepts in Matrix Algebra

Basic Concepts in Matrix Algebra Basic Concepts in Matrix Algebra An column array of p elements is called a vector of dimension p and is written as x p 1 = x 1 x 2. x p. The transpose of the column vector x p 1 is row vector x = [x 1

More information

Matrices A brief introduction

Matrices A brief introduction Matrices A brief introduction Basilio Bona DAUIN Politecnico di Torino September 2013 Basilio Bona (DAUIN) Matrices September 2013 1 / 74 Definitions Definition A matrix is a set of N real or complex numbers

More information

Computational Linear Algebra

Computational Linear Algebra Computational Linear Algebra PD Dr. rer. nat. habil. Ralf Peter Mundani Computation in Engineering / BGU Scientific Computing in Computer Science / INF Winter Term 2017/18 Part 2: Direct Methods PD Dr.

More information

COMP 558 lecture 18 Nov. 15, 2010

COMP 558 lecture 18 Nov. 15, 2010 Least squares We have seen several least squares problems thus far, and we will see more in the upcoming lectures. For this reason it is good to have a more general picture of these problems and how to

More information

Linear Algebra Massoud Malek

Linear Algebra Massoud Malek CSUEB Linear Algebra Massoud Malek Inner Product and Normed Space In all that follows, the n n identity matrix is denoted by I n, the n n zero matrix by Z n, and the zero vector by θ n An inner product

More information

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions

Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Linear Algebra, part 2 Eigenvalues, eigenvectors and least squares solutions Anna-Karin Tornberg Mathematical Models, Analysis and Simulation Fall semester, 2013 Main problem of linear algebra 2: Given

More information

Spectral radius, symmetric and positive matrices

Spectral radius, symmetric and positive matrices Spectral radius, symmetric and positive matrices Zdeněk Dvořák April 28, 2016 1 Spectral radius Definition 1. The spectral radius of a square matrix A is ρ(a) = max{ λ : λ is an eigenvalue of A}. For an

More information

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2

MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS. + + x 1 x 2. x n 8 (4) 3 4 2 MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS SYSTEMS OF EQUATIONS AND MATRICES Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a

More information

Matrices and Matrix Algebra.

Matrices and Matrix Algebra. Matrices and Matrix Algebra 3.1. Operations on Matrices Matrix Notation and Terminology Matrix: a rectangular array of numbers, called entries. A matrix with m rows and n columns m n A n n matrix : a square

More information

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1)

EXERCISE SET 5.1. = (kx + kx + k, ky + ky + k ) = (kx + kx + 1, ky + ky + 1) = ((k + )x + 1, (k + )y + 1) EXERCISE SET 5. 6. The pair (, 2) is in the set but the pair ( )(, 2) = (, 2) is not because the first component is negative; hence Axiom 6 fails. Axiom 5 also fails. 8. Axioms, 2, 3, 6, 9, and are easily

More information

EE731 Lecture Notes: Matrix Computations for Signal Processing

EE731 Lecture Notes: Matrix Computations for Signal Processing EE731 Lecture Notes: Matrix Computations for Signal Processing James P. Reilly c Department of Electrical and Computer Engineering McMaster University October 17, 005 Lecture 3 3 he Singular Value Decomposition

More information

4. Linear transformations as a vector space 17

4. Linear transformations as a vector space 17 4 Linear transformations as a vector space 17 d) 1 2 0 0 1 2 0 0 1 0 0 0 1 2 3 4 32 Let a linear transformation in R 2 be the reflection in the line = x 2 Find its matrix 33 For each linear transformation

More information

Generalized Eigenvectors and Jordan Form

Generalized Eigenvectors and Jordan Form Generalized Eigenvectors and Jordan Form We have seen that an n n matrix A is diagonalizable precisely when the dimensions of its eigenspaces sum to n. So if A is not diagonalizable, there is at least

More information

Ph.D. Katarína Bellová Page 1 Mathematics 2 (10-PHY-BIPMA2) EXAM - Solutions, 20 July 2017, 10:00 12:00 All answers to be justified.

Ph.D. Katarína Bellová Page 1 Mathematics 2 (10-PHY-BIPMA2) EXAM - Solutions, 20 July 2017, 10:00 12:00 All answers to be justified. PhD Katarína Bellová Page 1 Mathematics 2 (10-PHY-BIPMA2 EXAM - Solutions, 20 July 2017, 10:00 12:00 All answers to be justified Problem 1 [ points]: For which parameters λ R does the following system

More information

Linear System Theory

Linear System Theory Linear System Theory Wonhee Kim Lecture 4 Apr. 4, 2018 1 / 40 Recap Vector space, linear space, linear vector space Subspace Linearly independence and dependence Dimension, Basis, Change of Basis 2 / 40

More information

Extra Problems for Math 2050 Linear Algebra I

Extra Problems for Math 2050 Linear Algebra I Extra Problems for Math 5 Linear Algebra I Find the vector AB and illustrate with a picture if A = (,) and B = (,4) Find B, given A = (,4) and [ AB = A = (,4) and [ AB = 8 If possible, express x = 7 as

More information

Linear Algebra Practice Problems

Linear Algebra Practice Problems Linear Algebra Practice Problems Math 24 Calculus III Summer 25, Session II. Determine whether the given set is a vector space. If not, give at least one axiom that is not satisfied. Unless otherwise stated,

More information

1 Determinants. 1.1 Determinant

1 Determinants. 1.1 Determinant 1 Determinants [SB], Chapter 9, p.188-196. [SB], Chapter 26, p.719-739. Bellow w ll study the central question: which additional conditions must satisfy a quadratic matrix A to be invertible, that is to

More information

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra

SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to Linear Algebra SAMPLE OF THE STUDY MATERIAL PART OF CHAPTER 1 Introduction to 1.1. Introduction Linear algebra is a specific branch of mathematics dealing with the study of vectors, vector spaces with functions that

More information

Math 240 Calculus III

Math 240 Calculus III The Calculus III Summer 2015, Session II Wednesday, July 8, 2015 Agenda 1. of the determinant 2. determinants 3. of determinants What is the determinant? Yesterday: Ax = b has a unique solution when A

More information

Maths for Signals and Systems Linear Algebra in Engineering

Maths for Signals and Systems Linear Algebra in Engineering Maths for Signals and Systems Linear Algebra in Engineering Lectures 13 15, Tuesday 8 th and Friday 11 th November 016 DR TANIA STATHAKI READER (ASSOCIATE PROFFESOR) IN SIGNAL PROCESSING IMPERIAL COLLEGE

More information

The Spectral Theorem for normal linear maps

The Spectral Theorem for normal linear maps MAT067 University of California, Davis Winter 2007 The Spectral Theorem for normal linear maps Isaiah Lankham, Bruno Nachtergaele, Anne Schilling (March 14, 2007) In this section we come back to the question

More information

Linear Algebra & Geometry why is linear algebra useful in computer vision?

Linear Algebra & Geometry why is linear algebra useful in computer vision? Linear Algebra & Geometry why is linear algebra useful in computer vision? References: -Any book on linear algebra! -[HZ] chapters 2, 4 Some of the slides in this lecture are courtesy to Prof. Octavia

More information

Chapter 7. Linear Algebra: Matrices, Vectors,

Chapter 7. Linear Algebra: Matrices, Vectors, Chapter 7. Linear Algebra: Matrices, Vectors, Determinants. Linear Systems Linear algebra includes the theory and application of linear systems of equations, linear transformations, and eigenvalue problems.

More information

Background Mathematics (2/2) 1. David Barber

Background Mathematics (2/2) 1. David Barber Background Mathematics (2/2) 1 David Barber University College London Modified by Samson Cheung (sccheung@ieee.org) 1 These slides accompany the book Bayesian Reasoning and Machine Learning. The book and

More information

1. Select the unique answer (choice) for each problem. Write only the answer.

1. Select the unique answer (choice) for each problem. Write only the answer. MATH 5 Practice Problem Set Spring 7. Select the unique answer (choice) for each problem. Write only the answer. () Determine all the values of a for which the system has infinitely many solutions: x +

More information

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i )

1 Multiply Eq. E i by λ 0: (λe i ) (E i ) 2 Multiply Eq. E j by λ and add to Eq. E i : (E i + λe j ) (E i ) Direct Methods for Linear Systems Chapter Direct Methods for Solving Linear Systems Per-Olof Persson persson@berkeleyedu Department of Mathematics University of California, Berkeley Math 18A Numerical

More information

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix.

MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. MATH 323 Linear Algebra Lecture 12: Basis of a vector space (continued). Rank and nullity of a matrix. Basis Definition. Let V be a vector space. A linearly independent spanning set for V is called a basis.

More information

Numerical Linear Algebra

Numerical Linear Algebra Numerical Linear Algebra Direct Methods Philippe B. Laval KSU Fall 2017 Philippe B. Laval (KSU) Linear Systems: Direct Solution Methods Fall 2017 1 / 14 Introduction The solution of linear systems is one

More information

The Eigenvalue Problem: Perturbation Theory

The Eigenvalue Problem: Perturbation Theory Jim Lambers MAT 610 Summer Session 2009-10 Lecture 13 Notes These notes correspond to Sections 7.2 and 8.1 in the text. The Eigenvalue Problem: Perturbation Theory The Unsymmetric Eigenvalue Problem Just

More information

Example Linear Algebra Competency Test

Example Linear Algebra Competency Test Example Linear Algebra Competency Test The 4 questions below are a combination of True or False, multiple choice, fill in the blank, and computations involving matrices and vectors. In the latter case,

More information

Linear Algebra. Workbook

Linear Algebra. Workbook Linear Algebra Workbook Paul Yiu Department of Mathematics Florida Atlantic University Last Update: November 21 Student: Fall 2011 Checklist Name: A B C D E F F G H I J 1 2 3 4 5 6 7 8 9 10 xxx xxx xxx

More information

av 1 x 2 + 4y 2 + xy + 4z 2 = 16.

av 1 x 2 + 4y 2 + xy + 4z 2 = 16. 74 85 Eigenanalysis The subject of eigenanalysis seeks to find a coordinate system, in which the solution to an applied problem has a simple expression Therefore, eigenanalysis might be called the method

More information

Linear Algebra & Analysis Review UW EE/AA/ME 578 Convex Optimization

Linear Algebra & Analysis Review UW EE/AA/ME 578 Convex Optimization Linear Algebra & Analysis Review UW EE/AA/ME 578 Convex Optimization January 9, 2015 1 Notation 1. Book pages without explicit citation refer to [1]. 2. R n denotes the set of real n-column vectors. 3.

More information

Quick Tour of Linear Algebra and Graph Theory

Quick Tour of Linear Algebra and Graph Theory Quick Tour of Linear Algebra and Graph Theory CS224W: Social and Information Network Analysis Fall 2014 David Hallac Based on Peter Lofgren, Yu Wayne Wu, and Borja Pelato s previous versions Matrices and

More information