Linear Algebra in Actuarial Science: Slides to the Lecture, Fall Semester 2010/2011


Linear Algebra is a Tool-Box: Linear Equation Systems

Discretization of differential equations leads to solving linear equation systems such as

$$\begin{pmatrix} 1 & \alpha & 0 & \cdots & 0 \\ \alpha & 1 & \alpha & \ddots & \vdots \\ 0 & \alpha & 1 & \ddots & 0 \\ \vdots & \ddots & \ddots & \ddots & \alpha \\ 0 & \cdots & 0 & \alpha & 1 \end{pmatrix} x = \begin{pmatrix} V(0) \\ 0 \\ \vdots \\ 0 \\ V(n) \end{pmatrix}$$

Existence of solution? Uniqueness of solution? Stability of solution?
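A minimal numpy sketch of such a tridiagonal system; the size n, the coupling parameter α, and the boundary values V(0), V(n) below are assumed purely for illustration, not taken from the lecture:

```python
import numpy as np

n, alpha = 6, 0.25          # assumed size and coupling parameter
V0, Vn = 1.0, 2.0           # assumed boundary values V(0) and V(n)

# Tridiagonal matrix: 1 on the diagonal, alpha on the off-diagonals.
A = np.eye(n) + alpha * (np.eye(n, k=1) + np.eye(n, k=-1))
b = np.zeros(n)
b[0], b[-1] = V0, Vn

x = np.linalg.solve(A, b)   # existence/uniqueness: A must be non-singular
print(x)
```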

Linear Algebra is a Tool-Box: Minimization Problem

Given a data vector b and a parameter matrix A, find x such that ‖Ax − b‖ becomes minimal.
Idea: suitable transformation of parameters and data.
Solution: decomposition of matrices.
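A hedged sketch of this minimization with numpy's least-squares routine; A and b are made-up example data:

```python
import numpy as np

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])   # parameter matrix
b = np.array([1.0, 2.0, 2.0])                        # data vector

x, residual, rank, sing_vals = np.linalg.lstsq(A, b, rcond=None)
print(x)          # minimizer of ||Ax - b||
print(sing_vals)  # lstsq works via the SVD (see the SVD slides below)
```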

Linear Algebra is a Tool-Box: Bonus-Malus Systems

Car insurance with two different premium levels a < c.
Claim in the last year or in the year before: premium c.
No claims in the last two years: premium a.
With P(claim in one year) = p = 1 − q, the transition matrix is

$$P = \begin{pmatrix} p & q & 0 \\ p & 0 & q \\ p & 0 & q \end{pmatrix}$$

Question: What is the right choice for a and c?
Markov chains with stationary distributions: what is $\lim_{n \to \infty} P^n$?
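A small sketch of this chain: iterating P^n approximates the stationary distribution (p = 0.1 is an assumed claim probability):

```python
import numpy as np

p = 0.1
q = 1 - p
P = np.array([[p, q, 0],
              [p, 0, q],
              [p, 0, q]])

# Each row of P^n converges to the stationary distribution (p, p*q, q**2).
print(np.linalg.matrix_power(P, 50))
```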

Linear Algebra is a Tool-Box: Construction of Index-Linked Products with n Underlyings

Classical models use normal distribution assumptions for the assets; a basket leads to a multivariate normal distribution.
What is the square root of a matrix? How can simulations be done on covariance matrices?
Idea: transform the matrix. Solution: decomposition of matrices.

Linear Algebra Syllabus
- Linear Equation Systems: Gauss Algorithm, Existence and Uniqueness of Solutions, Inverse, Determinants
- Vector Spaces: Linear (In-)Dependence, Basis, Inner Products, Gram-Schmidt Algorithm
- Eigenvalues and Eigenvectors: Determination, Diagonalisation, Singular Values
- Decompositions of Matrices: LU, SVD, QR, Cholesky

Field Axioms

Definition. A field is a set F with two operations, called addition and multiplication, which satisfy the following so-called "field axioms" (A), (M) and (D):

(A) Axioms for addition
(A1) If x ∈ F and y ∈ F, then x + y ∈ F.
(A2) Addition is commutative: x + y = y + x for all x, y ∈ F.
(A3) Addition is associative: (x + y) + z = x + (y + z) for all x, y, z ∈ F.
(A4) F contains an element 0 such that 0 + x = x for all x ∈ F.
(A5) To every x ∈ F corresponds an element −x ∈ F such that x + (−x) = 0.

Field Axioms (continued)

(M) Axioms for multiplication
(M1) If x ∈ F and y ∈ F, then xy ∈ F.
(M2) Multiplication is commutative: xy = yx for all x, y ∈ F.
(M3) Multiplication is associative: (xy)z = x(yz) for all x, y, z ∈ F.
(M4) F contains an element 1 ≠ 0 such that 1x = x for all x ∈ F.
(M5) If x ∈ F and x ≠ 0, then there exists an element 1/x ∈ F such that x · (1/x) = 1.

(D) The distributive law x(y + z) = xy + xz holds for all x, y, z ∈ F.

Addition of Matrices

(A1) Equality: A = B ⇔ a_ij = b_ij for 1 ≤ i ≤ m, 1 ≤ j ≤ n.
(A2) A ∈ M(m × n), B ∈ M(m × n) ⇒ A + B := (a_ij + b_ij).
(A3) Addition is commutative: A + B = B + A.
(A4) Addition is associative: (A + B) + C = A + (B + C).
(A5) Identity element: the zero matrix 0 ∈ M(m × n), all of whose entries are 0.
(A6) Negative element: −A = (−a_ij).

Multiplication of Matrices

(M1) A ∈ M(m × n), B ∈ M(n × r) ⇒ C = A · B = (c_ik) ∈ M(m × r), with

$$c_{ik} = a_{i1}b_{1k} + \ldots + a_{in}b_{nk} = \sum_{j=1}^{n} a_{ij}b_{jk}.$$

(M2) Multiplication of matrices is in general not commutative!
(M3) Multiplication of matrices is associative.
(M4) Identity matrix: I = I_n = diag(1, …, 1) ∈ M(n × n).
(M5) A · I = I · A = A, but A · B = A · C does not in general imply B = C!
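A small sketch of (M2) and (M5) with made-up matrices: AB ≠ BA, and AB = AC does not force B = C when A is singular:

```python
import numpy as np

A = np.array([[1, 0], [0, 0]])   # singular matrix
B = np.array([[1, 2], [3, 4]])
C = np.array([[1, 2], [9, 9]])   # differs from B only in the second row

print(np.array_equal(A @ B, B @ A))  # False: multiplication is not commutative
print(np.array_equal(A @ B, A @ C))  # True, although B != C
```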

Transpose, Inverse, Multiplication with Scalars

(T) Transpose of A: A^T, with (A^T)^T = A, (A + B)^T = A^T + B^T and (A · B)^T = B^T · A^T.
(I) Given A ∈ M(n × n) and X ∈ M(n × n). If A · X = X · A = I, then X = A^{-1} is the inverse of A.

(Ms) Multiplication with scalars
(Ms1) λ ∈ K, A ∈ M(m × n) ⇒ λA := (λ a_ij).
(Ms2) Multiplication with scalars is commutative and associative.
(Ms3) The distributive law holds: (λ + µ)A = λA + µA.

The Inverse of a Matrix

Theorem. If A^{-1} exists, it is unique.

Definition. A ∈ M(n × n) is called singular if A^{-1} does not exist; otherwise it is called non-singular.

Theorem. Given A, B ∈ M(n × n) non-singular. Then:
- A · B is non-singular and (A · B)^{-1} = B^{-1} · A^{-1}.
- A^{-1} is non-singular and (A^{-1})^{-1} = A.
- A^T is non-singular and (A^T)^{-1} = (A^{-1})^T.

Gauss Algorithm

Example. The system

2u + v + w = 1
4u + v = −2
−2u + 2v + w = 7

is transformed by elimination into the upper triangular system

2u + v + w = 1
−v − 2w = −4
−4w = −4

with solution w = 1, v = 2, u = −1.
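A quick numpy check of the example system (with the signs as reconstructed above):

```python
import numpy as np

A = np.array([[ 2, 1, 1],
              [ 4, 1, 0],
              [-2, 2, 1]], dtype=float)
b = np.array([1, -2, 7], dtype=float)

print(np.linalg.solve(A, b))  # expected: [-1.  2.  1.]
```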

Theorems: Solutions under the Gauss Algorithm

Theorem. Given a matrix A ∈ M(m × n):
1. The Gauss algorithm transforms A in a finite number of steps into an upper triangular matrix A′.
2. rank(A) = rank(A′).
3. Given the augmented matrix (A, b) and the upper triangular matrix (A′, b′). Then x is a solution of Ax = b ⇔ x is a solution of A′x = b′.

Theorem. Given A ∈ M(m × n), b = (b_1, …, b_m)^T ∈ R^m, x = (x_1, …, x_n)^T ∈ R^n:
- The linear equation system Ax = b has a solution if and only if rank(A) = rank(A, b).
- The linear equation system has a unique solution if and only if rank(A) = rank(A, b) = n.

Theorem. Given x_0 a particular solution of Ax = b. Then the solutions x of Ax = b have the form x = x_0 + h, where h is the general solution of the homogeneous system Ah = 0.

Corollary.
1. Ax = b has a unique solution ⇔ Ax = b has a solution and Ax = 0 has only the solution x = 0.
2. For A ∈ M(n × n): Ax = b has a unique solution ⇔ rank(A) = n ⇔ A is non-singular ⇔ Ax = 0 has only the solution x = 0. The solution of Ax = b is then given as x = A^{-1} b.
3. rank(A) = n ⇔ A^{-1} exists.
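A sketch of the rank criteria in practice, with a made-up 3×2 example:

```python
import numpy as np

A = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])

rank_A  = np.linalg.matrix_rank(A)
rank_Ab = np.linalg.matrix_rank(np.column_stack([A, b]))  # augmented matrix (A, b)

print(rank_A == rank_Ab)                  # solvable?
print(rank_A == rank_Ab == A.shape[1])    # unique solution?
```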

Determination of Determinants: A ∈ M(2 × 2, R)

$$A = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix} \quad\Rightarrow\quad \det(A) = a_{11}a_{22} - a_{12}a_{21}$$

A ∈ M(3 × 3, R): Rule of Sarrus (append the first two columns to the right of A and sum the products along the six diagonals):

$$\det(A) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{13}a_{22}a_{31} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33}$$

Determination of Determinants: A ∈ M(n × n, R)

Definition. Given a matrix A = (a_ij) ∈ M(n × n). The minor A_ij is the determinant of the ((n−1) × (n−1))-matrix obtained by removing the i-th row and the j-th column of A. The associated cofactor is C_ij = (−1)^{i+j} A_ij.

Definition. The determinant of a matrix A = (a_ij) ∈ M(n × n) is defined by

$$\det(A) = \sum_{j=1}^{n} (-1)^{1+j} a_{1j} A_{1j} = \sum_{j=1}^{n} a_{1j} C_{1j},$$

i.e. Laplace expansion along the first row.
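An illustrative sketch of the Laplace expansion along the first row. The cost grows exponentially, so this is only for tiny matrices; numpy.linalg.det is the practical tool:

```python
import numpy as np

def det_laplace(A: np.ndarray) -> float:
    n = A.shape[0]
    if n == 1:
        return A[0, 0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)  # drop row 1, column j
        total += (-1) ** j * A[0, j] * det_laplace(minor)      # a_1j * C_1j
    return total

A = np.array([[2.0, 1.0, 1.0], [4.0, 1.0, 0.0], [-2.0, 2.0, 1.0]])
print(det_laplace(A), np.linalg.det(A))  # both should agree
```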

Determinants: Properties

1. det(A^T) = det(A).
2. If some row a_i = (0, …, 0), then det(A) = 0.
3. det(λA) = λ^n det(A) for A ∈ M(n × n).
4. If a_i = a_j for some i ≠ j, then det(A) = 0.
5. det(A · B) = det(A) · det(B); in general det(A + B) ≠ det(A) + det(B).
6. If A′ arises from A through n row interchanges, then det(A′) = (−1)^n det(A).
7. If A′ arises from A through multiplication of the i-th row with λ, then det(A′) = λ det(A).
8. If A′ arises from A through adding the λ-fold of the i-th row to the j-th row, then det(A′) = det(A).
9. A is non-singular ⇔ rank(A) = n ⇔ det(A) ≠ 0; in that case det(A^{-1}) = 1/det(A).
10. If A = diag(a_11, …, a_nn), then det(A) = ∏_{k=1}^{n} a_kk.

General Vector Spaces

Definition. Given K some field. A non-empty set V is called a K-vector space if
- for all a, b ∈ V: a + b ∈ V,
- for all λ ∈ K and a ∈ V: λ · a ∈ V,
i.e. the set V is closed with respect to addition and multiplication with scalars.

Definition. Given V a K-vector space. A non-empty subset U ⊆ V is called a linear subspace of V if for all λ ∈ K and a, b ∈ U: a + b ∈ U and λ · a ∈ U.

Linear Combination and span(U)

Definition. Given scalars λ_1, …, λ_m ∈ K and vectors a_1, …, a_m ∈ V. A vector a = λ_1 a_1 + … + λ_m a_m is called a linear combination of a_1, …, a_m. The linear combination is called trivial if λ_1 = … = λ_m = 0, otherwise it is called non-trivial.

Definition. Given U ⊆ V a non-empty subset. The set of all linear combinations of vectors in U,

$$\operatorname{span}(U) = \Big\{ a = \sum_i \lambda_i a_i : \lambda_i \in K,\ a_i \in U \Big\},$$

is called span(U).

Linear Independence

Definition. The vectors a_1, …, a_m ∈ V are called linearly dependent (l.d.) if there exists a non-trivial linear combination with λ_1 a_1 + … + λ_m a_m = 0, i.e. λ_i ≠ 0 for some i. The vectors a_1, …, a_m ∈ V are called linearly independent (l.i.) if they are not linearly dependent.

Theorem. Given vectors a_1, …, a_m ∈ V. The following are equivalent:
1. The vectors a_1, …, a_m are linearly dependent.
2. At least one vector a_i is a linear combination of the others, i.e. there is an i ∈ {1, …, m} with a_i = Σ_{j ≠ i} λ_j a_j.

Linear Independence and Basis

Definition. A non-empty subset U ⊆ V is called linearly independent if every finite set of vectors in U is linearly independent.

Definition. Given V a K-vector space. A subset U ⊆ V of linearly independent vectors is called a basis of V if span(U) = V. A vector space has finite dimension if a finite basis exists.

Theorem. In a vector space of finite dimension every basis has the same number of vectors. This number is called the dimension of V, dim V.

Algorithms

Determination of a Basis. Given vectors {v_1, …, v_m} in K^n (see the sketch below):
Step 1: Write the m vectors as rows of a matrix.
Step 2: Apply the Gauss algorithm to get a triangular matrix.
Step 3: The non-zero rows form a basis of the subspace of K^n spanned by {v_1, …, v_m}.

Extension of a Basis. Given m l.i. vectors {v_1, …, v_m} in K^n, with m < n:
Step 1: Take a basis of K^n, e.g. the unit vectors {e_1, …, e_n}.
Step 2: Write the vectors v_j and e_i as rows of a matrix.
Step 3: Apply the Gauss algorithm to get a triangular matrix.
Step 4: The n − m unit vectors that survive as non-zero rows complete the m l.i. vectors to a basis of K^n.
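A sketch of the basis algorithm using sympy's exact row reduction; the three vectors are made-up examples (v2 = 2·v1, so the rank is 2):

```python
import sympy as sp

v1, v2, v3 = [1, 2, 3], [2, 4, 6], [0, 1, 1]

M = sp.Matrix([v1, v2, v3])                     # Step 1: vectors as rows
R, pivots = M.rref()                            # Step 2: row reduction
basis = [R.row(i) for i in range(len(pivots))]  # Step 3: non-zero rows
print(basis)
```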

Unitary Vector Spaces: Scalar Product

Definition. Given V a K-vector space. A mapping ⟨·,·⟩ : V × V → K, (v, w) ↦ ⟨v, w⟩, is called a scalar product in V if for all v, w, u ∈ V and all λ ∈ K:
(S1) ⟨v, v⟩ > 0 if v ≠ 0.
(S2) ⟨v, w⟩ = ⟨w, v⟩ (for K = R).
(S3) ⟨λv, w⟩ = λ⟨v, w⟩.
(S4) ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩.

Definition. A vector space with a scalar product is called a unitary vector space.

Unitary Vector Spaces: Norm

Definition. Given V a K-vector space. A mapping ‖·‖ : V → R is called a norm if
(N1) ‖v‖ ≥ 0 for all v ∈ V, and ‖v‖ = 0 ⇔ v = 0.
(N2) ‖λv‖ = |λ| ‖v‖ for all v ∈ V and λ ∈ K.
(N3) ‖v + w‖ ≤ ‖v‖ + ‖w‖ for all v, w ∈ V (triangle inequality).

Definition. Given V a unitary vector space. The mapping v ↦ ‖v‖ := √⟨v, v⟩ is the norm induced by the scalar product. A vector v ∈ V is called a unit vector if ‖v‖ = 1.

Orthogonal and Orthonormal Vectors

Definition. Given V a unitary vector space. Two vectors u, v ∈ V are called orthogonal if ⟨u, v⟩ = 0. A non-empty subset U ⊆ V is called orthonormal if all vectors in U are pairwise orthogonal and have norm equal to 1.

Definition. Given linearly independent vectors {v_1, …, v_n} ⊆ V. The vectors {v_1, …, v_n} form an orthonormal basis of V if dim V = n and

$$\langle v_i, v_j \rangle = \begin{cases} 1, & i = j \\ 0, & i \neq j. \end{cases}$$

Gram-Schmidt Orthogonalisation

Given V a unitary vector space with norm ‖·‖ and {v_1, …, v_n} a linearly independent subset of V. The set {w_1, …, w_n} is orthonormal (i.e. orthogonal and of unit norm), with

$$w_1 = \frac{v_1}{\|v_1\|}, \qquad w_i = \frac{u_i}{\|u_i\|}, \quad i = 2, \ldots, n,$$

where

$$u_i = v_i - \sum_{k=1}^{i-1} \langle v_i, w_k \rangle\, w_k, \quad i = 2, \ldots, n.$$
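A direct sketch of these formulas in numpy (the classical variant; numerically the modified variant or a QR routine is preferred, see the QR slides). The two input vectors are made up:

```python
import numpy as np

def gram_schmidt(vectors):
    ws = []
    for v in vectors:
        u = v - sum(np.dot(v, w) * w for w in ws)  # subtract projections onto w_1..w_{i-1}
        ws.append(u / np.linalg.norm(u))           # normalize to unit length
    return ws

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
ws = gram_schmidt(vs)
print(np.dot(ws[0], ws[1]))  # ~0: the vectors are orthogonal
```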

Eigenvalues and Eigenvectors

Tacoma Bridge: 4 months after opening, it collapsed. The oscillations of the bridge were caused by the frequency of the wind being too close to the natural frequency of the bridge. The natural frequency is the eigenvalue of smallest magnitude of a system that models the bridge.

Illustration: consider a transformation under which the central vertical axis does not change direction. The blue vector changes direction and hence is not an eigenvector. The red vector is an eigenvector of the transformation, with eigenvalue 1, since it is neither stretched nor compressed.

Definition. Given A ∈ M(n × n) with a_ij ∈ R. λ is called an eigenvalue of A if there exists v ∈ C^n, v ≠ 0, such that Av = λv. The vector v ≠ 0 is called an eigenvector corresponding to λ.

Determination: Av = λv ⇔ Av − λIv = 0 ⇔ (A − λI)v = 0. The homogeneous linear equation system has a non-trivial solution ⇔ det(A − λI) = 0.
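A sketch of the numerical determination with numpy, for a small made-up symmetric matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # columns of `eigenvectors` are the v's
print(eigenvalues)                            # expected: 3 and 1
print(np.allclose(A @ eigenvectors[:, 0], eigenvalues[0] * eigenvectors[:, 0]))
```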

Multiplicities. If λ_j is a zero of order r of the characteristic polynomial P(λ), then r = µ(P, λ_j) is the algebraic multiplicity of λ_j. Given an eigenvalue λ, the geometric multiplicity of λ is defined as the dimension of the eigenspace E_λ.

Theorem. A matrix A ∈ M(n × n) has n linearly independent eigenvectors ⇔ dim E_λ = µ(P, λ) for every eigenvalue λ.

Theorem. A matrix A ∈ M(n × n) can be diagonalised ⇔ A has n linearly independent eigenvectors.

Theorem. Given A ∈ M(n × n) symmetric (i.e. A^T = A) with real entries ⇒ all eigenvalues are real.

Theorem. Given λ_1 and λ_2 distinct eigenvalues of a real and symmetric matrix, with corresponding eigenvectors v_1 and v_2 ⇒ v_1 and v_2 are orthogonal.

Theorem. A real and symmetric matrix A ∈ M(n × n) always has n orthonormal eigenvectors.

Theorem. Given a matrix A ∈ M(n × n). There always exists a non-singular matrix C such that C^{-1}AC = J, where J is a Jordan matrix.

Eigenvalues and Eigenvectors: Overview

Diagonalisation of a matrix, C^{-1}AC = D (case dim E_λ = µ(P, λ) for all λ): D = diag(λ_1, …, λ_n), where the λ_i are the eigenvalues of A, and C = (v_1, …, v_n), where the v_i are the linearly independent eigenvectors of A.

Orthogonal diagonalisation of a matrix, Q^T AQ = D (case dim E_λ = µ(P, λ), A real and symmetric): D = diag(λ_1, …, λ_n), where the λ_i are the real eigenvalues of A, and Q = (v_1, …, v_n), where the v_i are the orthonormal eigenvectors of A.

Jordan form, C^{-1}AC = J (case dim E_λ < µ(P, λ) for some λ): J = diag(B_1(λ_1), …, B_r(λ_r)), where B_i(λ_i) is the Jordan block corresponding to λ_i, and C = (v_1, …, v_n), where the v_i are the (generalized) eigenvectors of A.
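A quick sketch of the diagonalisation C^{-1}AC = D for a diagonalisable made-up matrix (distinct eigenvalues 5 and 2, so n independent eigenvectors exist):

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

eigvals, C = np.linalg.eig(A)              # columns of C: eigenvectors of A
D = np.linalg.inv(C) @ A @ C
print(np.allclose(D, np.diag(eigvals)))    # True: C^{-1} A C is diagonal
```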

Singular Value Decomposition

For a matrix A ∈ M(m × n) there always exists a decomposition A = UΣV^T, where U ∈ M(m × m) and V ∈ M(n × n) are orthogonal matrices and Σ ∈ M(m × n) with

$$\Sigma = \begin{pmatrix} D & 0 \\ 0 & 0 \end{pmatrix},$$

where D = diag(σ_1, …, σ_r) is a diagonal matrix. The values σ_1 ≥ … ≥ σ_r > 0 and σ_{r+1} = … = σ_n = 0 are the singular values of A, i.e. the square roots of the eigenvalues of A^T A.

Singular Value Decomposition: Determination

Step 1: Determine the eigenvalues and eigenvectors of A^T A.
Step 2: Determine the orthonormal eigenvectors {v_1, …, v_n} (Gram-Schmidt). The orthonormal eigenvectors are the columns of V.
Step 3.1: Calculate u_i = (1/σ_i) A v_i for i = 1, …, r.
Step 3.2: If r < m: complete the vectors {u_1, …, u_r} to an orthonormal basis {u_1, …, u_m}. The orthonormal vectors u_i are the columns of U.
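In practice numpy computes the SVD directly; a small check that A = UΣV^T for a made-up rectangular matrix:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=True)  # s holds sigma_1 >= sigma_2
Sigma = np.zeros(A.shape)                        # embed diag(s) into an m x n matrix
Sigma[:len(s), :len(s)] = np.diag(s)

print(s)                                # singular values
print(np.allclose(A, U @ Sigma @ Vt))   # True: decomposition reproduces A
```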

Singular Value Decomposition: Applications

Determination of the rank: given σ_1, …, σ_r the non-zero singular values of A ⇒ rank(A) = r.

Determination of the Moore-Penrose inverse: given A ∈ M(m × n) with linearly dependent columns, the Moore-Penrose inverse can be determined by A^# = V Σ^# U^T, where Σ^# ∈ M(n × m) with

$$\Sigma^{\#} = \begin{pmatrix} D^{-1} & 0 \\ 0 & 0 \end{pmatrix}, \qquad D^{-1} = \operatorname{diag}\Big(\frac{1}{\sigma_1}, \ldots, \frac{1}{\sigma_r}\Big).$$
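A sketch of both applications: numpy's pinv uses exactly this SVD-based construction, and the rank is read off the non-zero singular values (the matrix is made up, rank 1):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [3.0, 6.0]])           # columns are linearly dependent

A_pinv = np.linalg.pinv(A)           # Moore-Penrose inverse via the SVD
print(np.linalg.matrix_rank(A))      # rank from the non-zero singular values
print(np.allclose(A @ A_pinv @ A, A))  # defining property of the pseudoinverse
```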

Decomposition of Matrices: Cholesky Decomposition

Given A ∈ M(n × n) a positive definite matrix (symmetric and all eigenvalues positive). Then there exists a lower triangular matrix L such that A = LL^T.

Advantages:
- Fast column-wise algorithm, starting with the first column.
- The formulas for calculating the l_ij are easy to derive.
- The Cholesky decomposition is uniquely determined by A.
- Applications in finance and actuarial science, e.g. simulation from the estimated covariance matrix of several asset classes (see the sketch below).
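A sketch of the actuarial application: the Cholesky factor acts as the "square root" of the covariance matrix when simulating correlated normal returns (ties back to the index-linked-products slide; the covariance matrix below is made up):

```python
import numpy as np

cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])             # positive definite covariance matrix

L = np.linalg.cholesky(cov)                # lower triangular, cov = L @ L.T
z = np.random.standard_normal((2, 10000))  # independent standard normals
x = L @ z                                  # correlated samples with covariance ~ cov

print(np.allclose(cov, L @ L.T))
print(np.cov(x))                           # should be close to cov
```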

Decomposition of Matrices: LU Decomposition

Given A ∈ M(n × n) a non-singular matrix. Then there exist a lower triangular matrix L, an upper triangular matrix U and a permutation matrix P such that PA = LU, or equivalently A = P^T LU.

Definitions:
- U is the resulting matrix when applying the Gauss algorithm to A.
- L contains 1 on the diagonal; the entries l_ij with i > j are the factors, with sign reversed, that were used in the Gauss algorithm to reduce the corresponding a_ij to 0.
- If rows in A are interchanged, the corresponding factors l_ij in L have to be interchanged as well.

LU decomposition (continued):
- P is the identity matrix with rows i and j permuted whenever the Gauss algorithm permutes the rows i and j of A.

Advantages:
- If the equation system Ax = b has to be solved for different vectors b, the decomposition has to be computed only once. In this case the LU decomposition is faster than re-running the Gauss algorithm (see the sketch below).
- The equation system can be solved by solving Ly = Pb first and then Ux = y.
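A sketch of this reuse with scipy: factor once, then solve for several right-hand sides by forward/back substitution only (matrix and vectors are made up; the matrix is the Gauss example from above):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

A = np.array([[ 2.0, 1.0, 1.0],
              [ 4.0, 1.0, 0.0],
              [-2.0, 2.0, 1.0]])

lu, piv = lu_factor(A)                 # PA = LU, computed once
for b in (np.array([1.0, -2.0, 7.0]), np.array([0.0, 1.0, 0.0])):
    print(lu_solve((lu, piv), b))      # only triangular solves per b
```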

Decomposition of Matrices: QR Decomposition

Given A ∈ M(m × n). Then there exist a matrix Q ∈ M(m × n) with orthonormal columns (the orthonormalized columns of A) and an upper triangular matrix R ∈ M(n × n) such that A = QR.

Definitions: Q = (q_1, …, q_n) is obtained by applying Gram-Schmidt to the columns of A = (a_1, …, a_n), and

$$R = Q^T A = \begin{pmatrix} \langle q_1, a_1 \rangle & \langle q_1, a_2 \rangle & \cdots & \langle q_1, a_n \rangle \\ 0 & \langle q_2, a_2 \rangle & \cdots & \langle q_2, a_n \rangle \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \langle q_n, a_n \rangle \end{pmatrix}.$$

QR decomposition (continued):

Advantages:
- Solving Ax = b is equivalent to solving Q^T Ax = Q^T b =: b′. Therefore it is sufficient to solve the triangular system Rx = b′ (see the sketch below).
- This decomposition can be applied to every matrix A ∈ M(m × n).
- Because of problems with the numerical stability of Gram-Schmidt, more efficient and stable algorithms (e.g. Householder reflections) are available.
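A sketch of solving the least-squares problem min ‖Ax − b‖ via QR: reduce to the triangular system Rx = Q^T b (same made-up data as in the minimization example above):

```python
import numpy as np
from scipy.linalg import solve_triangular

A = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)              # A = QR, Q has orthonormal columns
x = solve_triangular(R, Q.T @ b)    # back-substitution on the triangular system
print(x)                            # same result as np.linalg.lstsq
```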

Norms of Matrices

The norm ‖A‖ of a matrix A = (a_ij) ∈ M(n × n) can be defined by

$$\|A\| := \max_{i=1,\ldots,n} \sum_{j=1}^{n} |a_{ij}|.$$

This is the maximum of the row sums (of absolute values). Alternatively, the norm can be defined as the maximum of the column sums; other definitions are available in the literature.
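Both slide variants are available directly in numpy; a quick sketch with a made-up matrix:

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])

print(np.linalg.norm(A, ord=np.inf))  # max row sum of |a_ij|: 7
print(np.linalg.norm(A, ord=1))       # max column sum of |a_ij|: 6
```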

Norms of Matrices (continued)

Properties (valid for all the above definitions of matrix norms):
- ‖A‖ ≥ 0
- ‖αA‖ = |α| ‖A‖
- ‖A + B‖ ≤ ‖A‖ + ‖B‖
- ‖A · B‖ ≤ ‖A‖ · ‖B‖
- ‖A‖ · ‖A^{-1}‖ ≥ ‖A · A^{-1}‖ = ‖I‖ = 1

Norms of Matrices (continued)

Condition numbers. The condition number cond A of a non-singular matrix A is defined by cond A := ‖A‖ · ‖A^{-1}‖. Due to the properties of the matrix norm it follows that cond A ≥ 1.

Definition. A matrix A is not sensitive to perturbations of its entries if its condition number is close to 1. If the condition number of A is much larger than 1, then a small change in A or b can have a huge impact on the solution x of Ax = b.
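A sketch of this sensitivity: a nearly singular, ill-conditioned made-up matrix amplifies a tiny perturbation of b into a large change of x:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 1.0001]])
b = np.array([2.0, 2.0001])

print(np.linalg.cond(A))                              # large condition number
x1 = np.linalg.solve(A, b)
x2 = np.linalg.solve(A, b + np.array([0.0, 0.0001]))  # tiny change in b
print(x1, x2)                                         # solutions differ substantially
```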