More Linear Algebra. Edps/Soc 584, Psych 594. Carolyn J. Anderson


1 More Linear Algebra
Edps/Soc 584, Psych 594
Carolyn J. Anderson
Department of Educational Psychology, University of Illinois at Urbana-Champaign
© Board of Trustees, University of Illinois, Spring 2017

2 Overview

- Eigensystems: decomposition of a square matrix
- Singular value decomposition: decomposition of a rectangular matrix
- Maximization of quadratic forms (for points on the unit sphere)

Reading: Johnson & Wichern, pages 60–66 and 73–75.

3 Eigensystems

Let A be a p × p square matrix. The scalars λ₁, λ₂, ..., λ_p that satisfy the polynomial equation |A − λI| = 0 are called eigenvalues (or characteristic roots) of the matrix A. The equation |A − λI| = 0 is called the characteristic equation.

Example:

$$A = \begin{pmatrix} 1 & -5 \\ -5 & 1 \end{pmatrix}$$

$$|A - \lambda I| = \begin{vmatrix} 1-\lambda & -5 \\ -5 & 1-\lambda \end{vmatrix} = (1-\lambda)^2 - (-5)(-5) = \lambda^2 - 2\lambda - 24 = (\lambda - 6)(\lambda + 4) = 0$$

so λ₁ = 6 and λ₂ = −4.

Quadratic formula: the roots of $ax^2 + bx + c = 0$ are $(-b \pm \sqrt{b^2 - 4ac})/(2a)$.
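As a quick check on this example, here is a minimal sketch in Python with NumPy (an assumption on my part; the course itself demonstrates these computations in SAS/IML) that recovers the eigenvalues both as roots of the characteristic polynomial and directly from the matrix:

```python
import numpy as np

A = np.array([[1.0, -5.0],
              [-5.0, 1.0]])

# Roots of the characteristic polynomial lambda^2 - 2*lambda - 24 = 0
print(np.roots([1.0, -2.0, -24.0]))   # 6 and -4 (order may vary)

# Eigenvalues computed directly from A
print(np.linalg.eigvals(A))           # 6 and -4 (order may vary)
```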

4 Eigenvectors

A square matrix A is said to have eigenvalue λ with corresponding eigenvector x ≠ 0 if

$$Ax = \lambda x \quad\text{or equivalently}\quad (A - \lambda I)x = 0$$

We usually normalize x so that it has length 1:

$$e = \frac{x}{L_x} = \frac{x}{\sqrt{x'x}}$$

e is also an eigenvector of A: since x = L_x e, the relation Ax = λx gives A(L_x e) = λ(L_x e), so Ae = λe, and e′e = 1.

Any multiple of x is an eigenvector associated with λ; all that matters is the direction of x, not its length.

5 Example: Eigenvectors continued

$$A = \begin{pmatrix} 1 & -5 \\ -5 & 1 \end{pmatrix}, \qquad \begin{pmatrix} 1 & -5 \\ -5 & 1 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \lambda \begin{pmatrix} x_1 \\ x_2 \end{pmatrix}$$

$$\begin{aligned} x_1 - 5x_2 &= \lambda x_1 \\ -5x_1 + x_2 &= \lambda x_2 \end{aligned}$$

So we have 2 equations and 3 unknowns (x₁, x₂, and λ). Set λ = 6; now there are 2 equations with 2 unknowns:

$$\begin{aligned} x_1 - 5x_2 &= 6x_1 \\ -5x_1 + x_2 &= 6x_2 \end{aligned}$$

Both equations give x₂ = −x₁, so any multiple of x = (1, −1)′ works; normalized to length 1,

$$e = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}$$
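The same computation in the illustrative NumPy style used above (again a sketch, not the course's SAS/IML code):

```python
import numpy as np

A = np.array([[1.0, -5.0],
              [-5.0, 1.0]])

# eigh is for symmetric matrices: eigenvalues come back in ascending
# order, with unit-length eigenvectors in the columns of P
lam, P = np.linalg.eigh(A)
print(lam)                             # [-4.  6.]

e1 = P[:, 1]                           # eigenvector paired with lambda = 6
print(e1)                              # (1/sqrt(2), -1/sqrt(2)), up to sign
print(np.allclose(A @ e1, 6.0 * e1))   # True: A e = lambda e
```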

6 Symmetric Matrix

Let A (p × p) be a symmetric matrix. Then A has p pairs of eigenvalues and eigenvectors: λ₁, e₁; λ₂, e₂; ...; λ_p, e_p.

- The eigenvectors are chosen to have length 1: e₁′e₁ = e₂′e₂ = ⋯ = e_p′e_p = 1.
- The eigenvectors are also chosen to be mutually orthogonal (perpendicular): eᵢ ⊥ e_k, that is, eᵢ′e_k = 0 for all i ≠ k.
- The eigenvectors are unique if no two eigenvalues are equal.
- Typically the eigenvalues are ordered from largest to smallest.

7 Little Example continued

$$A = \begin{pmatrix} 1 & -5 \\ -5 & 1 \end{pmatrix}, \quad \lambda_1 = 6,\ \lambda_2 = -4, \quad e_1 = \begin{pmatrix} 1/\sqrt{2} \\ -1/\sqrt{2} \end{pmatrix}, \quad e_2 = \begin{pmatrix} 1/\sqrt{2} \\ 1/\sqrt{2} \end{pmatrix}$$

Note that e₁′e₂ = 0 and L_{e₁} = L_{e₂} = 1.

8 Spectral Decomposition of A

The spectral decomposition of A, where A (p × p) is symmetric:

$$A = \underbrace{\lambda_1 e_1 e_1'}_{p\times p} + \underbrace{\lambda_2 e_2 e_2'}_{p\times p} + \cdots + \underbrace{\lambda_k e_k e_k'}_{p\times p}$$

where eᵢ′eᵢ = 1 for all i and eᵢ′e_j = 0 for all i ≠ j. If A is also positive definite, then k = p: matrix A is decomposed into p (p × p) component matrices.

For the little example, with λ₁ = 6, λ₂ = −4, e₁ = (1/√2, −1/√2)′, and e₂ = (1/√2, 1/√2)′:

$$\lambda_1 e_1 e_1' + \lambda_2 e_2 e_2' = 6\begin{pmatrix} 1/2 & -1/2 \\ -1/2 & 1/2 \end{pmatrix} + (-4)\begin{pmatrix} 1/2 & 1/2 \\ 1/2 & 1/2 \end{pmatrix} = \begin{pmatrix} 1 & -5 \\ -5 & 1 \end{pmatrix} = A$$
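A sketch that rebuilds A from the weighted outer products (illustrative NumPy, as before):

```python
import numpy as np

A = np.array([[1.0, -5.0],
              [-5.0, 1.0]])
lam, P = np.linalg.eigh(A)

# Sum of the component matrices lambda_i * e_i e_i'
A_rebuilt = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(len(lam)))
print(np.allclose(A, A_rebuilt))   # True
```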

9 A Bigger Example

A is a 3 × 3 symmetric matrix with eigenvalues λ₁ = λ₂ = 9 and λ₃ = 18 and corresponding eigenvectors e₁, e₂, e₃. [The numerical entries of A and of the eigenvectors were not recoverable from the transcription.]

Note that since λ₁ = λ₂, the labeling of e₁ and e₂ is arbitrary.
The lengths: e₁′e₁ = e₂′e₂ = e₃′e₃ = 1.
Orthogonality: e₁′e₂ = e₁′e₃ = e₂′e₃ = 0.
Decomposition: A = 9e₁e₁′ + 9e₂e₂′ + 18e₃e₃′.

10 Decomposition of (3 × 3)

[Numerical illustration of A = 9e₁e₁′ + 9e₂e₂′ + 18e₃e₃′ for the 3 × 3 example; the matrix arithmetic was not recoverable from the transcription.]

11 Recall: Quadratic Form

A quadratic form is defined as x′Ax for x ∈ ℝ^p and A (p × p) symmetric. The terms of x′Ax are squares of the xᵢ (i.e., xᵢ²) and cross-products of xᵢ and x_k (i.e., xᵢx_k):

$$x'Ax = \sum_{i=1}^p \sum_{k=1}^p a_{ik} x_i x_k$$

e.g.,

$$(x_1, x_2)\begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = a_{11}x_1^2 + a_{21}x_1x_2 + a_{12}x_1x_2 + a_{22}x_2^2 = \sum_{i=1}^2 \sum_{k=1}^2 a_{ik}x_ix_k$$
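For a concrete check, a small sketch evaluating x′Ax both as matrix products and as the double sum (x = (2, 3)′ is an arbitrary choice of mine):

```python
import numpy as np

A = np.array([[1.0, -5.0],
              [-5.0, 1.0]])
x = np.array([2.0, 3.0])

qf_matrix = x @ A @ x
qf_sum = sum(A[i, k] * x[i] * x[k] for i in range(2) for k in range(2))
print(qf_matrix, qf_sum)   # both -47: 1*4 + 2*(-5)*2*3 + 1*9
```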

12 Eigenvalues and Definiteness

If x′Ax > 0 for all x ≠ 0, matrix A is positive definite.
If x′Ax ≥ 0 for all x ≠ 0, matrix A is non-negative definite.

Important:
All eigenvalues of A > 0 ⟺ A is positive definite.
All eigenvalues of A ≥ 0 ⟺ A is non-negative definite.

Implication: if A is positive definite, then the diagonal elements of A must be positive. If x = (0, ..., 1, ..., 0)′ with the 1 in the i-th position, then x′Ax = aᵢᵢxᵢ² = aᵢᵢ > 0.
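A sketch of a definiteness check based on this eigenvalue criterion (the example matrices and the tolerance are my own choices):

```python
import numpy as np

def definiteness(A, tol=1e-10):
    """Classify a symmetric matrix by the signs of its eigenvalues."""
    lam = np.linalg.eigvalsh(A)
    if np.all(lam > tol):
        return "positive definite"
    if np.all(lam >= -tol):
        return "non-negative definite"
    return "indefinite"

print(definiteness(np.array([[2.0, 0.5], [0.5, 1.0]])))    # positive definite
print(definiteness(np.array([[1.0, -5.0], [-5.0, 1.0]])))  # indefinite (lambda = -4)
```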

13 More on Spectral Decomposition

When A (p × p) is symmetric and positive definite (i.e., the diagonals of A are all > 0 and λᵢ > 0 for all i), we can write the spectral decomposition of A as the sum of weighted vector products,

$$A_{p\times p} = \sum_{i=1}^p \lambda_i e_i e_i'$$

In matrix form this is A = PΛP′, where

$$\Lambda_{p\times p} = \mathrm{diag}(\lambda_i) = \begin{pmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_p \end{pmatrix} \quad\text{and}\quad P_{p\times p} = (e_1, e_2, \ldots, e_p)$$
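The matrix form, verified in the same illustrative style (the 2 × 2 positive definite example matrix is my own):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
lam, P = np.linalg.eigh(A)
Lam = np.diag(lam)

print(np.allclose(A, P @ Lam @ P.T))     # A = P Lambda P'
print(np.allclose(P.T @ P, np.eye(2)))   # columns of P are orthonormal
```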

14 Showing that A = PΛP′

$$A_{p\times p} = P_{p\times p}\,\Lambda_{p\times p}\,P'_{p\times p} = (e_1, e_2, \ldots, e_p)\begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_p \end{pmatrix}\begin{pmatrix} e_1' \\ e_2' \\ \vdots \\ e_p' \end{pmatrix} = (\lambda_1 e_1, \lambda_2 e_2, \ldots, \lambda_p e_p)\begin{pmatrix} e_1' \\ e_2' \\ \vdots \\ e_p' \end{pmatrix} = \sum_{i=1}^p \lambda_i e_i e_i'$$

15 More about P

Since the eᵢ have length 1 (i.e., eᵢ′eᵢ = 1) and eᵢ and e_k are orthogonal for all i ≠ k (i.e., eᵢ′e_k = 0),

$$P'P = \begin{pmatrix} e_1' \\ e_2' \\ \vdots \\ e_p' \end{pmatrix}(e_1, e_2, \ldots, e_p) = I = PP'$$

so P is an orthogonal matrix.

16 Rank r decompositions

If A is non-negative definite (semi-definite), so that λᵢ > 0 for i = 1, ..., r ≤ p and λᵢ = 0 for i = r+1, ..., p, then

$$A_{p\times p} = P_{p\times r}\,\Lambda_{r\times r}\,P'_{r\times p}$$

If A is positive definite or positive semi-definite, we sometimes want to approximate A by a rank r decomposition, where r < rank of A:

$$B = \lambda_1 e_1 e_1' + \cdots + \lambda_r e_r e_r'$$

This decomposition minimizes the loss function

$$\sum_{i=1}^p \sum_{k=1}^p (a_{ik} - b_{ik})^2 = \lambda_{r+1}^2 + \lambda_{r+2}^2 + \cdots + \lambda_p^2$$
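A sketch of a rank-1 approximation and its loss. The 3 × 3 matrix here is an assumed example with eigenvalues 18, 9, 9 (the same pattern as the bigger example above, though its entries are my own choice):

```python
import numpy as np

A = np.array([[13.0, -4.0,  2.0],
              [-4.0, 13.0, -2.0],
              [ 2.0, -2.0, 10.0]])
lam, P = np.linalg.eigh(A)
lam, P = lam[::-1], P[:, ::-1]        # reorder largest to smallest

r = 1                                  # keep only the largest eigenvalue
B = sum(lam[i] * np.outer(P[:, i], P[:, i]) for i in range(r))

loss = np.sum((A - B) ** 2)
print(np.isclose(loss, np.sum(lam[r:] ** 2)))   # True: loss = 9^2 + 9^2 = 162
```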

17 Inverse of A

If A is positive definite, the inverse of A equals A⁻¹ = PΛ⁻¹P′, where

$$\Lambda^{-1} = \mathrm{diag}(1/\lambda_i) = \begin{pmatrix} 1/\lambda_1 & & 0 \\ & \ddots & \\ 0 & & 1/\lambda_p \end{pmatrix}$$

Why:

$$AA^{-1} = (P\Lambda P')(P\Lambda^{-1}P') = P\Lambda\underbrace{P'P}_{I}\Lambda^{-1}P' = P\underbrace{\Lambda\Lambda^{-1}}_{I}P' = PP' = I$$

What does A⁻¹A equal?
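The same identity in the running NumPy sketch (again with an assumed positive definite example):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
lam, P = np.linalg.eigh(A)

A_inv = P @ np.diag(1.0 / lam) @ P.T
print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(2)))      # True; A_inv @ A = I as well
```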

18 Square Root Matrix

If A is symmetric (with non-negative eigenvalues), the square root matrix of A is

$$A^{1/2} = \sum_{i=1}^p \sqrt{\lambda_i}\, e_i e_i' = P\Lambda^{1/2}P'$$

Common mistake: A^{1/2} is not {√aᵢⱼ}, the matrix of element-wise square roots.

Properties of A^{1/2}:
- (A^{1/2})′ = A^{1/2} ... since A^{1/2} is symmetric.
- A^{1/2}A^{1/2} = A
- (A^{1/2})⁻¹ = Σᵢ (1/√λᵢ) eᵢeᵢ′ = PΛ^{−1/2}P′ = A^{−1/2}
- A^{1/2}A^{−1/2} = A^{−1/2}A^{1/2} = I
- A^{−1/2}A^{−1/2} = A⁻¹
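These properties, checked in the sketch style used throughout (assumed example matrix):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
lam, P = np.linalg.eigh(A)

A_half = P @ np.diag(np.sqrt(lam)) @ P.T
A_neg_half = P @ np.diag(1.0 / np.sqrt(lam)) @ P.T

print(np.allclose(A_half @ A_half, A))                          # A^(1/2) A^(1/2) = A
print(np.allclose(A_half @ A_neg_half, np.eye(2)))              # = I
print(np.allclose(A_neg_half @ A_neg_half, np.linalg.inv(A)))   # = A^(-1)
```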

19 Determinant, Trace and Eigenvalues

$$|A| = \prod_{i=1}^p \lambda_i = \lambda_1\lambda_2\cdots\lambda_p$$

Implication: a positive definite matrix has |A| > 0, because λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_p > 0.

$$\sum_{i=1}^p a_{ii} = \mathrm{trace}(A) = \sum_{i=1}^p \lambda_i$$

Now let's consider what's true for Σ and S.
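A quick numerical confirmation of both identities (illustrative, with the same assumed example matrix):

```python
import numpy as np

A = np.array([[2.0, 0.5],
              [0.5, 1.0]])
lam = np.linalg.eigvalsh(A)

print(np.isclose(np.prod(lam), np.linalg.det(A)))   # |A| = product of eigenvalues
print(np.isclose(np.sum(lam), np.trace(A)))         # trace(A) = sum of eigenvalues
```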

20 Numerical Example

We'll use the psychological test data from Rencher (2002), who got it from Beall (1945), to illustrate these properties. 32 males and 32 females had measures on four psychological tests:
x₁ = pictorial inconsistencies
x₂ = paper form board
x₃ = tool recognition
x₄ = vocabulary

S = [4 × 4 sample covariance matrix; entries not recoverable from the transcription]

Note that the total sample variance = trace(S) and the generalized sample variance = det(S).

21 Numerical Example continued

The eigenvalues of S are Λ = diag(λ₁, λ₂, λ₃, λ₄) and the eigenvectors are P = (e₁, e₂, e₃, e₄). [Numerical values not recoverable from the transcription.]

Note that (for example) e₁′e₁ = 1 = L²_{e₁} = L_{e₁}, and e₁′e₂ = 0.

22 Example: eigenvalues of S

Sum of eigenvalues: λ₁ + λ₂ + λ₃ + λ₄ = trace(S) = total sample variance.

Product of the eigenvalues: ∏ᵢ₌₁⁴ λᵢ = det(S) = GSV.

[Numerical values not recoverable from the transcription.]

23 Properties of Covariance Matrices

Σ (p × p) and S (p × p) are the symmetric population and sample covariance matrices, respectively. Most of the following holds for both.

Eigenvalues and eigenvectors: S has p pairs of eigenvalues and eigenvectors λ₁, e₁; λ₂, e₂; ...; λ_p, e_p.
The λᵢ's are the roots of the characteristic equation |S − λI| = 0.
The eigenvectors are the solutions of the equation Seᵢ = λᵢeᵢ.

24 Properties of Covariance Matrices (continued)

Since any multiple of eᵢ will solve the above equation, we (usually) set the length of eᵢ to 1 (i.e., L_{eᵢ} = √(eᵢ′eᵢ) = 1).
Eigenvectors are orthogonal: eᵢ′e_k = 0 for all i ≠ k.
Convention for ordering the eigenvalues: λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_p.
Since S (and Σ) are symmetric, the eigenvalues are real numbers.

25 More about Covariance Matrices

Spectral decomposition: S = λ₁e₁e₁′ + λ₂e₂e₂′ + ⋯ + λ_pe_pe_p′ = PΛP′, with P (p × p) = (e₁, e₂, ..., e_p) and Λ (p × p) = diag(λᵢ).

P′P = {eᵢ′e_k} = I = PP′, which implies that P′ = P⁻¹.

Implications for quadratic forms:
If x′Sx > 0 for all x ≠ 0, then S is positive definite and λᵢ > 0 for all i.
If x′Sx ≥ 0 for all x ≠ 0, then S is non-negative or positive semi-definite and λᵢ ≥ 0 for all i.

The inverse of S (if S is non-singular, i.e., λᵢ > 0 for all i) is S⁻¹ = PΛ⁻¹P′ = P diag(1/λᵢ) P′.

26 Numerical Example & Spectral Decomposition

S = PΛP′ for the psychological test data. [Numerical values not recoverable from the transcription.] See the SAS/IML demonstration of this and of S⁻¹ = PΛ⁻¹P′.

27 and Even More about Covariance Matrices

If {λᵢ, eᵢ; i = 1, ..., p} is the eigensystem of Σ and Σ is non-singular, then {1/λᵢ, eᵢ; i = 1, ..., p} is the eigensystem of Σ⁻¹. That is, Σ and Σ⁻¹ have the same eigenvectors, and their eigenvalues are the inverses of each other.

|S| = λ₁λ₂⋯λ_p = ∏ᵢ₌₁^p λᵢ. This is the generalized sample variance (GSV).

∑ᵢ₌₁^p sᵢᵢ = trace(S) = tr(S) = ∑ᵢ₌₁^p λᵢ. This is the total sample variance.

If λ_p, the smallest eigenvalue, is greater than 0, then |S| > 0.

If S is singular, then at least one eigenvalue equals 0.

28 The Rank of S (and Σ)

Definition of rank: the rank of S = the number of linearly independent rows (columns) = the number of non-zero eigenvalues.

If S (p × p) is of full rank (i.e., rank = p), then
λ_p > 0 ⟺ S is positive definite ⟺ |S| > 0 ⟺ S⁻¹ exists ⟺ S is non-singular (definition: p linearly independent rows/columns).

29 Singular Value Decomposition

Given a matrix A (n × p), the singular value decomposition (SVD) of A is

$$A_{n\times p} = P_{n\times r}\,\Delta_{r\times r}\,Q'_{r\times p}$$

where
- The r columns of P = (p₁, p₂, ..., p_r) are orthonormal: pᵢ′pᵢ = 1 and pᵢ′p_k = 0 for i ≠ k; that is, P′P = I_r.
- The r columns of Q = (q₁, q₂, ..., q_r) are orthonormal: qᵢ′qᵢ = 1 and qᵢ′q_k = 0 for i ≠ k; that is, Q′Q = I_r.
- Δ is a diagonal matrix with ordered positive values δ₁ ≥ δ₂ ≥ ⋯ ≥ δ_r.
- r is the rank of A, which must satisfy r ≤ min(n, p).
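A sketch with NumPy's SVD on an arbitrary random matrix (my own example; note that np.linalg.svd returns Q′ directly, and full_matrices=False gives the economy-size factors):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))            # an n x p matrix, here full rank r = 4

P, delta, Qt = np.linalg.svd(X, full_matrices=False)

print(np.allclose(X, P @ np.diag(delta) @ Qt))   # X = P Delta Q'
print(np.allclose(P.T @ P, np.eye(4)))           # P'P = I_r
print(np.allclose(Qt @ Qt.T, np.eye(4)))         # Q'Q = I_r
print(delta)                                     # delta_1 >= ... >= delta_r > 0
```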

30 Singular Value Decomposition (continued)

$$A_{n\times p} = P_{n\times r}\,\Delta_{r\times r}\,Q'_{r\times p}$$

Terminology:
- The columns of P are the left singular vectors.
- The columns of Q are the right singular vectors.
- The elements of Δ are the singular values.

31 Relationship between Eigensystems and SVD

To show this, let X (n × p) have rank p, with X = P (n × p) Δ (p × p) Q′ (p × p). The product X′X is a square and symmetric matrix:

$$X'X = (P\Delta Q')'(P\Delta Q') = Q\Delta\underbrace{P'P}_{I}\Delta Q' = \underbrace{Q}_{\text{vectors}}\;\underbrace{\Delta^2}_{\text{values}}\;\underbrace{Q'}_{\text{vectors}}$$

So if A (e.g., X′X) is square and symmetric, the SVD gives the same result as the eigenvector/eigenvalue decomposition.
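Numerically, the eigenvalues of X′X should match the squared singular values of X (same illustrative random matrix as before):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))

P, delta, Qt = np.linalg.svd(X, full_matrices=False)
lam = np.linalg.eigvalsh(X.T @ X)[::-1]   # eigenvalues of X'X, largest first

print(np.allclose(lam, delta ** 2))       # True: eigenvalues = delta_i^2
```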

32 Lower Rank SVD

Sometimes we want to summarize or approximate the basic structure of a matrix. In particular, let A (n × p) = P (n × r) Δ (r × r) Q′ (r × p); then

$$B_{n\times p} = P_{n\times r^*}\,\Delta_{r^*\times r^*}\,Q'_{r^*\times p}$$

where r* < r (note: r = rank of matrix A). This lower rank decomposition minimizes the loss function

$$\sum_{j=1}^n \sum_{i=1}^p (a_{ji} - b_{ji})^2 = \delta_{r^*+1}^2 + \cdots + \delta_r^2$$

This result, the least-squares approximation of one matrix by another of lower rank, is known as the Eckart-Young theorem. See Eckart, C. & Young, G. (1936). The approximation of one matrix by another of lower rank. Psychometrika, 1.
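A truncated-SVD sketch verifying the Eckart-Young loss (same assumed random matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))
P, delta, Qt = np.linalg.svd(X, full_matrices=False)

r_star = 2   # keep the two largest singular values
B = P[:, :r_star] @ np.diag(delta[:r_star]) @ Qt[:r_star, :]

loss = np.sum((X - B) ** 2)
print(np.isclose(loss, np.sum(delta[r_star:] ** 2)))   # True: loss = sum of dropped delta^2
```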

33 So What can I do with SVD?

- Biplot: lower rank representation of a data matrix.
- Correspondence analysis: lower rank representation of the relationship between two categorical variables.
- Multiple correspondence analysis: lower rank representations of the relationship between multiple categorical variables.
- Multidimensional scaling.
- Reduce the number of parameters in a complex model.
- And many other scaling and data-analytic methods.

We'll examine what a biplot can give us. Consider the psychological test data: the rank of the data matrix is 4, so

$$X_c = (X - 1\bar{x}') = \underbrace{P_{64\times 4}\,\Delta_{4\times 4}}_{\text{cases}}\;\underbrace{Q'_{4\times 4}}_{\text{variables}} = (P\Delta)\,Q'$$

34 Biplot Example: Singular Values

For each i, the table lists δᵢ, δᵢ², percent, and cumulative percent, where percent = (δᵢ² / Σₖ δₖ²) × 100% and cumulative percent = (Σₖ₌₁ⁱ δₖ² / Σₖ δₖ²) × 100%, with Σₖ δₖ² = 6692. [The table's numerical entries were not recoverable from the transcription.]

If we take a rank 2 decomposition,

$$B = \sum_{l=1}^2 \delta_l p_l q_l' = \{\delta_1 p_{j1}q_{i1} + \delta_2 p_{j2}q_{i2}\} = \{b_{ji}\}$$

and the value of the loss function is

$$\text{loss} = \sum_{j=1}^{64}\sum_{i=1}^4 (x_{c,ji} - b_{ji})^2 = \delta_3^2 + \delta_4^2 = 1096$$

Only losing (1096/6692) × 100% = 16.39% of the information in the data matrix (loosely speaking).

35 Biplot Example: Singular Vectors

Left singular vectors: P (64 × 4), with columns p₁, p₂, p₃, p₄ (one row per case). Right singular vectors: Q (4 × 4), with columns q₁, q₂, q₃, q₄ (one row per variable). [Numerical values not recoverable from the transcription.]

36 Biplot: Representing Cases

First let's look at the rank 2 solution/approximation:

$$X_{c\,(64\times 4)} \approx P_{(64\times 2)}\,\Delta_{(2\times 2)}\,Q'_{(2\times 4)}$$

For our rank 2 solution, to represent subjects or cases, we'll plot the rows of the product PΔ as points in a 2-dimensional space. Let q_{il} = the value in the i-th row of q_l. Post-multiplying both sides of X_c = PΔQ′ by Q gives

$$P\Delta = X_{c\,(64\times 4)}\,Q_{(4\times 4)} = \begin{pmatrix} \sum_{i=1}^4 q_{i1}x_{c,1i} & \sum_{i=1}^4 q_{i2}x_{c,1i} & \sum_{i=1}^4 q_{i3}x_{c,1i} & \sum_{i=1}^4 q_{i4}x_{c,1i} \\ \vdots & \vdots & \vdots & \vdots \\ \sum_{i=1}^4 q_{i1}x_{c,64i} & \sum_{i=1}^4 q_{i2}x_{c,64i} & \sum_{i=1}^4 q_{i3}x_{c,64i} & \sum_{i=1}^4 q_{i4}x_{c,64i} \end{pmatrix}$$

37 Biplot: Representing Cases & Variables

For cases, what we are plotting are linear combinations of the (mean-centered) data matrix. For example, for subject one we plot the point

(p₁₁δ₁, p₁₂δ₂) = ((0.002)(67.685), (0.248)(31.859)) = (0.135, 7.901).

To represent variables, we'll plot the rows of Q (4 × 2) as vectors in the 2-dimensional space. For example, for variable one we plot (0.274, 0.001).

For the plot, I actually plotted the variable vectors multiplied by 30 for cosmetic purposes; it doesn't affect the interpretation.
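A sketch of the full biplot computation for a centered data matrix; the random data here stand in for the 64 × 4 psychological test scores, which are not reproduced above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))          # placeholder for the 64 x 4 test data
Xc = X - X.mean(axis=0)               # mean-center each column

P, delta, Qt = np.linalg.svd(Xc, full_matrices=False)

case_points = P[:, :2] * delta[:2]    # row j: (p_j1 * delta_1, p_j2 * delta_2)
var_vectors = Qt[:2, :].T             # row i: 2-d vector for variable i

# The case coordinates are linear combinations of the centered data: P Delta = Xc Q
print(np.allclose(case_points, Xc @ Qt[:2, :].T))   # True
```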

38 The Graph & Foreshadowing of Things to Come

[Biplot of the rank 2 solution, with cases plotted as points and variables as vectors; figure not reproduced.]

39 Maximization of Quadratic Forms for Points on the Unit Sphere

In multivariate analyses we have different goals and purposes, hence different criteria to maximize (or minimize). Let B (p × p) be a positive definite matrix with eigenvalues λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_p and eigenvectors e₁, e₂, ..., e_p.

Maximization: $$\max_{x\neq 0}\frac{x'Bx}{x'x} = \lambda_1, \text{ attained at } x = e_1$$

Minimization: $$\min_{x\neq 0}\frac{x'Bx}{x'x} = \lambda_p, \text{ attained at } x = e_p$$

Maximization under an orthogonality constraint: $$\max_{x\,\perp\,e_1,\ldots,e_k}\frac{x'Bx}{x'x} = \lambda_{k+1}, \text{ attained at } x = e_{k+1}$$
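A sketch checking the maximization result empirically (the positive definite B is an assumed example with eigenvalues 18, 9, 9):

```python
import numpy as np

B = np.array([[13.0, -4.0,  2.0],
              [-4.0, 13.0, -2.0],
              [ 2.0, -2.0, 10.0]])
lam, E = np.linalg.eigh(B)            # ascending, so lam[-1] is lambda_1

def rayleigh(x):
    return (x @ B @ x) / (x @ x)

rng = np.random.default_rng(1)
ratios = [rayleigh(rng.normal(size=3)) for _ in range(10_000)]
print(max(ratios) <= lam[-1] + 1e-9)             # no direction beats lambda_1
print(np.isclose(rayleigh(E[:, -1]), lam[-1]))   # x = e_1 attains the maximum
```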

40 Overview of the Rest of the Semester

See the pages on the web-site.
