Singular Value Decomposition and Principal Component Analysis (PCA) I
Prof. Ned Wingreen, MOL 410/510

Microarray review

Data per array: ~10,000 genes, with two intensities $I^{(\text{green})}_i$ and $I^{(\text{red})}_i$ per gene, i.e. $\sim 2 \times 10^4$ data points! With ~100 arrays, this yields the expression matrix $X$, with entries of the form

$$x_{ij} = \log_2\!\left(\frac{I^{(\text{green})}_{ij}}{I^{(\text{red})}_{ij}}\right),$$

where the $i$th row $\vec{g}_i$ of the $m$ rows is the transcriptional response of gene $i$, and the $j$th column $\vec{a}_j$ of the $n$ columns is the expression profile of assay $j$. This is an $m \times n$ matrix, where the number of rows is $m \approx 10{,}000$ and the number of columns is $n \approx 100$.

Example: assays taken every 10 minutes after addition of some compound (nutrient, poison, signal molecule). How do we characterize the response of the genes?

[Figure 1: Noisy genes. Sketch of $\log_2(I(t)/I(0))$ vs. $t$ for individual genes.]

For any individual gene, it may be impossible to see the signal through the noise. Individual genes may respond with a superposition of different patterns and noise. How do we extract the important patterns from the noise? We will use SVD and PCA to analyze the expression matrix $X$.
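As a concrete illustration, here is a minimal sketch (assuming NumPy) of assembling such an expression matrix; the intensity arrays are simulated stand-ins, not real microarray data:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 10_000, 100            # genes, assays (arrays)

# Simulated green/red intensities for each gene in each assay.
I_green = rng.lognormal(mean=8.0, sigma=1.0, size=(m, n))
I_red   = rng.lognormal(mean=8.0, sigma=1.0, size=(m, n))

# Expression matrix: x_ij = log2(I_green / I_red).
X = np.log2(I_green / I_red)

print(X.shape)                # (10000, 100): rows = genes g_i, columns = assays a_j
```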
But first we need to learn (or review) some linear algebra, so we can manipulate matrices like $X$.

Linear Algebra

Matrices

An $m \times n$ matrix and its transpose are

$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & & & \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}, \qquad A^T = \begin{pmatrix} a_{11} & a_{21} & \cdots & a_{m1} \\ a_{12} & & & \\ \vdots & & & \vdots \\ a_{1n} & a_{2n} & \cdots & a_{mn} \end{pmatrix},$$

where the transpose interchanges columns and rows. If $m = n$, then $A$ is called square.

Matrix multiplication

If $A$ is an $m \times n$ matrix and $B$ is an $n \times l$ matrix, then $AB$ is an $m \times l$ matrix, where

$$(AB)_{ij} = \sum_{k=1}^{n} a_{ik} b_{kj}.$$

We can think of this as dot products of vectors: writing the rows of $A$ as $\vec{a}_1^{\,T}, \ldots, \vec{a}_m^{\,T}$ and the columns of $B$ as $\vec{b}_1, \ldots, \vec{b}_l$,

$$(AB)_{ij} = \vec{a}_i \cdot \vec{b}_j = \vec{a}_i^{\,T} \vec{b}_j.$$

Identity matrix

The square matrix

$$I = \begin{pmatrix} 1 & & 0 \\ & \ddots & \\ 0 & & 1 \end{pmatrix}$$

is the identity matrix because $AI = A$ and $IA = A$, for all matrices $A$.
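A quick numerical check of the entry formula against NumPy's built-in product; a sketch, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 4))   # m x n
B = rng.standard_normal((4, 2))   # n x l

# Each entry (AB)_ij is the dot product of row i of A with column j of B.
AB = np.array([[A[i, :] @ B[:, j] for j in range(B.shape[1])]
               for i in range(A.shape[0])])

assert np.allclose(AB, A @ B)          # matches the built-in m x l product
assert np.allclose(A @ np.eye(4), A)   # AI = A
```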
Inverse of a square matrix

$$AA^{-1} = A^{-1}A = I.$$

Note that $(A^{-1})^{-1} = A$. Finding the inverse of a matrix is slow in practice. In general,

$$A^{-1} = \frac{1}{|A|}\,(C_{ij})^T = \frac{1}{|A|}\,(C_{ji}), \qquad |A| = \det A,$$

so $A^{-1}$ is $1/\det A$ times the transpose of the cofactor matrix (a.k.a. the adjoint of $A$). Here

$$C_{ij} = (-1)^{i+j} M_{ij},$$

where the minor $M_{ij}$ is the determinant of the submatrix obtained from $A$ by deleting row $i$ and column $j$.

Determinants of square matrices

Example, $2 \times 2$ matrix:

$$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc.$$

The determinant of larger matrices can be defined by induction. Example, $3 \times 3$ matrix:

$$\det \begin{pmatrix} i & j & k \\ a & b & c \\ d & e & f \end{pmatrix} = i(bf - ce) - j(af - cd) + k(ae - bd).$$

More generally, expanding along row $i$,

$$|A| = \det A = \sum_{j=1}^{n} a_{ij} (-1)^{i+j} M_{ij},$$

where as above $M_{ij}$ is the determinant of the submatrix formed from $A$ without row $i$ and column $j$, so that expansion along the first row gives $|A| = a_{11} M_{11} - a_{12} M_{12} + \cdots$.

Example: inverse of a $2 \times 2$ matrix. For

$$A = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \qquad A^{-1} = \frac{1}{|A|} \begin{pmatrix} C_{11} & C_{21} \\ C_{12} & C_{22} \end{pmatrix} = \frac{1}{|A|} \begin{pmatrix} M_{11} & -M_{21} \\ -M_{12} & M_{22} \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix}.$$

Check:

$$A^{-1}A = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix} \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \frac{1}{ad - bc} \begin{pmatrix} ad - bc & 0 \\ 0 & ad - bc \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = I.$$
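The cofactor formula translates into code directly. A sketch (assuming NumPy), fine for small matrices but, as noted, slow in practice compared with the elimination-based routines libraries actually use:

```python
import numpy as np

def cofactor_inverse(A):
    """Invert a small square matrix via A^{-1} = (1/det A) * adj(A)."""
    n = A.shape[0]
    C = np.empty_like(A, dtype=float)
    for i in range(n):
        for j in range(n):
            # Minor M_ij: determinant of A with row i and column j deleted.
            minor = np.delete(np.delete(A, i, axis=0), j, axis=1)
            C[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return C.T / np.linalg.det(A)   # adjoint = transpose of the cofactor matrix

A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(cofactor_inverse(A), np.linalg.inv(A))
assert np.allclose(cofactor_inverse(A) @ A, np.eye(2))   # A^{-1} A = I
```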
Linear independence

Vectors $\vec{v}_1, \ldots, \vec{v}_n$ are linearly independent if

$$c_1 \vec{v}_1 + c_2 \vec{v}_2 + c_3 \vec{v}_3 + \cdots + c_n \vec{v}_n = 0 \implies c_1 = c_2 = \cdots = c_n = 0.$$

If $n$ vectors are not linearly independent, then they span a subspace of dimension $< n$. E.g., two vectors $\vec{v}_1$ and $\vec{v}_2$ are linearly dependent if and only if they are multiples of each other, $\vec{v}_1 = a \vec{v}_2$. In this case $\vec{v}_1$ and $\vec{v}_2$ only span a 1-dimensional subspace.

[Figure 2: Linearly dependent vectors.]

Orthogonality

Two vectors are orthogonal if $\vec{x} \cdot \vec{y} = 0$. For example,

$$\vec{x} = \begin{pmatrix} 1 \\ 2 \end{pmatrix}, \quad \vec{y} = \begin{pmatrix} 4 \\ -2 \end{pmatrix}: \quad \vec{x} \cdot \vec{y} = 1 \cdot 4 + 2 \cdot (-2) = 0.$$

Orthogonal vectors are perpendicular, $\vec{x} \perp \vec{y}$.

[Figure 3: Orthogonal vectors.]

Eigenvalues and eigenvectors of symmetric matrices

"Eigen" means characteristic, so these are also sometimes called characteristic values and vectors. Eigenvalues and eigenvectors are defined and related by the equation

$$A \vec{v} = \lambda \vec{v},$$
where $\vec{v}$ is an eigenvector and $\lambda$ is an eigenvalue. (A consequence of this definition is that if $\vec{v}$ is an eigenvector, then so is $c\vec{v}$.) To find eigenvalues, we use the theorem

$$(A - \lambda I)\vec{v} = 0 \text{ has a nonzero solution } \vec{v} \iff \det(A - \lambda I) = 0.$$

This follows from a fact which is good to know: $\det X = \prod_i \lambda_i$. Similarly, $\operatorname{tr} X = \sum_i \lambda_i$. We can therefore find eigenvalues by solving

$$\det(A - \lambda I) = \det \begin{pmatrix} a_{11} - \lambda & a_{12} & \cdots \\ a_{21} & a_{22} - \lambda & \\ \vdots & & \ddots \\ & & & a_{nn} - \lambda \end{pmatrix} = 0,$$

which is an $n$th-order polynomial in $\lambda$, with $n$ roots, i.e. solutions for $\lambda$ (these are either real or form complex conjugate pairs). Once a $\lambda_i$ is known, $A\vec{v} = \lambda_i \vec{v}$ gives a set of linear equations which can be solved for the associated eigenvector $\vec{v}_i$.

Diagonalization

If $A$ is an $n \times n$ matrix with $n$ distinct eigenvalues $\lambda_1, \lambda_2, \ldots, \lambda_n$ and associated eigenvectors $\vec{v}_i$, then let

$$U = (\vec{v}_1 \; \vec{v}_2 \; \cdots \; \vec{v}_n),$$

where the eigenvectors $\vec{v}_i$ are the columns of $U$. Now

$$AU = A(\vec{v}_1, \vec{v}_2, \ldots, \vec{v}_n) = (A\vec{v}_1, A\vec{v}_2, \ldots, A\vec{v}_n) = (\lambda_1 \vec{v}_1, \lambda_2 \vec{v}_2, \ldots, \lambda_n \vec{v}_n) = U \begin{pmatrix} \lambda_1 & & 0 \\ & \ddots & \\ 0 & & \lambda_n \end{pmatrix} = UD,$$

where $D$ is the diagonal matrix of eigenvalues. Manipulating the expression $AU = UD$ yields the identity $A = UDU^{-1}$. This is the eigen-decomposition theorem, and it is an example of a similarity transform. When $A$ is a symmetric matrix, i.e., when $A = A^T$, then it is possible to make the $\vec{v}$'s orthonormal, in which case $U^{-1} = U^T$, so that $A = UDU^T$.
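These facts are easy to verify numerically. A sketch assuming NumPy (np.linalg.eig returns the eigenvalues and the matrix $U$ whose columns are the eigenvectors):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))

lam, U = np.linalg.eig(A)          # columns of U are eigenvectors
D = np.diag(lam)

assert np.allclose(A @ U, U @ D)                         # AU = UD
assert np.allclose(A, U @ D @ np.linalg.inv(U))          # A = U D U^{-1}
assert np.isclose(np.linalg.det(A), np.prod(lam).real)   # det A = prod of lambda_i
assert np.isclose(np.trace(A), np.sum(lam).real)         # tr A = sum of lambda_i

# Symmetric case: eigenvectors can be chosen orthonormal, so U^{-1} = U^T.
Asym = A + A.T
lam_s, Us = np.linalg.eigh(Asym)
assert np.allclose(Asym, Us @ np.diag(lam_s) @ Us.T)     # A = U D U^T
```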
(Principal Component Analysis (PCA) is equivalent to diagonalization of a particular symmetric matrix: the covariance matrix.) The eigendecomposition allows some nice results:

$$A^2 = AA = (UDU^{-1})(UDU^{-1}) = UDDU^{-1} = UD^2U^{-1} = U \begin{pmatrix} \lambda_1^2 & & \\ & \ddots & \\ & & \lambda_n^2 \end{pmatrix} U^{-1},$$

and in the same way

$$A^m = UD^mU^{-1} = U \begin{pmatrix} \lambda_1^m & & \\ & \ddots & \\ & & \lambda_n^m \end{pmatrix} U^{-1}.$$

So if we know the eigenvalues of $A$, we immediately know the eigenvalues of $A^m$. In particular,

$$A^{-1} = (UDU^{-1})^{-1} = (U^{-1})^{-1} D^{-1} U^{-1} = UD^{-1}U^{-1}.$$

It is easy to check that

$$A^{-1}A = (UD^{-1}U^{-1})(UDU^{-1}) = UD^{-1}DU^{-1} = UU^{-1} = I.$$

Therefore

$$A^{-1} = U \begin{pmatrix} 1/\lambda_1 & & \\ & \ddots & \\ & & 1/\lambda_n \end{pmatrix} U^{-1}.$$

Singular Value Decomposition (SVD)

[Figure 4: SVD. $A$ maps the orthonormal vectors $\hat{v}_1, \hat{v}_2$ to $s_1 \hat{u}_1, s_2 \hat{u}_2$.]

SVD is a generalization of diagonalization for non-symmetric matrices. If $A$ is an $m \times n$ matrix, then $\exists\, U, D, V$:

$$A = UDV^T,$$
where $U$ is an $m \times m$ matrix with orthonormal columns and $V$ is an $n \times n$ matrix with orthonormal rows. $D$ is an $m \times n$ diagonal matrix,

$$D = \begin{pmatrix} S_1 & & \\ & \ddots & \\ & & S_n \\ & \mathbf{0} & \end{pmatrix}, \qquad S_1 \ge S_2 \ge \cdots \ge S_n \ge 0.$$

These $S_i$ are the singular values.
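A sketch with NumPy, whose svd routine returns $V^T$ directly and the singular values as a vector:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((6, 4))            # m x n, with m > n

U, S, Vt = np.linalg.svd(A)                # A = U D V^T
D = np.zeros_like(A)
D[:len(S), :len(S)] = np.diag(S)           # m x n "diagonal" matrix of singular values

assert np.allclose(A, U @ D @ Vt)
assert np.all(S[:-1] >= S[1:])             # S_1 >= S_2 >= ... >= S_n >= 0
assert np.allclose(U.T @ U, np.eye(6))     # orthonormal columns of U
assert np.allclose(Vt @ Vt.T, np.eye(4))   # orthonormal rows of V^T
```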
SVD and PCA II

Prof. Ned Wingreen, MOL 410/510

Let's see how we can first apply PCA and then SVD to extract information about gene regulation from a gene expression matrix. Recall the expression matrix $X$, where the $i$th row $\vec{g}_i$ of the $m$ rows is the transcriptional response of gene $i$, and the $j$th column $\vec{a}_j$ of the $n$ columns is the expression profile of assay $j$. Consider the expression profile for each assay. This will be the vector

$$\vec{a}_k = (x_{1k}, x_{2k}, \ldots, x_{mk}).$$

Each assay is therefore associated with a point in the $m$-dimensional space of genes.

[Figure 1: Gene space. Assays $\vec{a}_1, \vec{a}_2, \ldots$ plotted in the $(g_1, g_2)$ plane.]

The graph, of course, shows only 2 dimensions of an $m$-dimensional space, but one can immediately see that there is a correlation between the expression of gene $g_1$ and gene $g_2$. How can we find correlations among multiple genes? E.g., how would we know if the expression of genes $g_1$, $g_2$, $g_{37}$, and $g_{64}$ are all correlated? We can learn this from PCA and/or SVD.

First, center the data by subtracting out the mean value for each gene:

$$y_{ik} = x_{ik} - \langle x_i \rangle, \qquad \langle x_i \rangle = \frac{1}{n} \sum_{k=1}^{n} x_{ik},$$

the average having been taken over the $n$ assays. This yields the vector

$$\tilde{a}_k = \vec{a}_k - \langle \vec{x} \rangle = (y_{1k}, y_{2k}, \ldots, y_{mk}).$$
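In code the centering step is one line; a sketch assuming NumPy, with a small random matrix standing in for real expression data:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 5, 8                              # genes, assays
X = rng.standard_normal((m, n))

x_mean = X.mean(axis=1, keepdims=True)   # <x_i>: mean of each gene over the n assays
Y = X - x_mean                           # y_ik = x_ik - <x_i>

assert np.allclose(Y.mean(axis=1), 0.0)  # every gene is now centered
a_tilde_1 = Y[:, 0]                      # centered profile of the first assay
```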
[Figure 2: Re-centered gene space.]

Notice in this example that even after subtracting out the means, the expression values for genes 1 and 2 are correlated over the set of assays. Biologically, this suggests that genes 1 and 2 are coregulated. How can we quantify this? In particular, how can we recognize if many sets of genes are coregulated (or counter-regulated)? Graphically, we'd like to find the directions in gene space (weighted combinations of genes) that best capture the observed correlations in the data. How do we do this mathematically, allowing for many correlated genes? Answer: these directions are the principal components of a particular matrix, the covariance matrix. With $n$ assays, this takes the form

$$C_{ij} = \frac{1}{n} \sum_{k=1}^{n} y_{ik} y_{jk} = \frac{1}{n} \sum_{k=1}^{n} (x_{ik} - \langle x_i \rangle)(x_{jk} - \langle x_j \rangle) = E[(x_i - \langle x_i \rangle)(x_j - \langle x_j \rangle)] = E[x_i x_j] - \langle x_i \rangle \langle x_j \rangle = \operatorname{cov}(x_i, x_j).$$

From the covariance matrix we can also define

$$\operatorname{cor}(x_i, x_j) = \frac{\operatorname{cov}(x_i, x_j)}{\sigma_i \sigma_j},$$

the correlation coefficient of $x_i$ and $x_j$ (often called $r$), which satisfies $|r| \le 1$. If $x_i$ and $x_j$ are independent,

$$\operatorname{cov}(x_i, x_j) = E[x_i x_j] - \langle x_i \rangle \langle x_j \rangle = \langle x_i \rangle \langle x_j \rangle - \langle x_i \rangle \langle x_j \rangle = 0.$$

Two other easy relations are

$$\operatorname{cov}(x_i, x_i) = E[(x_i - \langle x_i \rangle)^2] = \sigma_i^2 \quad (\operatorname{cor}(x_i, x_i) = r = 1)$$

and

$$\operatorname{cov}(x_i, -x_i) = -\sigma_i^2 \quad (\operatorname{cor}(x_i, -x_i) = r = -1).$$

The covariance matrix is symmetric:

$$\operatorname{cov} = \begin{pmatrix} \operatorname{cov}(x_1, x_1) & \operatorname{cov}(x_2, x_1) & \cdots \\ \operatorname{cov}(x_1, x_2) & \ddots & \\ \vdots & & \end{pmatrix}.$$
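A sketch of building the covariance matrix from the centered data (assuming NumPy; np.cov with ddof=0 matches the $1/n$ convention used here, rather than the $1/(n-1)$ default):

```python
import numpy as np

rng = np.random.default_rng(5)
m, n = 5, 200
X = rng.standard_normal((m, n))
X[1] = 0.9 * X[0] + 0.1 * rng.standard_normal(n)   # make genes 0 and 1 coregulated

Y = X - X.mean(axis=1, keepdims=True)
C = (Y @ Y.T) / n                                  # C_ij = (1/n) sum_k y_ik y_jk

assert np.allclose(C, C.T)                         # the covariance matrix is symmetric
assert np.allclose(C, np.cov(X, ddof=0))

r01 = C[0, 1] / np.sqrt(C[0, 0] * C[1, 1])         # correlation coefficient, |r| <= 1
print(round(r01, 2))                               # close to 1 for the coregulated pair
```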
Diagonalizing the covariance matrix is called a Principal Component Analysis:

$$\operatorname{cov} = UDU^T = (\vec{u}_1 \; \vec{u}_2 \; \cdots \; \vec{u}_m) \begin{pmatrix} \lambda_1 & & \\ & \ddots & \\ & & \lambda_m \end{pmatrix} \begin{pmatrix} \vec{u}_1^{\,T} \\ \vdots \\ \vec{u}_m^{\,T} \end{pmatrix},$$

where $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m \ge 0$. The $\vec{u}$'s are orthonormal eigenvectors called the principal component vectors, and the $\lambda$'s, the eigenvalues of the covariance matrix, are called principal component values. The eigenvalues give the variance of the data along the principal component directions, and the covariance between these directions is zero. So $\vec{u}_1$ gives the direction in which the data is most stretched, and $\lambda_1$ is the variance of the data in this direction. Notice that

$$\text{total variance of data} = \sum_i \operatorname{cov}(x_i, x_i) = \operatorname{tr} \operatorname{cov} = \sum_k \lambda_k,$$

where we've used the fact that the trace of a square matrix is the sum of its eigenvalues. In terms of gene expression, $\vec{u}_1$ gives the weighted set of genes that are most strongly coregulated (or counter-regulated!), $\vec{u}_2$ gives the second strongest set, etc. A plot of the eigenvalues $\lambda_i$ in order from largest to smallest may reveal a transition from signal to noise:

[Figure 3: Eigenvalue plot. Fraction of total variance vs. component index, with a few large "signal" eigenvalues followed by a tail of small "noise" eigenvalues.]
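A minimal PCA sketch (assuming NumPy; np.linalg.eigh returns eigenvalues in ascending order, so they are flipped to get $\lambda_1 \ge \lambda_2 \ge \cdots$):

```python
import numpy as np

rng = np.random.default_rng(6)
m, n = 10, 200
X = rng.standard_normal((m, n))
X[1] = X[0] + 0.2 * rng.standard_normal(n)   # plant one strongly coregulated pair

Y = X - X.mean(axis=1, keepdims=True)
C = (Y @ Y.T) / n

lam, U = np.linalg.eigh(C)                   # ascending order for symmetric matrices
lam, U = lam[::-1], U[:, ::-1]               # reorder so lambda_1 >= lambda_2 >= ...

assert np.isclose(lam.sum(), np.trace(C))    # total variance = tr(cov) = sum of lambdas
print(lam / lam.sum())                       # fraction of total variance per component
print(U[:, 0])                               # u_1: weights of the most coregulated genes
```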
Dimension reduction

If the PCA separates signal from noise, we can clean up the data by considering the projection of the data onto the principal components. For each assay

$$\vec{a}_k = (x_{1k}, x_{2k}, \ldots, x_{mk}) = \tilde{a}_k + \langle \vec{x} \rangle,$$

we can find the projection of $\tilde{a}_k$ onto the principal components:

$$\tilde{y}_{k1} = \vec{u}_1 \cdot \tilde{a}_k, \qquad \tilde{y}_{k2} = \vec{u}_2 \cdot \tilde{a}_k, \qquad \ldots$$

[Figure 4: Values of $\tilde{y}$. Centered assays $\tilde{y}_1, \ldots, \tilde{y}_5$ in the $(g_1, g_2)$ plane, projected onto the leading principal direction.]

Keeping only the first few values of $\tilde{y}$ for each assay accounts for most of the signal in the data (sketched in code after the list below). Plotting the assay data projected onto the first few principal components can also reveal correlations beyond linear. Sometimes $\tilde{y}_1$ and $\tilde{y}_2$ will show no additional correlations (Figure 5), but sometimes correlations will be apparent even though $\langle \tilde{y}_1 \tilde{y}_2 \rangle = 0$ (Figure 6).

[Figure 5: Uncorrelated.]

Other applications of PCA in biology:

- Molecular dynamics: reconstructing flexible modes of biomolecules from snapshots
- Synthetic lethality
- Immunology: antigen-antibody
- Residue-residue potentials
- Almost any large data set
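Returning to the projections promised above, a sketch (assuming NumPy, with names carried over from the previous sketches):

```python
import numpy as np

# Hypothetical stand-ins: Y is the centered m x n data and U holds the principal
# component vectors as columns, as in the earlier sketches.
rng = np.random.default_rng(7)
m, n = 10, 200
Y = rng.standard_normal((m, n))
Y -= Y.mean(axis=1, keepdims=True)
_, U = np.linalg.eigh((Y @ Y.T) / n)
U = U[:, ::-1]                       # columns reordered so u_1 has the largest variance

# ytilde_k1 = u_1 . a~_k and ytilde_k2 = u_2 . a~_k for every assay k at once:
Y_proj = U[:, :2].T @ Y              # shape (2, n)

# Keeping only these first few coordinates per assay retains most of the signal;
# plotting Y_proj[0] against Y_proj[1] can reveal correlations beyond linear.
print(Y_proj.shape)
```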
[Figure 6: Correlated.]

SVD

By summing over assays to produce the covariance matrix, we have thrown away information. Imagine that the assays were taken at fixed time intervals: how can we recapture the time dependence of the gene expression? Any one gene may have a noisy time course of expression:

[Figure 7: Single gene expression over time; $x$ vs. $t$.]

But maybe there is a coherent response that contributes a little bit to many genes (Figure 8). This coherent signal will contribute to the correlation of many genes, and so these genes will co-vary. So these genes will form a principal component in the PCA of the covariance matrix. So after performing a PCA, imagine putting time labels back on the data (Figure 9). By finding the principal components and plotting the projections on these components for each assay vs. time, we reconstruct the coherent signal hidden in noisy data (Figure 10). SVD allows us to do this in one step. For the $m \times n$ matrix,

$$X = UDV^T = (\vec{u}_1 \; \vec{u}_2 \; \cdots \; \vec{u}_m) \begin{pmatrix} S_1 & & \\ & \ddots & \\ & & S_n \end{pmatrix} \begin{pmatrix} \vec{v}_1^{\,T} \\ \vdots \\ \vec{v}_n^{\,T} \end{pmatrix},$$

where $S_1 \ge S_2 \ge \cdots \ge S_n \ge 0$. In this decomposition, the $\vec{u}$'s represent correlated sets of genes and the $\vec{v}$'s represent coherent patterns of gene expression.
[Figure 8: Overall $x$ signal. A coherent time course that contributes slightly positively to many genes and slightly negatively to many others.]

[Figure 9: Relabeled data. The assay points in gene space, relabeled by time.]

The singular values $S$ are simply related to the principal components of the covariance matrix: for the centered data, $\lambda_k = S_k^2/n$ (the factor of $1/n$ coming from our definition of the covariance). The matrix constructed from the leading $l$ singular values and the associated right $\{\vec{v}\}$ and left $\{\vec{u}\}$ singular vectors is the best rank-$l$ approximation to $X$; that is, choosing

$$X^{(l)} = \sum_{k=1}^{l} \vec{u}_k S_k \vec{v}_k^{\,T}$$

minimizes

$$\sum_{i,j} \left| x_{ij} - x^{(l)}_{ij} \right|^2.$$
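A sketch (assuming NumPy) checking both the $\lambda_k = S_k^2/n$ relation and the best rank-$l$ approximation property:

```python
import numpy as np

rng = np.random.default_rng(8)
m, n = 50, 20
X = rng.standard_normal((m, n))
Y = X - X.mean(axis=1, keepdims=True)        # center over assays

U, S, Vt = np.linalg.svd(Y, full_matrices=False)

# Singular values vs. covariance eigenvalues: lambda_k = S_k^2 / n.
lam = np.linalg.eigvalsh((Y @ Y.T) / n)[::-1][:n]
assert np.allclose(lam, S**2 / n)

# Best rank-l approximation: keep only the leading l singular triplets.
l = 5
Y_l = U[:, :l] @ np.diag(S[:l]) @ Vt[:l, :]  # sum_k u_k S_k v_k^T

err = np.sum((Y - Y_l) ** 2)                 # minimized sum-of-squares error
assert np.isclose(err, np.sum(S[l:] ** 2))   # equals the sum of discarded S_k^2
```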
[Figure 10: Explicit time plot; $\tilde{y}$ vs. $t$.]